package kube

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// toDockerConfigSecret builds a kubernetes.io/dockerconfigjson Secret whose
// .dockerconfigjson entry holds the given auth payload.
func toDockerConfigSecret(secretName, auth string) *v1.Secret {
	return &v1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name: secretName,
		},
		Type: v1.SecretTypeDockerConfigJson,
		StringData: map[string]string{
			".dockerconfigjson": auth,
		},
	}
}
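
// Usage sketch (an assumption, not part of the original file): creating the
// secret in a cluster with client-go (recent versions take a context and
// options). The clientset and parameter names below are hypothetical.
//
//	import (
//		"context"
//		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
//		"k8s.io/client-go/kubernetes"
//	)
//
//	func createPullSecret(cs *kubernetes.Clientset, namespace, name, auth string) error {
//		secret := toDockerConfigSecret(name, auth)
//		_, err := cs.CoreV1().Secrets(namespace).Create(context.TODO(), secret, metav1.CreateOptions{})
//		return err
//	}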
|
#pragma once
#include <dpp/dpp.h>
/**
 * Case-sensitive implementation of endsWith().
 * Checks whether the string 'mainStr' ends with the given string 'toMatch'.
 * (C++20 offers std::string::ends_with; this helper targets older standards.)
 */
inline bool endsWith(const std::string &mainStr, const std::string &toMatch) {
    return mainStr.size() >= toMatch.size() &&
           mainStr.compare(mainStr.size() - toMatch.size(), toMatch.size(), toMatch) == 0;
}
/**
 * Make a readable time-string from seconds.
 * @param seconds The amount of seconds to create a time-string for
 * @return The formatted time-string, e.g. "1 day 2 hours 3 minutes 4 seconds"
 */
inline std::string stringifySeconds(uint32_t seconds) {
    uint32_t days = seconds / 86400;
    seconds %= 86400;
    uint32_t hours = seconds / 3600;
    seconds %= 3600;
    uint32_t minutes = seconds / 60;
    seconds %= 60;

    std::string result;
    auto append = [&result](uint32_t value, const std::string &unit) {
        if (value == 0) {
            return; // zero-valued units are skipped entirely
        }
        result += std::to_string(value) + " " + unit + (value == 1 ? " " : "s ");
    };
    append(days, "day");
    append(hours, "hour");
    append(minutes, "minute");
    append(seconds, "second");
    // remove the trailing space (find_last_not_of returns npos on an empty
    // result, and npos + 1 == 0, so erase(0) is safe there)
    return result.erase(result.find_last_not_of(' ') + 1);
}
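// Usage sketch (illustrative, not part of the original header):
//   stringifySeconds(90061); // -> "1 day 1 hour 1 minute 1 second"
//   stringifySeconds(3725);  // -> "1 hour 2 minutes 5 seconds"
//   stringifySeconds(0);     // -> "" (empty string: no non-zero units)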
|
THE IMPACT OF SAVINGS WITHDRAWALS ON A BANKER'S CAPITAL HOLDINGS SUBJECT TO THE BASEL III ACCORD In this paper, we analyze the impact of savings withdrawals on a bank's capital holdings under Basel III capital regulation. We examine the interplay between savings withdrawals and the investment strategies of a bank by extending the classical mean-variance paradigm to investigate the banker's optimal investment strategy. We solve this as an optimization problem under the mean-variance paradigm, subject to a quadratic objective function that incorporates a running penalization cost alongside the terminal condition. By solving the Hamilton-Jacobi-Bellman (HJB) equation, we derive closed-form expressions for the value function as well as the banker's optimal investment strategies. Our study provides novel insight into the way banks allocate their capital holdings by showing that, in the presence of savings withdrawals, banks will increase their risk-free asset holdings to hedge against forthcoming deposit withdrawals while facing short-selling constraints. Moreover, we show that if the savings depositors of the bank are more stock-active, an economic expansion will imply a greater reduction in bank savings. As a result, the banker will reduce his/her loan portfolio and will depend on high stock returns, subject to short-selling constraints, to conform to Basel III capital regulation.
|
Gender gap among second-generation students in higher education: the Italian case Italy is experiencing a structural and multigenerational migratory presence in which new generations are increasingly obtaining access to the highest social and educational levels, including university. The presence of foreign students in Italian secondary schools has been extensively covered by research (especially regarding their presence in technical and vocational institutes, which formally open up to a university career but often cause a sort of school marginalisation that frequently results in social disadvantage), but little is known about their presence at the university level. It would be simplistic to assume that those students who enrolled at university had never experienced any trouble in their pre-university or university career. In this chapter, the phenomenon of second-generation immigrant students will be quantitatively contextualised, with specific regard to foreign students in Italian universities, and with a descriptive analysis of the impact of gender on education. The aim of the chapter is to analyse the multifaceted educational paths of young people, those under 35 years old, born in Italy to foreign parents (or who moved to Italy later), their expectations and the real opportunities offered to them.
|
Introducing the Proceed Ventral Patch as a new device in surgical management of umbilical and small ventral hernias: preliminary results. Surgical treatment of umbilical and small ventral hernias ranges from a simple suture repair to the placement of large intra-abdominal or retromuscular meshes. Several articles report a lower incidence of recurrence after mesh repair, whether this is positioned onlay, retromuscular, or intraperitoneally. Often, a simple suture repair fails in the long term, whereas a laparoscopic or retromuscular approach seems too extensive for these rather small hernias. In between those two treatment options exists a go-between repair that carries the idea of posterior repair without being so aggressive in its approach. In this study, the authors examined a new device called the Proceed Ventral Patch (PVP) (Ethicon, Inc., Somerville, NJ, USA). It is a self-expanding, partially absorbable, flexible laminate mesh device that allows an easy, quick, minimally invasive, tension-free, and standardized approach to umbilical hernia treatment. No data or publications exist on this new device. Reported herein is our early and first experience with this novel technique.
|
Inexactness of SDP Relaxation for Optimal Power Flow over Radial Networks and Valid Inequalities for Global Optimization It has been recently proven that the semidefinite programming (SDP) relaxation of the optimal power flow problem over radial networks is exact in the absence of generation lower bounds. In this paper, we investigate the situation where generation lower bounds are present. We show that even for a two-bus one-generator system, the SDP relaxation can have all possible approximation outcomes; that is, the SDP relaxation may be exact, it may be inexact, or it may be feasible while the OPF instance is infeasible. We provide a complete characterization of when these three approximation outcomes occur and an analytical expression of the resulting optimality gap for this two-bus system. In order to facilitate further research, we design a library of instances over radial networks in which the SDP relaxation has a positive optimality gap. Finally, we propose valid inequalities and variable bound tightening techniques that significantly improve the computational performance of a global optimization solver. Our work demonstrates the need for developing efficient global optimization methods for the solution of OPF even in the simple but fundamental case of radial networks.
|
/*
* Copyright (c) 2021 Huawei Device Co., Ltd.
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import * as ts from "typescript";
import { Compiler } from "../compiler";
import * as jshelpers from "../jshelpers";
export function compileMetaProperty(expr: ts.MetaProperty, compiler: Compiler) {
    let curScope = compiler.getCurrentScope();
    let id = jshelpers.getTextOfIdentifierOrLiteral(expr.name);
    if (id === "target") {
        // "4newTarget" is the compiler's internal scope-variable name for new.target;
        // the digit prefix makes it invalid as a user identifier, so it cannot collide
        let { scope, level, v } = curScope.find("4newTarget");
        if (!v) {
            throw new Error("fail to access new.target");
        } else {
            compiler.loadTarget(expr, { scope, level, v });
        }
        return;
    }
    // TODO: support other meta properties (e.g. import.meta) in the future
}
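// Illustrative input (an assumption, not from this file): source such as
//
//   function Base() {
//     if (new.target === Base) { /* called via `new Base()` */ }
//   }
//
// reaches this function with `expr.name` holding the identifier "target", and the
// recorded new.target value for the enclosing function scope is loaded.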
|
/*
* Copyright © 2017 <NAME>, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not
* use this file except in compliance with the License. You may obtain a copy of
* the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations under
* the License.
*/
package co.cask.cdap.app.runtime.spark.classloader;
import co.cask.cdap.api.data.format.StructuredRecord;
import co.cask.cdap.api.data.schema.Schema;
import co.cask.cdap.app.runtime.spark.SparkPackageUtils;
import co.cask.cdap.app.runtime.spark.SparkRuntimeEnv;
import co.cask.cdap.common.lang.ClassRewriter;
import co.cask.cdap.internal.asm.Classes;
import co.cask.cdap.internal.asm.Signatures;
import com.google.common.base.Function;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableSet;
import com.google.common.reflect.TypeToken;
import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.FieldVisitor;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;
import org.objectweb.asm.Type;
import org.objectweb.asm.commons.AdviceAdapter;
import org.objectweb.asm.commons.GeneratorAdapter;
import org.objectweb.asm.commons.Method;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.lang.reflect.Modifier;
import java.net.URL;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.atomic.AtomicReference;
import javax.annotation.Nullable;
/**
* A {@link ClassRewriter} for rewriting Spark related classes.
*/
public class SparkClassRewriter implements ClassRewriter {
private static final Logger LOG = LoggerFactory.getLogger(SparkClassRewriter.class);
// Define some of the class types used for bytecode rewriting purposes. These cannot be referenced
// via .class, since those classes may not be available to the ClassLoader of this class
// (they are not necessarily loadable from this ClassLoader).
private static final Type SPARK_RUNTIME_ENV_TYPE =
Type.getObjectType("co/cask/cdap/app/runtime/spark/SparkRuntimeEnv");
private static final Type SPARK_RUNTIME_UTILS_TYPE =
Type.getObjectType("co/cask/cdap/app/runtime/spark/SparkRuntimeUtils");
private static final Type SPARK_CONTEXT_TYPE = Type.getObjectType("org/apache/spark/SparkContext");
private static final Type SPARK_STREAMING_CONTEXT_TYPE =
Type.getObjectType("org/apache/spark/streaming/StreamingContext");
private static final Type SPARK_CONF_TYPE = Type.getObjectType("org/apache/spark/SparkConf");
// SparkSubmit is a companion object, hence the "$" at the end
private static final Type SPARK_SUBMIT_TYPE = Type.getObjectType("org/apache/spark/deploy/SparkSubmit$");
private static final Type SPARK_YARN_CLIENT_TYPE = Type.getObjectType("org/apache/spark/deploy/yarn/Client");
private static final Type SPARK_DSTREAM_GRAPH_TYPE = Type.getObjectType("org/apache/spark/streaming/DStreamGraph");
private static final Type SPARK_BATCHED_WRITE_AHEAD_LOG_TYPE =
Type.getObjectType("org/apache/spark/streaming/util/BatchedWriteAheadLog");
private static final Type SPARK_EXECUTOR_CLASSLOADER_TYPE =
Type.getObjectType("org/apache/spark/repl/ExecutorClassLoader");
private static final Type YARN_SPARK_HADOOP_UTIL_TYPE =
Type.getObjectType("org/apache/spark/deploy/yarn/YarnSparkHadoopUtil");
private static final Type KRYO_TYPE = Type.getObjectType("com/esotericsoftware/kryo/Kryo");
private static final Type SCHEMA_SERIALIZER_TYPE =
Type.getObjectType("co/cask/cdap/app/runtime/spark/serializer/SchemaSerializer");
private static final Type STRUCTURED_RECORD_SERIALIZER_TYPE =
Type.getObjectType("co/cask/cdap/app/runtime/spark/serializer/StructuredRecordSerializer");
// Don't reference akka Remoting via .class because in future Spark versions the akka dependency is removed and
// we don't want to force a dependency on akka.
private static final Type AKKA_REMOTING_TYPE = Type.getObjectType("akka/remote/Remoting");
private static final Type EXECUTION_CONTEXT_TYPE = Type.getObjectType("scala/concurrent/ExecutionContext");
private static final Type EXECUTION_CONTEXT_EXECUTOR_TYPE =
Type.getObjectType("scala/concurrent/ExecutionContextExecutor");
// File name of the Spark conf directory as defined by the Spark framework
// This is for the hack to work around CDAP-5019 (SPARK-13441)
private static final String LOCALIZED_CONF_DIR = SparkPackageUtils.LOCALIZED_CONF_DIR;
private static final String LOCALIZED_CONF_DIR_ZIP = LOCALIZED_CONF_DIR + ".zip";
// File entry name of the SparkConf properties file inside the Spark conf zip
private static final String SPARK_CONF_FILE = "__spark_conf__.properties";
private final Function<String, URL> resourceLookup;
private final boolean rewriteYarnClient;
public SparkClassRewriter(Function<String, URL> resourceLookup, boolean rewriteYarnClient) {
this.resourceLookup = resourceLookup;
this.rewriteYarnClient = rewriteYarnClient;
}
@Nullable
@Override
public byte[] rewriteClass(String className, InputStream input) throws IOException {
if (className.equals(SPARK_CONTEXT_TYPE.getClassName())) {
// Rewrite the SparkContext class by rewriting the constructor to save the context to SparkRuntimeEnv
return rewriteContext(SPARK_CONTEXT_TYPE, input);
}
if (className.equals(SPARK_STREAMING_CONTEXT_TYPE.getClassName())) {
// Rewrite the StreamingContext class by rewriting the constructor to save the context to SparkRuntimeEnv
return rewriteContext(SPARK_STREAMING_CONTEXT_TYPE, input);
}
if (className.equals(SPARK_CONF_TYPE.getClassName())) {
// Rewrite the SparkConf class so that its constructors copy all properties from
// SparkRuntimeEnv into the SparkConf
return rewriteSparkConf(SPARK_CONF_TYPE, input);
}
if (className.startsWith(SPARK_SUBMIT_TYPE.getClassName())) {
// Rewrite System.setProperty call to SparkRuntimeEnv.setProperty for SparkSubmit and all inner classes
return rewriteSetProperties(input);
}
if (className.equals(SPARK_YARN_CLIENT_TYPE.getClassName()) && rewriteYarnClient) {
// Rewrite YarnClient to work around SPARK-13441.
return rewriteClient(input);
}
if (className.equals(SPARK_DSTREAM_GRAPH_TYPE.getClassName())) {
// Rewrite DStreamGraph to set TaskSupport on parallel array usage to avoid Thread leak
return rewriteDStreamGraph(input);
}
if (className.equals(SPARK_BATCHED_WRITE_AHEAD_LOG_TYPE.getClassName())) {
// Rewrite BatchedWriteAheadLog to register it in SparkRuntimeEnv so that we can free up the batch writer thread
// even when there is no Receiver-based DStream (it's a thread leak in Spark) (CDAP-11577) (SPARK-20935)
return rewriteBatchedWriteAheadLog(input);
}
if (className.equals(SPARK_EXECUTOR_CLASSLOADER_TYPE.getClassName())) {
// Rewrite the Spark repl ExecutorClassLoader to call `super(null)` so that it won't use the system classloader
// as parent
return rewriteExecutorClassLoader(input);
}
if (className.equals(AKKA_REMOTING_TYPE.getClassName())) {
// Rewrite the akka.remote.Remoting class to avoid thread leakage
return rewriteAkkaRemoting(input);
}
if (className.equals(YARN_SPARK_HADOOP_UTIL_TYPE.getClassName())) {
// CDAP-8636 Rewrite methods of YarnSparkHadoopUtil to avoid acquiring delegation tokens, because when we execute
// spark-submit, we don't have a keytab login
return rewriteSparkHadoopUtil(className, input);
}
if (className.equals(KRYO_TYPE.getClassName())) {
// CDAP-9314 Rewrite the Kryo constructor to register serializer for CDAP classes
return rewriteKryo(input);
}
return null;
}
/**
* Rewrites the constructor to call SparkRuntimeEnv#setContext(SparkContext)
* or SparkRuntimeEnv#setContext(StreamingContext), depending on the class type.
*/
private byte[] rewriteContext(final Type contextType, InputStream byteCodeStream) throws IOException {
return rewriteConstructor(contextType, byteCodeStream, new ConstructorRewriter() {
@Override
void onMethodEnter(String name, String desc, GeneratorAdapter generatorAdapter) {
Type[] argTypes = Type.getArgumentTypes(desc);
// If the constructor has SparkConf arguments,
// update each SparkConf by calling SparkRuntimeEnv.setupSparkConf(sparkConf).
// This is mainly to ensure that any runtime properties set up by CDAP are picked up,
// even when restoring from checkpointing.
List<Integer> confIndices = new ArrayList<>();
for (int i = 0; i < argTypes.length; i++) {
if (SPARK_CONF_TYPE.equals(argTypes[i])) {
confIndices.add(i);
}
}
// Update all SparkConf arguments.
for (int confIndex : confIndices) {
generatorAdapter.loadArg(confIndex);
generatorAdapter.invokeStatic(SPARK_RUNTIME_ENV_TYPE,
new Method("setupSparkConf", Type.VOID_TYPE, new Type[] { SPARK_CONF_TYPE }));
}
}
@Override
public void onMethodExit(String name, String desc, GeneratorAdapter generatorAdapter) {
generatorAdapter.loadThis();
generatorAdapter.invokeStatic(SPARK_RUNTIME_ENV_TYPE,
new Method("setContext", Type.VOID_TYPE, new Type[] { contextType }));
}
});
}
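// Net effect on a rewritten context class (an illustrative sketch, not generated source):
//
//   public SparkContext(SparkConf conf, ...) {
//     SparkRuntimeEnv.setupSparkConf(conf);  // injected on entry for each SparkConf argument
//     ... original constructor body ...
//     SparkRuntimeEnv.setContext(this);      // injected before the normal return
//   }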
/**
* Rewrites the SparkConf class constructor to call SparkRuntimeEnv#setupSparkConf(SparkConf).
*/
private byte[] rewriteSparkConf(final Type sparkConfType, InputStream byteCodeStream) throws IOException {
return rewriteConstructor(sparkConfType, byteCodeStream, new ConstructorRewriter() {
@Override
public void onMethodExit(String name, String desc, GeneratorAdapter generatorAdapter) {
generatorAdapter.loadThis();
generatorAdapter.invokeStatic(SPARK_RUNTIME_ENV_TYPE,
new Method("setupSparkConf", Type.VOID_TYPE, new Type[] { sparkConfType }));
}
});
}
/**
* Rewrites the DStreamGraph class for calls to parallel array with a call to
* SparkRuntimeUtils#setTaskSupport(ParArray).
*/
private byte[] rewriteDStreamGraph(InputStream byteCodeStream) throws IOException {
ClassReader cr = new ClassReader(byteCodeStream);
ClassWriter cw = new ClassWriter(0);
cr.accept(new ClassVisitor(Opcodes.ASM5, cw) {
@Override
public MethodVisitor visitMethod(int access, String name, String desc, String signature, String[] exceptions) {
MethodVisitor mv = super.visitMethod(access, name, desc, signature, exceptions);
return new MethodVisitor(Opcodes.ASM5, mv) {
@Override
public void visitMethodInsn(int opcode, String owner, String name, String desc, boolean itf) {
super.visitMethodInsn(opcode, owner, name, desc, itf);
// If a call to ArrayBuffer.par() is detected, set the TaskSupport to avoid a thread leak.
// INVOKEVIRTUAL scala/collection/mutable/ArrayBuffer.par ()Lscala/collection/parallel/mutable/ParArray;
Type returnType = Type.getReturnType(desc);
if (opcode == Opcodes.INVOKEVIRTUAL && name.equals("par")
&& owner.equals("scala/collection/mutable/ArrayBuffer")
&& returnType.getClassName().equals("scala.collection.parallel.mutable.ParArray")) {
super.visitMethodInsn(Opcodes.INVOKESTATIC, SPARK_RUNTIME_UTILS_TYPE.getInternalName(),
"setTaskSupport", Type.getMethodDescriptor(returnType, returnType), false);
}
}
};
}
}, ClassReader.EXPAND_FRAMES);
return cw.toByteArray();
}
/**
* Rewrites the BatchedWriteAheadLog class to register itself with SparkRuntimeEnv so that the write-ahead log thread
* can be shut down when the Spark program finishes.
*/
private byte[] rewriteBatchedWriteAheadLog(InputStream byteCodeStream) throws IOException {
return rewriteConstructor(SPARK_BATCHED_WRITE_AHEAD_LOG_TYPE, byteCodeStream, new ConstructorRewriter() {
@Override
public void onMethodExit(String name, String desc, GeneratorAdapter generatorAdapter) {
generatorAdapter.loadThis();
generatorAdapter.invokeStatic(SPARK_RUNTIME_ENV_TYPE,
new Method("addBatchedWriteAheadLog", Type.VOID_TYPE,
new Type[] { Type.getType(Object.class) }));
}
});
}
/**
* Rewrites the ExecutorClassLoader so that it won't use the system classloader as its parent, since CDAP classes
* are not in the system classloader.
* Also optionally overrides the getResource, getResources and getResourceAsStream methods if they are not
* defined (to fix SPARK-11818 for Spark versions older than 1.6).
*/
private byte[] rewriteExecutorClassLoader(InputStream byteCodeStream) throws IOException {
ClassReader cr = new ClassReader(byteCodeStream);
ClassWriter cw = new ClassWriter(ClassWriter.COMPUTE_MAXS);
final Type classloaderType = Type.getType(ClassLoader.class);
final Type parentClassLoaderType = Type.getObjectType("org/apache/spark/util/ParentClassLoader");
final Method parentLoaderMethod = new Method("parentLoader", parentClassLoaderType, new Type[0]);
// Map from getResource* methods to the method signature
// (can be null, since only methods with generics have signatures)
final Map<Method, String> resourceMethods = new HashMap<>();
Method method = new Method("getResource", Type.getType(URL.class), new Type[]{Type.getType(String.class)});
resourceMethods.put(method, null);
method = new Method("getResources", Type.getType(Enumeration.class), new Type[] { Type.getType(String.class) });
resourceMethods.put(method, Signatures.getMethodSignature(method, new TypeToken<Enumeration<URL>>() { },
TypeToken.of(String.class)));
method = new Method("getResourceAsStream", Type.getType(InputStream.class),
new Type[] { Type.getType(String.class) });
resourceMethods.put(method, null);
cr.accept(new ClassVisitor(Opcodes.ASM5, cw) {
private boolean hasParentLoader;
private boolean rewriteInit;
@Override
public void visit(int version, int access, String name, String signature, String superName, String[] interfaces) {
// Only rewrite `<init>` if the ExecutorClassloader extends from ClassLoader
if (classloaderType.getInternalName().equals(superName)) {
rewriteInit = true;
}
super.visit(version, access, name, signature, superName, interfaces);
}
@Override
public MethodVisitor visitMethod(int access, String name, String desc, String signature, String[] exceptions) {
MethodVisitor mv = super.visitMethod(access, name, desc, signature, exceptions);
// If the resource method is declared, no need to generate at the end.
Method method = new Method(name, desc);
resourceMethods.remove(method);
hasParentLoader = hasParentLoader || parentLoaderMethod.equals(method);
if (!rewriteInit || !"<init>".equals(name)) {
return mv;
}
return new GeneratorAdapter(Opcodes.ASM5, mv, access, name, desc) {
@Override
public void visitMethodInsn(int opcode, String owner, String name, String desc, boolean itf) {
// If there is a call to `super()`, replace it with `super(null)` so the system classloader is not used as parent
if (opcode == Opcodes.INVOKESPECIAL
&& Type.getObjectType(owner).equals(classloaderType)
&& name.equals("<init>")
&& Type.getArgumentTypes(desc).length == 0
&& Type.getReturnType(desc).equals(Type.VOID_TYPE)) {
// Generate `super(null)`. The `this` is already in the stack, so no need to `loadThis()`
push((Type) null);
invokeConstructor(classloaderType, new Method("<init>", Type.VOID_TYPE, new Type[] { classloaderType }));
} else {
super.visitMethodInsn(opcode, owner, name, desc, itf);
}
}
};
}
@Override
public void visitEnd() {
// See if we need to implement the getResource, getResources and getResourceAsStream methods.
// All implementations delegate to the parentLoader.
if (!hasParentLoader) {
super.visitEnd();
return;
}
for (Map.Entry<Method, String> entry : resourceMethods.entrySet()) {
// Generate the method.
// return parentLoader().getResource*(arg)
Method method = entry.getKey();
MethodVisitor mv = super.visitMethod(Modifier.PUBLIC, method.getName(),
method.getDescriptor(), entry.getValue(), null);
GeneratorAdapter generator = new GeneratorAdapter(Modifier.PUBLIC, method, mv);
// call `parentLoader()`
generator.loadThis();
generator.invokeVirtual(SPARK_EXECUTOR_CLASSLOADER_TYPE, parentLoaderMethod);
// Load the argument
generator.loadArg(0);
// Call the method on the parent loader.
generator.invokeVirtual(parentClassLoaderType, method);
generator.returnValue();
generator.endMethod();
}
}
}, ClassReader.EXPAND_FRAMES);
return cw.toByteArray();
}
/**
* Rewrites the constructor of the Kryo class to add serializer for CDAP classes.
*
* @param byteCodeStream {@link InputStream} for reading the original bytecode of the class
* @return the rewritten bytecode
*/
private byte[] rewriteKryo(InputStream byteCodeStream) throws IOException {
return rewriteConstructor(KRYO_TYPE, byteCodeStream, new ConstructorRewriter() {
@Override
public void onMethodExit(String name, String desc, GeneratorAdapter generatorAdapter) {
// Register serializer for Schema
// addDefaultSerializer(Schema.class, SchemaSerializer.class);
generatorAdapter.loadThis();
generatorAdapter.push(Type.getType(Schema.class));
generatorAdapter.push(SCHEMA_SERIALIZER_TYPE);
generatorAdapter.invokeVirtual(KRYO_TYPE,
new Method("addDefaultSerializer", Type.VOID_TYPE,
new Type[] { Type.getType(Class.class), Type.getType(Class.class)}));
// Register serializer for StructuredRecord
// addDefaultSerializer(StructuredRecord.class, StructuredRecordSerializer.class);
generatorAdapter.loadThis();
generatorAdapter.push(Type.getType(StructuredRecord.class));
generatorAdapter.push(STRUCTURED_RECORD_SERIALIZER_TYPE);
generatorAdapter.invokeVirtual(KRYO_TYPE,
new Method("addDefaultSerializer", Type.VOID_TYPE,
new Type[] { Type.getType(Class.class), Type.getType(Class.class)}));
}
});
}
/**
* Rewrites constructors that don't delegate to another constructor of the same class, using the given
* {@link ConstructorRewriter}.
*
* @param classType type of the class to be rewritten
* @param byteCodeStream {@link InputStream} for reading the original bytecode of the class
* @param rewriter a {@link ConstructorRewriter} for rewriting the constructor
* @return the rewritten bytecode
*/
private byte[] rewriteConstructor(final Type classType, InputStream byteCodeStream,
final ConstructorRewriter rewriter) throws IOException {
ClassReader cr = new ClassReader(byteCodeStream);
ClassWriter cw = new ClassWriter(0);
cr.accept(new ClassVisitor(Opcodes.ASM5, cw) {
@Override
public MethodVisitor visitMethod(int access, final String name,
final String desc, String signature, String[] exceptions) {
// Call super so that the method signature is registered with the ClassWriter (parent)
MethodVisitor mv = super.visitMethod(access, name, desc, signature, exceptions);
// We only attempt to rewrite constructor
if (!"<init>".equals(name)) {
return mv;
}
return new AdviceAdapter(Opcodes.ASM5, mv, access, name, desc) {
boolean calledThis;
@Override
public void visitMethodInsn(int opcode, String owner, String name, String desc, boolean itf) {
// See if this constructor calls another constructor of the same class (this(...)).
calledThis = calledThis || (opcode == Opcodes.INVOKESPECIAL
&& Type.getObjectType(owner).equals(classType)
&& name.equals("<init>")
&& Type.getReturnType(desc).equals(Type.VOID_TYPE));
super.visitMethodInsn(opcode, owner, name, desc, itf);
}
@Override
protected void onMethodEnter() {
if (calledThis) {
// For constructors that call this(), we don't need to rewrite
return;
}
rewriter.onMethodEnter(name, desc, this);
}
@Override
protected void onMethodExit(int opcode) {
if (calledThis) {
// For constructors that call this(), we don't need to rewrite
return;
}
// Invoke the rewriter callback on the normal return path of the constructor
if (opcode == RETURN) {
rewriter.onMethodExit(name, desc, this);
}
}
};
}
}, ClassReader.EXPAND_FRAMES);
return cw.toByteArray();
}
/**
* Rewrites a class by rewriting all calls to {@link System#setProperty(String, String)} to
* {@link SparkRuntimeEnv#setProperty(String, String)}.
*
* @param byteCodeStream {@link InputStream} for reading in the original bytecode.
* @return the rewritten bytecode
*/
private byte[] rewriteSetProperties(InputStream byteCodeStream) throws IOException {
final Type systemType = Type.getType(System.class);
ClassReader cr = new ClassReader(byteCodeStream);
ClassWriter cw = new ClassWriter(0);
cr.accept(new ClassVisitor(Opcodes.ASM5, cw) {
@Override
public MethodVisitor visitMethod(int access, String name, String desc, String signature, String[] exceptions) {
MethodVisitor mv = super.visitMethod(access, name, desc, signature, exceptions);
return new MethodVisitor(Opcodes.ASM5, mv) {
@Override
public void visitMethodInsn(int opcode, String owner, String name, String desc, boolean itf) {
// If we see a call to System.setProperty, change it to SparkRuntimeEnv.setProperty
if (opcode == Opcodes.INVOKESTATIC && name.equals("setProperty")
&& owner.equals(systemType.getInternalName())) {
super.visitMethodInsn(opcode, SPARK_RUNTIME_ENV_TYPE.getInternalName(), name, desc, false);
} else {
super.visitMethodInsn(opcode, owner, name, desc, itf);
}
}
};
}
}, ClassReader.EXPAND_FRAMES);
return cw.toByteArray();
}
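// Illustrative effect (a sketch, not generated source): an INVOKESTATIC of
//   System.setProperty(key, value)
// is redirected to
//   SparkRuntimeEnv.setProperty(key, value)
// Since the rewrite keeps the original method descriptor, SparkRuntimeEnv.setProperty
// must share System.setProperty's (String, String) -> String signature.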
/**
* Rewrites the akka.remote.Remoting by rewriting usages of scala.concurrent.ExecutionContext.Implicits.global
* to Remoting.system().dispatcher() in the shutdown() method for fixing the Akka thread/permgen leak bug in
* https://github.com/akka/akka/issues/17729.
*
* @return the rewritten bytes or {@code null} if no rewriting is needed
*/
@Nullable
private byte[] rewriteAkkaRemoting(InputStream byteCodeStream) throws IOException {
final Type dispatcherReturnType = determineAkkaDispatcherReturnType();
if (dispatcherReturnType == null) {
LOG.warn("Failed to determine ActorSystem.dispatcher() return type. " +
"No rewriting of akka.remote.Remoting class. ClassLoader leakage might happen in SDK.");
return null;
}
ClassReader cr = new ClassReader(byteCodeStream);
ClassWriter cw = new ClassWriter(0);
cr.accept(new ClassVisitor(Opcodes.ASM5, cw) {
@Override
public MethodVisitor visitMethod(int access, String name, String desc, String signature, String[] exceptions) {
// Call super so that the method signature is registered with the ClassWriter (parent)
MethodVisitor mv = super.visitMethod(access, name, desc, signature, exceptions);
// Only rewrite the shutdown() method
if (!"shutdown".equals(name)) {
return mv;
}
return new MethodVisitor(Opcodes.ASM5, mv) {
@Override
public void visitMethodInsn(int opcode, String owner, String name, String desc, boolean itf) {
// Detect if it is making the call "import scala.concurrent.ExecutionContext.Implicits.global",
// which translates to Java code as
// scala.concurrent.ExecutionContext$Implicits$.MODULE$.global()
// hence as bytecode
// GETSTATIC scala/concurrent/ExecutionContext$Implicits$.MODULE$ :
// Lscala/concurrent/ExecutionContext$Implicits$;
// INVOKEVIRTUAL scala/concurrent/ExecutionContext$Implicits$.global
// ()Lscala/concurrent/ExecutionContextExecutor;
if (opcode == Opcodes.INVOKEVIRTUAL
&& "global".equals(name)
&& "scala/concurrent/ExecutionContext$Implicits$".equals(owner)
&& Type.getMethodDescriptor(EXECUTION_CONTEXT_EXECUTOR_TYPE).equals(desc)) {
// Discard the GETSTATIC result from the stack by popping it
super.visitInsn(Opcodes.POP);
// Make the call "import system.dispatch", which translate to Java code as
// this.system().dispatcher()
// hence as bytecode
// ALOAD 0 (load this)
// INVOKEVIRTUAL akka/remote/Remoting.system ()Lakka/actor/ExtendedActorSystem;
// INVOKEVIRTUAL akka/actor/ExtendedActorSystem.dispatcher ()Lscala/concurrent/ExecutionContextExecutor;
Type extendedActorSystemType = Type.getObjectType("akka/actor/ExtendedActorSystem");
super.visitVarInsn(Opcodes.ALOAD, 0);
super.visitMethodInsn(Opcodes.INVOKEVIRTUAL, "akka/remote/Remoting", "system",
Type.getMethodDescriptor(extendedActorSystemType), false);
super.visitMethodInsn(Opcodes.INVOKEVIRTUAL, extendedActorSystemType.getInternalName(), "dispatcher",
Type.getMethodDescriptor(dispatcherReturnType), false);
} else {
// For other instructions, just call parent to deal with it
super.visitMethodInsn(opcode, owner, name, desc, itf);
}
}
};
}
}, ClassReader.EXPAND_FRAMES);
return cw.toByteArray();
}
/**
* Find the return type of the ActorSystem.dispatcher() method. It is ExecutionContextExecutor in
* Akka 2.3 (Spark 1.2+) and ExecutionContext in Akka 2.2 (Spark < 1.2, which CDAP doesn't support,
* however the Spark 1.5 in CDH 5.6 still has Akka 2.2 instead of 2.3).
*
* @return the return type of the ActorSystem.dispatcher() method or {@code null} if no such method
*/
@Nullable
private Type determineAkkaDispatcherReturnType() {
URL resource = resourceLookup.apply("akka/actor/ActorSystem.class");
if (resource == null) {
return null;
}
try (InputStream is = resource.openStream()) {
final AtomicReference<Type> result = new AtomicReference<>();
ClassReader cr = new ClassReader(is);
cr.accept(new ClassVisitor(Opcodes.ASM5) {
@Override
public MethodVisitor visitMethod(int access, String name, String desc, String signature, String[] exceptions) {
if (name.equals("dispatcher") && Type.getArgumentTypes(desc).length == 0) {
// Expected to be either ExecutionContext (akka 2.2, only in CDH spark)
// or ExecutionContextExecutor (akka 2.3, for open source, HDP spark).
Type returnType = Type.getReturnType(desc);
if (returnType.equals(EXECUTION_CONTEXT_TYPE)
|| returnType.equals(EXECUTION_CONTEXT_EXECUTOR_TYPE)) {
result.set(returnType);
} else {
LOG.warn("Unsupported return type of ActorSystem.dispatcher(): {}", returnType.getClassName());
}
}
return super.visitMethod(access, name, desc, signature, exceptions);
}
}, ClassReader.SKIP_DEBUG | ClassReader.SKIP_CODE | ClassReader.SKIP_FRAMES);
return result.get();
} catch (IOException e) {
LOG.warn("Failed to determine ActorSystem dispatcher() return type.", e);
return null;
}
}
/**
* Rewrites the org.apache.spark.deploy.yarn.Client class, modifying the createConfArchive method to
* work around the SPARK-13441 bug.
*/
@Nullable
private byte[] rewriteClient(InputStream byteCodeStream) throws IOException {
// We only need to rewrite if listing either HADOOP_CONF_DIR or YARN_CONF_DIR returns null.
boolean needRewrite = false;
for (String env : ImmutableList.of("HADOOP_CONF_DIR", "YARN_CONF_DIR")) {
String value = System.getenv(env);
if (value != null) {
File path = new File(value);
if (path.isDirectory() && path.listFiles() == null) {
needRewrite = true;
break;
}
}
}
// If rewrite is not needed
if (!needRewrite) {
return null;
}
ClassReader cr = new ClassReader(byteCodeStream);
ClassWriter cw = new ClassWriter(ClassWriter.COMPUTE_MAXS);
cr.accept(new ClassVisitor(Opcodes.ASM5, cw) {
@Override
public MethodVisitor visitMethod(final int access, final String name,
final String desc, String signature, String[] exceptions) {
MethodVisitor mv = super.visitMethod(access, name, desc, signature, exceptions);
// Only rewrite the createConfArchive method
if (!"createConfArchive".equals(name)) {
return mv;
}
// Check if it's a recognizable return type.
// Spark 1.5+ return type is File
boolean isReturnFile = Type.getReturnType(desc).equals(Type.getType(File.class));
Type optionType = Type.getObjectType("scala/Option");
if (!isReturnFile) {
// Spark 1.4 return type is Option<File>
if (!Type.getReturnType(desc).equals(optionType)) {
// Unknown type. Not going to modify the code.
return mv;
}
}
// Generate this for Spark 1.5+
// return SparkRuntimeUtils.createConfArchive(this.sparkConf, SPARK_CONF_FILE,
// LOCALIZED_CONF_DIR, LOCALIZED_CONF_DIR_ZIP);
// Generate this for Spark 1.4
// return Option.apply(SparkRuntimeUtils.createConfArchive(this.sparkConf, SPARK_CONF_FILE,
// LOCALIZED_CONF_DIR, LOCALIZED_CONF_DIR_ZIP));
GeneratorAdapter mg = new GeneratorAdapter(mv, access, name, desc);
// load this.sparkConf to the stack
mg.loadThis();
mg.getField(Type.getObjectType("org/apache/spark/deploy/yarn/Client"), "sparkConf", SPARK_CONF_TYPE);
// push three constants to the stack
mg.visitLdcInsn(SPARK_CONF_FILE);
mg.visitLdcInsn(LOCALIZED_CONF_DIR);
mg.visitLdcInsn(LOCALIZED_CONF_DIR_ZIP);
// call SparkRuntimeUtils.createConfArchive, return a File and leave it in stack
Type stringType = Type.getType(String.class);
mg.invokeStatic(SPARK_RUNTIME_UTILS_TYPE,
new Method("createConfArchive", Type.getType(File.class),
new Type[] { SPARK_CONF_TYPE, stringType, stringType, stringType}));
if (isReturnFile) {
// Spark 1.5+ return type is File, hence just return the File from the stack
mg.returnValue();
mg.endMethod();
} else {
// Spark 1.4 return type is Option<File>
// return Option.apply(<file from stack>);
// where the file is actually just popped from the stack
mg.invokeStatic(optionType, new Method("apply", optionType, new Type[] { Type.getType(Object.class) }));
mg.checkCast(optionType);
mg.returnValue();
mg.endMethod();
}
return null;
}
}, ClassReader.EXPAND_FRAMES);
return cw.toByteArray();
}
/**
* CDAP-8636 Rewrite methods of YarnSparkHadoopUtil to avoid acquiring delegation tokens, because when we execute
* spark-submit, we don't have a keytab login. Because of that and the change in SPARK-12241, the attempt to acquire
* delegation tokens causes a Spark program submission failure.
*/
private byte[] rewriteSparkHadoopUtil(String name, InputStream byteCodeStream) throws IOException {
// we can't rewrite 'obtainTokenForHiveMetastore', because it doesn't have a void return type
Set<String> methods = ImmutableSet.of("obtainTokensForNamenodes", "obtainTokenForHBase");
return Classes.rewriteMethodToNoop(name, byteCodeStream, methods);
}
/**
* Private helper class for rewriting constructors.
*/
private abstract class ConstructorRewriter {
void onMethodEnter(String name, String desc, GeneratorAdapter generatorAdapter) {
// no-op
}
void onMethodExit(String name, String desc, GeneratorAdapter generatorAdapter) {
// no-op
}
}
}
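// Usage sketch (an assumption, not part of this file): a defining ClassLoader would typically
// invoke the rewriter from its findClass/loadClass path, e.g.
//
//   byte[] rewritten = rewriter.rewriteClass(name, byteCodeStream);
//   if (rewritten != null) {
//     return defineClass(name, rewritten, 0, rewritten.length);
//   }
//   // a null result means the class needs no rewriting; fall back to the original bytecode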
|
One of the really interesting possibilities that the internet opened up for e-government was the prospect of being able to show data in a spatial context, that is, to show information and data about places on maps. What’s exciting about this is that, I suppose, it can make it very meaningful for people to see data displayed in circumstances that they can relate to. So, in the case of planning information, whether housing or infrastructural development, it makes a lot more sense to people to see it in pictorial form.
But it doesn’t just end at planning because, if data can be shown in a spatial way, it becomes possible to layer different sets of data on a map to give a more comprehensive view of what the combined data means to a particular place. For instance, in the case of the Meadowlands project in New Jersey, the authorities had succeeded in layering information about toxic waste disposal, population concentration and disease on a single map and it became very easy to detect a possible link between the location of particular types of material and the incidence of cancer.
Of course, it goes without saying that great care has to be taken about the quality and range of data and, perhaps most importantly, the way the correlations are interpreted. It seems all too easy for people to jump to conclusions about coincidences and you can very easily see how particularly mischievous people could juxtapose different information sets to suggest cause and effect. But there are always those who are prepared to get up to that kind of mischief, whether the data is in spreadsheets or on nice colourful maps. So that shouldn’t be an argument against moving in that direction.
A number of local authorities are using maps to show planning proposals and that trend should continue. A big issue, however, for all public authorities is the cost of maps because they have to pay Ordnance Survey Ireland (OSI) for the use of its maps. And I suppose the difficulty is compounded by the fact that they are obliged to use OSI maps in the planning processes. Indeed, it seems strange that people applying for permission to build are still in the position of having to pay one arm of the State — OSI — for information maps that they then hand back to another arm — the planning authority. Those of you who have been following the ‘progress’ of e-government over the past half decade will have heard many of its exponents extolling the benefits to people of not having to act as couriers between state agencies — and having to pay for the privilege of doing so. So in that respect, one of the key messages of e-government mightn’t yet have gotten through to the people who run OSI or who are giving it its mandate.
Another interesting dimension to this is OSI is now being run as a commercial semi-state body and is expected to make money. In that latter respect it is not any different from the situation before its status changed, but there were a lot of question marks about their charging policies.
I haven’t followed progress on that situation since the change of status, but I wonder if the shift to a more commercial style of operation has impacted on the way it is operating internally. We know OSI has a couple of aircraft that cost a lot to keep and that are not being used that much anymore. We also hear some suggestions OSI is carrying some ‘passengers’ in its workforce since the old days, which by pure commercial standards, would not be in keeping with best practice.
In recent years, more high-quality information has become available from satellite sources, and private sector mapping organisations have made the whole business very competitive. So, while a lot of the agencies that currently have to use OSI maps could probably do the whole mapping thing much cheaper themselves, they are not encouraged to do so because of the effective monopoly OSI has on this market.
But from a national strategic perspective, we need an OSI in state control. But does it have to do the entire mapping itself if private sub-contractors can do it more efficiently? If the rationale for the charging regime that operates is related to the cost of having and running OSI, should it not look at how those costs might be reduced given its new commercial status?
The reason I ask these questions is that there are many commercial and community enterprises that could make very good use of the maps that OSI has, if only they could afford them. In fact, the question of using maps is the subject of a draft directive, the Inspire Directive, currently under consideration at the EU Commission and Parliament. It is raising questions for all public authorities across Europe about who can use maps, what they should pay, and how people who can’t afford to pay, or who would use maps for non-commercial purposes, can access these very valuable tools as they build virtual communities or put new meaning into local democracy in the so-called information age.
Many people are not best pleased with the growing influence of the European centre in their lives. But for many, the continued appearance of draft directives and their subsequent adoption has meant a lot of progress. True enough, it has sometimes brought us such gems as straight bananas. I suspect this current directive on mapping and the use of maps may bring us some big changes. But who will be the real winners? We will see.
|
#nepalshirt: Buy a white shirt, help survivors of the Nepalese earthquake | City A.M.
Everyone needs a white shirt - now you can buy one and help survivors of the Nepalese earthquake, after the launch of a website that only sells one thing: crisp, white shirts.
For £50, you can own a white cotton shirt (either men's or women's), with 100 per cent of the profits going straight into helping Nepal.
The people behind it are the founders of online tailoring company A Suit That Fits, whose production has been based in Nepal since 2006.
David Hathiramani, one of ASTF's two co-founders and a co-founder of the new venture, NepalShirt.com, told City A.M. that while none of their manufacturing staff had been injured in the earthquake, setting up the business was an "obvious idea". (NB for ASTF customers - he also pointed out that all orders are fully on time, and they're taking orders as usual).
Although there's no guarantee of the delivery date for the shirts, the first 1,000 will be delivered in less than eight weeks, with the next 10,000 coming within three months.
Om Yogi, an ASTF tailor based in Kathmandu, who co-founded NepalShirt.com, added that 1m orders will raise £50m for Nepal.
"There must be a million people out there who need a white shirt," he pointed out.
|
Cost-effectiveness of coronary MDCT in the triage of patients with acute chest pain. OBJECTIVE Patients at low risk for acute coronary syndrome (ACS) who present to the emergency department complaining of acute chest pain place a substantial economic burden on the U.S. health care system. Noninvasive 64-MDCT coronary angiography may facilitate their triage, and we evaluated its cost-effectiveness. MATERIALS AND METHODS A microsimulation model was developed to compare costs and health effects of performing CT coronary angiography and either discharging, stress testing, or referring emergency department patients for invasive coronary angiography, depending on their severity of atherosclerosis, compared with a standard-of-care (SOC) algorithm that based management on biomarkers and stress tests alone. RESULTS Using CT coronary angiography to triage 55-year-old men with acute chest pain increased emergency department and hospital costs by $110 and raised total health care costs by $200. In 55-year-old women, the technology was cost-saving; emergency department and hospital costs decreased by $410, and total health care costs decreased by $380. Compared with the SOC, CT coronary angiography-based triage extended life expectancy by 10 days in men and by 6 days in women. This translated into corresponding improvements of 0.03 quality-adjusted life years (QALYs) and 0.01 QALYs, respectively. The incremental cost-effectiveness ratio for CT coronary angiography was $6,400 per QALY in men; in women, CT coronary angiography was cost-saving. Cost-effectiveness ratios were sensitive to several parameters but generally remained in the range of what is typically considered cost-effective. CONCLUSION CT coronary angiography-based triage for patients with low-risk chest pain is modestly more effective than the SOC. It is also cost-saving in women and associated with low cost-effectiveness ratios in men.
|
[UPDATE]: The whole of the Maine legislature has flipped to the GOP. Several people I have talked to said such a deep and thorough shift to any one party has not happened in one election in the past 100 years.
This is an unusual Morning Briefing because you need to understand what happened while you’ve been sleeping.
Republican gains are massive. And when I say Republican gains are massive, I mean tsunami.
No, the GOP did not take the Senate and some races are still outstanding, but the Senate GOP has moved to the right. More so, the Republicans picking up, in the worst case, seven seats is historically strong.
But consider that as you wake up this morning the Republican Party has picked up more seats in the House of Representatives than at any time since 1948 — that is more than sixty seats. Ike Skelton, Class of 1976, is gone. Many, many other Democrats are gone.
That, in and of itself, is significant. But that’s not the half of it. The real story is the underreported story of the night — the Republican pick ups at the state level.
There will be 18 states subject to reapportionment. The Republicans will control a majority of those — at least ten and maybe a dozen or more. More significantly, a minimum of seventeen state legislative houses have flipped to the Republican Party.
The North Carolina Legislature is Republican for the first time since 1870. Yes, that is Eighteen Seventy.
The Alabama Legislature is Republican for the first time since 1876.
The entire Wisconsin and New Hampshire legislatures have flipped to the GOP by wide margins.
The State Houses in Indiana, Pennsylvania, Michigan, Ohio, Iowa, Montana, and Colorado flipped to the GOP.
The Maine and Minnesota Senates flipped to the GOP.
The Texas and Tennessee Houses went from virtually tied to massive Republican gains. The gains in Texas were so big that the Republicans no longer need the Democrats to get state constitutional amendments out of the state legislature.
These gains go all the way down to the municipal level across the nation. That did not happen even in 1994.
|
/*
* To change this template, choose Tools | Templates
* and open the template in the editor.
*/
package com.ptc.services.common.gateway.baseline;
import com.mks.api.response.Item;
import java.util.Date;
/**
*
* @author peisenha
*/
public class Baseline implements Comparable<Baseline> {
String label;
String asof;
Date asOfDate;
String state;
public Baseline(Item item) {
if (item.contains("AsOf")) {
this.asof = item.getField("AsOf").getValueAsString();
this.asOfDate = item.getField("AsOf").getDateTime();
}
if (item.contains("Label")) {
this.label = item.getField("Label").getValueAsString();
}
if (item.contains("State")) {
this.state = item.getField("State").getValueAsString();
}
}
public Baseline(String _label, String _asOf) {
this.asof = _asOf;
this.label = _label;
}
public String getAsof() {
return asof;
}
public Date getAsOfDate() {
return asOfDate;
}
public String getLabel() {
return label;
}
public String getState() {
return state;
}
public void setState(String state) {
this.state = state;
}
/**
* Compares two objects of type Baseline by their asOf date; needed for sorting the baseline list.
* Note: baselines created via the (label, asOf) constructor have no asOfDate and cannot be
* compared with this method.
*
* @param o the baseline to compare against
* @return a negative integer, zero, or a positive integer as this baseline's asOf date is
* before, equal to, or after the other's.
*/
@Override
public int compareTo(Baseline o) {
return asOfDate.compareTo(o.getAsOfDate());
}
@Override
public String toString() {
return "Baseline Label: " + label + "\tBaseline Date (asOf): " + asof + "\tState: " + state;
}
}
/*
* output of the api
* im viewissue --showLabels --xmlapi 79429
*
<Field name="MKSIssueLabels">
<List elementType="item">
<Item id="Version 1.0, BzR (Baselinegesetzt)" context="79429" displayId="Version 1.0, BzR (Baselinegesetzt)" modelType="im.Issue.Label">
<Field name="Label">
<Value dataType="string">Version 1.0, BzR (Baselinegesetzt)</Value>
</Field>
<Field name="AsOf">
<Value dataType="datetime">2010-06-09T12:37:16</Value>
</Field>
</Item>
<Item id="Version 1.1 BzA" context="79429" displayId="Version 1.1 BzA" modelType="im.Issue.Label">
<Field name="Label">
<Value dataType="string">Version 1.1 BzA</Value>
</Field>
<Field name="AsOf">
<Value dataType="datetime">2010-06-09T12:50:49</Value>
</Field>
</Item>
<Item id="Version 1.3 BzA" context="79429" displayId="Version 1.3 BzA" modelType="im.Issue.Label">
<Field name="Label">
<Value dataType="string">Version 1.3 BzA</Value>
</Field>
<Field name="AsOf">
<Value dataType="datetime">2010-06-09T15:18:12</Value>
</Field>
</Item>
<Item id="Aufwand FSL geschätzt" context="79429" displayId="Aufwand FSL geschätzt" modelType="im.Issue.Label">
<Field name="Label">
<Value dataType="string">Aufwand FSL geschätzt</Value>
</Field>
<Field name="AsOf">
<Value dataType="datetime">2010-06-10T07:33:31</Value>
</Field>
</Item>
</List>
</Field>
</WorkItem>
</WorkItems>
</Response>
*/
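// Usage sketch (an assumption, not part of the original file): sorting baselines by asOf date.
//
//   List<Baseline> baselines = new ArrayList<>();
//   for (Item item : items) {          // Items parsed from the im viewissue response above
//     baselines.add(new Baseline(item));
//   }
//   Collections.sort(baselines);       // uses compareTo above; requires asOfDate to be set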
|
to parity with the greenback on Thursday, as it has all week.
dollar closed at par, up slightly from C$1.0006 to the U.S.
dollar, or 99.94 U.S. cents, at Wednesday's close.
maintained that level for long.
week testing parity before retreating toward C$1.03 to the U.S.
ticks, between C$0.9995 and C$1.0023 to the U.S. dollar.
is closely tied to the health of the United States.
sector on Thursday helped support the view that the U.S.
put in a mixed performance against U.S. Treasuries.
|
Data on the functional status of the cardiorespiratory system are required to identify patients at risk for postoperative complications in the presence of lung diseases. Many factors influence the course of an operation and the postoperative period, so there is no gold standard or single parameter for predicting how the postoperative period will run. Patients with normal spirographic values (FEV1 more than 80% of the predicted value) and without cardiovascular comorbidity are at slight risk for postoperative complications. These patients do not need to be additionally examined. A less than one-month history of myocardial infarction, unstable angina pectoris, decompensated heart failure, and severe valvular disease are contraindications to planned surgery. The risk of cardiovascular events is high when signs of myocardial ischemia occur with low exercise (less than 4 MET). Stress echocardiography, loading tests, and radioisotopic studies are used as auxiliary techniques. FEV1 under 60%, ppo-FEV1 and ppo-DC under 40%, and VO2max under 15 ml/kg/min indicate a high risk for respiratory complications.
|
We do love a slice of Welsh psychedelia on this show, and that’s why The Pale Blue Dots are our MPFree today. Because…it’s psychedelic. And from Wales. And completely excellent.
Today’s MPFree is really going to wake you up…if you like sleeping in past 11:30. A duo following very closely in the footsteps of Mercury Prize-nominated heavy duo Royal Blood (congrats to the Royal Fathers btw – did you hear them in session with us back in February?), this is a short, sharp shock: a world of loud guitars, louder vocals, and very short running times.
Today’s MPFree is a bit of a goodie – as they all are of course – but this is a version of a Wings song that you won’t have heard anywhere in the world before.
Following his triumphant 6 Music Live performance with us at Maida Vale last year, we thought it only right that we’d take you to this exclusive track before anybody else. It’s an alternate version of ‘Letting Go’ that actually won’t be appearing on the Venus And Mars reissue, out on 3rd November, but is a really interesting take on a classic. Originally released in 1975 and recorded at Abbey Road, the song is about Linda – as were many tracks around this time - who also co-wrote the song. With this mix, the rock has been replaced by a more laid back disco feel, the bass pushed to the fore with the scaling back of the guitars. Nice! Download now.
Today’s free download comes from a very talented singer-songwriter who’s on show favourite Courtney Barnett’s Milk Records label – seriously, Courtney and Jen Cloher might change your musical life. This guy is still relatively new on the scene but, oh, what a voice. Download now!
He announced last week that his debut album will be out next year and he’ll be supporting Damon Albarn at Melbourne’s Palais Theatre on 12th December. He’s also been supporting Courtney Barnett on a few dates around the big island down south as well. Certainly one to keep your eye on for next year. You heard it here first.
Welcome to the land of free music, folks. Each day of the week on the Lauren Laverne show we play a track which, as it transpires, is also available from the internet music gods for free. This band hail from Russia’s second largest city, St Petersburg. Not really a place you’d go to for ethereal shoegaze, but this is a band that you simply have to keep your eye on.
We’re quick on the draw here at 6 Music. Killer Mike and El-P’s Run The Jewels returns for a second episode – announced this morning - and the whole album is available for free download.
The album Run The Jewels 2 is pretty naughty, so sensitive ears might want to give this a wide berth. Close Your Eyes features Rage Against The Machine’s Zack De La Rocha, and sees him in fine form. The album will be released properly on 3rd November, so download while you can. In other news, a fan-led Kickstarter campaign is underway to fund Meow The Jewels - a joke set of bonus packages where El-P is to remix the album using only cat noises. They have successfully reached their target so we’re looking forward to hearing the likes of Geoff Barrow, Just Blaze, Zola Jesus, Baauer, BOOTS, Prince Paul and Dan The Automator helping with production. El-P is currently auditioning cats!
Today’s MPFree is from synth pop duo Blue Hawaii from that there Montreal. If icy, intricate, laid back vibes are your thing, then this is the song for you.
Blue Hawaii is actually a side project of Raphaelle Standell and Alex “Agor” Cowan from Braids – check out last year’s Flourish // Perish. Whilst you’re there, check out Blue Hawaii’s album from last year, Untogether, played many times on this show. The duo are releasing a new one-off track called ‘Get Happy’; also included in the download package is ‘Get Happier’, which is like the former…but sped up. How could that not make you happy? The band said in a statement: “As the year progressed, we found our live show intensify but still had all these softer recordings which would never be released. Hence we present ’Get Happy’ / ’Get Happier’, where we explore both sides: the original demo and a fun, double-time edit made one day in August.” Download now, it’s guaranteed to relax your mind.
Paul Hartnoll joined us on the show yesterday to talk all about the demise of Orbital after 25 years together and to explain why he’s never going back. Brothers, hey? But the result of that closed door is another one opening, as we welcome Paul’s solo project, 8:58.
Paul explained how he’s always been obsessed with time, and this track – featuring the spooky opening monologue from Cillian Murphy – in fact the whole project, is certainly evidence of that. He says: “I’ve always had a thing for clocks and for time as a powerful force - but also the way it oppresses you.” Paul’s also been busy working on the Peaky Blinders soundtrack, where he met one Cillian Murphy, and the result is a sprawling six minute journey into the electronic netherworlds so familiar to Orbital fans. The self-titled album arrives February next year, when he’ll also be on tour, stopping off at Oxford on the 18th, Norwich on the 19th, Glasgow on the 20th, Manchester on the 21st, Bristol on the 24th and London on the 25th. Don’t say we’re not good to you.
Every now and then, a band just needs to change its name in order to establish its new course: The Quarrymen into The Beatles, The Rain into Oasis, Liberty into Liberty X. Joining that list is Temple Songs – played several times on this show - who’ve changed their name to Pink Teens.
In celebration of this change, they will be releasing a 9-track EP, “Good Luck, Pink Teens”, from which More Than I Can Bear is taken. Fans of the former will be pleased that the psych-tinged shoegaze remains, with lead singer Jolan Lewis just having a knack for finding great melodies amongst the wealth of sound behind him. They play Bleach in Brighton tonight (Tuesday). Download now, watch tonight.
Today’s MPFree is a beautiful and mellow track to soothe you into your Monday morning by L.A. based band Dive Index.
Spearheaded by composer and producer Will Thomas, Dive Index is a collaboration with artists such as vocalist Simone White, who features on Pattern Pieces. Dive Index use a unique combination of electronic and acoustic sounds to make deeply layered music.
Dive Index's fourth album Lost In The Pressure was released on 30th September 2014 via Neutral Music Records.
Today’s MPFree goes global with a stunner from African musicians Tamikrest, discovered in the same Glitterbeat Records vaults which gave us Wednesday’s ‘Robot Dub’ track (Be Ki Don by Schneider TM).
As a band originating from the Sahara, comparisons to fellow Saharan musicians Tinariwen are inevitable, but Tamikrest carve their own niche with funky bass riffs and bluesy rhythms.
This track ‘Djanegh Etoumast’ comes from their latest album Chatma, which was released in September. Never ones to shy away from political and social commentary, Chatma, which means ‘Sisters’ in Tamashek, is about the suffering and strength of the Tuareg women during war.
Their frontman Ousmane Ag Mossa recently performed alongside Patti Smith and Lenny Kaye at Burg Herzberg Festival, and the band will begin a UK tour later this month if you want to catch them live.
Today’s MPFree is a mesmerising demo from the woman of the moment – the very cool Jessie Ware.
South Londoner Jessie recorded the track "12" last year at the Red Bull Studio in London. Robin Hannibal, who is part of the Danish group Quadron, co-wrote and produced the song. Jessie released her second album Tough Love on 13th October. Even though "12" didn’t make it onto the new album, Jessie tells us all to "play it late and go kiss someone x".
Jessie released her beautiful debut record "Devotion" in 2012 and went on to be nominated for the Mercury Music Prize. She has previously worked with SBTRKT and Sampha, and started her music career as Jack Penate’s backing singer. Jessie will start her UK tour in January 2015.
The MPFree picks up the pace a little today with "minimalist head-smashing garage rock that's going to get you jumping up and down on your desk".
Their self-titled second album was released last year, and received quite a level of rotation on this show – certainly one to add to your collection if it’s currently missing. Now the Oxford, Mississippi duo return with their third outing, RIP THIS. A possible commentary on the state of the industry, or just an instruction, or even sarcasm – we’re not quite sure, but 'For Blood' is the first taster we’ve had. And now the free music gods have made it available for you right here. They’re also playing live – and some loudness is certainly to be expected – in London and Manchester on 13th and 14th November respectively.
Onwards we roll to the second free download of the week, and it’s a bit of a banger this one. Nemone sits in for Lauren today, and she’s bringing her Electric Ladyland-vibes to the show with this reworking of a classic by German producer Jolie Noir.
Katja aka Jolie Noir is based in Nuremberg and started DJing when she was just 18. Having grown up around 80s and early 90s hip hop, she now takes her love of the genre squarely to the dance floors with this 4/4 reworking of Grandmaster Flash’s White Lines. It’s a subtle remapping, but when the familiar elements start breaking through you know this has been made with a lot of love. Download now.
Welcome one and all to the returning MPFree. Following last week’s triumphant 6 Music Live gigs down at that there Maida Vale, there simply wasn’t room to squeeze the free download in. But now it’s back, bigger than ever, and with an absolute wonder to wash away the blues of the rainy week ahead.
Tan Vampires are a sextet from Dover, New Hampshire, USA, for fans of “Radiohead, Wilco, Local Natives, and Neil Young.” Personally, we see more My Morning Jacket: it’s a marching wave of brass, woodland folk and forlorn lyrics. They formed in 2008 and have released two albums: 2011's For Physical Fitness and 2013's Ephemera. The band is currently hard at work on a new album which should be out in the spring of next year. They also play a free gig in Manchester tonight if you’re from that side of the world.
It’s Friday! We’re bringing your Friday 12 hours early, so it’s actually late at night now, and what we need going into the early hours is a good dose of repetitive, and downright spooky, dance-laden music for the mind. That’s exactly what Firejosé is going to provide for us.
Windy City is taken from a four-track Red Mist EP from Estonian producer Firejosé (a.k.a. Mark Stukis). This is the only track available for free download, courtesy of the Cut Label which operates on a subscription basis. On his Facebook, he has quite the introduction: “Mark Stukis pondered into the electronic music world in the 90s, moving gradually from listening to records to going to raves, falling in love with drum n bass to DJing. After years of spinning and producing solely one genre it was time to evolve...”. And evolve he has, bringing us this dark slice of late night urban Burial-esque dirge.
Today’s free download comes from Albury, Australia from an artist who goes by the name of Lisa Mitchell…and absolutely lush it is too.
Everybody loves a spot of free music, so open your minds and get downloading now.
Did you ever make a mix tape for somebody? Did they ever make one for you? If so, we want to hear from you - click through to find out how to get in touch and see the track lists of other listeners' tapes.
|
class RunManager :
"""
1. definition
-manage job list
-manage cluster train server status
-manage cluster job and result
2. table
CONF_SERV_CLUSTER_INFO
COMMON_JOB_LIST_INFO
NN_VER_BARCHLIST_INFO
"""
def get_view_obj(self):
"""
get view data for net config
:return:
"""
pass
def set_view_obj(self, obj):
"""
set net config data edited on view
:param obj:
:return:
"""
pass
def regist_train_request(self):
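        # Presumably dispatches the training workflow to a worker queue:
        # the .delay() call below looks like Celery-style asynchronous
        # execution, so the request returns immediately while the job
        # runs on a cluster train server.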
task = WorkFlowTrainTask()
task.delay()
return None
def _find_proper_train_server(self):
return None
def _check_train_server_capa(self):
return None
def _get_next_job(self):
return None
def _check_cur_running_job(self):
return None
def _get_max_allow_job(self):
return None
def _exec_next_job(self):
return None
def set_job_init(self, nn_id, wf_ver):
return None
def set_job_start(self, nn_id, wf_ver):
return None
def set_job_finish(self, nn_id, wf_ver):
return None
def set_job_error(self, nn_id, wf_ver):
return None
|
def change_default_dict_keys(cls, primary_name=None, secondary_name=None, score_name=None):
if primary_name is not None:
cls._primary_default_name = primary_name
if secondary_name is not None:
cls._secondary_default_name = secondary_name
if score_name is not None:
cls._score_default_name = score_name
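# Note: this helper expects to be bound to a class (cls) that defines
# _primary_default_name / _secondary_default_name / _score_default_name.
# Presumably it is attached as a classmethod elsewhere, e.g.:
#
#     class ResultDict(dict):  # hypothetical host class
#         _primary_default_name = "primary"
#         _secondary_default_name = "secondary"
#         _score_default_name = "score"
#         change_default_dict_keys = classmethod(change_default_dict_keys)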
|
The Ministry of Culture and Fine Arts plans to seek UNESCO cultural world heritage status for the traditional music known as Khmer Arak, an art form they believe is in danger of disappearing, ministry officials said yesterday.
Thai Norak Sathya, secretary of state for the ministry, said his colleagues hope to put in the request this year. “We are taking care of the most endangered type [of music], such as Arak bands,” he said.
According to Norak Sathya, Arak is the oldest type of local music, and originates from the animist spiritual beliefs of early Cambodians.
It is performed with the aid of such musical instruments as the flute, drum, tro and chapei (two types of stringed instruments).
The music is a form of prayer that was thought to drive out illnesses. But as the country adapts to modern medicine, Arak is in danger of dying out, he said, and few young Cambodians know about it.
UNESCO was not available for comment yesterday.
|
def run(self, url):
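        # Sketch of the flow (helpers live on the enclosing scraper class):
        # fetch the page, wrap the HTML in a Selector, then turn each root
        # node into an entity, post-process it, and keep the non-empty ones.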
if not url:
return
response = self.make_request(url)
selector = Selector.from_html_text(response)
entities = []
for root in self.get_roots(selector):
entity = self.entity(root)
entity = self.process_entity(entity)
if entity:
entities.append(entity)
return EntityList(*entities)
|
/****************************************************************************************/
/* BODYINST.C */
/* */
/* Author: <NAME> */
/* Description: Actor body instance implementation. */
/* */
/* The contents of this file are subject to the Genesis3D Public License */
/* Version 1.01 (the "License"); you may not use this file except in */
/* compliance with the License. You may obtain a copy of the License at */
/* http://www.genesis3d.com */
/* */
/* Software distributed under the License is distributed on an "AS IS" */
/* basis, WITHOUT WARRANTY OF ANY KIND, either express or implied. See */
/* the License for the specific language governing rights and limitations */
/* under the License. */
/* */
/* The Original Code is Genesis3D, released March 25, 1999. */
/*Genesis3D Version 1.1 released November 15, 1999 */
/* Copyright (C) 1999 WildTangent, Inc. All Rights Reserved */
/* */
/****************************************************************************************/
#include <assert.h> //assert()
#include "body._h"
#include "bodyinst.h"
#include "ram.h"
#include "errorlog.h"
#include "strblock.h"
typedef struct geBodyInst
{
const geBody *BodyTemplate;
geBodyInst_Geometry ExportGeometry;
int LastLevelOfDetail;
geBodyInst_Index FaceCount;
} geBodyInst;
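/* Fold a per-axis scale S into the rotation part of M (column-wise),
   leaving the translation untouched. Used below to apply the actor's
   ScaleVector to the bone-to-camera / bone-to-world transforms. */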
void GENESISCC geBodyInst_PostScale(const geXForm3d *M,const geVec3d *S,geXForm3d *Scaled)
{
Scaled->AX = M->AX * S->X;
Scaled->BX = M->BX * S->X;
Scaled->CX = M->CX * S->X;
Scaled->AY = M->AY * S->Y;
Scaled->BY = M->BY * S->Y;
Scaled->CY = M->CY * S->Y;
Scaled->AZ = M->AZ * S->Z;
Scaled->BZ = M->BZ * S->Z;
Scaled->CZ = M->CZ * S->Z;
Scaled->Translation = M->Translation;
}
geBodyInst *GENESISCC geBodyInst_Create(const geBody *B)
{
geBodyInst *BI;
assert( B != NULL );
assert( geBody_IsValid(B) != GE_FALSE );
BI = GE_RAM_ALLOCATE_STRUCT(geBodyInst);
if (BI == NULL)
{
geErrorLog_Add(ERR_BODY_ENOMEM, NULL);
return NULL;
}
BI->BodyTemplate = B;
{
geBodyInst_Geometry *G = &(BI->ExportGeometry);
G->SkinVertexCount =0;
G->SkinVertexArray = NULL;
G->NormalCount = 0;
G->NormalArray = NULL;
G->FaceCount = (geBody_Index) 0;
G->FaceListSize = 0;
G->FaceList = NULL;
}
BI->LastLevelOfDetail = -1;
BI->FaceCount = 0;
return BI;
}
void GENESISCC geBodyInst_Destroy( geBodyInst **BI)
{
geBodyInst_Geometry *G;
assert( BI != NULL );
assert( *BI != NULL );
G = &( (*BI)->ExportGeometry );
if (G->SkinVertexArray != NULL )
{
geRam_Free( G->SkinVertexArray );
G->SkinVertexArray = NULL;
}
if (G->NormalArray != NULL )
{
geRam_Free( G->NormalArray );
G->NormalArray = NULL;
}
if (G->FaceList != NULL )
{
geRam_Free( G->FaceList );
G->FaceList = NULL;
}
geRam_Free( *BI );
*BI = NULL;
}
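/* Each exported triangle occupies 8 indices in the face list:
   1 face-type flag + 1 material index + 3 * (vertex index, normal index);
   see the face-list export loop in geBodyInst_GetGeometry below. */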
#define GE_BODYINST_FACELIST_SIZE_FOR_TRIANGLE (8)
static geBodyInst_Geometry *GENESISCC geBodyInst_GetGeometryPrep(
geBodyInst *BI,
int LevelOfDetail)
{
const geBody *B;
geBodyInst_Geometry *G;
assert( BI != NULL );
assert( geBody_IsValid(BI->BodyTemplate) != GE_FALSE );
B = BI->BodyTemplate;
G = &(BI->ExportGeometry);
assert( G != NULL );
if (G->SkinVertexCount != B->XSkinVertexCount)
{
if (G->SkinVertexArray!=NULL)
{
geRam_Free(G->SkinVertexArray);
}
G->SkinVertexArray = GE_RAM_ALLOCATE_ARRAY(geBodyInst_SkinVertex,B->XSkinVertexCount);
if ( G->SkinVertexArray == NULL )
{
geErrorLog_Add(ERR_BODY_ENOMEM, NULL);
G->SkinVertexCount = 0;
return NULL;
}
G->SkinVertexCount = B->XSkinVertexCount;
}
if (G->NormalCount != B->SkinNormalCount)
{
if (G->NormalArray!=NULL)
{
geRam_Free(G->NormalArray);
}
G->NormalArray = GE_RAM_ALLOCATE_ARRAY( geVec3d,B->SkinNormalCount);
if ( G->NormalArray == NULL )
{
geErrorLog_Add(ERR_BODY_ENOMEM, NULL);
G->NormalCount = 0;
return NULL;
}
G->NormalCount = B->SkinNormalCount;
}
if (BI->FaceCount != B->SkinFaces[GE_BODY_HIGHEST_LOD].FaceCount)
{
if (G->FaceList!=NULL)
{
geRam_Free(G->FaceList);
}
G->FaceListSize = sizeof(geBody_Index) *
B->SkinFaces[GE_BODY_HIGHEST_LOD].FaceCount *
GE_BODYINST_FACELIST_SIZE_FOR_TRIANGLE;
G->FaceList = GE_RAM_ALLOCATE_ARRAY(geBody_Index,
B->SkinFaces[GE_BODY_HIGHEST_LOD].FaceCount *
GE_BODYINST_FACELIST_SIZE_FOR_TRIANGLE);
if ( G->FaceList == NULL )
{
geErrorLog_Add(ERR_BODY_ENOMEM, NULL);
BI->FaceCount = 0;
return NULL;
}
BI->FaceCount = B->SkinFaces[GE_BODY_HIGHEST_LOD].FaceCount;
}
return G;
}
const geBodyInst_Geometry *GENESISCC geBodyInst_GetGeometry(
const geBodyInst *BI,
const geVec3d *ScaleVector,
const geXFArray *BoneTransformArray,
int LevelOfDetail,
const geCamera *Camera)
{
geBodyInst_Geometry *G;
const geBody *B;
geXForm3d *BoneXFArray;
int BoneXFCount;
geBody_Index BoneIndex;
geBoolean GottaUpdateFaces = GE_FALSE;
assert( BI != NULL );
assert( BoneTransformArray != NULL );
assert( geBody_IsValid(BI->BodyTemplate) != GE_FALSE );
G = geBodyInst_GetGeometryPrep((geBodyInst *)BI,LevelOfDetail);
if (G == NULL)
{
return NULL;
}
B = BI->BodyTemplate;
BoneXFArray = geXFArray_GetElements(BoneTransformArray,&BoneXFCount);
if ( BoneXFArray == NULL)
{
geErrorLog_Add(ERR_BODY_BONEXFARRAY, NULL);
return NULL;
}
if (BoneXFCount != B->BoneCount)
{
geErrorLog_Add(ERR_BODY_BONEXFARRAY, NULL);
return NULL;
}
{
int i,LevelOfDetailBit;
if (Camera != NULL)
{
// transform and project all appropriate points
geBody_XSkinVertex *S;
geBodyInst_SkinVertex *D;
geXForm3d ObjectToCamera; /* must persist across loop iterations: only recomputed when BoneIndex changes */
LevelOfDetailBit = 1 << LevelOfDetail;
BoneIndex = -1; // S->BoneIndex won't ever be this.
geVec3d_Set(&(G->Maxs), -GE_BODY_REALLY_BIG_NUMBER, -GE_BODY_REALLY_BIG_NUMBER, -GE_BODY_REALLY_BIG_NUMBER );
geVec3d_Set(&(G->Mins), GE_BODY_REALLY_BIG_NUMBER, GE_BODY_REALLY_BIG_NUMBER, GE_BODY_REALLY_BIG_NUMBER );
for (i=B->XSkinVertexCount,S=B->XSkinVertexArray,D=G->SkinVertexArray;
i>0;
i--,S++,D++)
{
if (S->BoneIndex!=BoneIndex)
{ //Keep XSkinVertexArray sorted by BoneIndex for best performance
BoneIndex = S->BoneIndex;
geXForm3d_Multiply( geCamera_GetCameraSpaceXForm(Camera),
&(BoneXFArray[BoneIndex]),
&ObjectToCamera);
geBodyInst_PostScale(&ObjectToCamera,ScaleVector,&ObjectToCamera);
}
if ( S->LevelOfDetailMask & LevelOfDetailBit ) /* bitwise mask test: is the vertex in this LOD? */
{
geVec3d *VecDestPtr = &(D->SVPoint);
geXForm3d_Transform( &(ObjectToCamera),
&(S->XPoint),VecDestPtr);
#ifdef ONE_OVER_Z_PIPELINE
geCamera_ProjectZ( Camera, VecDestPtr, VecDestPtr);
#else
geCamera_Project( Camera, VecDestPtr, VecDestPtr);
#endif
D->SVU = S->XU;
D->SVV = S->XV;
if (VecDestPtr->X > G->Maxs.X ) G->Maxs.X = VecDestPtr->X;
if (VecDestPtr->X < G->Mins.X ) G->Mins.X = VecDestPtr->X;
if (VecDestPtr->Y > G->Maxs.Y ) G->Maxs.Y = VecDestPtr->Y;
if (VecDestPtr->Y < G->Mins.Y ) G->Mins.Y = VecDestPtr->Y;
if (VecDestPtr->Z > G->Maxs.Z ) G->Maxs.Z = VecDestPtr->Z;
if (VecDestPtr->Z < G->Mins.Z ) G->Mins.Z = VecDestPtr->Z;
D->ReferenceBoneIndex=BoneIndex;
}
}
}
else
{
// transform all appropriate points
geBody_XSkinVertex *S;
geBodyInst_SkinVertex *D;
geXForm3d ObjectToWorld; /* must persist across loop iterations: only recomputed when BoneIndex changes */
LevelOfDetailBit = 1 << LevelOfDetail;
BoneIndex = -1; // S->BoneIndex won't ever be this.
geVec3d_Set(&(G->Maxs), -GE_BODY_REALLY_BIG_NUMBER, -GE_BODY_REALLY_BIG_NUMBER, -GE_BODY_REALLY_BIG_NUMBER );
geVec3d_Set(&(G->Mins), GE_BODY_REALLY_BIG_NUMBER, GE_BODY_REALLY_BIG_NUMBER, GE_BODY_REALLY_BIG_NUMBER );
for (i=B->XSkinVertexCount,S=B->XSkinVertexArray,D=G->SkinVertexArray;
i>0;
i--,S++,D++)
{
if (S->BoneIndex!=BoneIndex)
{ //Keep XSkinVertexArray sorted by BoneIndex for best performance
BoneIndex = S->BoneIndex;
geBodyInst_PostScale(&BoneXFArray[BoneIndex],ScaleVector,&ObjectToWorld);
}
if ( S->LevelOfDetailMask & LevelOfDetailBit )
{
geVec3d *VecDestPtr = &(D->SVPoint);
geXForm3d_Transform( &(ObjectToWorld),
&(S->XPoint),VecDestPtr);
D->SVU = S->XU;
D->SVV = S->XV;
if (VecDestPtr->X > G->Maxs.X ) G->Maxs.X = VecDestPtr->X;
if (VecDestPtr->X < G->Mins.X ) G->Mins.X = VecDestPtr->X;
if (VecDestPtr->Y > G->Maxs.Y ) G->Maxs.Y = VecDestPtr->Y;
if (VecDestPtr->Y < G->Mins.Y ) G->Mins.Y = VecDestPtr->Y;
if (VecDestPtr->Z > G->Maxs.Z ) G->Maxs.Z = VecDestPtr->Z;
if (VecDestPtr->Z < G->Mins.Z ) G->Mins.Z = VecDestPtr->Z;
D->ReferenceBoneIndex=BoneIndex;
}
}
}
{
geBody_Normal *S;
geVec3d *D;
// rotate all appropriate normals
for (i=B->SkinNormalCount,S=B->SkinNormalArray,D=G->NormalArray;
i>0;
i--,S++,D++)
{
if ( S->LevelOfDetailMask & LevelOfDetailBit )
{
geXForm3d_Rotate(&(BoneXFArray[S->BoneIndex]),
&(S->Normal),D);
}
}
}
}
if (LevelOfDetail != BI->LastLevelOfDetail)
{
// build face list to export
int i,j;
geBody_Index Count;
const geBody_Triangle *T;
geBody_Index *D;
Count = B->SkinFaces[LevelOfDetail].FaceCount;
for (i=0,T=B->SkinFaces[LevelOfDetail].FaceArray,D=G->FaceList;
i<Count;
i++,T++)
{
*D = GE_BODYINST_FACE_TRIANGLE;
D++;
*D = T->MaterialIndex;
D++;
for (j=0; j<3; j++)
{
*D = T->VtxIndex[j];
D++;
*D = T->NormalIndex[j];
D++;
}
}
assert( ((uint32)D) - ((uint32)G->FaceList) == (uint32)(G->FaceListSize) );
G->FaceCount = Count;
((geBodyInst *)BI)->LastLevelOfDetail = LevelOfDetail;
}
return G;
}
|
/*
* (C) Copyright 2013 ECMWF.
*
* This software is licensed under the terms of the Apache Licence Version 2.0
* which can be obtained at http://www.apache.org/licenses/LICENSE-2.0.
* In applying this licence, ECMWF does not waive the privileges and immunities
* granted to it by virtue of its status as an intergovernmental organisation
* nor does it submit to any jurisdiction.
*/
#include "atlas/library/FloatingPointExceptions.h"
#include <cfenv>
#include <csignal>
#include <cstring>
#include <iomanip>
#include <map>
#include <string>
#include <vector>
#include "eckit/config/LibEcKit.h"
#include "eckit/config/Resource.h"
#include "eckit/utils/StringTools.h"
#include "eckit/utils/Translator.h"
#include "atlas/library/config.h"
#include "atlas/runtime/Exception.h"
#include "atlas/runtime/Log.h"
#include "atlas/runtime/Trace.h"
static int atlas_feenableexcept( int excepts ) {
#if ATLAS_HAVE_FEENABLEEXCEPT
return ::feenableexcept( excepts );
#else
return 0;
#endif
}
#ifdef UNUSED
static int atlas_fedisableexcept( int excepts ) {
#if ATLAS_HAVE_FEDISABLEEXCEPT
return ::fedisableexcept( excepts );
#else
return 0;
#endif
}
#endif
namespace atlas {
namespace library {
static std::map<std::string, int> str_to_except = {{"FE_INVALID", FE_INVALID}, {"FE_INEXACT", FE_INEXACT},
{"FE_DIVBYZERO", FE_DIVBYZERO}, {"FE_OVERFLOW", FE_OVERFLOW},
{"FE_UNDERFLOW", FE_UNDERFLOW}, {"FE_ALL_EXCEPT", FE_ALL_EXCEPT}};
static std::map<int, std::string> except_to_str = {{FE_INVALID, "FE_INVALID"}, {FE_INEXACT, "FE_INEXACT"},
{FE_DIVBYZERO, "FE_DIVBYZERO"}, {FE_OVERFLOW, "FE_OVERFLOW"},
{FE_UNDERFLOW, "FE_UNDERFLOW"}, {FE_ALL_EXCEPT, "FE_ALL_EXCEPT"}};
static std::map<std::string, int> str_to_signal = {{"SIGINT", SIGINT}, {"SIGILL", SIGILL}, {"SIGABRT", SIGABRT},
{"SIGFPE", SIGFPE}, {"SIGKILL", SIGKILL}, {"SIGSEGV", SIGSEGV},
{"SIGTERM", SIGTERM}};
static std::map<int, std::string> signal_to_str = {{SIGINT, "SIGINT"}, {SIGILL, "SIGILL"}, {SIGABRT, "SIGABRT"},
{SIGFPE, "SIGFPE"}, {SIGKILL, "SIGKILL"}, {SIGSEGV, "SIGSEGV"},
{SIGTERM, "SIGTERM"}};
// ------------------------------------------------------------------------------------
class Signal {
// Not sure if this should be made public (in header file) just yet
public:
Signal();
Signal( int signum );
Signal( int signum, signal_action_t );
Signal( int signum, signal_handler_t signal_handler );
operator int() const { return signum_; }
int signum() const { return signum_; }
const std::string& code() const { return signal_to_str[signum_]; }
std::string str() const { return str_; }
const signal_handler_t& handler() const { return signal_action_.sa_handler; }
const struct sigaction* action() const { return &signal_action_; }
private:
friend std::ostream& operator<<( std::ostream&, const Signal& );
int signum_;
std::string str_;
struct sigaction signal_action_;
};
// ------------------------------------------------------------------------------------
class Signals {
// Not sure if this should be made public (in header file) just yet
private:
Signals() {}
public:
static Signals& instance();
void setSignalHandlers();
void setSignalHandler( const Signal& );
void restoreSignalHandler( int signum );
void restoreAllSignalHandlers();
const Signal& signal( int signum ) const;
private:
typedef std::map<int, Signal> registered_signals_t;
registered_signals_t registered_signals_;
};
// ------------------------------------------------------------------------------------
// ------------------------------------------------------------------------------------
[[noreturn]] void atlas_signal_handler( int signum, siginfo_t* si, void* /*unused*/ ) {
Signal signal = Signals::instance().signal( signum );
std::string signal_code;
if ( signum == SIGFPE ) {
switch ( si->si_code ) {
case FPE_FLTDIV:
signal_code = " [FE_DIVBYZERO]";
break;
case FPE_FLTINV:
signal_code = " [FE_INVALID]";
break;
case FPE_FLTOVF:
signal_code = " [FE_OVERFLOW]";
break;
case FPE_FLTUND:
signal_code = " [FE_UNDERFLOW]";
break;
case FPE_FLTRES:
signal_code = " [FE_INEXACT]";
break;
}
}
std::ostream& out = Log::error();
out << "\n"
<< "=========================================\n"
<< signal << signal_code << " (signal intercepted by atlas)\n";
out << "-----------------------------------------\n"
<< "BACKTRACE\n"
<< "-----------------------------------------\n"
<< backtrace() << "\n"
<< "=========================================\n"
<< std::endl;
Signals::instance().restoreSignalHandler( signum );
eckit::LibEcKit::instance().abort();
// Just in case we end up here, which normally we shouldn't.
std::_Exit( EXIT_FAILURE );
}
//----------------------------------------------------------------------------------------------------------------------
Signals& Signals::instance() {
static Signals signals;
return signals;
}
void Signals::restoreSignalHandler( int signum ) {
if ( registered_signals_.find( signum ) != registered_signals_.end() ) {
Log::debug() << "\n";
std::signal( signum, SIG_DFL );
Log::debug() << "Atlas restored default signal handler for signal " << std::setw( 7 ) << std::left
<< registered_signals_[signum].code() << " [" << registered_signals_[signum] << "]\n";
Log::debug() << std::endl;
registered_signals_.erase( signum );
}
}
void Signals::restoreAllSignalHandlers() {
Log::debug() << "\n";
for ( registered_signals_t::const_iterator it = registered_signals_.begin(); it != registered_signals_.end();
++it ) {
std::signal( it->first, SIG_DFL );
Log::debug() << "Atlas restored default signal handler for signal " << std::setw( 7 ) << std::left
<< it->second.code() << " [" << it->second.str() << "]\n";
}
Log::debug() << std::endl;
registered_signals_.clear();
}
const Signal& Signals::signal( int signum ) const {
return registered_signals_.at( signum );
}
std::ostream& operator<<( std::ostream& out, const Signal& signal ) {
out << signal.str();
return out;
}
void Signals::setSignalHandlers() {
Log::debug() << "\n";
setSignalHandler( SIGINT );
setSignalHandler( SIGILL );
setSignalHandler( SIGABRT );
setSignalHandler( SIGFPE );
setSignalHandler( SIGSEGV );
setSignalHandler( SIGTERM );
}
void Signals::setSignalHandler( const Signal& signal ) {
registered_signals_[signal] = signal;
sigaction( signal, signal.action(), nullptr );
Log::debug() << "Atlas registered signal handler for signal " << std::setw( 7 ) << std::left << signal.code()
<< " [" << signal << "]" << std::endl;
}
Signal::Signal() : signum_( 0 ), str_() {
signal_action_.sa_handler = SIG_DFL;
}
Signal::Signal( int signum ) : Signal( signum, atlas_signal_handler ) {}
Signal::Signal( int signum, signal_handler_t signal_handler ) : signum_( signum ), str_( strsignal( signum ) ) {
memset( &signal_action_, 0, sizeof( signal_action_ ) );
sigemptyset( &signal_action_.sa_mask );
signal_action_.sa_handler = signal_handler;
signal_action_.sa_flags = 0;
}
Signal::Signal( int signum, signal_action_t signal_action ) : signum_( signum ), str_( strsignal( signum ) ) {
memset( &signal_action_, 0, sizeof( signal_action_ ) );
sigemptyset( &signal_action_.sa_mask );
signal_action_.sa_sigaction = signal_action;
signal_action_.sa_flags = SA_SIGINFO;
}
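// Which exceptions get enabled is driven by the "atlasFPE" resource or the
// ATLAS_FPE environment variable. A sketch of expected values (assuming
// eckit's list parsing splits on commas):
//
//   export ATLAS_FPE="FE_INVALID,FE_DIVBYZERO"  # explicit list of codes
//   export ATLAS_FPE=true  # shorthand for FE_INVALID, FE_DIVBYZERO, FE_OVERFLOW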
void enable_floating_point_exceptions() {
// Following line gives runtime errors with Cray 8.6 due to compiler bug (but works with Cray 8.5 and Cray 8.7)
// std::vector<std::string> floating_point_exceptions = eckit::Resource<std::vector<std::string>>( "atlasFPE;$ATLAS_FPE", {"false"} );
// Instead, manually access environment
std::vector<std::string> floating_point_exceptions{"false"};
if ( ::getenv( "ATLAS_FPE" ) ) {
std::string env( ::getenv( "ATLAS_FPE" ) );
std::vector<std::string> tmp = eckit::Translator<std::string, std::vector<std::string>>()( env );
floating_point_exceptions = tmp;
// Above trick with "tmp" is what avoids the Cray 8.6 compiler bug
}
else {
floating_point_exceptions = eckit::Resource<std::vector<std::string>>( "atlasFPE", {"false"} );
}
{
bool _enable = false;
int _excepts = 0;
auto enable = [&]( int except ) {
_excepts |= except;
_enable = true;
Log::debug() << "Atlas enabled floating point exception " << except_to_str[except] << std::endl;
};
bool skip_map = false;
if ( floating_point_exceptions.size() == 1 ) {
std::string s = eckit::StringTools::lower( floating_point_exceptions[0] );
if ( s == "no" || s == "off" || s == "false" || s == "0" ) {
_enable = false;
skip_map = true;
}
else if ( s == "yes" || s == "on" || s == "true" || s == "1" ) {
enable( FE_INVALID );
enable( FE_DIVBYZERO );
enable( FE_OVERFLOW );
skip_map = true;
}
}
if ( not skip_map ) {
for ( auto& s : floating_point_exceptions ) {
if ( str_to_except.find( s ) == str_to_except.end() ) {
throw eckit::UserError(
s + " is not a valid floating point exception code. "
"Valid codes: [FE_INVALID,FE_INEXACT,FE_DIVBYZERO,FE_OVERFLOW,FE_ALL_EXCEPT]",
Here() );
}
enable( str_to_except[s] );
}
}
if ( _enable ) {
atlas_feenableexcept( _excepts );
}
}
}
void enable_atlas_signal_handler() {
if ( eckit::Resource<bool>( "atlasSignalHandler;$ATLAS_SIGNAL_HANDLER", false ) ) {
Signals::instance().setSignalHandlers();
}
}
} // namespace library
} // namespace atlas
|
# Copyright 2022 CodeNotary, Inc. All rights reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from tests.immuTestClient import ImmuTestClient
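# The two tests below drive the same scenario through ImmuTestClient (a
# project-local wrapper around the immudb client): start a transaction,
# insert ten rows via named parameters, then verify the row count is 10.
# They differ only in the statement separator handed to
# executeWithTransaction (the default multiline form vs. a single space).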
class TestSQLTransaction:
def test_sql_simple_transaction_multiline(self, wrappedClient: ImmuTestClient):
# BEFORE 1.2.0 we can't pass ; at the end of BEGIN TRANSACTION
countQuery = "COUNT(*)"
if(not wrappedClient.serverHigherOrEqualsToVersion("1.2.0")):
wrappedClient.transactionStart = "BEGIN TRANSACTION"
countQuery = "COUNT()"
tabname = wrappedClient.createTestTable(
"id INTEGER AUTO_INCREMENT", "tester VARCHAR[10]", "PRIMARY KEY id")
params = dict()
queries = []
for index in range(0, 10):
paramName = "tester{index}".format(index=index)
params[paramName] = str(index)
queries.append(wrappedClient.prepareInsertQuery(
tabname, ["tester"], ["@" + paramName]))
wrappedClient.executeWithTransaction(params, queries)
counted = wrappedClient.simpleSelect(tabname, [countQuery], dict())
assert(len(counted) > 0)
assert(counted[0][0] == 10)
def test_sql_simple_transaction_flat(self, wrappedClient: ImmuTestClient):
# BEFORE 1.2.0 we can't pass ; at the end of BEGIN TRANSACTION
countQuery = "COUNT(*)"
if(not wrappedClient.serverHigherOrEqualsToVersion("1.2.0")):
wrappedClient.transactionStart = "BEGIN TRANSACTION"
countQuery = "COUNT()"
tabname = wrappedClient.createTestTable(
"id INTEGER AUTO_INCREMENT", "tester VARCHAR[10]", "PRIMARY KEY id")
params = dict()
queries = []
for index in range(0, 10):
paramName = "tester{index}".format(index=index)
params[paramName] = str(index)
queries.append(wrappedClient.prepareInsertQuery(
tabname, ["tester"], ["@" + paramName]))
wrappedClient.executeWithTransaction(params, queries, separator=" ")
counted = wrappedClient.simpleSelect(tabname, [countQuery], dict())
assert(len(counted) > 0)
assert(counted[0][0] == 10)
|
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
#pragma once
#include "core/codegen/common/common.h"
#include "core/codegen/common/registry.h"
#include "core/common/common.h"
#include "core/framework/tensor.h"
#include <tvm/tvm.h>
namespace onnxruntime {
namespace tvm_codegen {
using CoordTransFunc = std::function<tvm::Array<tvm::Expr>(const tvm::Array<tvm::Expr>&)>;
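// A CoordTransFunc maps a multi-dimensional index expression in one layout to
// the corresponding index in another; WeightLayout uses a pair of them (below)
// to translate between nominal (original) and actual (transformed) coordinates.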
// WeightLayout is a data layout transformer for weights/initializers
class WeightLayout {
public:
// Static function to return unique string as a key
static const std::string GetKey(
const std::string& name,
ONNX_NAMESPACE::TensorProto_DataType proto_type,
int input_dim,
float pad_zero);
public:
WeightLayout(
const std::string& name,
ONNX_NAMESPACE::TensorProto_DataType proto_type,
int input_dim,
float pad_zero);
virtual ~WeightLayout() = default;
// Return a CoordTransFunc from actual (transformed) coordinate to nominal (original) coordinate
virtual CoordTransFunc ToNominal(const tvm::Tensor& X) const = 0;
// Return a CoordTransFunc from nominal (original) coordinate to actual (transformed) coordinate
virtual CoordTransFunc ToActual(const tvm::Tensor& X) const = 0;
// Return actual (transformed) shape in tvm::Array (tvm_codegen)
virtual tvm::Array<tvm::Expr> ToActualShape(const tvm::Tensor& X) const = 0;
// Return actual (transformed) shape in vector<int64_t> (ort)
virtual std::vector<int64_t> ToActualShape(const Tensor* X) const = 0;
// Create Layout Marshalling op in outputs
void CreateLayoutMarshallingTVMOp(tvm::Array<tvm::Tensor>& inputs,
tvm::Array<tvm::Tensor>& outputs) const;
// Layout name
const std::string& Name() const;
protected:
std::string name_;
ONNX_NAMESPACE::TensorProto_DataType proto_type_;
int input_dim_;
float pad_zero_;
private:
ORT_DISALLOW_COPY_ASSIGNMENT_AND_MOVE(WeightLayout);
};
// Weight Layout Registry is a registry that holds all WeightLayouts
using WeightLayoutRegistry = codegen::RegistryBase<WeightLayout>;
} // namespace tvm_codegen
} // namespace onnxruntime
|
/**
* <p>
 * A builder class to create and open new databases and individual collections.
 * It has several static factory methods.
 * Method names depend on the type of storage they open.
 * {@code DBMaker} is typically used this way
* </p>
* <pre>
* DB db = DBMaker
* .memoryDB() //static method
* .transactionDisable() //configuration option
* .make() //opens db
* </pre>
*
*
*
* @author Jan Kotek
*/
public final class DBMaker{
protected static final Logger LOG = Logger.getLogger(DBMaker.class.getName());
protected static final String TRUE = "true";
protected interface Keys{
String cache = "cache";
String cacheSize = "cacheSize";
String cache_disable = "disable";
String cache_hashTable = "hashTable";
String cache_hardRef = "hardRef";
String cache_softRef = "softRef";
String cache_weakRef = "weakRef";
String cache_lru = "lru";
String cacheExecutorPeriod = "cacheExecutorPeriod";
String file = "file";
String metrics = "metrics";
String metricsLogInterval = "metricsLogInterval";
String volume = "volume";
String volume_fileChannel = "fileChannel";
String volume_raf = "raf";
String volume_mmapfIfSupported = "mmapfIfSupported";
String volume_mmapf = "mmapf";
String volume_byteBuffer = "byteBuffer";
String volume_directByteBuffer = "directByteBuffer";
String volume_unsafe = "unsafe";
String fileMmapCleanerHack = "fileMmapCleanerHack";
String fileMmapPreclearDisable = "fileMmapPreclearDisable";
String fileLockDisable = "fileLockDisable";
String fileLockHeartbeatEnable = "fileLockHeartbeatEnable";
String lockScale = "lockScale";
String lock = "lock";
String lock_readWrite = "readWrite";
String lock_single = "single";
String lock_threadUnsafe = "threadUnsafe";
String store = "store";
String store_direct = "direct";
String store_wal = "wal";
String store_append = "append";
String store_heap = "heap";
String store_archive = "archive";
String storeExecutorPeriod = "storeExecutorPeriod";
String transactionDisable = "transactionDisable";
String asyncWrite = "asyncWrite";
String asyncWriteFlushDelay = "asyncWriteFlushDelay";
String asyncWriteQueueSize = "asyncWriteQueueSize";
String deleteFilesAfterClose = "deleteFilesAfterClose";
String closeOnJvmShutdown = "closeOnJvmShutdown";
String readOnly = "readOnly";
String compression = "compression";
String compression_lzf = "lzf";
String encryptionKey = "encryptionKey";
String encryption = "encryption";
String encryption_xtea = "xtea";
String checksum = "checksum";
String freeSpaceReclaimQ = "freeSpaceReclaimQ";
String commitFileSyncDisable = "commitFileSyncDisable";
String snapshots = "snapshots";
String strictDBGet = "strictDBGet";
String fullTx = "fullTx";
String allocateStartSize = "allocateStartSize";
String allocateIncrement = "allocateIncrement";
String allocateRecidReuseDisable = "allocateRecidReuseDisable";
}
/**
* Creates new in-memory database which stores all data on heap without serialization.
* This mode should be very fast, but data will affect Garbage Collector the same way as traditional Java Collections.
*/
public static Maker heapDB(){
return new Maker()._newHeapDB();
}
/** @deprecated method renamed, prefix removed, use {@link DBMaker#heapDB()} */
public static Maker newHeapDB(){
return heapDB();
}
/**
* Creates new in-memory database. Changes are lost after JVM exits.
* This option serializes data into {@code byte[]},
* so they are not affected by Garbage Collector.
*/
public static Maker memoryDB(){
return new Maker()._newMemoryDB();
}
/** @deprecated method renamed, prefix removed, use {@link DBMaker#memoryDB()} */
public static Maker newMemoryDB(){
return memoryDB();
}
/**
* <p>
* Creates new in-memory database. Changes are lost after JVM exits.
* </p><p>
* This will use {@code DirectByteBuffer} outside of HEAP, so Garbage Collector is not affected
 * You should increase the amount of direct memory with
* {@code -XX:MaxDirectMemorySize=10G} JVM param
* </p>
*/
public static Maker memoryDirectDB(){
return new Maker()._newMemoryDirectDB();
}
/** @deprecated method renamed, prefix removed, use {@link DBMaker#memoryDirectDB()} */
public static Maker newMemoryDirectDB(){
return memoryDirectDB();
}
/**
* <p>
* Creates new in-memory database. Changes are lost after JVM exits.
* </p><p>
* This will use {@code sun.misc.Unsafe}. It uses direct-memory access and avoids boundary checking.
 * It is a bit faster compared to {@code DirectByteBuffer}, but can cause a JVM crash in case of error.
* </p><p>
 * If {@code sun.misc.Unsafe} is not available for some reason, MapDB will log a warning and fall back to a
 * {@code DirectByteBuffer} based in-memory store without throwing an exception.
* </p>
*/
public static Maker memoryUnsafeDB(){
return new Maker()._newMemoryUnsafeDB();
}
/** @deprecated method renamed, prefix removed, use {@link DBMaker#memoryUnsafeDB()} */
public static Maker newMemoryUnsafeDB(){
return memoryUnsafeDB();
}
/**
 * Creates or opens an append-only database stored in a file.
 * This database uses a different format than the usual file DB.
*
* @param file
* @return maker
*/
public static Maker appendFileDB(File file) {
return new Maker()._newAppendFileDB(file);
}
public static Maker archiveFileDB(File file) {
return new Maker()._newArchiveFileDB(file);
}
/** @deprecated method renamed, prefix removed, use {@link DBMaker#appendFileDB(File)} */
public static Maker newAppendFileDB(File file) {
return appendFileDB(file);
}
/**
* <p>
* Create new BTreeMap backed by temporary file storage.
* This is quick way to create 'throw away' collection.
* </p><p>
*
* Storage is created in temp folder and deleted on JVM shutdown
* </p>
*/
public static <K,V> BTreeMap<K,V> tempTreeMap(){
return newTempFileDB()
.deleteFilesAfterClose()
.closeOnJvmShutdown()
.transactionDisable()
.make()
.treeMapCreate("temp")
.closeEngine()
.make();
}
/** @deprecated method renamed, prefix removed, use {@link DBMaker#tempTreeMap()} */
public static <K,V> BTreeMap<K,V> newTempTreeMap(){
return tempTreeMap();
}
/**
* <p>
* Create new HTreeMap backed by temporary file storage.
* This is quick way to create 'throw away' collection.
* </p><p>
*
* Storage is created in temp folder and deleted on JVM shutdown
* </p>
*/
public static <K,V> HTreeMap<K,V> tempHashMap(){
return newTempFileDB()
.deleteFilesAfterClose()
.closeOnJvmShutdown()
.transactionDisable()
.make()
.hashMapCreate("temp")
.closeEngine()
.make();
}
/** @deprecated method renamed, prefix removed, use {@link DBMaker#tempHashMap()} */
public static <K,V> HTreeMap<K,V> newTempHashMap() {
return tempHashMap();
}
/**
* <p>
* Create new TreeSet backed by temporary file storage.
* This is quick way to create 'throw away' collection.
* </p><p>
*
* Storage is created in temp folder and deleted on JVM shutdown
* </p>
*/
public static <K> NavigableSet<K> tempTreeSet(){
return newTempFileDB()
.deleteFilesAfterClose()
.closeOnJvmShutdown()
.transactionDisable()
.make()
.treeSetCreate("temp")
.standalone()
.make();
}
/** @deprecated method renamed, prefix removed, use {@link DBMaker#tempTreeSet()} */
public static <K> NavigableSet<K> newTempTreeSet(){
return tempTreeSet();
}
/**
* <p>
* Create new HashSet backed by temporary file storage.
* This is quick way to create 'throw away' collection.
* </p><p>
*
* Storage is created in temp folder and deleted on JVM shutdown
* </p>
*/
public static <K> Set<K> tempHashSet(){
return newTempFileDB()
.deleteFilesAfterClose()
.closeOnJvmShutdown()
.transactionDisable()
.make()
.hashSetCreate("temp")
.closeEngine()
.make();
}
/** @deprecated method renamed, prefix removed, use {@link DBMaker#tempHashSet()} */
public static <K> Set<K> newTempHashSet(){
return tempHashSet();
}
/**
* Creates new database in temporary folder.
*/
public static Maker tempFileDB() {
try {
return newFileDB(File.createTempFile("mapdb-temp","db"));
} catch (IOException e) {
throw new IOError(e);
}
}
/** @deprecated method renamed, prefix removed, use {@link DBMaker#tempFileDB()} */
public static Maker newTempFileDB(){
return tempFileDB();
}
/**
* Creates new off-heap cache with maximal size in GBs.
* Entries are removed from cache in most-recently-used fashion
* if store becomes too big.
*
* This method uses off-heap direct ByteBuffers. See {@link java.nio.ByteBuffer#allocateDirect(int)}
*
* @param size maximal size of off-heap store in gigabytes.
* @return map
*
* @deprecated TODO this method is going to be replaced by something
*/
public static <K,V> HTreeMap<K,V> newCacheDirect(double size){
return DBMaker
.memoryDirectDB()
.transactionDisable()
.make()
.hashMapCreate("cache")
.expireStoreSize(size)
.counterEnable()
.make();
}
/**
* Creates new cache with maximal size in GBs.
* Entries are removed from cache in most-recently-used fashion
* if store becomes too big.
*
* This cache uses on-heap {@code byte[]}, but does not affect GC since objects are serialized into binary form.
* This method uses ByteBuffers backed by on-heap byte[]. See {@link java.nio.ByteBuffer#allocate(int)}
*
 * @param size maximal size of the store in gigabytes.
* @return map
* @deprecated TODO this method is going to be replaced by something
*/
public static <K,V> HTreeMap<K,V> newCache(double size){
return DBMaker
.memoryDB()
.transactionDisable()
.make()
.hashMapCreate("cache")
.expireStoreSize(size)
.counterEnable()
.make();
}
/** Creates or opens a database stored in a file. */
public static Maker fileDB(File file){
return new Maker(file);
}
/** @deprecated method renamed, prefix removed, use {@link DBMaker#fileDB(File)} */
public static Maker newFileDB(File file){
return fileDB(file);
}
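    // For illustration only (option choice is an example, not from the
    // original docs): a typical file-backed setup looks like
    //
    //   DB db = DBMaker.fileDB(new File("my.db"))
    //           .fileMmapEnableIfSupported()
    //           .closeOnJvmShutdown()
    //           .make();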
public static final class Maker {
protected Fun.RecordCondition cacheCondition;
protected ScheduledExecutorService executor;
protected ScheduledExecutorService metricsExecutor;
protected ScheduledExecutorService cacheExecutor;
protected ScheduledExecutorService storeExecutor;
protected ClassLoader serializerClassLoader;
protected Map<String,ClassLoader> serializerClassLoaderRegistry;
protected Properties props = new Properties();
/** use static factory methods, or make subclass */
protected Maker(){}
protected Maker(File file) {
props.setProperty(Keys.file, file.getPath());
}
public Maker _newHeapDB(){
props.setProperty(Keys.store,Keys.store_heap);
return this;
}
public Maker _newMemoryDB(){
props.setProperty(Keys.volume,Keys.volume_byteBuffer);
return this;
}
public Maker _newMemoryDirectDB() {
props.setProperty(Keys.volume,Keys.volume_directByteBuffer);
return this;
}
public Maker _newMemoryUnsafeDB() {
props.setProperty(Keys.volume,Keys.volume_unsafe);
return this;
}
public Maker _newAppendFileDB(File file) {
props.setProperty(Keys.file, file.getPath());
props.setProperty(Keys.store, Keys.store_append);
return this;
}
public Maker _newArchiveFileDB(File file) {
props.setProperty(Keys.file, file.getPath());
props.setProperty(Keys.store, Keys.store_archive);
return this;
}
public Maker _newFileDB(File file){
props.setProperty(Keys.file, file.getPath());
return this;
}
/**
* Enables background executor
*
* @return this builder
*/
public Maker executorEnable(){
executor = Executors.newScheduledThreadPool(4);
return this;
}
/**
* <p>
* Transaction journal is enabled by default
* You must call <b>DB.commit()</b> to save your changes.
 * It is possible to disable the transaction journal for better write performance.
 * In this case all integrity checks are sacrificed for speed.
* </p><p>
* If transaction journal is disabled, all changes are written DIRECTLY into store.
* You must call DB.close() method before exit,
* otherwise your store <b>WILL BE CORRUPTED</b>
* </p>
*
* @return this builder
*/
public Maker transactionDisable(){
props.put(Keys.transactionDisable, TRUE);
return this;
}
/**
* Enable metrics, log at info level every 10 SECONDS
*
* @return this builder
*/
public Maker metricsEnable(){
return metricsEnable(CC.DEFAULT_METRICS_LOG_PERIOD);
}
public Maker metricsEnable(long metricsLogPeriod) {
props.put(Keys.metrics, TRUE);
props.put(Keys.metricsLogInterval, ""+metricsLogPeriod);
return this;
}
/**
* Enable separate executor for metrics.
*
* @return this builder
*/
public Maker metricsExecutorEnable(){
return metricsExecutorEnable(
Executors.newSingleThreadScheduledExecutor());
}
/**
* Enable separate executor for metrics.
*
* @return this builder
*/
public Maker metricsExecutorEnable(ScheduledExecutorService metricsExecutor){
this.metricsExecutor = metricsExecutor;
return this;
}
/**
* Enable separate executor for cache.
*
* @return this builder
*/
public Maker cacheExecutorEnable(){
return cacheExecutorEnable(
Executors.newSingleThreadScheduledExecutor());
}
/**
* Enable separate executor for cache.
*
* @return this builder
*/
public Maker cacheExecutorEnable(ScheduledExecutorService metricsExecutor){
this.cacheExecutor = metricsExecutor;
return this;
}
/**
* Sets interval in which executor should check cache
*
* @param period in ms
* @return this builder
*/
public Maker cacheExecutorPeriod(long period){
props.put(Keys.cacheExecutorPeriod, ""+period);
return this;
}
/**
* Enable separate executor for store (async write, compaction)
*
* @return this builder
*/
public Maker storeExecutorEnable(){
return storeExecutorEnable(
Executors.newScheduledThreadPool(4));
}
/**
* Enable separate executor for cache.
*
* @return this builder
*/
public Maker storeExecutorEnable(ScheduledExecutorService metricsExecutor){
this.storeExecutor = metricsExecutor;
return this;
}
/**
* Sets interval in which executor should check cache
*
* @param period in ms
* @return this builder
*/
public Maker storeExecutorPeriod(long period){
props.put(Keys.storeExecutorPeriod, ""+period);
return this;
}
/**
* Install callback condition, which decides if some record is to be included in cache.
* Condition should return {@code true} for every record which should be included
*
 * This could, for example, be useful to include only BTree Directory Nodes and leave values and Leaf nodes outside of the cache.
*
* !!! Warning:!!!
*
 * Cache requires **consistent** true or false. Failing to do so will result in an inconsistent cache and possible data corruption.
* Condition is also executed several times, so it must be very fast
*
* You should only use very simple logic such as {@code value instanceof SomeClass}.
*
* @return this builder
*/
public Maker cacheCondition(Fun.RecordCondition cacheCondition){
this.cacheCondition = cacheCondition;
return this;
}
/**
 * Disable cache if enabled. Cache is disabled by default, so this method no longer has a purpose.
*
* @return this builder
* @deprecated cache is disabled by default
*/
public Maker cacheDisable(){
props.put(Keys.cache,Keys.cache_disable);
return this;
}
/**
* <p>
* Enables unbounded hard reference cache.
* This cache is good if you have lot of available memory.
* </p><p>
*
* All fetched records are added to HashMap and stored with hard reference.
* To prevent OutOfMemoryExceptions MapDB monitors free memory,
 * if it is below 25%, the cache is cleared.
* </p>
*
* @return this builder
*/
public Maker cacheHardRefEnable(){
props.put(Keys.cache, Keys.cache_hardRef);
return this;
}
/**
* <p>
 * Set cache size. Interpretation depends on cache type.
 * For fixed size caches (such as FixedHashTable cache) it is the maximal number of items in cache.
 * </p><p>
 *
 * For unbounded caches (such as HardRef cache) it is the initial capacity of the underlying table (HashMap).
 * </p><p>
 *
 * Default cache size is 2048.
 * </p>
*
* @param cacheSize new cache size
* @return this builder
*/
public Maker cacheSize(int cacheSize){
props.setProperty(Keys.cacheSize, "" + cacheSize);
return this;
}
/**
* <p>
* Fixed size cache which uses hash table.
* Is thread-safe and requires only minimal locking.
* Items are randomly removed and replaced by hash collisions.
* </p><p>
*
* This is simple, concurrent, small-overhead, random cache.
* </p>
*
* @return this builder
*/
public Maker cacheHashTableEnable(){
props.put(Keys.cache, Keys.cache_hashTable);
return this;
}
/**
* <p>
* Fixed size cache which uses hash table.
* Is thread-safe and requires only minimal locking.
* Items are randomly removed and replaced by hash collisions.
* </p><p>
*
* This is simple, concurrent, small-overhead, random cache.
* </p>
*
* @param cacheSize new cache size
* @return this builder
*/
public Maker cacheHashTableEnable(int cacheSize){
props.put(Keys.cache, Keys.cache_hashTable);
props.setProperty(Keys.cacheSize, "" + cacheSize);
return this;
}
/**
* Enables unbounded cache which uses <code>WeakReference</code>.
* Items are removed from cache by Garbage Collector
*
* @return this builder
*/
public Maker cacheWeakRefEnable(){
props.put(Keys.cache, Keys.cache_weakRef);
return this;
}
/**
* Enables unbounded cache which uses <code>SoftReference</code>.
* Items are removed from cache by Garbage Collector
*
* @return this builder
*/
public Maker cacheSoftRefEnable(){
props.put(Keys.cache, Keys.cache_softRef);
return this;
}
/**
* Enables Least Recently Used cache. It is fixed size cache and it removes less used items to make space.
*
* @return this builder
*/
public Maker cacheLRUEnable(){
props.put(Keys.cache,Keys.cache_lru);
return this;
}
/**
* <p>
* Disable locks. This will make MapDB thread unsafe. It will also disable any background thread workers.
* </p><p>
*
 * <b>WARNING: </b> this option is dangerous. With locks disabled, multi-threaded access could cause data corruption and crashes.
* MapDB does not have fail-fast iterator or any other means of protection
* </p>
*
* @return this builder
*/
public Maker lockDisable() {
props.put(Keys.lock, Keys.lock_threadUnsafe);
return this;
}
/**
* <p>
* Disables double read-write locks and enables single read-write locks.
* </p><p>
*
 * This type of locking has smaller overhead and can be faster in mostly-write scenarios.
* </p>
* @return this builder
*/
public Maker lockSingleEnable() {
props.put(Keys.lock, Keys.lock_single);
return this;
}
/**
* <p>
* Sets concurrency scale. More locks means better scalability with multiple cores, but also higher memory overhead
* </p><p>
*
* This value has to be power of two, so it is rounded up automatically.
* </p>
*
* @return this builder
*/
public Maker lockScale(int scale) {
props.put(Keys.lockScale, "" + scale);
return this;
}
/**
*@deprecated renamed to {@link #fileMmapEnable()}
*/
public Maker mmapFileEnable() {
return fileMmapEnable();
}
/**
* <p>
* Enables Memory Mapped Files, much faster storage option. However on 32bit JVM this mode could corrupt
* your DB thanks to 4GB memory addressing limit.
* </p><p>
*
* You may experience {@code java.lang.OutOfMemoryError: Map failed} exception on 32bit JVM, if you enable this
* mode.
* </p>
*/
public Maker fileMmapEnable() {
assertNotInMemoryVolume();
props.setProperty(Keys.volume, Keys.volume_mmapf);
return this;
}
/**
* <p>
 * Enables cleaner hack to close mmaped files at DB.close(), rather than waiting for Garbage Collection.
* See relevant <a href="http://bugs.java.com/view_bug.do?bug_id=4724038">JVM bug</a>.
* Please note that this option closes files, but could cause all sort of problems,
* including JVM crash.
* </p><p>
* Memory mapped files in Java are not unmapped when file closes.
* Unmapping happens when {@code DirectByteBuffer} is garbage collected.
* Delay between file close and GC could be very long, possibly even hours.
* This causes file descriptor to remain open, causing all sort of problems:
* </p><p>
* On Windows opened file can not be deleted or accessed by different process.
* It remains locked even after JVM process exits until Windows restart.
* This is causing problems during compaction etc.
* </p><p>
* On Linux (and other systems) opened files consumes file descriptor. Eventually
* JVM process could run out of available file descriptors (couple of thousands)
* and would be unable to open new files or sockets.
* </p><p>
* On Oracle and OpenJDK JVMs there is option to unmap files after closing.
* However it is not officially supported and could result in all sort of strange behaviour.
* In MapDB it was linked to <a href="https://github.com/jankotek/mapdb/issues/442">JVM crashes</a>,
* and was disabled by default in MapDB 2.0.
* </p>
* @return this builder
*/
public Maker fileMmapCleanerHackEnable() {
props.setProperty(Keys.fileMmapCleanerHack,TRUE);
return this;
}
/**
* <p>
 * Disables preclear workaround for JVM crash. This will speed up inserts on mmap files, if store is expanded.
 * As a side effect, the JVM might crash if there is not enough free space.
* TODO document more, links
* </p>
* @return this builder
*/
public Maker fileMmapPreclearDisable() {
props.setProperty(Keys.fileMmapPreclearDisable,TRUE);
return this;
}
/**
* <p>
* MapDB needs exclusive lock over storage file it is using.
* When single file is used by multiple DB instances at the same time, storage file gets quickly corrupted.
* To prevent multiple opening MapDB uses {@link FileChannel#lock()}.
* If file is already locked, opening it fails with {@link DBException.FileLocked}
* </p><p>
* In some cases file might remain locked, if DB is not closed correctly or JVM crashes.
 * This option disables exclusive file locking. Use it if you have trouble reopening files.
*
* </p>
* @return this builder
*/
public Maker fileLockDisable() {
props.setProperty(Keys.fileLockDisable,TRUE);
return this;
}
/**
* <p>
* MapDB needs exclusive lock over storage file it is using.
* When single file is used by multiple DB instances at the same time, storage file gets quickly corrupted.
* To prevent multiple opening MapDB uses {@link FileChannel#lock()}.
* If file is already locked, opening it fails with {@link DBException.FileLocked}
* </p><p>
* In some cases file might remain locked, if DB is not closed correctly or JVM crashes.
* This option replaces {@link FileChannel#lock()} exclusive file locking with {@code *.lock} file.
* This file is periodically updated by background thread. If JVM dies, the lock file gets old
 * and eventually expires. Use it if you have trouble reopening files.
* </p><p>
* This method was taken from <a href="http://www.h2database.com/">H2 database</a>.
* It was originally written by Thomas Mueller and modified for MapDB purposes.
* </p><p>
* Original description from H2 documentation:
* </p><ul>
* <li>If the lock file does not exist, it is created (using the atomic operation <code>File.createNewFile</code>).
* Then, the process waits a little bit (20 ms) and checks the file again. If the file was changed during this time,
* the operation is aborted. This protects against a race condition when one process deletes the lock file just after
* another one create it, and a third process creates the file again. It does not occur if there are only
* two writers. </li>
* <li> If the file can be created, a random number is inserted together with the locking method ('file').
* Afterwards, a watchdog thread is started that checks regularly (every second once by default)
* if the file was deleted or modified by another (challenger) thread / process. Whenever that occurs,
* the file is overwritten with the old data. The watchdog thread runs with high priority so that a change
* to the lock file does not get through undetected even if the system is very busy. However, the watchdog
* thread does use very little resources (CPU time), because it waits most of the time. Also, the watchdog
* only reads from the hard disk and does not write to it. </li>
* <li> If the lock file exists and was recently
* modified, the process waits for some time (up to two seconds). If it was still changed, an exception is thrown
* (database is locked). This is done to eliminate race conditions with many concurrent writers. Afterwards,
* the file is overwritten with a new version (challenge). After that, the thread waits for 2 seconds.
 * If there is a watchdog thread protecting the file, it will overwrite the change and this process will fail
* to lock the database. However, if there is no watchdog thread, the lock file will still be as written by
* this thread. In this case, the file is deleted and atomically created again. The watchdog thread is started
* in this case and the file is locked. </li>
* </ul>
* <p> This algorithm is tested with over 100 concurrent threads. In some cases, when there are many concurrent
* threads trying to lock the database, they block each other (meaning the file cannot be locked by any of them)
* for some time. However, the file never gets locked by two threads at the same time. However using that many
* concurrent threads / processes is not the common use case. Generally, an application should throw an error
* to the user if it cannot open a database, and not try again in a (fast) loop. </p>
*
* @return this builder
*/
public Maker fileLockHeartbeatEnable() {
props.setProperty(Keys.fileLockHeartbeatEnable,TRUE);
return this;
}
private void assertNotInMemoryVolume() {
if(Keys.volume_byteBuffer.equals(props.getProperty(Keys.volume)) ||
Keys.volume_directByteBuffer.equals(props.getProperty(Keys.volume)))
throw new IllegalArgumentException("Can not enable mmap file for in-memory store");
}
/**
*
* @return this
* @deprecated mapdb 2.0 uses single file, no partial mapping possible
*/
public Maker mmapFileEnablePartial() {
return this;
}
/**
* Enable Memory Mapped Files only if current JVM supports it (is 64bit).
* @deprecated renamed to {@link #fileMmapEnableIfSupported()}
*/
public Maker mmapFileEnableIfSupported() {
return fileMmapEnableIfSupported();
}
/**
* Enable Memory Mapped Files only if current JVM supports it (is 64bit).
*/
public Maker fileMmapEnableIfSupported() {
assertNotInMemoryVolume();
props.setProperty(Keys.volume,Keys.volume_mmapfIfSupported);
return this;
}
/**
* Enable FileChannel access. By default MapDB uses {@link java.io.RandomAccessFile},
* which is slower but more robust. RAF does not allow concurrent access (parallel reads and writes);
* it is still thread-safe, but uses a global lock.
* FileChannel does not have a global lock and is faster than RAF. However, memory-mapped files are
* probably the best choice.
*/
public Maker fileChannelEnable() {
assertNotInMemoryVolume();
props.setProperty(Keys.volume,Keys.volume_fileChannel);
return this;
}
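/*
 * Illustrative sketch of choosing a volume (not part of the original source).
 * A common pattern is to prefer memory-mapped files where the JVM supports
 * them and fall back to the RandomAccessFile default elsewhere; the
 * {@code fileDB} factory is assumed from the MapDB 2.0 builder API.
 *
 *   DB db = DBMaker
 *       .fileDB(new File("example.db"))  // hypothetical path
 *       .fileMmapEnableIfSupported()     // mmap on 64bit JVMs, RAF otherwise
 *       .make();
 *
 * fileChannelEnable() is the middle ground: unlike RAF it has no global lock,
 * but it avoids the unmapping problems of memory-mapped files.
 */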
/**
* MapDB supports snapshots. {@code TxEngine} requires additional locking, which has a small overhead even when snapshots are not used.
* Snapshots are disabled by default. This option switches the snapshots on.
*
* @return this builder
*/
public Maker snapshotEnable(){
props.setProperty(Keys.snapshots,TRUE);
return this;
}
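/*
 * Illustrative sketch (not part of the original source): snapshots must be
 * switched on at build time. The {@code snapshot()} accessor on DB is an
 * assumption carried over from earlier MapDB releases and may differ here.
 *
 *   DB db = DBMaker.memoryDB()
 *       .snapshotEnable()
 *       .make();
 *   DB snapshot = db.snapshot();  // assumed API: read-only point-in-time view
 */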
/**
* <p>
* Enables a mode where all modifications are queued and written to disk by a background writer thread.
* All modifications are thus performed asynchronously and do not block.
* </p><p>
*
* Enabling this mode might increase performance for single-threaded apps.
* </p>
*
* @return this builder
*/
public Maker asyncWriteEnable(){
props.setProperty(Keys.asyncWrite,TRUE);
return this;
}
/**
* <p>
* Set the flush interval for the write cache; by default it is 0.
* </p><p>
* When a BTreeMap is constructed from an ordered set, the tree node size grows linearly with each
* item added. Each time a new key is added to a tree node, its size changes and the
* storage needs to find a new place for it. So constructing a BTreeMap from an ordered set leads to large
* store fragmentation.
* </p><p>
*
* Setting a flush interval is a workaround: the BTreeMap node is always updated in memory (write cache)
* and only the final version of the node is stored on disk.
* </p>
*
* @param delay flush the write cache every N milliseconds
* @return this builder
*/
public Maker asyncWriteFlushDelay(int delay){
props.setProperty(Keys.asyncWriteFlushDelay,""+delay);
return this;
}
/**
* <p>
* Set the size of the async write queue. The default is {@code CC.DEFAULT_ASYNC_WRITE_QUEUE_SIZE}.
* </p><p>
* Using a queue size that is too large can lead to an out-of-memory exception.
* </p>
*
* @param queueSize of queue
* @return this builder
*/
public Maker asyncWriteQueueSize(int queueSize){
props.setProperty(Keys.asyncWriteQueueSize,""+queueSize);
return this;
}
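/*
 * Illustrative sketch (not part of the original source) combining the async
 * write options, e.g. for bulk-loading a BTreeMap. The numbers are example
 * values only.
 *
 *   DB db = DBMaker.memoryDB()
 *       .asyncWriteEnable()
 *       .asyncWriteFlushDelay(100)    // keep hot BTree nodes in the write cache for 100 ms
 *       .asyncWriteQueueSize(32000)   // bound the queue so it cannot exhaust the heap
 *       .make();
 */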
/**
* Try to delete files after the DB is closed.
* File deletion may silently fail, especially on Windows, where a buffer must be unmapped before the file can be deleted.
*
* @return this builder
*/
public Maker deleteFilesAfterClose(){
props.setProperty(Keys.deleteFilesAfterClose,TRUE);
return this;
}
/**
* Adds a JVM shutdown hook that closes the DB just before the JVM exits.
*
* @return this builder
*/
public Maker closeOnJvmShutdown(){
props.setProperty(Keys.closeOnJvmShutdown,TRUE);
return this;
}
/**
* <p>
* Enables record compression.
* </p><p>
* Make sure you enable this every time you reopen the store, otherwise record de-serialization fails unpredictably.
* </p>
*
* @return this builder
*/
public Maker compressionEnable(){
props.setProperty(Keys.compression,Keys.compression_lzf);
return this;
}
/**
* <p>
* Encrypt storage using XTEA algorithm.
* </p><p>
* XTEA is a sound encryption algorithm, but the implementation in MapDB has not been peer-reviewed.
* MapDB only encrypts record data, so an attacker may still see the number of records and their sizes.
* </p><p>
* Make sure you enable this every time you reopen the store, otherwise record de-serialization fails unpredictably.
* </p>
*
* @param password for encryption
* @return this builder
*/
public Maker encryptionEnable(String password){
return encryptionEnable(password.getBytes(Charset.forName("UTF8")));
}
/**
* <p>
* Encrypt storage using XTEA algorithm.
* </p><p>
* XTEA is a sound encryption algorithm, but the implementation in MapDB has not been peer-reviewed.
* MapDB only encrypts record data, so an attacker may still see the number of records and their sizes.
* </p><p>
* Make sure you enable this every time you reopen the store, otherwise record de-serialization fails unpredictably.
* </p>
*
* @param password for encryption
* @return this builder
*/
public Maker encryptionEnable(byte[] password){
props.setProperty(Keys.encryption, Keys.encryption_xtea);
props.setProperty(Keys.encryptionKey, DataIO.toHexa(password));
return this;
}
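/*
 * Illustrative sketch (not part of the original source). Compression,
 * encryption and checksums change the record format, so the same flags must
 * be passed on every reopen or de-serialization fails; the password and file
 * name below are hypothetical, and {@code fileDB} is assumed from the 2.0
 * builder API.
 *
 *   DB db = DBMaker
 *       .fileDB(new File("secret.db"))
 *       .compressionEnable()
 *       .encryptionEnable("change-me")
 *       .checksumEnable()
 *       .make();
 */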
/**
* <p>
* Adds a CRC32 checksum at the end of each record to check data integrity.
* It throws {@code IOException("Checksum does not match, data broken")} on de-serialization if the data are corrupted.
* </p><p>
* Make sure you enable this every time you reopen the store, otherwise record de-serialization fails.
* </p>
*
* @return this builder
*/
public Maker checksumEnable(){
props.setProperty(Keys.checksum,TRUE);
return this;
}
/**
* <p>
* DB get methods such as {@link DB#treeMap(String)} or {@link DB#atomicLong(String)} auto-create a
* new record with default values if a record with the given name does not exist. This could be a problem if you would like to enforce
* a stricter database schema. This parameter disables record auto-creation.
* </p><p>
*
* If this is set, {@code DB.getXX()} will throw an exception if the given name does not exist, instead of creating a new record (or collection).
* </p>
*
* @return this builder
*/
public Maker strictDBGet(){
props.setProperty(Keys.strictDBGet,TRUE);
return this;
}
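/*
 * Illustrative sketch (not part of the original source): with strictDBGet()
 * a getter no longer silently creates a missing collection, so a typo in a
 * collection name becomes an immediate error instead of an empty new map.
 *
 *   DB db = DBMaker.memoryDB()
 *       .strictDBGet()
 *       .make();
 *   db.treeMap("users");  // throws if no record named "users" exists
 */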
/**
* Open store in read-only mode. Any modification attempt will throw
* <code>UnsupportedOperationException("Read-only")</code>
*
* @return this builder
*/
public Maker readOnly(){
props.setProperty(Keys.readOnly,TRUE);
return this;
}
/**
* @deprecated right now not implemented, will be renamed to allocate*()
* @param maxSize
* @return this
*/
public Maker sizeLimit(double maxSize){
return this;
}
/**
* Set the free space reclaim Q. It is a value from 0 to 10 indicating how eagerly MapDB
* searches for free space inside the store to reuse before expanding the store file.
* 0 means that no free space will be reused and the store file will just grow (effectively append-only).
* 10 means that MapDB tries really hard to reuse free space, even if it may hurt performance.
* The default value is 5.
*
* @return this builder
*
* @deprecated ignored in MapDB 2 for now
*/
public Maker freeSpaceReclaimQ(int q){
if(q<0||q>10) throw new IllegalArgumentException("wrong Q");
props.setProperty(Keys.freeSpaceReclaimQ,""+q);
return this;
}
/**
* Disables file sync on commit. This way transactions are preserved (rollback works),
* but commits are not durable and data may be lost if the store is not properly closed.
* The file store will get properly synced when closed.
* Disabling the sync makes commits faster.
*
* @return this builder
* @deprecated ignored in MapDB 2 for now
*/
public Maker commitFileSyncDisable(){
props.setProperty(Keys.commitFileSyncDisable,TRUE);
return this;
}
/**
* Tells the allocator to set the initial store size when a new store is created.
* The value is rounded up to the nearest multiple of 1MB or of the allocation increment.
*
* @return this builder
*/
public Maker allocateStartSize(long size){
props.setProperty(Keys.allocateStartSize,""+size);
return this;
}
/**
* Tells the allocator to grow the store by this size increment. The minimal value is 1MB.
* The increment size is rounded up to the nearest power of two.
*
* @return this builder
*/
public Maker allocateIncrement(long sizeIncrement){
props.setProperty(Keys.allocateIncrement,""+sizeIncrement);
return this;
}
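/*
 * Illustrative sketch (not part of the original source): pre-allocating store
 * space to reduce the number of grow operations during bulk inserts. Sizes
 * are example values and are rounded up as documented above; {@code fileDB}
 * is assumed from the 2.0 builder API.
 *
 *   DB db = DBMaker
 *       .fileDB(new File("big.db"))
 *       .allocateStartSize(64L * 1024 * 1024)   // start with 64MB
 *       .allocateIncrement(16L * 1024 * 1024)   // grow in 16MB steps (rounded to a power of two)
 *       .make();
 */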
/**
* Sets the class loader used by the POJO serializer to load classes during deserialization.
*
* @return this builder
*/
public Maker serializerClassLoader(ClassLoader classLoader ){
this.serializerClassLoader = classLoader;
return this;
}
/**
* Registers a class with the given class loader. This loader will be used by the POJO deserializer to load and instantiate new classes.
* This might be needed in OSGI containers etc.
*
* @return this builder
*/
public Maker serializerRegisterClass(String className, ClassLoader classLoader ){
if(this.serializerClassLoaderRegistry==null)
this.serializerClassLoaderRegistry = new HashMap<String, ClassLoader>();
this.serializerClassLoaderRegistry.put(className, classLoader);
return this;
}
/**
* Registers classes with their class loaders. These loaders will be used by the POJO deserializer to load and instantiate new classes.
* This might be needed in OSGI containers etc.
*
* @return this builder
*/
public Maker serializerRegisterClass(Class... classes){
if(this.serializerClassLoaderRegistry==null)
this.serializerClassLoaderRegistry = new HashMap<String, ClassLoader>();
for(Class clazz:classes) {
this.serializerClassLoaderRegistry.put(clazz.getName(), clazz.getClassLoader());
}
return this;
}
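/*
 * Illustrative sketch (not part of the original source), e.g. for an OSGI
 * container: let the POJO deserializer resolve application classes through
 * the class loaders of the registered classes. {@code User} and {@code Order}
 * are hypothetical application classes.
 *
 *   DB db = DBMaker.memoryDB()
 *       .serializerRegisterClass(User.class, Order.class)
 *       .make();
 *
 * Alternatively, serializerClassLoader(loader) installs a single fallback
 * loader for all classes that are not explicitly registered.
 */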
/**
* The allocator reuses recids immediately, which can cause problems for some data types.
* This option disables recid reuse until recids are released by compaction.
* It will cause higher store fragmentation with HTreeMap, queues etc.
*
* @deprecated this setting might be removed before 2.0 stable release, it is very likely it will become enabled by default
* @return this builder
*/
public Maker allocateRecidReuseDisable(){
props.setProperty(Keys.allocateRecidReuseDisable,TRUE);
return this;
}
/** constructs DB using current settings */
public DB make(){
boolean strictGet = propsGetBool(Keys.strictDBGet);
boolean deleteFilesAfterClose = propsGetBool(Keys.deleteFilesAfterClose);
Engine engine = makeEngine();
boolean dbCreated = false;
boolean metricsLog = propsGetBool(Keys.metrics);
long metricsLogInterval = propsGetLong(Keys.metricsLogInterval, metricsLog ? CC.DEFAULT_METRICS_LOG_PERIOD : 0);
ScheduledExecutorService metricsExec2 = metricsLog? (metricsExecutor==null? executor:metricsExecutor) : null;
try{
DB db = new DB(
engine,
strictGet,
deleteFilesAfterClose,
executor,
false,
metricsExec2,
metricsLogInterval,
storeExecutor,
cacheExecutor,
makeClassLoader());
dbCreated = true;
return db;
}finally {
//did db creation fail? in that case close engine to unlock files
if(!dbCreated)
engine.close();
}
}
protected Fun.Function1<Class, String> makeClassLoader() {
if(serializerClassLoader==null &&
(serializerClassLoaderRegistry==null || serializerClassLoaderRegistry.isEmpty())){
return null;
}
//make defensive copies
final ClassLoader serializerClassLoader2 = this.serializerClassLoader;
final Map<String, ClassLoader> serializerClassLoaderRegistry2 =
new HashMap<String, ClassLoader>();
if(this.serializerClassLoaderRegistry!=null){
serializerClassLoaderRegistry2.putAll(this.serializerClassLoaderRegistry);
}
return new Fun.Function1<Class, String>() {
@Override
public Class run(String className) {
ClassLoader loader = serializerClassLoaderRegistry2.get(className);
if(loader == null)
loader = serializerClassLoader2;
if(loader == null)
loader = Thread.currentThread().getContextClassLoader();
return SerializerPojo.classForName(className, loader);
}
};
}
public TxMaker makeTxMaker(){
props.setProperty(Keys.fullTx,TRUE);
if(props.containsKey(Keys.cache)){
props.remove(Keys.cache);
LOG.warning("Cache setting was disabled. Instance Cache can not be used together with TxMaker");
}
snapshotEnable();
Engine e = makeEngine();
//init catalog if needed
DB db = new DB(e);
db.commit();
return new TxMaker(e, propsGetBool(Keys.strictDBGet), executor, makeClassLoader());
}
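/*
 * Illustrative sketch (not part of the original source) of using the TxMaker
 * returned above. The {@code makeTx()}, {@code commit()} and {@code close()}
 * calls are assumed from the TxMaker/DB API in this codebase.
 *
 *   TxMaker txMaker = DBMaker.memoryDB().makeTxMaker();
 *   DB tx = txMaker.makeTx();   // assumed accessor for one isolated transaction
 *   try {
 *       // ... modify collections through tx ...
 *       tx.commit();
 *   } finally {
 *       tx.close();
 *   }
 *   txMaker.close();            // assumed shutdown call
 */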
/** constructs Engine using current settings */
public Engine makeEngine(){
if(storeExecutor==null) {
storeExecutor = executor;
}
final boolean readOnly = propsGetBool(Keys.readOnly);
final boolean fileLockDisable = propsGetBool(Keys.fileLockDisable) || propsGetBool(Keys.fileLockHeartbeatEnable);
final String file = props.containsKey(Keys.file)? props.getProperty(Keys.file):"";
final String volume = props.getProperty(Keys.volume);
final String store = props.getProperty(Keys.store);
if(readOnly && file.isEmpty())
throw new UnsupportedOperationException("Can not open in-memory DB in read-only mode.");
if(readOnly && !new File(file).exists() && !Keys.store_append.equals(store)){
throw new UnsupportedOperationException("Can not open non-existing file in read-only mode.");
}
DataIO.HeartbeatFileLock heartbeatFileLock = null;
if(propsGetBool(Keys.fileLockHeartbeatEnable) && file!=null && file.length()>0
&& !readOnly){ //TODO should we lock readonly files?
File lockFile = new File(file+".lock");
heartbeatFileLock = new DataIO.HeartbeatFileLock(lockFile, CC.FILE_LOCK_HEARTBEAT);
heartbeatFileLock.lock();
}
Engine engine;
int lockingStrategy = 0;
String lockingStrategyStr = props.getProperty(Keys.lock,Keys.lock_readWrite);
if(Keys.lock_single.equals(lockingStrategyStr)){
lockingStrategy = 1;
}else if(Keys.lock_threadUnsafe.equals(lockingStrategyStr)) {
lockingStrategy = 2;
}
final int lockScale = DataIO.nextPowTwo(propsGetInt(Keys.lockScale,CC.DEFAULT_LOCK_SCALE));
final long allocateStartSize = propsGetLong(Keys.allocateStartSize,0L);
final long allocateIncrement = propsGetLong(Keys.allocateIncrement,0L);
final boolean allocateRecidReuseDisable = propsGetBool(Keys.allocateRecidReuseDisable);
boolean cacheLockDisable = lockingStrategy!=0;
byte[] encKey = propsGetXteaEncKey();
final boolean snapshotEnabled = propsGetBool(Keys.snapshots);
if(Keys.store_heap.equals(store)) {
engine = new StoreHeap(propsGetBool(Keys.transactionDisable), lockScale, lockingStrategy, snapshotEnabled);
}else if(Keys.store_archive.equals(store)){
Volume.VolumeFactory volFac = extendStoreVolumeFactory(false);
engine = new StoreArchive(
file,
volFac,
true
);
}else if(Keys.store_append.equals(store)){
if(Keys.volume_byteBuffer.equals(volume)||Keys.volume_directByteBuffer.equals(volume))
throw new UnsupportedOperationException("Append Storage format is not supported with in-memory dbs");
Volume.VolumeFactory volFac = extendStoreVolumeFactory(false);
engine = new StoreAppend(
file,
volFac,
createCache(cacheLockDisable,lockScale),
lockScale,
lockingStrategy,
propsGetBool(Keys.checksum),
Keys.compression_lzf.equals(props.getProperty(Keys.compression)),
encKey,
propsGetBool(Keys.readOnly),
snapshotEnabled,
fileLockDisable,
heartbeatFileLock,
propsGetBool(Keys.transactionDisable),
storeExecutor,
allocateStartSize,
allocateIncrement
);
}else{
Volume.VolumeFactory volFac = extendStoreVolumeFactory(false);
boolean compressionEnabled = Keys.compression_lzf.equals(props.getProperty(Keys.compression));
boolean asyncWrite = propsGetBool(Keys.asyncWrite) && !readOnly;
boolean txDisable = propsGetBool(Keys.transactionDisable);
if(!txDisable){
engine = new StoreWAL(
file,
volFac,
createCache(cacheLockDisable,lockScale),
lockScale,
lockingStrategy,
propsGetBool(Keys.checksum),
compressionEnabled,
encKey,
propsGetBool(Keys.readOnly),
snapshotEnabled,
fileLockDisable,
heartbeatFileLock,
storeExecutor,
allocateStartSize,
allocateIncrement,
allocateRecidReuseDisable,
CC.DEFAULT_STORE_EXECUTOR_SCHED_RATE,
propsGetInt(Keys.asyncWriteQueueSize,CC.DEFAULT_ASYNC_WRITE_QUEUE_SIZE)
);
}else if(asyncWrite) {
engine = new StoreCached(
file,
volFac,
createCache(cacheLockDisable, lockScale),
lockScale,
lockingStrategy,
propsGetBool(Keys.checksum),
compressionEnabled,
encKey,
propsGetBool(Keys.readOnly),
snapshotEnabled,
fileLockDisable,
heartbeatFileLock,
storeExecutor,
allocateStartSize,
allocateIncrement,
allocateRecidReuseDisable,
CC.DEFAULT_STORE_EXECUTOR_SCHED_RATE,
propsGetInt(Keys.asyncWriteQueueSize,CC.DEFAULT_ASYNC_WRITE_QUEUE_SIZE)
);
}else{
engine = new StoreDirect(
file,
volFac,
createCache(cacheLockDisable, lockScale),
lockScale,
lockingStrategy,
propsGetBool(Keys.checksum),
compressionEnabled,
encKey,
propsGetBool(Keys.readOnly),
snapshotEnabled,
fileLockDisable,
heartbeatFileLock,
storeExecutor,
allocateStartSize,
allocateIncrement,
allocateRecidReuseDisable);
}
}
if(engine instanceof Store){
((Store)engine).init();
}
if(propsGetBool(Keys.fullTx))
engine = extendSnapshotEngine(engine, lockScale);
engine = extendWrapSnapshotEngine(engine);
if(readOnly)
engine = new Engine.ReadOnlyWrapper(engine);
if (!readOnly && propsGetBool(Keys.deleteFilesAfterClose)) {
engine = new Engine.DeleteFileEngine(engine, file);
}
if(propsGetBool(Keys.closeOnJvmShutdown)){
engine = new Engine.CloseOnJVMShutdown(engine);
}
//try to read one record from DB, to make sure encryption and compression are correctly set.
Fun.Pair<Integer,byte[]> check = null;
try{
check = (Fun.Pair<Integer, byte[]>) engine.get(Engine.RECID_RECORD_CHECK, Serializer.BASIC);
if(check!=null){
if(check.a != Arrays.hashCode(check.b))
throw new RuntimeException("invalid checksum");
}
}catch(Throwable e){
throw new DBException.WrongConfig("Error while opening store. Make sure you have the right password and that compression or encryption is configured correctly.",e);
}
if(check == null && !engine.isReadOnly()){
//new db, so insert testing record
byte[] b = new byte[127];
if(encKey!=null) {
new SecureRandom().nextBytes(b);
} else {
new Random().nextBytes(b);
}
check = new Fun.Pair(Arrays.hashCode(b), b);
engine.update(Engine.RECID_RECORD_CHECK, check, Serializer.BASIC);
engine.commit();
}
return engine;
}
protected Store.Cache createCache(boolean disableLocks, int lockScale) {
final String cache = props.getProperty(Keys.cache, CC.DEFAULT_CACHE);
if(cacheExecutor==null) {
cacheExecutor = executor;
}
long executorPeriod = propsGetLong(Keys.cacheExecutorPeriod, CC.DEFAULT_CACHE_EXECUTOR_PERIOD);
if(Keys.cache_disable.equals(cache)){
return null;
}else if(Keys.cache_hashTable.equals(cache)){
int cacheSize = propsGetInt(Keys.cacheSize, CC.DEFAULT_CACHE_SIZE) / lockScale;
return new Store.Cache.HashTable(cacheSize,disableLocks);
}else if (Keys.cache_hardRef.equals(cache)){
int cacheSize = propsGetInt(Keys.cacheSize, CC.DEFAULT_CACHE_SIZE) / lockScale;
return new Store.Cache.HardRef(cacheSize,disableLocks,cacheExecutor, executorPeriod);
}else if (Keys.cache_weakRef.equals(cache)){
return new Store.Cache.WeakSoftRef(true, disableLocks, cacheExecutor, executorPeriod);
}else if (Keys.cache_softRef.equals(cache)){
return new Store.Cache.WeakSoftRef(false, disableLocks, cacheExecutor,executorPeriod);
}else if (Keys.cache_lru.equals(cache)){
int cacheSize = propsGetInt(Keys.cacheSize, CC.DEFAULT_CACHE_SIZE) / lockScale;
return new Store.Cache.LRU(cacheSize,disableLocks);
}else{
throw new IllegalArgumentException("unknown cache type: "+cache);
}
}
protected int propsGetInt(String key, int defValue){
String ret = props.getProperty(key);
if(ret==null) return defValue;
return Integer.valueOf(ret);
}
protected long propsGetLong(String key, long defValue){
String ret = props.getProperty(key);
if(ret==null) return defValue;
return Long.valueOf(ret);
}
protected boolean propsGetBool(String key){
String ret = props.getProperty(key);
return ret!=null && ret.equals(TRUE);
}
protected byte[] propsGetXteaEncKey(){
if(!Keys.encryption_xtea.equals(props.getProperty(Keys.encryption)))
return null;
return DataIO.fromHexa(props.getProperty(Keys.encryptionKey));
}
/**
* Check if large files can be mapped into memory.
* For example, a 32bit JVM can only address 2GB, so large files cannot be mapped;
* for a 32bit JVM this function therefore returns false.
*
*/
protected static boolean JVMSupportsLargeMappedFiles() {
String prop = System.getProperty("os.arch");
if(prop!=null && prop.contains("64")) {
String os = System.getProperty("os.name");
if(os==null)
return false;
os = os.toLowerCase();
return !os.startsWith("windows");
}
//TODO better check for 32bit JVM
return false;
}
protected int propsGetRafMode(){
String volume = props.getProperty(Keys.volume);
if(volume==null||Keys.volume_raf.equals(volume)){
return 2;
}else if(Keys.volume_mmapfIfSupported.equals(volume)){
return JVMSupportsLargeMappedFiles()?0:2;
//TODO clear mmap values
// }else if(Keys.volume_mmapfPartial.equals(volume)){
// return 1;
}else if(Keys.volume_fileChannel.equals(volume)){
return 3;
}else if(Keys.volume_mmapf.equals(volume)){
return 0;
}
return 2; //default option is RAF
}
protected Engine extendSnapshotEngine(Engine engine, int lockScale) {
return new TxEngine(engine,propsGetBool(Keys.fullTx), lockScale);
}
protected Engine extendWrapSnapshotEngine(Engine engine) {
return engine;
}
protected Volume.VolumeFactory extendStoreVolumeFactory(boolean index) {
String volume = props.getProperty(Keys.volume);
boolean cleanerHackEnabled = propsGetBool(Keys.fileMmapCleanerHack);
boolean mmapPreclearDisabled = propsGetBool(Keys.fileMmapPreclearDisable);
if(Keys.volume_byteBuffer.equals(volume))
return Volume.ByteArrayVol.FACTORY;
else if(Keys.volume_directByteBuffer.equals(volume))
return cleanerHackEnabled?
Volume.MemoryVol.FACTORY_WITH_CLEANER_HACK:
Volume.MemoryVol.FACTORY;
else if(Keys.volume_unsafe.equals(volume))
return Volume.UNSAFE_VOL_FACTORY;
int rafMode = propsGetRafMode();
if(rafMode == 3)
return Volume.FileChannelVol.FACTORY;
boolean raf = rafMode!=0;
if(raf && index && rafMode==1)
raf = false;
return raf?
Volume.RandomAccessFileVol.FACTORY:
new Volume.MappedFileVol.MappedFileFactory(cleanerHackEnabled, mmapPreclearDisabled);
}
}
public static DB.HTreeMapMaker hashMapSegmented(DBMaker.Maker maker){
maker = maker
.lockScale(1)
//TODO with some caches enabled, this will become thread unsafe
.lockDisable()
.transactionDisable();
DB db = maker.make();
Engine[] engines = new Engine[HTreeMap.SEG];
engines[0] = db.engine;
for(int i=1;i<HTreeMap.SEG;i++){
engines[i] = maker.makeEngine();
}
return new DB.HTreeMapMaker(db,"hashMapSegmented", engines)
.closeEngine();
}
public static DB.HTreeMapMaker hashMapSegmentedMemory(){
return hashMapSegmented(
DBMaker.memoryDB()
);
}
public static DB.HTreeMapMaker hashMapSegmentedMemoryDirect(){
return hashMapSegmented(
DBMaker.memoryDirectDB()
);
}
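/*
 * Illustrative sketch (not part of the original source): building a segmented
 * in-memory HTreeMap. The terminal {@code make()} on HTreeMapMaker is assumed
 * from the DB.HTreeMapMaker API.
 *
 *   HTreeMap<String, String> map = DBMaker
 *       .hashMapSegmentedMemory()
 *       .make();   // assumed; one engine backs each of the HTreeMap.SEG segments
 */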
/**
* Returns Compiler Config, static settings MapDB was compiled with
* @return Compiler Config
*/
public static Map<String,Object> CC() throws IllegalAccessException {
Map<String, Object> ret = new TreeMap<String, Object>();
for (Field f : CC.class.getDeclaredFields()) {
f.setAccessible(true);
Object value = f.get(null);
ret.put(f.getName(), value);
}
return ret;
}
}
|
Enthalpy of isopropanol adsorption on zeolite Abstract The enthalpy of isopropanol adsorption on ZSM-5 (Zeolite Socony Mobil framework type MFI) was determined by the static adsorption method in the temperature range from 20°C to 100°C. The Langmuir and Hüttig models of equilibrium adsorption have been used to calculate the enthalpy of isopropanol adsorption under these conditions. Adsorption isotherms determined by the flow method at 20°C and 30°C have also been used in the calculations. The obtained values of isopropanol adsorption enthalpy were compared with the values of isopropanol evaporation enthalpy and with the results obtained from isopropanol and water desorption measurements with thermogravimetry and differential scanning calorimetry methods.
|
A dynamic capability maturity model for improving cyber security Cyber attacks continue to proliferate, with increasing sophistication and severity. Businesses and government agencies must establish robust governance, cultures, and data management processes to minimize vulnerability to such threats and ensure effective responses. Capability Maturity Models (CMMs) have been proposed to address this critical need; they enable organizations to benchmark their Cyber Security processes against a framework of recognized best practices. Unfortunately, CMMs are inherently static and diagnostic: they help identify maturity gaps, but are not directly actionable. This paper describes how to extend an existing Cyber Security CMM into a dynamic performance management framework through an intuitive Model-Simulate-Analyze methodology. This software-based framework enables organizations to formulate plans for improving their Cyber Security maturity levels; test and validate or refine those plans prior to roll-out; and monitor execution results to detect emerging problems and make appropriate mid-course adjustments to ensure success.
|
You also didn’t earn most of it.
It seems like every time I discuss taxation, some libertarian will waltz in and say "it’s my money and I don’t see why the government should be able to take it."
So let’s run through why, no, it isn’t your money. We’ll start with two numbers. The income per capita for the US in 2005 was $43,740. The income per capita for Bangladesh was $470.
Now I want you to ask yourself the following question: are Bengalis genetically inferior to Americans? Since not too many FDLers think white sheets look great at a lynching, I’ll assume everyone answered no.
Right then, being American is worth $43,270 more than being Bengali and it’s not due to Americans being superior human beings. If it isn’t because Americans are superior, then what is it?
The answer is that if it isn’t individual, it must be social. On the individual but still social level, Americans are in fact smarter than Bengalis because as children they are far less likely to suffer from malnutrition. However not suffering from malnutrition when you’re a baby, toddler or young child has nothing to do with you and everything to do with the society you live in and your family–two things you have zero influence over (perhaps you chose your mother, I didn’t.)
Bengalis won’t, on average, get as good an education. They won’t get as much education either, since every child is needed to help earn a living as soon as possible. For most Bengalis there’s no room for having the extended childhood and adolescence westerners are used to, which often stretches into the late twenties or even early thirties, amongst those seeking Ph.D’s or becoming doctors or lawyers.
When a Bengali grows up the jobs available aren’t as good. If he or she starts a business it will earn much less money than the equivalent American business. If he or she speculates in land and is very successful, they will still be much less rich than an American would be.
One could go on and on. I trust the point is obvious — the vast majority of money that an American earns is due to being born an American. Certainly the qualities that make America a good place to live and a good place to make money are things that were created by Americans, but mostly they were created by Americans long dead or they are created by all Americans working together and are not located in the individual.
Now the same is true of the really rich. Forbes keeps track of the world’s billionaires, and almost half of them are in the US. This is because US society and the US government in particular, is very set up to create billionaires. Your odds of being a billionaire take a massive jump if you’re born in the US. Your odds of being a billionaire if you’re born in Bangladesh? Essentially zero. Now one could point out that billionaires are still so rare that the odds are always essentially zero (how many billionaires in your circle of friends?) Nonetheless the US in 2005 had 371. Germany, with the second most, had 55.
Bangladesh, you won’t be surprised to hear, had zero.
If you’re a billionaire in the US, you’re a billionaire in large part because you live in the US.
So, if you’re American, a large chunk of the reason you make a lot of money (relative to the rest of the world) is that you are American. The main cause of your relative wealth is not that you work hard, or that you’re innately smarter than members of other nations (though you may be since you weren’t starved as a child). It’s because you had opportunities given to you that most people will never have, and those opportunities existed due to the pure accident of your birth or because you or your family chose to come to the US. The same is true of most first world nations.
Immigrants understand this very well. There’s a reason why Mexicans, for example, are willing to risk death to cross the border. Their average income is $7,310, compared to the US average income of $43,740. They won’t make up all the difference just by crossing the border, but they’ll make up enough that it’s more than worth it. They haven’t personally changed, they don’t work harder now that they’re across the border. They aren’t smarter and they aren’t stronger. They just changed where they lived and suddenly the opportunities open to them were so much better that their income went up.
So let’s bring this back to our typical Libertarian with his whine that he earned it, and the government shouldn’t take it away. He didn’t earn most of it. Most of it is just because in global terms, he was born on third base and thinks he hit a triple. That doesn’t mean he doesn’t have to work for it, but it does mean most of the value of his work has nothing to do with him (and Ayn Rand aside, it’s almost always a him).
Now what a government is, in a democratic society, is the vehicle that the population as a whole chooses to use to organize collective action. Government is, imperfect as it is, the closest approximation to the "will of society" that we’ve got.
Since the majority of the money any American earns is a function of being American, not of their own individual virtues, the government has the moral right to tax. And since those who are rich get more from being American than those who are poor, it also has the moral right to take more money from them.
More importantly than the moral right, it has the pragmatic duty to do so. The roads and bridges that government builds and maintains; the schools that it funds, the police and courts that keep the peace; the investment in R&D that produced the internet; the sewage systems that make real estate speculation possible, and on and on, are a huge chunk of what makes being American worth so much more than being a Bengali. Failure to reinvest in both human and inanimate infrastructure is like killing the golden goose, and America, for decades now, has not been keeping its infrastructure properly maintained, let alone building it up.
And money itself is something that government provides for its people. It’s not your money, it’s America’s money and it’s a damn good thing too. If you don’t believe me, try issuing your own money and see how many people accept it. Some will, because what money is, when an individual issues it, is an IOU. I’ve written a few in my life. In every case the person I gave it to was less happy to receive it than he would have been to get some nice crisp dollars. And I rested my IOUs on dollars — I promised to repay in my country’s currency. If you don’t want to do that you’d have to issue an IOU saying "I will repay you with a bundle of rice" or gold, or a service. And then you come to the question of enforcement (one thing even libertarians admit the government should do), because what happens if I refuse to meet the conditions of the IOU? Even an IOU is based on the sanction of the government; if it isn’t, it’s worth only as much as the good will of the person issuing it or the strong arm of the person holding it.
So no, it isn’t your money, and it’s a good thing it isn’t. And while you may have worked your butt off for it, you also didn’t earn most of it. The value you impute to yourself "I’m worth my 80K salary" is mostly a function of where you live, of where you were born and of who your parents are.
|
Evolution kills people. Andrew Read has been saying so for years. But he never actually saw it firsthand until he worked this summer in a hospital in Ann Arbor, Michigan.
That’s when Read, who is Evan Pugh Professor of Biology at Penn State, stepped away from his busy University Park lab to study the problem of drug resistance up close, sifting through massive clinical databases and consulting with infectious-disease specialists struggling with difficult cases in real time. He well remembers the first patient he saw die.
Read works in the relatively new field of evolutionary medicine, specializing in infectious diseases. He is best known for his work on malaria, looking for solutions to the rising resistance in humans against antimalarial drugs — and to a similar resistance against insecticides in the mosquitoes that carry the disease.
“Malaria parasites,” Read says, “do evolution on steroids. The only way we can stop them evolving in my lab is by freezing them solid.” His basic premise is that an overly aggressive, unscientific use of drugs is driving this evolution, and the evolution of many other pathogens.
Postdoctoral researcher Eleanore Sternberg uses a bottle of warm water to attract female mosquitoes to a mesh barrier in Penn State’s insectary so they can be separated from males. The insectary, completed in 2011 at a cost of $3 million, is one of the best in the country. Its environmental chambers allow scientists to do complex experiments involving temperature variations, insecticides, and behavioral assessments on a variety of insect species.
Call it adaptation writ large. To a biologist like Read, what’s happening is rapid evolution in response to selection pressure — the same force that shaped the bills of Darwin’s finches on Galapagos. In this case, he argues, the pressure is being applied by modern medicine.
One hundred thousand Americans die of infections each year, according to the Centers for Disease Control — at least a quarter of them from drug-resistant infections that were easily cured 20 years ago. And evolution is speeding up: Read says pathogens are now evolving resistance to existing drugs faster than new drugs can make it through the regulatory process.
In other words, if you kill all the weak, drug-sensitive bugs, you leave the field wide open for the strong, drug-resistant ones. The least harmful result of this kind of survival-of-the-fittest, Read says, is that our drugs grow less and less effective, and eventually fail. The worst case is the development of brand new superbugs, pathogenic strains of extraordinary — and terrifying — virulence.
Biologist Andrew Read studies drug resistance and evolutionary medicine. Methicillin-resistant Staphylococcus aureus (MRSA) is an example of a largely antibiotic-resistant bacteria. This colorized photo depicts the interaction of MRSA cells (green) with a human white blood cell. The bacteria shown is strain MRSA252, a leading cause of hospital-associated infections in the United States and United Kingdom.
An emerging perspective in evolutionary medicine, he says, suggests that a different approach may be warranted. “There’s an idea that, in cancer treatment for example, maybe you shouldn’t be going at it quite as aggressively — you could step back a bit.”
Not surprisingly, Read’s ideas have caused considerable controversy, especially in medical circles. But after ten years, he says, they are now at least firmly on the scientific agenda. And indeed, he isn’t pushing them for clinical use. Not yet. The whole point is that the best course isn’t clear.
As a boy in his native New Zealand, Read dreamed of saving that country’s exotic endangered birds. The dream eventually carried him to Oxford University in England, where as a graduate student he began a study of avian blood parasites.
“Toward the end of my Ph.D.,” he says, “it dawned on me that nobody was paying much attention to these parasites, even though they are very closely related to malaria parasites which kill people, and they evolve in real time. The field was wide open.” The exact same science that had unlocked the secrets of adaptation in Darwin’s finches, he realized, could be applied to infectious disease.
A native of New Zealand, biologist Andrew Read started his career studying avian blood parasites in hopes of saving that country’s exotic endangered birds. He says the same evolutionary pressures and opportunities at work in wild species also apply to malaria and other human infectious diseases.
As Read explains, there are two vital components in the evolution of a pathogen: the genetic mutation that is the origin of resistance, and the natural selection that spreads it. “You need both.” But depending on the disease, one component can be more important than the other.
In something like HIV, he explains, new mutants are constantly arising, so that resistance pops up all the time. In something like malaria, by contrast, resistance arises extremely rarely — the spread is the issue.
The same pressures that drive the evolution of drug resistance, Read says, can also play out with vaccines. Think about extremely deadly pathogens — “by which I mean ones that kill everybody all the time,” he says.
With the use of vaccines, however, strains that previously would’ve been sufficiently nasty to obliterate themselves find themselves in a host who is protected. And if that vaccine is “leaky,” i.e., not good enough to prevent transmission, the pathogen can spread to other hosts. “The result is you keep alive in a vaccinated population something that’s nastier than what you would’ve had in the population before.” When that strain enters an unvaccinated individual, the results are devastating.
Read believes this is what happened in the case of Marek’s disease, a scourge of the modern poultry industry. Sixty years ago, Marek’s symptoms in barnyard fowl were mild but in the 1960s the virus became troubling enough in industrial operations to spark development of a vaccine. It became routinely administered to commercially raised chickens. In the decades since, two versions of the vaccine have grown ineffective, with a third showing signs of weakening. Meanwhile, Marek’s expression has become lethal. “The strains that are now circulating are so nasty they kill 100 percent of unvaccinated birds within 10 days,” Read says.
There could be many contributing factors, he allows, including a half-century’s worth of changes in the poultry industry. But “our theoretical work suggests that vaccination alone is sufficient to maintain these hyperpathogenic strains.” He is currently conducting experiments to test the theory.
Read emphasizes that Marek’s is a leaky vaccine, unlike those that have proved so successful in the eradication of human diseases like measles, mumps, rubella, and polio. But there are several human diseases, he says, including pertussis, typhus, and human papillomavirus, which may be vulnerable to the same effect.
That means no more “wasting good drugs” the way we did in the last century. “We need to use them more scientifically,” he says. Instead of relying on a constant stream of drug development — engaging in an arms race with nature that we’re bound to lose — we need to practice stewardship, finding ways to keep our current drugs effective for as long as possible.
In the world of agriculture, an evolution-based approach to protecting livestock, he suggests, might include counterintuitive measures like breeding for disease susceptibility instead of resistance — so that infected individuals die off before pathogens are transmitted. “Keeping every chicken alive might be bad,” he says.
For slowing the evolution of malaria, conversely, it might be best to let some mosquitoes live. Read and Penn State colleague Matthew Thomas made headlines a few years ago when they developed a fungal insecticide that kills only older mosquitoes. Since only older mosquitoes can transmit the parasite, the theory runs, this is enough to stop the disease from spreading. At the same time, leaving younger mosquitoes alive to breed greatly lessens selection pressure — the reproductive advantage that comes with evolving resistance to the insecticide. With the pressure removed, evolution slows, and the insecticide remains potent much longer.
Mathematical models and laboratory experiments with mosquitoes bear this out, Read says, “but we have had almost no luck getting money to move that project forward.”
The Michigan patient he saw die, he continues, “had a big effect on me. The doc I was working with said he thought it was a failure of the science. We had uncontrolled evolution and we did not know how to get it under control.”
Andrew Read is Evan Pugh Professor of Biology. He can be reached at [email protected].
|
Distinct Pairing Symmetries in $Nd_{1.85}Ce_{0.15}CuO_{4-y}$ and $La_{1.89}Sr_{0.11}CuO_{4}$ Single Crystals: Evidence from Comparative Tunnelling Measurements

Abstract: We used point-contact tunnelling spectroscopy to study the superconducting pairing symmetry of electron-doped $Nd_{1.85}Ce_{0.15}CuO_{4-y}$ (NCCO) and hole-doped $La_{1.89}Sr_{0.11}CuO_{4}$ (LSCO). Nearly identical spectra without a zero bias conductance peak (ZBCP) were obtained on the [110] and [100] oriented surfaces (the so-called nodal and anti-nodal directions) of NCCO. In contrast, LSCO showed a remarkable ZBCP in the nodal direction, as expected for a d-wave superconductor. Detailed analysis reveals an s-wave component in the pairing symmetry of the NCCO sample with $\Delta/k_BT_c=1.66$, a value remarkably close to that of a weakly coupled BCS superconductor. We argue that this s-wave component is formed at the Fermi surface pockets centered at ($\pm\pi$,0) and (0,$\pm\pi$), although a d-wave component may also exist.

I. INTRODUCTION

For hole-doped cuprate superconductors, there is a well-known electronic phase diagram characterized by a dome-like superconducting region when the hole concentration p is above a certain threshold. It was found that instead of doping holes into the Cu-O plane, one can also achieve superconductivity by doping electrons into the Cu-O plane in systems like Ln2−xCexCuO4 with Ln = Nd, Pr, La, etc. One thus naturally questions whether the superconducting mechanism is the same for the hole- and electron-doped cuprates. Among the many superconducting properties, the symmetry of the order parameter is an important one. While the symmetry of the superconducting order parameter is believed to be d-wave in the hole-doped region, the situation for electron-doped materials is highly controversial. For example, angle-resolved photoemission spectroscopy (ARPES), specific heat, phase-sensitive scanning SQUID, bicrystal grain-boundary Josephson junction and some penetration depth measurements indicate a d-wave symmetry. In addition, Raman scattering and recent ARPES experiments show a nonmonotonic d-wave order parameter. However, this has been contrasted by tunnelling and some other specific heat and penetration depth measurements. In particular, there may be a crossover from d-wave to s-wave symmetry upon changing the doped electron concentration or decreasing temperature. Although such a crossover may explain the conflicting experimental results on the pairing symmetry in the electron-doped cuprates, the characteristics of such a possible s-wave pairing symmetry have yet to be established. In this paper, we report directional tunnelling measurements on single crystals of the optimally electron-doped cuprate Nd1.85Ce0.15CuO4−y (NCCO).
By injecting current along either the Cu-Cu bond or Cu-O bond direction, we obtain nearly identical tunnelling spectra indicating an s-wave component of the pairing symmetry in this material. For comparison, similar measurements were carried out on underdoped p-type single crystals of La 1.89 Sr 0.11 CuO 4 (LSCO), and we observe clear zero bias conductance peaks (ZBCP) in tunneling spectra along the direction as expected from the d x 2 −y 2 -wave symmetry. Our results thus indicate that the optimally doped NCCO has at least an unavoidable s-wave component, which is in contrast with the case in hole-doped LSCO where a pure d-wave has been established. II. EXPERIMENTAL DETAILS We grew the NCCO and LSCO single crystals using the travelling-solvent floating-zone technique. As shown in Fig. 1, the resistive curve measured on the NCCO sample indicates a zero-resistance temperature at 25.1K, AC susceptibility shows the onset of bulk superconductivity at T c ≈ 25.6K. The LSCO sample has a T c ≈ 28K characterized by AC susceptibility measurement. The single crystals used in the tunnelling measurements were cut into rectangular flakes of size 2.5 2.5 1mm 3 for NCCO and 1.5 1.5 0.8mm 3 for LSCO with the long axis along a or b and short axis along c, one corner of the crystal was then cut off to expose -oriented crystal planes. The directional point-contact tunneling measurements were carried out by pointing a Pt/Ir alloy tip or Au tip towards the specified directions as shown in the left inset of Fig. 1. The tip's preparation and the equipment details are described in Ref.. In order to reduce quasiparticle scattering in the barrier layer and hence obtain high quality data, the nonaqueous chemical etch was used to attenuate the insulating layer on the sample surface immediately before mounting the sample on the point contact device. Since it takes about 20-40 minutes to insert the sample mounted device into the Helium gas environment in the dewar, the sample surfaces were exposed to air during this period. For chemical etching, we use 10% HCl in absolute ethanol for several seconds to several tens of seconds for NCCO and 1% Br 2 in absolute ethanol for several minutes to ten minutes in the case of LSCO. Typical four-terminal and lock-in technique were used to measure the I ∼ V curves and the differential resistance dV /dI vs V of the point contacts. III. RESULTS AND DISCUSSIONS A. Tunneling spectra of Nd1.85Ce0.15CuO4−y (NCCO) Fig. 2 shows the raw data of the conductance () of Au/NCCO point contact for various temperatures from 2K to 30K. In order to show the electronic stability of the experiments, we simultaneously present in Fig. 3(a) the ∼ V curves obtained from the lock-in technique and DC I ∼ V measurement, respectively. It is obvious that these two curves merge into each other although the latter one has a higher noise. We also compared the ∼ V curves recorded by both positively and negatively scanning the bias voltage. As shown in Fig. 3(b), the complete overlap of the data with different scanning directions indicates the good thermal stability during measurements. It is well known that the high T c superconducting cuprate compounds react readily with atmospheric H 2 O and CO 2 to form insulating hydroxides and carbonates on the surface and at grain boundaries. While such poor conductive layer may be a natural barrier needed The normalized ∼ V curves obtained by different surface treatments and using different tips. 
Note that the spectra measured on the post-etched surfaces are much sharper than that of pre-etched ones, accompanied by a remarkable decrease in the junction resistance from several hundreds of Ohms to below 10 as shown in Table I. in tunneling experiments, they may introduce large scattering factor (just like the situation of fabricated planar junction) and thus result to less sharpening spectra than those measured by the vacuum barrier based STM. In order to reduce the scattering factor and obtain sharp spectra, we used nonaqueous chemical etch to treat the sample surfaces. In Fig. 4, we present the measured spectra on the surfaces as cut, after the first etch and after the second etch, all the spectra have been normalized for comparison. The detailed experimental conditions are listed in Table I single crystal and hence decrease the scattering factor. We have also studied the dependence of the measured spectrum on the junction resistance which can be controlled by adjusting the tip-to-sample distance with a differential screw. Fig. 5 shows the spectra measured at a fixed position on the sample surface as a function of junction resistances, in these measurements the sample surfaces had been chemically etched. When the junction resistance decreases from several tens of Ohms to several Ohms, the spectral shape becomes sharper (refer to Fig. 8(a)). For small resistance, the spectral shape changes hardly with the resistive values and no Andreev reflection-like spectrum occurs (Fig. 5). This indicates that although the surface contaminant phase has been reduced, it still exists as a solid barrier layer due to the exposure to air. After the metal tip reaches the sample surface, the barrier layer is abraded at first and its thickness decreases with the increasing pressure. Consequently, the measured spectrum becomes sharper due to the weakening of the inelastic scattering of the injecting quasiparticles near the normal-metal/superconductor (N/S) micro-constriction. Further pressure of the tip on the sample surface may simply flatten the point, giving a larger contact area over the same minimal thickness of a tenacious barrier layer which can not be entirely flaked off. In this case, the junction resistance becomes still smaller while with no obvious change on the measured spectrum until the junction is damaged eventually. Therefore, the none-zero zero bias conductance should be ascribed to the stronger scattering effect on injecting quasiparticles and lower height of the surface barrier than that of the vacuum barrier. Nonetheless, such surface contaminants resulting from air exposure form orders of magnitude slower on NCCO than on many hole-doped cuprates which results from the absence of reactive alkaline earth elements in NCCO. This is also the reason why the measured spectra of LSCO are poorer than that of NCCO using the same method, as shown in the section III-C in this paper. In the following, we will focus our attention on the spectra with the resistance from 1 to 15, namely, in the regime with good spectral stability and weaker scattering effect. In MaglabExa-12, the magnetic field can be applied along the tip's direction, namely, parallel to the abplane. The field dependence of the tunneling spectra were measured at various temperatures from 4K to 16K. The changing tendency of the spectral shape with increasing field is identical at all measured temperatures. The typical results of 4K and 10K in Fig. 
6 suggest that the field induced smearing of the spectrum is very slight, which is consistent with the much higher H c2 than 7T for H ab − plane. To sum up the main characteristics of the measured spectra, we note that all the spectra are near the tunneling limit with two clear coherence peaks at symmetric positions of the bias voltage (much sharper than that of the previous works studied by both STM and point contact ) with no Andreev reflection-like spectrum. Second, the spectral shape is nearly identical for the and directions. Thirdly, there was no evidence for zero-bias conductance peak (ZBCP) expected in the (or the so-called nodal direction) of the dwave superconductors. Here we emphasize that the results are independent of the positions on the sample surface. As discussed above, if the superconducting NCCO has d-wave pairing symmetry, a ZBCP should be observed in the -oriented tunneling spectrum. In other words, our experimental results suggest that the optimally doped NCCO has a s-wave symmetry. In order to explain this viewpoint, we present in Fig. 7 the best fitting of the -oriented tunneling spectrum with the formulas corresponding s-wave and d-wave models respectively. For the simulations, the extended BTK model was accepted by selecting a constant gap value for s-wave symmetry and the anisotropic gap of ∆() = ∆ 0 cos(2) for d x 2 −y 2 symmetry, where is the polar angle measured from the crystallographic axis a. In this model, two parameters are introduced to describe the necessary physical quantities, i.e., the effective potential barrier (Z) and the superconducting energy gap (∆). As an extension, the quasiparticle energy E is replaced by E − i, where is the broadening parameter characterizing the finite lifetime of the quasiparticles due to inelastic scattering near the N/S micro-constriction. In a real N-I-S junction configuration, total tunneling conductance spectrum includes the integration over the solid angle. In the case of two dimension, it reduces to the in- tegration over the injection angle from −/2 to /2, as done in this work. Fig. 7(b) clearly shows that the d-wave theoretical simulation along the direction deviates from the experimental data. When the normal direction of the sample surface departs a small angle from the crystallographic axis a, a ZBCP will appear, accompanied by the remarkable depression of the coherence peaks. As shown in Fig. 7(a), even the d-wave simulation absolutely along direction can not fit the experimental data in the range below superconducting gap. However, the calculated curves in terms of the s-wave theory are in good agreement with the experimental results both in and directions. It should be pointed out that the values of Z and are related to the selected form of the background (such as the slope coefficient of the high bias straight-line segments) in a certain extent, which may be the main artificial uncertainty in the data analysis. So we did not focus our attention on the absolute values of these parameters or the quantitative comparison between the different spots. However, it is also found that the energy gap ∆ is insensitive to the selection of different back- ground in the fitting process. Moreover, when we chose a general procedure to construct the background for all spectra measured on a fixed spot on the sample surface, the variation of the fitting parameters of Z and should be physically meaningful. As an example, we presented in Fig. 
8(a) the normalized spectra measured at a fixed spot with different junction resistance, which have been theoretically fitted to the s-wave BTK model and the fitting parameters are shown in Fig. 8(b). It is noted that the derived energy gap is nearly constant for all spectra, while the barrier height Z and broadening parameter continuously decreases with the decreasing junction resistance. This is physically reasonable and in good agreement with the foregoing discussions, namely, the abrasion of the contaminant layer will depress the barrier height accompanied by the weakening of the quasiparticle scattering near the N/S micro-constriction. The good spatial repeatability of the tunneling spectra allowed further investigations on their temperature dependence. The spectral shape change continuously until above 24K (T c ∼ 26K), thus suggesting that the tunneling spectra reflect the bulk superconductivity of the NCCO sample ( Fig. 2(a)). The temperature dependent normalized spectra for both and directions are presented in Fig. 9, accompanied by the s-wave BTK fitting curves denoted by the red solid lines. Because the spectra were affected by the critical current effect at higher temperature near T c, it is difficult to determine the background for these temperatures. Therefore, these results have not been included in our theoretical analysis. All the parameters of the simulations shown in Fig. 9 are presented in Fig. 10 (the barrier height Z and the broadening parameter ) and Fig. 11 (superconducting energy gap ∆). The junction resistances measured at 20mV (R 20mV ) are also given in Fig. 10. The magnitude of A = /∆ < 20% presented is larger than that of the clean point-contact junction between normal metal and conventional superconductors which in general is smaller than 10%. However, considering the much shorter coherence length and faster exterior degradation of cuprates than that of the conventional superconductors, one can easily understand the larger A. Actually, a large A (> 20%) is often obtained on the oxidized surface of Nb foil, a typical conventional superconductor, as shown in Fig. 12. The lowest value of A = 17% achieved in this work is much smaller than the previous report of A ≈ 1 by STM measurements on NCCO single crystal. As shown in Fig. 11, the normalized gap function ∆(T ) determined by the fitting procedure can be well described by BCS theory, yielding a gap ratio ∆/k B T c ≈ 1.66, which is very close to the theoretical value of 1.76. Such consistency between the tunneling data of cuprates and the theoretical model over a wide temperature range (between 0K and T c ) is surprising. In order to determine the repeatability of such temperature dependent measurements, we plot in Fig. 11 two ∆(T ) relations obtained at two different positions for the direction. All the ∆(T ) functions obtained along both and directions and from different positions follow the BCS theory in a normalized scale. The T c values derived from this figure is between 24.5K∼26K, near the bulk transition temperature T c ≈ 25.6K. The distribution of the superconducting energy gap ∆ derived from the spectral measurements at many different positions are shown in Fig. 13. There is no obvious difference between the data obtained from and directions. The value of ∆ varies in a narrow range from 3.50 to 3.70, indicating the good homogeneity of the superconductivity in the investigated regions. C. 
C. Tunneling spectra of La1.89Sr0.11CuO4 (LSCO)

As a comparison, the directional tunneling spectra along both the [100] and [110] directions were also studied on an LSCO single crystal, which has the typical d_{x²−y²}-wave pairing symmetry (as recently confirmed by specific-heat measurements). The measured spectra at various temperatures are presented in Fig. 14. For the [100] direction at low temperatures, two clear coherence peaks appear in the spectra and no ZBCP is observed, while for the [110]-oriented spectra a prominent ZBCP is observed, accompanied by the disappearance of the coherence peaks. With increasing temperature, all these characteristics become progressively weaker and eventually vanish around T_c. As mentioned above, photoemission experiments have shown that surface degradation from air exposure proceeds orders of magnitude faster on LSCO than on NCCO, due to the reactive alkaline-earth elements in LSCO. The LSCO surface therefore degraded quickly before the sample was placed in the helium-gas environment, which directly affected the subsequent spectral measurements: the injected quasiparticles experience strong scattering from the surface barrier layer (resulting in a very large spectral broadening factor), i.e., the measured spectra are badly smeared. As shown in Fig. 14, the superconducting characteristics almost completely disappear above 20 K, much lower than the bulk critical temperature T_c ≈ 28 K. To further demonstrate the spectral differences along the two directions, the spectra at T = 2 K have been normalized using the higher-bias backgrounds, as shown in Fig. 15. Although the strong scattering has severely smeared the spectra and made quantitative analysis difficult, the coherence peaks around the gap energy are depressed and a ZBCP appears when the tunneling direction changes from [100] to [110], as expected from the d-wave theory. These spectra are also consistent with the results obtained on La_{2−x}Sr_xCuO_4/Ag junctions fabricated using a ramp-edge technique.

In this section, we try to explain our experimental data with other possible models. In Ref., the authors ascribed the unphysically large A (A = Γ/∆ ∼ 1) to the unreasonable assumption of isotropic s-wave symmetry. They found that if the maximum value of Γ/∆ is assumed to be 0.2, similar to YBCO, a reasonable fit is obtained with an anisotropic s-wave symmetry, namely ∆(θ) = ∆_0 + ∆_1 cos(4θ). Comparing that report with the present work, we find that the large value of A is mainly due to surface degradation. Nonetheless, an anisotropic s-wave symmetry seems reasonable considering the crystallographic symmetry and the topology of the Fermi surface (FS) of NCCO. In Fig. 16(a), the best fit to such an anisotropic s-wave model is presented for both the [100] and [110] directions. Recently, an ARPES experiment revealed another form of the gap function, described as ∆(θ) = ∆_0|cos(2θ) − 0.3 cos(6θ)|. This ∆(θ) function has also been tried in our fittings, as shown in Fig. 16(b). All the parameters for the theoretical simulations of Fig. 16 are listed in Table II. The isotropic s-wave is found to be the best candidate among the three s-wave models mentioned above. Recent ARPES experiments also revealed the doping evolution of the FS in Nd_{2−x}Ce_xCuO_4: at low doping, a small Fermi pocket appears around (π,0); upon increasing doping, another pocket begins to form around (π/2,π/2); and eventually, at optimal doping x = 0.15, several FS pieces evolve into a large curve around (π,π).
These findings are effectively described by a two-band picture. Most recently, Luo et al. used a weakly coupled two-band BCS-like model to account for the low-energy electromagnetic response of superconducting quasiparticles in electron-doped materials. The special angle dependence of the energy gap observed in Ref. (shown in the inset of Fig. 16(b)) may be a reflection of this two-band model. (Caption of Fig. 16: the insets are schematic drawings of the angle-dependent energy gaps. The best fit to the first type of anisotropic s-wave model yields ∆_1/∆_0 smaller than 15%, negligible within the fitting precision. If the second model is accepted, the simulations are acceptable as a whole but cannot fit the spectral shape around zero bias very well.) In the analysis of Ref., the pairing symmetries of both band-1 and band-2 are assumed to be of the d_{x²−y²} type in order to achieve the best fit to the superfluid-density data, where the labels 1 and 2 denote the bands contributing to the FS centered at (±π,0), (0,±π) and at (±π/2,±π/2), respectively. However, as pointed out by the authors, there is a finite excitation gap in band-1, since the nodal lines do not intersect the FS of that band unless the system is heavily overdoped. This indicates that the superconducting state of NCCO is actually a mixture of d-wave and s-wave-like pairing states; in other words, that analysis cannot definitely determine whether the pairing symmetry of band-1 is of s-wave or d-wave type. Based on this two-band model, we calculated the directional tunneling spectra by assuming different pairing symmetries (s-wave or d-wave) for band-1 while keeping a definite d_{x²−y²} symmetry for band-2. In these calculations, the ratio of the contribution from band-1 to that from band-2 was chosen according to the discussion in Ref.. If d-wave symmetry is assumed for band-1, a prominent ZBCP appears in the spectra along the [110] direction; that is, s-wave symmetry of band-1 is required by the fitting procedure. Moreover, the best fit requires a much larger value of A = Γ/∆ for band-2 than for band-1, as shown in Fig. 17, possibly due in part to the strong depression of the d-wave superconductivity at the sample surface, although its essential origin remains to be found. Such a mixture of superconductivity from two bands may be another possible reason for the contradictory reports on the pairing symmetry from different experiments: phase-sensitive measurements, for example, may selectively detect the gap information from band-2, which crosses the Fermi surface near (π/2,π/2). In addition, such inter-band mixing may also be responsible for the doping-dependent pairing symmetry observed in Pr_{2−x}Ce_xCuO_4 thin films. In any case, s-wave pairing appears to be an important component of the superconductivity of optimally doped NCCO.
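To make the two-band composition concrete, the hedged sketch below reuses tunneling_conductance() and bias from the earlier snippet and simply adds the two contributions; the weights w1 and w2 and all parameter values are placeholders for illustration, not the band ratios taken from Ref..

# Composite conductance of a weakly coupled two-band superconductor:
# band-1 assumed s-wave, band-2 d-wave with a much larger Gamma/Delta.
w1, w2 = 0.6, 0.4   # assumed spectral weights of band-1 and band-2
g1 = tunneling_conductance(bias, delta0=3.6, gamma=0.4, symmetry='s')
g2 = tunneling_conductance(bias, delta0=3.6, gamma=2.0, symmetry='d')
g_total = w1 * g1 + w2 * g2   # strongly damped band-2 leaves the s-wave features dominant
print(g_total[200])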
In summary, the directional tunneling spectra along the [100] and [110] axes of the NCCO and LSCO single crystals clearly illustrate distinct pairing symmetries. In contrast to the results for LSCO (x = 0.11), no ZBCP was observed for NCCO (x = 0.15) along either direction, while sharp coherence peaks are present for both, which disagrees with pure d-wave pairing symmetry. The almost identical spectral shapes for the two directions on NCCO can be understood in the framework of the s-wave BTK theory, leading to a BCS-type temperature dependence of the energy gap with ∆/k_B T_c ≈ 1.66. The present work thus provides evidence for an s-wave component in the superconductivity of optimally doped NCCO, which should mainly come from the band crossing the FS around (±π,0) and (0,±π) if a two-band model is accepted.
|
Determinants of oral contraception use in a southern European setting

Objectives: To describe the determinants of oral contraceptive (OC) use in Catalonia (Spain) in 2006. Methods: More than 4,400 women, aged 15–49, were interviewed using a standardised health questionnaire. The main variable was OC use on either of the two days before the interview. Independent variables were socio-economic class, marital status, number of children, visits to the gynaecologist, and lifestyles. The prevalence and the odds ratios of OC use were calculated. Results: Around 17% of the women of reproductive age used OCs. The typical OC user was an affluent, nulliparous woman in her third decade who had visited a gynaecologist during the preceding year. Being married was not related to OC use, while having two or more children was negatively related. The use of OCs followed a curve that started during adolescence, reached a peak at age 24, decreased thereafter and flattened out after reaching a low point in the 30s. Conclusions: Results from this large sample of women confirm that OC use is lower in Spain than in many other European countries. The difference reflects divergent social and cultural attitudes toward fertility control, sexuality, and the roles of women in society, rather than problems with availability and accessibility.
|
OBJECTIVE To study the effect of hypoxia-inducible factor-1alpha (HIF-1alpha) inhibition caused by RNA interference on the permeability of hypoxic vascular endothelial (VE) cells. METHODS Plasmid pcDNA6.2-GW/EmGFP-miR was applied to construct the RNA interference expression vector targeting the human HIF-1alpha gene. VE cells were divided into normal control group (NC), hypoxia group (H, cells treated with hypoxia in a gas mixture containing 1% O2 for 6 hours), transfection group (T), and transfection-hypoxia group (TH, transfected with the vector and treated with hypoxia). Expression of HIF-1alpha mRNA in the NC and T groups was determined with RT-PCR. Expression of HIF-1alpha protein in each group was determined with Western blot. The permeability of the VE cell monolayer was detected by fluorospectrophotometer. Another sample of VE cells was divided into dimethyloxallyl glycine (DMOG) group, transfected-with-DMOG group (TD), normal control group (NC), and transfection group (T), with 1 mmol/L DMOG (a specific inducer of HIF-1alpha) replacing the hypoxia treatment. The expression of HIF-1alpha protein in each group was determined with Western blot. All data were recorded as density value ratios, except for the permeability data, which were recorded as fluorescence intensity values. Data were processed with the t test (pairwise comparison among groups). RESULTS The relative content of HIF-1alpha mRNA of cells in NC group (0.765 +/- 0.069) was significantly higher than that of cells in T group (0.093 +/- 0.007, t = 16.696, P < 0.05). Content of HIF-1alpha protein of cells in TH group (0.591 +/- 0.029) was significantly lower than that of cells in H group (2.612 +/- 0.259, t = 13.415, P < 0.05). Content of HIF-1alpha protein of cells in TD group (0.566 +/- 0.008) was significantly lower than that of cells in DMOG group (3.243 +/- 0.551, t = 6.975, P < 0.05). The permeability of the cell monolayer in H group (41.6 +/- 11.1) was significantly higher than that in NC group (9.4 +/- 1.5, t = 6.238, P < 0.05). The permeability of the cell monolayer in TH group (13.3 +/- 4.5) was markedly lower than that in H group (t = 5.430, P < 0.05). CONCLUSIONS The expression of the HIF-1alpha gene in vascular endothelial cells is effectively inhibited by specific RNA interference, which significantly prevents the hypoxia-induced increase in vascular endothelial cell permeability.
|
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;

// Enclosing class reconstructed for completeness; only init() was in the original
// fragment, so the class name and HttpServlet superclass are assumptions.
public class WindsoriteServlet extends HttpServlet {
    private NewWindsoriteFacade windsoriteFacade;

    /** Simply creates or loads some data that will be used throughout the life of the servlet. */
    public void init() throws ServletException {
        windsoriteFacade = NewWindsoriteFacade.getFacade();
    }
}
|
// client/src/components/user.tsx
import * as React from 'react';
import * as actions from '../actions/projects';
import Projects from '../containers/projects/projects';
import { ProjectModel } from '../models/project';
import Breadcrumb from './breadcrumb';
export interface Props {
user: string;
createProject: (project: ProjectModel) => actions.ProjectAction;
}
export default class User extends React.Component<Props, {}> {
public render() {
return (
<div className="row">
<div className="col-md-12">
<div className="entity-details">
<Breadcrumb links={[{name: this.props.user}, {name: 'projects'}]}/>
</div>
<Projects user={this.props.user}/>
</div>
</div>
);
}
}
|
Update: On November 13, 2017, the FDA approved the Abilify "digital pill," the first time the agency has accepted a medication embedded with a sensor.
No, really. This month, the Food and Drug Administration accepted an application to evaluate a new drug-sensor-app system that tracks when a pill's been taken. The app comes connected to a Band Aid-like sensor, worn on the body, that knows when a tiny chip hidden inside a pill is swallowed—so if patients aren’t keeping up with their meds, the program can alert their doctors.
The drug here is Abilify, a popular antipsychotic from the pharmaceutical giant Otsuka, and the sensor and the app come from Proteus Digital Health, a California-based health technology company. The FDA has already approved the drug and the sensor system separately—now, they’ll be evaluated together under a whole new category of “digital medicines.” If approved, the ingestible sensor can actually be used in the pill.
Because this combo of pill and tech is so new, the companies had to work with the FDA to even figure out what kind data to submit to get approval. That the agency is willing to try something new points to the potential of Proteus’s chip and sensor, which can work in any type of pill.
Abilify, used to treat schizophrenia, bipolar disorder, and depression, was the number one selling drug in the US in 2013. But in 2015, things came crashing down. Otsuka’s patent for Abilify expired, and the company’s last-ditch attempt to prolong the patent in a convoluted lawsuit with the FDA failed. The FDA approved generic versions of Abilify, also called aripiprazole, in April.
Pharmaceutical companies have a history of reformulating off-patent drugs, changing them to an extended release pill or a liquid version to get a new patent. McQuade, though, contends that the curious timing of this new version of Abilify with a built-in sensor has nothing to do with its patent expiring and everything to do with Abilify being a popular and relatively safe drug that would be easy to get through the FDA’s brand new approval process.
On the tech side of things, Proteus received FDA approval for its ingestible sensor as a de novo medical device back in 2012. It embeds a small piece of magnesium and copper inside a pill. When the pill falls into stomach juices, the two metals create a small voltage detectable by an adhesive sensor stuck on the torso. That sensor then sends information to a mobile phone app, and with the patient’s permissions, to his or her doctor.
But don’t call that communication “monitoring,” representatives from Proteus and Otsuka made very clear in their conversations with WIRED. “We’re not managing. We’re about empowerment and enablement,” says Proteus CEO Andrew Thompson.
Colbert was playing the conservative buffoon, but he kind of had a point about the sensors. “When people are a little on the edge," like certain mental health patients, "you have to be careful about introducing this idea,” says Diana Perkins, a psychiatrist at the University of North Carolina who treats patients with schizophrenia. Patients don’t take pills for a whole range of reasons, and even Proteus and Otsuka admit the system doesn’t make sense for everyone.
Their sensor is primarily aimed at people who want to take their meds but forget. Taking a pill every day is a hassle, which can get in the way of adherence, and the drugs can have side effects that can be more than minor. “When you’re ambivalent, it’s easy to ‘forget,’” adds Perkins.
The sensor could potentially prompt interventions if people miss doses because they don’t like the side effects. But it can’t help in non-adherence cases more specific to schizophrenia, such as when patients don’t believe they’re ill. According to one survey by the Veterans Hospital Administration, denial of illness is a barrier to taking pills in 10 percent of patients with schizophrenia. “Those kinds of things require different interventions,” says Mark Olfson, a psychiatrist at Columbia.
Cost is another issue, which circles back to Abilify’s expired patent. Abilify embedded with an ingestible sensor is likely to cost significantly more than generic non-sensor enabled versions now available. Homelessness and lack of social support are major barriers in adherence, but the same patients struggling with those problems are also least likely to afford the more expensive drug.
Measuring the success of that system also turns out to be complicated. People who choose to enroll in clinical trials are already inclined to keep up with their meds, and the trials themselves—with doctor’s visits and phone checkups—are designed to get people to take their medicine. Thompson says Proteus’ data on patients with mental illness so far are promising, but he declined to share specifics.
Proteus is currently testing its platform with drugs for other chronic conditions like diabetes and high-blood pressure. The current FDA application is only the first in this new category of digital medicines that could introduce some real changes into health. But no drug is a panacea; neither is any tech.
|
package com.lidong.gateway.service;
import org.springframework.cloud.gateway.route.RouteDefinition;
/**
 * Updates the route information held in memory.
*/
public interface NacosDynamicRouteService {
/**
     * Updates a route definition.
     * @param gatewayDefine the route definition to apply
     * @return the result of the update operation
*/
String update(RouteDefinition gatewayDefine);
}
|
James Neal has scored 20-plus goals in 10 straight campaigns.
The Calgary Flames are counting on Monday’s marquee free-agent addition — signed to a five-year contract with an annual salary-cap hit of US$5.75 million — to continue to tack on to that streak.
Neal has also skated in the Stanley Cup final in back-to-back springs.
The 30-year-old winger is determined to extend that trend, too.
Except next time, with an alternate ending. Neal has helped the Nashville Predators (2017) and Vegas Golden Knights (2018) raise Western Conference championship banners but has yet to get his hands on the ultimate prize.
Nicknamed ‘Real Deal,’ this is a big deal for the Flames.
Five years is a lot of term for a guy already on the wrong side of 30, but Neal was one of the most prolific marksmen available on the free-agent market and the Flames were in desperate need of additional firepower after averaging just 2.63 goals per game last season, the fifth-worst mark on the 31-team loop.
Although Neal is a left-handed shot, he’s often worked the right-wing and immediately becomes a top candidate to join Johnny Gaudreau and Sean Monahan on Calgary’s first line.
Or perhaps, he could be a perfect complement for up-and-comer Matthew Tkachuk on the next unit.
“This is the type of player we’ve been looking for for a while,” said Flames general manager Brad Treliving of Neal, who stands 6-foot-2 and tips the scales at 221 lb. “We’ve liked him for a long time. He’s been successful for a long time at one of the hardest jobs in hockey, and that’s shooting it in the net.
Fun fact: James Neal scores a lot of goals. In fact, he's had 10 consecutive 20+ goal seasons aka every year he's been in the league.
Treliving signed a pair of right-handed forwards — centre Derek Ryan and winger Austin Czarnik — during the early stages of Sunday’s free-agent fest.
Turns out, he hadn’t reeled in his biggest fish yet.
Neal notched 25 goals and 19 assists in 71 regular-season outings with the Golden Knights last winter and then added six markers and five helpers in 20 playoff dates.
He arrives in Calgary with a grand total of 263 tallies on his career resume. During his decade in the league, only 14 guys have tickled twine more often.
“I know we have a great group of guys in Calgary,” said Neal, who turns 31 prior to training camp in September. “Speaking with (Golden Knights defenceman) Deryk Engelland, who was there a while, he talked about the character and how many good guys are on this team. I’m excited to be a part of that.
What Flames fans are hoping, above all else, is that he’ll be skating in the Stanley Cup final for a third straight spring.
Preferably, with a fairytale ending.
“You learn a lot about yourself, you learn a lot about what it takes to win and you learn what you have to do to get to that point,” Neal said of his oh-so-close calls with both the Predators and Golden Knights. “I’m going to bring that leadership and that focus and that drive and that winning attitude and winning culture … I’m going to bring that to Calgary.
Ice chips: The Flames also announced Monday that they’ve inked 26-year-old winger Buddy Robinson to a two-way contract. A big dude at 6-foot-6 and 232 lb., Robinson registered 25 goals and 53 points in 74 outings last winter for the AHL’s Manitoba Moose.
|
Hepatocellular Carcinoma Confirmation, Treatment, and Survival in Surveillance, Epidemiology, and End Results Registries, 1992–2008

Approaches to the diagnosis and management of hepatocellular carcinoma (HCC) are improving survival. In the Surveillance, Epidemiology, and End Results (SEER)-13 registries, HCC stage, histological confirmation, and first-course surgery were examined. Among 21,390 HCC cases diagnosed with follow-up of vital status during 1998–2008, there were 4,727 (22%) with reported first-course invasive liver surgery, local tumor destruction, or both. The proportion with reported liver surgery or ablation was 39% among localized-stage cases and only 4% among distant/unstaged cases. Though 70% of cases had histologically confirmed diagnoses, the proportion with confirmed diagnoses was higher among cases with reported invasive surgery (99%) compared to cases receiving ablation (81%) or no reported therapy (65%). Incidence rates of histologically unconfirmed HCC increased faster than those of confirmed HCC from 1992 to 2008 (8% versus 3% per year). Two encouraging findings were that incidence rates of localized-stage HCC increased faster than rates of regional- and distant-stage HCC combined (8% versus 4% per year), and that incidence rates of reported first-course surgery or tumor destruction increased faster than incidence rates of HCC without such therapy (11% versus 7%). Between 1975–1977 and 1998–2007, 5-year cause-specific HCC survival increased from just 3% to 18%. Survival was 84% among transplant recipients, 53% among cases receiving radiofrequency ablation at early stage, 47% among cases undergoing resection, and 35% among cases receiving local tumor destruction. Asian or Pacific Islander cases had significantly better 5-year survival (23%) than white (18%), Hispanic (15%), or black cases (12%). Conclusion: HCC survival is improving, because more cases are diagnosed and treated at early stages. Additional progress may be possible with continued use of clinical surveillance to follow individuals at risk for HCC, enabling early intervention. (HEPATOLOGY 2012;)
|
/**
 * Provides a mechanism for looking up a {@link BuildEngineAction} via the {@link BuildTarget}
 * that {@link BuildEngineAction#getDependencies()} provides.
 *
 * <p>This lets the {@link com.facebook.buck.core.build.engine.BuildEngine} find and schedule
 * dependencies to be built.
*/
public class BuildTargetToBuildEngineActionResolver {
private final BuildRuleResolver buildRuleResolver;
public BuildTargetToBuildEngineActionResolver(BuildRuleResolver buildRuleResolver) {
this.buildRuleResolver = buildRuleResolver;
}
public BuildEngineAction resolve(BuildTarget target) {
    // TODO(bobyf): change this to return an Action if the BuildTarget is of a rule
    // from RuleAnalysisComputation.
return buildRuleResolver.getRule(target);
}
}
|
Moving Beyond Politics: Diversity, Equity, Inclusion, and Respect

Too often, the words diversity, equity, inclusion, and respect are reduced to buzzwords: talking points parroted by those in power. The asks made to further these principles ultimately, and unfortunately, become politicized. A line is drawn in the sand. The Woke Left. The Entrenched Right. But issues of equity are not political; they are humanistic. As scientists, we understand that the SARS-CoV-2 virus, the causative agent of this years-long crisis, is not a political actor; it is a microbe that has been politicized and turned into a source of strife when we should have united against it. The issues of diversity, equity, inclusion, and respect are the same: they are used as talking points to further our political differences, they turn into fights, and they divide us. People become angry that they must undergo trainings to learn how to foster an environment that is diverse, equitable, inclusive, and respectful of all people.
|
# Source: Gui25Reis/Mudanca-de-base
# Base conversion: reads a number in base a, converts it to base 10, and then
# to base b by successive division. Rewritten idiomatically with loops in
# place of the unrolled d1..d15 / c1..c30 steps; the second branch in the
# original tested "a == 2 and a > 10", which can never be true, and has been
# corrected to the intended "a == 2" case.

print('Result: read starting from the first zero of the quotient')
print('Max. 15 digits')
print()

a = int(input("Initial base: "))
b = int(input("Final base: "))
print()

if a != 2:
    digits = [int(ch) for ch in input('Number, WITHOUT SPACES: ').zfill(15)]
else:
    print('Ex: 356 -> 000000000000356 -> enter as: 0 0 0 0 0 0 0 0 0 0 0 0 3 5 6')
    digits = [int(p) for p in input('Number, WITH SPACES: ').split()]
print()

# Base a -> base 10: weighted sum of the digits
s = sum(d * a**i for i, d in enumerate(reversed(digits)))
print("Base:", a, "-> Base: 10 =", s)

# Base 10 -> base b: 30 successive divisions, keeping quotients and remainders
quotients, remainders = [], []
q = s
for _ in range(30):
    quotients.append(q // b)
    remainders.append(q % b)
    q = quotients[-1]

print()
print("Quotient:", *reversed(quotients))
print("Result:  ", *reversed(remainders))
print()
if b > 10:
    print("A=10 B=11 C=12 D=13 E=14 F=15")
input()
|
William Waldegrave, Viscount Chewton
William Frederick Waldegrave, Viscount Chewton (29 June 1816 in Cardington, Bedfordshire – 8 October 1854) was a British army officer.
Waldegrave was the eldest son of Hon. William Waldegrave and was educated at Cheam School. While still at school, he served as a midshipman aboard his father's ship, HMS Seringapatam, from 1829 to 1831, and later graduated from Trinity College, Cambridge, in 1837. He then emigrated to Canada and served with the militia that put down the rebellions of 1837; he returned to Britain in 1843 and served with the British Army.
In 1846, his father inherited the earldom from the latter's nephew, and Waldegrave took the courtesy title of Viscount Chewton. That year, Chewton fought in the Battle of Sobraon; he then captained the 6th Regiment of Foot, stationed at the Cape of Good Hope, in 1847, and the Royal Scots Fusiliers in Scotland in 1848. Chewton later fought in the Battle of Alma in September 1854 but died of his wounds a few weeks later.
Family
Lord Chewton married, on 2 July 1850, Frances Bastard, daughter of Captain John Bastard, RN, of Sharpham, Devon, and they had a son, William, in 1851 and later a daughter who died in infancy. Frances, Viscountess Chewton was a Woman of the Bedchamber to Queen Victoria, and received the Order of Victoria and Albert, 3rd class. She died 11 April 1902, at Bookham lodge, Cobham, Surrey, in her 80th year, of pneumonia.
|
// Source: github.com/abourget/ledger
// Package print provides a low-level writer for ledger files.
package print
import (
"bytes"
"errors"
"fmt"
"github.com/abourget/ledger/parse"
)
// Printer formats the AST of a Ledger file into a properly formatted
// .ledger file.
type Printer struct {
tree *parse.Tree
MinimumAccountWidth int
PostingsIndent int
}
func New(tree *parse.Tree) *Printer {
return &Printer{
tree: tree,
MinimumAccountWidth: 48,
PostingsIndent: 4,
}
}
func (p *Printer) Print(buf *bytes.Buffer) error {
tree := p.tree
if tree.Root == nil {
return errors.New("parse tree is empty (Root is nil)")
}
for _, nodeIface := range tree.Root.Nodes {
var err error
fmt.Printf("MAMA %T\n", nodeIface)
switch node := nodeIface.(type) {
case *parse.XactNode:
p.writePlainXact(buf, node)
case *parse.CommentNode:
_, err = buf.WriteString(node.Comment + "\n")
case *parse.SpaceNode:
_, err = buf.WriteString(node.Space)
case *parse.DirectiveNode:
_, err = buf.WriteString(node.Raw)
case *parse.CommodityNode:
p.writeCommodity(buf, node)
default:
return fmt.Errorf("unprintable node type %T", nodeIface)
}
if err != nil {
return err
}
}
return nil
}
|
/*
* Copyright 2013 Department of Computer Science and Technology, Guangxi University
*
* Permission is hereby granted, free of charge, to any person obtaining
* a copy of this software and associated documentation files (the
* "Software"), to deal in the Software without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sublicense, and/or sell copies of the Software, and to
* permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be
* included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
* LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
* OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
* WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
package gxu.software_engineering.market.android.service;
import gxu.software_engineering.market.android.util.NetworkUtils;
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;
import gxu.software_engineering.market.android.R;
import gxu.software_engineering.market.android.provider.MarketProvider;
import gxu.software_engineering.market.android.util.C;
import gxu.software_engineering.market.android.util.Processor;
import gxu.software_engineering.market.android.util.ServiceHelper;
import android.app.IntentService;
import android.content.ContentResolver;
import android.content.ContentValues;
import android.content.Intent;
import android.os.Handler;
import android.util.Log;
import android.widget.Toast;
/**
 * Fetches data from the cloud.
*
* @author longkai(龙凯)
* @email <EMAIL>
* @since 2013-6-23
*/
public class FetchService extends IntentService {
private static final String TAG = FetchService.class.getSimpleName();
private Handler handler;
public FetchService() {
super(TAG);
}
@Override
public void onCreate() {
super.onCreate();
handler = new Handler();
}
@Override
protected void onHandleIntent(Intent intent) {
if (NetworkUtils.connected(getApplicationContext())) {
try {
ServiceHelper.pre(intent);
ServiceHelper.doing(getContentResolver(), intent);
} catch (JSONException e) {
e.printStackTrace();
handler.post(new Runnable() {
@Override
public void run() {
Toast.makeText(getApplicationContext(), R.string.resolve_data_error, Toast.LENGTH_SHORT).show();
}
});
}
} else {
handler.post(new Runnable() {
@Override
public void run() {
Toast.makeText(getApplicationContext(), R.string.network_down, Toast.LENGTH_SHORT).show();
}
});
}
}
}
|
An evaluation of the ability of chemical measurements to predict polycyclic aromatic hydrocarbon-contaminated sediment toxicity to Hyalella azteca

The present study examined the ability of three chemical estimation methods to predict toxicity and nontoxicity of polycyclic aromatic hydrocarbon (PAH)-contaminated sediment to the freshwater benthic amphipod Hyalella azteca for 192 sediment samples from 12 field sites. The first method used bulk sediment concentrations of 34 PAH compounds (PAH34) and the fraction of total organic carbon, coupled with equilibrium partitioning theory, to predict porewater concentrations (KOC method). The second method used bulk sediment PAH34 concentrations and the fractions of anthropogenic (black) carbon and natural organic carbon, coupled with literature-based black carbon–water and organic carbon–water partition coefficients, to estimate porewater concentrations (KOC-KBC method). The final method directly measured porewater concentrations (porewater method). The U.S. Environmental Protection Agency's hydrocarbon narcosis model was used to predict sediment toxicity for all three methods, using the modeled or measured porewater concentration as input. The KOC method was unable to predict nontoxicity (83% of nontoxic samples were predicted to be toxic). The KOC-KBC method was not able to predict toxicity (57% of toxic samples were predicted to be nontoxic) and, therefore, was not protective of the environment. The porewater method was able to predict toxicity (correctly predicting 100% of the toxic samples as toxic) and nontoxicity (correctly predicting 71% of the nontoxic samples as nontoxic). This analysis clearly shows that direct porewater measurement is the most accurate chemical method currently available to estimate PAH-contaminated sediment toxicity to H. azteca. Environ. Toxicol. Chem. 2010;29:1545–1550. © 2010 SETAC
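For orientation, the sketch below illustrates the standard equilibrium-partitioning relations on which KOC-type and KOC-KBC-type porewater estimates are generally based; the partition coefficients, fractions and concentrations are hypothetical placeholders, since the study's actual inputs are not given in the abstract.

from scipy.optimize import brentq

def porewater_koc(c_sed, f_oc, k_oc):
    # Linear equilibrium partitioning: C_pw = C_sed / (f_oc * K_oc)
    return c_sed / (f_oc * k_oc)

def porewater_koc_kbc(c_sed, f_oc, k_oc, f_bc, k_bc, n=0.7):
    # Dual-domain sorption, C_sed = f_oc*K_oc*C_pw + f_bc*K_bc*C_pw**n,
    # solved numerically for C_pw (a Freundlich exponent n ~ 0.7 is typical)
    residual = lambda c_pw: f_oc * k_oc * c_pw + f_bc * k_bc * c_pw**n - c_sed
    return brentq(residual, 1e-12, 1e6)

# Hypothetical pyrene-like PAH: 1000 ug/kg dry sediment, 2% OC, 0.2% BC
print(porewater_koc(1000.0, 0.02, 10**4.5))
print(porewater_koc_kbc(1000.0, 0.02, 10**4.5, 0.002, 10**6.0))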
|
There has been a place of Christian worship at the sea edge in Aberdaron since the 5th century.
This walk begins and ends in the ancient village of Aberdaron, which is about as far west as you can get on the Welsh mainland.
Return to the Aberdaron walk.
|
Optimization of remote seeding source and gain in minimizing backreflection effects on colorless ONU in a 10 Gbps bidirectional WDM PON system

This paper investigates the influence of backreflection on the upstream signal in a 10 Gbps bidirectional WDM PON system running on a single fiber. Strong backreflected light may be produced by Rayleigh backscattering and by Fresnel and Brillouin backscattering, ultimately causing a poor bit error rate. The SOA provides gain that helps to overcome the network losses, but this gain should be maintained at an appropriate level to avoid further deterioration.
|
// Source: DenitsaKostova/cf-mta-deploy-service
package com.sap.cloud.lm.sl.cf.process.steps;
import java.util.ArrayList;
import java.util.List;
import org.activiti.engine.delegate.DelegateExecution;
import org.cloudfoundry.client.lib.CloudFoundryException;
import org.cloudfoundry.client.lib.CloudFoundryOperations;
import org.cloudfoundry.client.lib.domain.CloudDomain;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;
import com.sap.activiti.common.ExecutionStatus;
import com.sap.cloud.lm.sl.cf.process.message.Messages;
import com.sap.cloud.lm.sl.common.SLException;
@Component("addDomainsStep")
@Scope(BeanDefinition.SCOPE_PROTOTYPE)
public class AddDomainsStep extends AbstractProcessStep {
@Override
protected ExecutionStatus executeStepInternal(DelegateExecution context) throws SLException {
getStepLogger().logActivitiTask();
try {
getStepLogger().info(Messages.ADDING_DOMAINS);
CloudFoundryOperations client = getCloudFoundryClient(context);
List<CloudDomain> existingDomains = client.getDomainsForOrg();
List<String> existingDomainNames = getDomainNames(existingDomains);
getStepLogger().debug("Existing domains: " + existingDomainNames);
List<String> customDomains = StepsUtil.getCustomDomains(context);
getStepLogger().debug("Custom domains: " + customDomains);
addDomains(client, customDomains, existingDomainNames);
getStepLogger().debug(Messages.DOMAINS_ADDED);
return ExecutionStatus.SUCCESS;
} catch (CloudFoundryException cfe) {
SLException e = StepsUtil.createException(cfe);
getStepLogger().error(e, Messages.ERROR_ADDING_DOMAINS);
throw e;
} catch (SLException e) {
getStepLogger().error(e, Messages.ERROR_ADDING_DOMAINS);
throw e;
}
}
private List<String> getDomainNames(List<CloudDomain> domains) {
List<String> domainNames = new ArrayList<>();
for (CloudDomain domain : domains) {
domainNames.add(domain.getName());
}
return domainNames;
}
private void addDomains(CloudFoundryOperations client, List<String> domainNames, List<String> existingDomainNames) {
for (String domainName : domainNames) {
addDomain(client, domainName, existingDomainNames);
}
}
private void addDomain(CloudFoundryOperations client, String domainName, List<String> existingDomainNames) {
if (existingDomainNames.contains(domainName)) {
getStepLogger().debug(Messages.DOMAIN_ALREADY_EXISTS, domainName);
} else {
getStepLogger().info(Messages.ADDING_DOMAIN, domainName);
client.addDomain(domainName);
}
}
}
|
Differential efficacy of gonadotropin-releasing hormone (GnRH) agonist treatment on pedunculated and degenerated myomas: a retrospective study of 630 women

This study was designed to compare the anatomical location of myomas (submucous, intramural, subserous, or cervical), whether pedunculated or non-pedunculated, and whether degenerated or undegenerated, relating these findings to myoma volume reduction in women treated with a gonadotropin-releasing hormone agonist (GnRHa). Our retrospective study group included 630 patients with symptoms attributed to fibroids. They were treated with a single GnRH agonist for 20 weeks, and the size of the myoma was monitored by magnetic resonance imaging and/or ultrasonographic scan. During the 20-week treatment, submucous, intramural, or subserous fibroids showed an overall reduction of 63% (P < 0.01), with little variation between these types. No reduction in size was seen in cases of pedunculated, degenerated, or cervical myomas. These data suggest that GnRH agonist therapy might be used primarily for non-pedunculated and undegenerated myomas.
|
// Source: LinMAD/SimpleGames
// Rogesci.cpp : This file contains the 'main' function. Program execution begins and ends there.
//
#include <Windows.h>
#include <iostream>
#include <chrono>
#include "Configuration.h"
#include "Player.h"
#include "Map.h"
#include "Render.h"
#include "Prefab.h"
#include "CollisionHandler.h"
using namespace std;
using namespace chrono;
int main()
{
	// Screen buffer
wchar_t* screen = new wchar_t[screenResolution];
HANDLE hConsole = CreateConsoleScreenBuffer(GENERIC_READ | GENERIC_WRITE, 0, NULL, CONSOLE_TEXTMODE_BUFFER, NULL);
SetConsoleActiveScreenBuffer(hConsole);
DWORD dwBytesWritten = 0;
wstring gameLevel = GenerateGameLevel();
// Sync to system time
auto tp1 = system_clock::now();
auto tp2 = system_clock::now();
// Game loop
while (true) {
for (int x = 0; x < screenWidth; x++) {
tp2 = system_clock::now();
duration<float> systemElapsedTime = tp2 - tp1;
tp1 = tp2;
			// Normalize game speed to system time
float elapsedTime = systemElapsedTime.count();
HandlePlayerControlls(elapsedTime);
float rayAngle = (playerAngle - playerFieldOfView / 2.0f) + ((float) x / (float) screenWidth) * playerFieldOfView;
float rayDistanceToObject = 0.0f;
bool isRayHitObject = false;
// Vision
float eyeX = sinf(rayAngle);
float eyeY = cosf(rayAngle);
// TODO Refactor to collision checker ?
while (!isRayHitObject && rayDistanceToObject < mapMaxDepth) {
rayDistanceToObject += 0.1f;
int testX = (int)(playerX + eyeX * rayDistanceToObject);
int testY = (int)(playerY + eyeY * rayDistanceToObject);
if (testX < 0 || testX >= mapWidth || testY < 0 || testY >= mapHeight) {
isRayHitObject = true;
rayDistanceToObject = mapMaxDepth;
} else {
if (gameLevel[testY * mapWidth + testX] == mapWallSymbol) {
isRayHitObject = true;
}
}
}
for (int y = 0; y < screenHeight; y++) {
UpdateScreen(screen, x, y, (float) rayDistanceToObject);
}
}
screen[screenResolution - 1] = '\0';
WriteConsoleOutputCharacter(hConsole, screen, screenResolution, { 0, 0 }, &dwBytesWritten);
}
return 0;
}
|
import * as React from 'react';
import MainLayout from '@components/layouts/main-layout';
import SEO from '@components/seo';
const NotFoundPage = () => (
<MainLayout>
<SEO title="404: Not found" />
<h1>NOT FOUND</h1>
<p>You just hit a route that doesn't exist... the sadness.</p>
</MainLayout>
);
export default NotFoundPage;
|
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.g3d.Model;
import com.badlogic.gdx.graphics.g3d.ModelInstance;
import com.badlogic.gdx.graphics.g3d.utils.AnimationController;
import com.badlogic.gdx.math.Vector3;
import com.badlogic.gdx.physics.bullet.collision.btBoxShape;
import com.badlogic.gdx.physics.bullet.collision.btCollisionObject;
import com.badlogic.gdx.physics.bullet.collision.btCollisionShape;
import com.badlogic.gdx.utils.Timer;
import com.badlogic.gdx.utils.Timer.Task;
/**
 * Contains all the spider-related information
*/
public class Spider {
private Model spider;
private ModelInstance spiderInstance;
private AnimationController animationController;
private btCollisionShape spiderShape;
private btCollisionObject spiderCollisionObject;
private float spiderX;
private float spiderY;
private float spiderZ;
private enum State {
TRACKING, STUNNED, DEAD, ATTACKING, MOTIONLESS
};
private State state;
private int grenadeDetonated;
private static float spiderSpeed;
private float currentIntercept;
private int xIncrease;
private int xDecrease;
private int zIncrease;
private int zDecrease;
private boolean dead;
private boolean caughtPlayer;
private CollisionDetection detect;
// This boolean is required to prevent methods from being called more than once
private boolean gotCalled;
Spider(int x, int z) {
spider = Map.getAssetManager().get("Spider/Spider_2.g3db", Model.class);
spiderInstance = new ModelInstance(spider);
spiderShape = new btBoxShape(new Vector3(100f, 100f, 100f));
spiderCollisionObject = new btCollisionObject();
spiderCollisionObject.setCollisionShape(spiderShape);
spiderCollisionObject.setWorldTransform(spiderInstance.transform);
detect = new CollisionDetection();
spiderX = x;
spiderY = -122f;
spiderZ = z;
spiderInstance.transform.setToTranslation(spiderX, spiderY, spiderZ);
spiderInstance.transform.rotate(Vector3.Y, 65);
spiderSpeed = 13f;
state = State.TRACKING;
xIncrease = 0;
xDecrease = 0;
zIncrease = 0;
zDecrease = 0;
dead = false;
caughtPlayer = false;
gotCalled = false;
grenadeDetonated = -1;
animationController = new AnimationController(spiderInstance);
animationController.setAnimation("run_ani_vor", -1);
}
public ModelInstance getSpiderInstance() {
return spiderInstance;
}
public Model getSpiderModel() {
return spider;
}
public AnimationController getAnimationController() {
return animationController;
}
public btCollisionObject getSpiderCollisionObject() {
return spiderCollisionObject;
}
public btCollisionShape getSpiderShape() {
return spiderShape;
}
public float getSpiderX() {
return spiderX;
}
public float getSpiderY() {
return spiderY;
}
public float getSpiderZ() {
return spiderZ;
}
public boolean checkMethodCall() {
return gotCalled;
}
public void gotCalled() {
gotCalled = true;
}
public boolean getIsDead() {
return dead;
}
public void killed() {
state = State.DEAD;
dead = true;
}
public void becomesMotionless() {
state = State.MOTIONLESS;
}
public boolean checkSpiderCaughtPlayer() {
return caughtPlayer;
}
public State getSpiderState() {
return state;
}
public CollisionDetection getDetector() {
return detect;
}
/**
	 * Increases the spider's speed every time the player picks up a key
*/
public static void increaseSpiderSpeed() {
spiderSpeed++;
}
public void updateAnimation() {
animationController.update(Gdx.graphics.getDeltaTime());
}
/**
	 * Calculates the intercept for the path the spider should take every time it
* turns
*
* @return float
*/
private float calculateInterceptZ() {
return (float) (spiderZ - (0.42766 * spiderX));
}
private float calculateInterceptX() {
return (float) (spiderX + (0.42766 * spiderZ));
}
/**
* Decreases the spider's x-coordinate based on the slope
*/
private void xDecrease() {
if (xDecrease == 1) // Checks if the method was called once
currentIntercept = calculateInterceptZ();
spiderX = spiderX - spiderSpeed;
spiderZ = (float) (0.42766 * spiderX + currentIntercept);
spiderInstance.transform.setToTranslation(spiderX, spiderY, spiderZ);
spiderInstance.transform.rotate(Vector3.Y, 65);
spiderCollisionObject.setWorldTransform(spiderInstance.transform);
}
/**
* Increases the spider's x-coordinate based on the slope
*/
private void xIncrease() {
if (xIncrease == 1)
currentIntercept = calculateInterceptZ();
spiderX = spiderX + spiderSpeed;
spiderZ = (float) (0.42766 * spiderX + currentIntercept);
spiderInstance.transform.setToTranslation(spiderX, spiderY, spiderZ);
spiderInstance.transform.rotate(Vector3.Y, -110);
spiderCollisionObject.setWorldTransform(spiderInstance.transform);
}
/**
* Increases the spider's z-coordinate based on the slope
*/
private void zIncrease() {
if (zIncrease == 1)
currentIntercept = calculateInterceptX();
spiderZ = spiderZ + spiderSpeed;
spiderX = (float) (-0.42766 * spiderZ + currentIntercept);
spiderInstance.transform.setToTranslation(spiderX, spiderY, spiderZ);
spiderInstance.transform.rotate(Vector3.Y, -210);
spiderCollisionObject.setWorldTransform(spiderInstance.transform);
}
/**
* Decreases the spider's z-coordinate based on the slope
*/
private void zDecrease() {
if (zDecrease == 1)
currentIntercept = calculateInterceptX();
spiderZ = spiderZ - spiderSpeed;
spiderX = (float) (-0.42766 * spiderZ + currentIntercept);
spiderInstance.transform.setToTranslation(spiderX, spiderY, spiderZ);
spiderInstance.transform.rotate(Vector3.Y, -20);
spiderCollisionObject.setWorldTransform(spiderInstance.transform);
}
/**
* Based on the spider's proximity to the player and their mines, it will call
* the appropriate method
*/
public void track(Player player) {
if (detect.checkCollision(spiderCollisionObject, player.getPlayerCollisionObject()))
state = State.ATTACKING;
for (int i = 0; i < player.getMines().size; i++) {
if (detect.checkCollision(spiderCollisionObject, player.getMines().get(i).getCollisionObjects())) {
state = State.STUNNED;
grenadeDetonated = i;
gotCalled = true;
break;
}
}
switch (state) {
case ATTACKING:
attack(player);
break;
case STUNNED:
hitsMine(player, grenadeDetonated);
break;
case TRACKING:
if (player.getPlayerZ() + 80 < spiderZ && xIncrease == 0) {
xDecrease = 0;
xIncrease = 0;
zDecrease++;
zIncrease = 0;
zDecrease();
}
else if (player.getPlayerX() + 80 < spiderX) {
xDecrease++;
xIncrease = 0;
zDecrease = 0;
zIncrease = 0;
xDecrease();
}
else if (player.getPlayerZ() - 80 > spiderZ) {
xDecrease = 0;
xIncrease = 0;
zDecrease = 0;
zIncrease++;
zIncrease();
}
else if (player.getPlayerX() - 80 > spiderX) {
xDecrease = 0;
xIncrease++;
zDecrease = 0;
zIncrease = 0;
xIncrease();
}
else
xIncrease();
break;
case DEAD:
dies();
break;
case MOTIONLESS:
animationController.setAnimation("run_ani_vor", 1);
break;
}
}
/**
* Makes the spider attack if it collides with player
*
* @param player
*/
private void attack(Player player) {
player.preventPlayerMovement();
animationController.setAnimation("Attack", -1);
player.getCamera().position.set(player.getPlayerX(), -80f, player.getPlayerZ());
player.getCamera().lookAt(spiderX, 7, spiderZ);
Timer.schedule(new Task() {
@Override
public void run() {
caughtPlayer = true;
}
}, 4);
}
/**
* Stuns the spider if it collides with a mine
*
* @param player
* @param grenadeDetonated
*/
	private void hitsMine(Player player, int grenadeDetonated) {
if (gotCalled) {
player.mineDetonated();
animationController.setAnimation("die", 1, 1, new AnimationController.AnimationListener() {
@Override
public void onEnd(AnimationController.AnimationDesc animation) {
state = State.TRACKING;
animationController.setAnimation("run_ani_vor", -1);
}
@Override
public void onLoop(AnimationController.AnimationDesc animation) {
}
});
player.getMines().removeIndex(grenadeDetonated);
gotCalled = false;
}
}
/**
* Called when the player wins the game
*/
public void dies() {
state = State.DEAD;
spiderInstance.transform.setToTranslation(1207, spiderY, 1007);
animationController.setAnimation("die", 1, 0.5f, new AnimationController.AnimationListener() {
@Override
public void onEnd(AnimationController.AnimationDesc animation) {
}
@Override
public void onLoop(AnimationController.AnimationDesc animation) {
}
});
}
}
|
Modeling phase transitions during the crystallization of a multicomponent fat under shear

The crystallization of multicomponent systems involves several competing physicochemical processes that depend on composition, temperature profiles, and the shear rates applied. Research on these mechanisms is necessary in order to understand how natural materials form crystalline structures. Palm oil was crystallized in a Couette cell at 17 and 22 degrees C under shear rates ranging from 0 to 2880 s(-1) at a synchrotron beamline. Two-dimensional x-ray diffraction patterns were captured at short time intervals during the crystallization process. Radial analysis of these patterns showed shear-induced acceleration of the phase transition from alpha to beta('). This effect can be explained by a simple model in which the alpha phase nucleates from the melt, a process that occurs independently of shear rate. The alpha phase grows according to an Avrami growth model. The beta(') phase nucleates on the alpha crystallites, with the amount of beta(') crystal formation dependent on the rate of transformation of alpha to beta(') as well as on the growth rate of the beta(') phase from the melt. The shear-induced acceleration of the alpha to beta(') phase transition occurs because, under shear, the alpha nuclei form many distinct small crystallites that can easily transform to the beta(') form, while at lower shear rates the alpha nuclei tend to aggregate, thus retarding the nucleation of the beta(') crystals. The displacement of the diffraction peak positions revealed that an increased shear rate promotes the crystallization of the higher-melting fraction, affecting the composition of the crystallites. Crystalline orientation was observed only at shear rates above 180 s(-1) at 17 degrees C and 720 s(-1) at 22 degrees C.
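For reference, the Avrami growth model invoked above is conventionally written as X(t) = 1 - exp(-k*t^n) for the crystallized fraction; the short sketch below evaluates that expression with placeholder rate constants, not the parameters fitted in this study.

import numpy as np

def avrami_fraction(t, k, n):
    # Avrami crystallized fraction: X(t) = 1 - exp(-k * t**n)
    return 1.0 - np.exp(-k * np.power(t, n))

t = np.linspace(0.0, 60.0, 7)             # time in minutes, illustrative
print(avrami_fraction(t, k=1e-3, n=2.0))  # k and n are placeholders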
|
Analysis of Energy Consumption in the Production Chain of Heat from Pellet

Wood pellets, as a renewable energy source, are an alternative to fossil fuels. Their use contributes to the quantitative conservation of traditional energy resources, which are non-renewable and single-use. The combustion of pellets has a neutral effect on the concentration of carbon dioxide in the atmosphere. An important environmental aspect of the Life Cycle Assessment of pellets is the energy consumption over their life cycle. The results of this assessment can be helpful in improving environmental management in the companies related to the pellet life cycle. They can also be used in the comparative assessment of pellets and other energy carriers in terms of the environmental load resulting from the energy consumption over the entire life cycle of the analyzed fuels. The work aimed at analyzing and assessing the energy efficiency of using wood pellets, taking into account their life cycle. In order to achieve the purpose of the work, the energy efficiency index, calculated as the quotient of the energy benefits and the energy expenditure incurred at the individual stages of the pellet life cycle, was used. The results of the analysis indicate that, among the studied stages of the pellet life cycle, the highest energy consumption occurs in the pellet use and production phases. The research shows that the energy benefits, expressed as the amount of energy emitted in the form of heat in the pellet combustion process, outweigh the energy expenditure, i.e., the sum of the energy spent in the successive stages of the pellet life cycle. The obtained results indicate a positive energy balance: the use of pellets for heating purposes allows the recovery of the energy spent throughout their entire life cycle.

INTRODUCTION

Wood pellets are a processed solid biofuel, i.e., a renewable energy carrier. The basic materials for pellet production are sawdust and wood chips from sawmills and woodworking companies, as well as waste and residues from forestry. Processing the biomass into pellets improves its properties: in drying, the moisture content of the material decreases, and in pressing, its energy density increases. The low moisture content of pellets allows even combustion, which reduces the emission of pollutants from the combustion process. The use of wood pellets as a substitute for fossil fuels has environmental benefits in terms of CO2 neutrality, meaning that the biofuel's combustion does not intensify the natural greenhouse effect. In addition, it helps to meet the requirements of sustainable development in the area of energy production. An important environmental aspect of the LCA (Life Cycle Assessment) of wood pellets is the energy consumption in the pellet's life cycle. The results of this assessment can be helpful in improving environmental management in the companies related to the pellet life cycle (including pellet production plants, transport companies and sawmills) in order to limit the adverse environmental impact of these entities. They can also be used in the comparative assessment of pellets and other energy carriers in terms of the environmental load resulting from the energy consumption over the entire life cycle of the analyzed fuels.
The results of this assessment may form the basis for decisions about promoting and co-financing the development of energy-carrier use: the solutions that require the lowest energy expenditure should be supported. The purpose of the work was to determine the energy input required to use wood pellets for energy purposes (production of usable heat/electricity), taking into account the stages of the pellet life cycle, and to compare this energy input with the amount of chemical energy contained in the pellets.

PRODUCTION AND CONSUMPTION OF WOOD PELLETS IN THE EUROPEAN UNION

The consumption of wood pellets in the European Union (EU) in 2018 amounted to 27350 thousand t, with production at 16850 thousand t. Selected data regarding the wood pellet market in some EU countries are presented in Figure 1. A significant proportion of wood pellet use in the EU is individual heating of residential buildings (40% of consumption in 2017); pellets are also used in industrial installations for the production of electricity and heat. The goals of pellet use differ between European countries. For example, in Germany, Italy and Austria, wood pellets, according to the GAIN report, are usually used for heating purposes in residential buildings and in medium-power industrial boilers, whereas in the United Kingdom, the Netherlands and Belgium the industrial use of wood pellets for electricity production predominates. In Poland, the production of wood pellets in 2018 amounted to 1100 thousand t, with consumption at 350 thousand t. In comparison to the annual demand for pellets in Italy, approximately 3800 thousand t, the pellet consumption in Poland is relatively low. The level of pellet use for heating residential houses in Poland may be increased by financial support (from national and EU funds) for households installing pellet heating systems. A positive effect in this area could also be ensured by introducing (e.g., following the German model) an obligation to use renewable energy, at a defined level, for the heating and cooling of new buildings. Another element of a program to increase the demand for pellets for heating in domestic housing could be an economic instrument in the form of a CO2 tax. Similar incentives are needed in industry, especially in power plants, to increase the energy use of solid wood and agricultural biomass. The industrial use of biomass requires investments in biomass power plants and combined heat and power plants, or in the adaptation of heating systems in the existing biomass combustion/co-combustion plants. In addition to biomass in the form of wood pellets, wood chips and briquettes are also used in the EU; the consumption of briquettes and wood chips in the EU is estimated at 15–20 million tonnes.
The specific energy demand (C_ej) for the use of pellets is defined as follows:

$$C_{ej} = C_{ej,f} + C_{ej,tr1} + C_{ej,s} + C_{ej,tr2} + C_{ej,p} + C_{ej,tr3} + C_{ej,c} + C_{ej,w} \quad (1)$$

where: C_ej,f - energy consumption at the forest breeding stage; C_ej,tr1 - energy consumption for transporting round wood to the sawmill; C_ej,s - energy consumption in the woodworking process in the sawmill; C_ej,tr2 - energy consumption for transporting the wood materials to the pellet production plant; C_ej,p - energy consumption in the pellet production process; C_ej,tr3 - energy consumption for the pellet transport to the energy processing site; C_ej,c - energy consumption in the pellet combustion process; C_ej,w - energy consumption for waste management from the pellet combustion process.

The sources of energy in the pellet life cycle are: electricity, liquid and gaseous fuels obtained from fossil fuels, and biofuels. The value of the parameter (C_ej) allows for the calculation of the energy efficiency index of the use of pellets (E_e) using the formula:

$$E_e = \frac{B_e}{C_e} = \frac{Q_j}{C_{ej}} \quad (2)$$

where: B_e - energy benefits from the use of pellets (of a given mass); C_e - energy expenditure on pellet use (of the same mass); Q_j - calorific value of the pellet; C_ej - as in Eq. (1).

The energy benefit (B_e) is the potential amount of chemical energy contained in pellets, which is determined on the basis of the pellet calorific value. A value of the (E_e) index equal to one marks the efficiency limit. The solutions for which the index value (E_e) is higher than one are effective, and those for which it is lower than one are inefficient. The absolute value of the difference between the indicator value (E_e) obtained from the calculations and the efficiency limit determines the level of effectiveness or inefficiency. The analyzed solutions are the more effective, the higher the values of the index (E_e) compared to the value of the limit efficiency. On the other hand, they are the less effective, the lower the values of the index (E_e) are below the efficiency limit. In order to achieve the aim of the present work, data from a review of environmental reports, statistical reports, LCA databases and scientific publications was used.

ASSUMPTIONS AND RESEARCH RESULTS

It was assumed that 6.5 m3 of sawdust are needed to produce 1 t of pellet. They are produced in sawmills in the process of round wood processing. It was considered that the main material for the production of pellet is softwood (Norway spruce or Scots pine). As a result of sawmill operations, 0.441 m3 of sawdust is generated from 1 m3 of round wood. Obtaining 1 t of pellets therefore requires processing 14.74 m3 of wood. The sawdust parameters adopted for the purpose of calculations were: bulk density 240 kg/m3, humidity 50%. The calorific value of the pellet, equal to 17.4 MJ/kg, indicates that 0.0575 kg of pellet is needed to obtain 1 MJ of chemical energy. The energy consumption at the stage of obtaining the wood material for pellet production depends on the energy consumption of subsequent phases of tree cultivation for the needs of the wood industry. Tree breeding includes the preparation of seedlings, soil preparation for trees, tree planting, care works (removal of undesirable plants, trimming trees, fertilization or irrigation), felling, and transport of trees from the felling sites to a storage yard located on the edge of the forest, from where they will then be transported to the wood products factories. At this stage of the pellet life cycle, electricity (for growing seedlings, protective measures), diesel, gasoline and kerosene (for tractors, trucks, agricultural aircraft) are consumed.
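As a worked illustration of Eq. (2), using the calorific value above and the total life-cycle expenditure implied by the index reported in the Conclusions (this numeric check is ours, not an additional measurement from the study):

$$E_e = \frac{Q_j}{C_{ej}} \approx \frac{17.4\ \mathrm{MJ/fu}}{14.3\ \mathrm{MJ/fu}} \approx 1.217$$

That is, every megajoule spent across the life cycle corresponds to about 1.22 MJ of chemical energy contained in the pellet.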
The consumption of energy carriers by machinery and equipment depends on their power, the intensity of their use, and compliance with the emission standards. According to research, the total energy consumption in forest cultivation is 155.4 MJ/m3 of wood (including 9.17·10^-5 MWh of electricity), which corresponds to 2.290 MJ/fu. When transporting wood from the forest to the sawmill, fuels are used to drive motor vehicles and vehicle combinations. The amount of transport fuel consumed depends on the transport distance, the design of the vehicle engines that meet the specified exhaust emission standard (Euro), the type of road, and the vehicle load. It was assumed that lorries with trailers, with a permissible total weight of 40 t and a load capacity of 26 t, will be used for road transport of round wood from the forest. The calculation of energy consumption in round wood transport used the indicators of specific energy consumption published by the Swedish organization NTM (Network for Transport Measures), shown in Table 1 (vehicle variants: 1) load capacity 26 t; 2) load capacity 11 t). These are the baseline levels of energy consumption that characterize road transport in general in the EU. The energy consumption indicator (Table 1) applies to the entire fuel life cycle (diesel) - it covers the extraction, production, distribution and combustion of fuel in a vehicle. The calculations took into account the bulk density of freshly cut trees with a value of 775 kg/m3, their moisture content of 70%, and a transport distance of 80 km. Table 2 presents the results of the consumption of energy carriers for transporting round wood to the sawmill. The technological operations carried out at the sawmill (including cutting, machining, milling, drying) are aimed at producing commercial wood products. The sawmill consumes heat (in the drying of wood products) and electricity (to power machines and technical devices). By-products with high energy potential, such as sawdust, bark, chips and wood chips, are also produced during woodworking. For further considerations, it was assumed that only sawdust and bark will be used in the pellet production process - sawdust as a wood material, and bark as a biofuel in the sawdust drying operation and for the production of steam used for pellet pelleting. The bark consumption was assumed to be 0.001 m3/fu, and the sawdust consumption 0.0065 m3/fu. Concerning the purpose of the work, it is necessary to determine the energy consumption in the sawmill attributable to sawdust and bark. Table 3 summarizes the product quantities and energy consumption at the entry and exit of the sawmill production process, respectively. The values in Table 3 were determined on the basis of data from sawmill installations. The energy consumption is expressed per unit volume of sawdust obtained and per unit mass of pellet. Unit indicators of the quantity of individual sawmill products are given in m3/m3 of sawdust. The quantities of individual sawmill products were also illustrated by means of a unit consumption indicator per ton of pellet. The distribution of energy consumption between the sawmill products was based on the economic value of these products and literature data. The energy consumption allocated to the wood products (commercial and by-products) from the sawmill is shown in Table 4. The data contained in Tables 3 and 4 show that, for bark at 0.001 m3/fu, 0.008 kWh/fu of electricity, 0.212 MJ/fu of non-renewable energy carriers, and 0.098 MJ/fu of biofuels are used.
The total allocation of energy consumption for sawdust and bark is: electricity consumption 0.057 kWh/fu, consumption of non-renewable energy carriers 1.474 MJ/fu, and renewable energy consumption 0.679 MJ/fu. The total energy consumption allocated to sawdust and bark is 2.358 MJ/fu. The energy consumption for transporting the wood materials from the sawmill to the pellet production plant is directly proportional to the transport performance of the means of transport. It was assumed that sawdust and bark will be brought into the pellet production plant, with 6.5 times more sawdust by volume than bark, unit consumption of sawdust and bark of 6.5 m3/t pellet and 1 m3/t pellet, respectively, volumetric densities of sawdust 0.24 t/m3 and bark 0.32 t/m3, a transport distance of 100 km, and an energy consumption indicator for heavy goods vehicles of 0.673 MJ/(t·km) (Table 1). Considering the data above, the energy consumption for transporting the wood materials from the sawmill to the pellet production plant is 0.1265 MJ/fu. In the pellet production plant, the wood material (sawdust) is processed in technological operations including drying, milling and granulating. In turn, the produced pellets are subjected to cooling. Wet sawdust contains 50-55% water, and the required water content in the production wood material is at most 10%. In order to meet this requirement, the wood material is dried. Typical dryers include tumble driers with direct heating of the wood material. The drying agent for sawdust is a stream of hot air with flue gases. The drying gases often come from the combustion of biomass in the form of bark and wood chips. In addition, tumble dryers with indirect material heating are employed, which use the heat from air heated by a heat exchanger or steam. In modern production lines, belt driers are used, in which hot air is the heating medium. In the belt dryer of the Stelmet S.A. production plant (one of the largest pellet plants in Poland), the drying temperature is 100°C, which prevents combustion of the wood material and destruction of the binder. The aim of milling is to obtain the required degree of grinding and size homogeneity of the wood material (wood particles) for pellets. Hammer mills are used for this operation. Properly dried and homogeneous wood material is subject to granulation in presses. Before entering the matrices, the wood material is mixed with 1-2% water in the form of water vapor; as a result, it softens and warms up to ca. 70°C. Such conditions favor the release of lignins and the joining of wood particles into granules. Most often, ring matrices are used in the granulation technique; the matrices can be rotary or flat. The wood material is pressed by rollers through the matrix holes, and then the pieces of granulated material are cut off with knives. Due to friction, when the wood mass is pressed through the matrix, its temperature increases (up to 150°C). A properly selected pressure exerted by the press rollers ensures the correct course of the pressing process and the durability of the pellet. Cooling the produced pellet to ambient temperature is aimed at removing the moisture from it (appearing in the pelleting phase) and increasing the durability of the pellet. This translates into a reduction in the amount of dust generated during the transport and storage of the cooled pellets.
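The sawmill-to-plant transport figure above (0.1265 MJ/fu) can be reproduced directly from the stated assumptions (a worked cross-check, not new data):

$$(6.5 \cdot 0.24 + 1.0 \cdot 0.32)\ \mathrm{t/t\ pellet} \times 100\ \mathrm{km} \times 0.673\ \mathrm{MJ/(t \cdot km)} = 126.5\ \mathrm{MJ/t} = 0.1265\ \mathrm{MJ/fu}$$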
It is most often implemented in countercurrent coolers, in which the stream of cooling air flows in the direction opposite to the pellet movement. The pellets cooled to the required temperature are screened to remove small uncompressed wood particles. These particles have an adverse effect on the quality of the pellet, because they absorb water, which promotes the development of fungi and an increase in the temperature of the stored pellet. In addition, they increase the emission of nitrogen oxides from the pellet combustion process. The sieved fraction is recycled to the pellet production process. In order to carry out the pellet production process, cyclone separators are also needed; they are used to clean both the gas drying the pellet wood material and the pellet cooling air. The produced pellets are subject to packaging and storage. In internal pellet transport, mainly forklifts with combustion or electric drive are used. Electricity is used to power the machines and devices needed in the pellet production process. Specialized machines and devices with electric drive are used, among others, during drying, milling and transporting wood materials, as well as granulating, cooling and transporting pellets. Energy carriers in the form of fossil fuels are consumed by means of transport with combustion drive. The enterprises producing pellets differ in production capacity; e.g. Stelmet S.A. reaches a production capacity of 144000 t/year. Table 5 shows the consumption of wood materials, electricity and diesel oil in a company producing pellets with a capacity of 90000 t/year. In addition, the value of the index of unit consumption of materials and energy per ton of pellets produced is given. Coniferous sawdust is used as the wood material and bark as the biofuel for production purposes in this company. The technological line includes a drum dryer, a hammer mill, a granulating press with an annular matrix, a cold room for pellet cooling, a pellet screen, and cyclone separators for cleaning sawdust drying gases and pellet cooling air. Diesel forklifts meeting the Euro 2 and Euro 3 emission standards are used in reloading works in warehouse halls. Assuming a calorific value of 8.2 MJ/kg and a bulk density of 320 kg/m3 for the bark, the bark consumption index in energy units was calculated at 2.624 MJ/fu. For the data: calorific value of diesel oil 36 MJ/dm3, oil density 0.84 kg/dm3, the diesel oil consumption in the pellet producing company was calculated at 0.0324 MJ/fu. The electricity consumption is 0.583 MJ/fu. The consumption of energy carriers and electricity in the pellet production process totals 3.24 MJ/fu. The energy consumption in the transport of pellets from the pellet plant to the place of its use was determined based on the assumption of road transport using trucks with a trailer, with the energy consumption calculated according to the formula:

$$C_{ej,tr} = W_{ze} \cdot P \cdot m \quad (3)$$

where: W_ze - energy consumption indicator for the means of transport, MJ/(t·km); P - transport distance, km; m - transported mass per functional unit, t/fu.

For the assumed transport distance and the value of the (W_ze) indicator for the full load of vehicles (Table 1), the energy consumption for the needs of pellet transport to its users is at the level of 0.111 MJ/fu. In the use phase, the pellets are burned to produce heat - a secondary energy carrier. Heat is used to meet the thermal needs of buildings, both in terms of heating water and rooms. In addition, heat can be converted into electricity. The places where pellets are used can be divided, depending on the power of the pellet burning devices, into large facilities (e.g.
heat and power plants with a thermal power exceeding 2 MW), medium facilities (e.g. heating plants with a thermal power from 50 kW to 2 MW) and small ones (detached houses and other facilities with a thermal power below 50 kW). The degree of utilization of the heating value of pellets depends on the thermal efficiency of the heating devices. Modern pellet boilers have a quite high thermal efficiency (approx. 85-91%) under nominal operating conditions. Under real boiler operation conditions, when the nominal power is not fully used, the boiler efficiency is lower. Heat transfer losses also have a significant impact on the thermal efficiency of municipal heating networks. The literature data show that the thermal efficiency of municipal heating networks is around 70%, and in less populated areas even below this value. The demand for electricity at the stage of pellet combustion results from the power consumption of the heating installation, in particular through its components such as the electronic ignition system, the pellet feeder, the fan supplying air for combustion, and the control devices (optimizing the combustion process) used to control boiler operating parameters. Individual boiler designs differ in the level of electricity consumption. The data below regarding the use of pellets in small combustion plants come from simulations of the pellet combustion process using the GEMIS 4.5 software (Global Emissions Model for Integrated Systems, edition 4.5). The simulation parameters were selected so that the simulation results correspond to the combustion process in a typical 11 kW pellet boiler with a nominal thermal efficiency of 78%. According to the simulation results, the electricity consumption reaches 0.05 kWh/fu (0.18 MJ/fu), and the value of the unit pellet consumption indicator is 299.336 g/kWh of heat. Comparison of the value of this indicator with the calorific value of the pellet, 17.4 MJ/kg (206.897 g/kWh), shows that the boiler thermal efficiency for the simulation conditions is 69.1%. This means that 30.9% of the energy is lost when burning a unit mass of pellets. The value of the lost energy is 5.38 MJ/fu. It remains in the ash in the form of unburned carbon, while in the form of heat it escapes into both the waste gases and the ash. Overall, in the pellet use phase, the energy consumption is 5.56 MJ/fu. This value constitutes 31.95% of the calorific value of the pellet, and 46.2% of the amount of heat released in the boiler. As a result of burning pellets, waste is generated: combustion ash accumulating in the internal container in the boiler and fly ash retained in the waste gas purification devices. A certain amount of ash also comes from the cleaning of the combustion chamber and the pipes that remove gases from the combustion process. The amount and chemical composition of ash from pellet combustion depend on the type of wood materials used for pellet production. The ash content in spruce with bark is 0.6% DM (dry matter). Therefore, pellets from coniferous sawdust are characterized by a low ash content, approx. 0.5% by mass (approx. 5 kg of ash from 1 ton of burned pellet). The composition of ash may be affected by the contamination of wood materials both in the pellet production process and during their transport or storage. The ash left after burning pure biofuel does not contain harmful components. An increased amount of ash and the content of undesirable substances in it may indicate an increase in the pollution of the combusted biomass.
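The use-phase figures above can be reproduced directly from the quoted indicators (a worked check of the simulation numbers):

$$\eta = \frac{206.897\ \mathrm{g/kWh}}{299.336\ \mathrm{g/kWh}} = 0.691, \qquad E_{lost} = (1 - 0.691) \times 17.4\ \mathrm{MJ/fu} \approx 5.38\ \mathrm{MJ/fu}$$

$$C_{ej,c} = 5.38\ \mathrm{MJ/fu} + 0.18\ \mathrm{MJ/fu} = 5.56\ \mathrm{MJ/fu}$$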
The chemical composition of ash from wood pellets mainly comprises mineral compounds (including SiO2 and CaO), small amounts of microelements (including copper, zinc, chromium, nickel, lead) and other substances formed at the stage of tree growth and in the remaining phases of the pellet production chain. The quantitative composition of ash depends on the properties of the biomass burned. The higher the pellet quality, the lower the ash content. Pellets intended for retail may have a quality certificate - a document confirming the compliance of the pellets with the requirements of the relevant standards. In the EU, the multi-part standard EN 14961 (parts 1-6) applies. The second part of this standard (EN 14961-2) contains the requirements for wood pellets. A pellet manufacturer that meets the parameters defined in the standard for class A1 can obtain a DINplus or ENplus certificate. The ENplus certificate in class A2 can be obtained by pellets that meet the standard requirements for this class. The certificate logo indicates the certifying authority. For DINplus, the certification supervisor is DIN CERTCO (German Institute for Standardization), and for ENplus, the European Pellet Associations. The A1 and A2 class pellets are differentiated by the limit content of dry ash: for class A1 it is ≤0.7%, and for class A2 ≤1.5%. Class A1 pellets are intended for central heating boilers with lower heat output, and class A2 for boilers with higher power. Pellets for industry belong to class B. The producers of waste from the combustion of wood pellets are subject to the legal requirements in the field of waste management. The requirements depend on the power of the heating installations with biofuel boilers. The users of small heating installations (in individual houses and small boiler rooms, e.g. in public buildings) are subject to the guidelines contained in the regulations on maintaining cleanliness in communes. The wastes from the combustion of wood pellets in boilers of low-power heating systems are classified as municipal waste. Some municipal authorities allow this waste to be directed to containers with mixed waste. Others, in turn, demand that this waste be collected separately. In addition, when treated as seasonal waste, it can be collected by specialized companies only during the heating period. Due to the chemical composition and fertilizing properties of ash from pellets, it is possible to use it to fertilize mineral soil. Fuel combustion plants should classify the ash and dust from solid biomass combustion into waste group no. 10 (waste from thermal processes), subgroup 10 01 (waste from power plants and other energy combustion plants). The above requirement also applies to local boiler houses using wood fuel to generate heat sold for the needs of central heating and hot water preparation in apartment blocks, schools, enterprises and other facilities, including service and administration. The furnace waste from these enterprises should be transferred to authorized entities on the basis of a waste transfer card. The energy input for waste management from solid biomass combustion in small heating installations is negligible, or even zero when this waste, as a plant fertilizer, is scattered over the soil surface. In large pellet combustion plants, the energy input for waste management depends on the fuel consumption per unit of transport work in the transport of waste to the landfill. With a small amount of waste and a short transport distance, the energy consumption for waste transport is insignificant.
For the assumptions: an amount of combustion waste from large combustion plants of 23.43 kg/fu, a waste transport distance of 20 km, transport by truck with a maximum permissible weight of 24 t and a load capacity of 11 t, fully utilized load capacity, and a unit energy consumption indicator for the vehicle of 1.23 MJ/(t·km) (Table 1), the energy consumption for transporting the combustion waste to a landfill from large pellet combustion units, calculated according to formula (3), is 0.576 MJ/fu. Table 6 summarizes the results of the calculations of energy expenditure on pellet use throughout its life cycle. The energy efficiency index in the study, calculated on the basis of formulas (1)-(2), is:

$$E_e = \frac{Q_j}{C_{ej,f} + C_{ej,tr1} + C_{ej,s} + C_{ej,tr2} + C_{ej,p} + C_{ej,tr3} + C_{ej,c} + C_{ej,w}} = 1.217$$

The value of the index (E_e) is higher than the efficiency limit, so the use of wood pellets for energy purposes in small heating installations is reasonable and effective.

CONCLUSIONS

The efficiency indicator adopted in the study enables the determination of the energy and environmental benefits of using wood pellets for heat production. The efficiency ratio and the energy value of the pellets are proportional. In turn, the relationship between the efficiency indicator and the energy expenditure in subsequent stages of the pellet life cycle is inversely proportional. The results of the analysis indicate that among the studied stages of the pellet life cycle, the highest energy consumption occurs, successively, in the use and production phases. The high energy consumption during the pellet use phase is driven by the large amount of energy lost, which is emitted as heat when the pellet is burnt. The amount of energy lost depends on the thermal efficiency of the combustion installation. The research shows that the energy benefits, expressed in the amount of energy emitted in the form of heat in the pellet combustion process, outweigh the energy expenditure, being the sum of the energy spent in the subsequent stages of the pellet life cycle. The obtained results indicate a positive energy balance: 1.217 MJ of energy is obtained from each 1 MJ of energy spent on the production and use of pellets. The use of pellets for heating purposes allows for the recovery of the energy spent throughout their entire life cycle. Wood pellets, as a renewable energy source, are an alternative to fossil fuels that is still needed by the national power industry. When used as a substitute for fossil fuels, they can reduce fossil fuel consumption and help decarbonize energy production, especially heat energy. Lower consumption of fossil fuels also means a lower environmental burden from the emissions of the combustion installations of these fuels and, as a result, greater economic and human health benefits. Replacing the energy of fossil fuels with any form of renewable energy is a way of quantitatively protecting non-renewable energy sources, the existence of which in the environment determines sustainable social and economic development. In order for the pellet market to reach an appropriate level of development, it should be financially supported by means of subsidies and other economic incentives. On the basis of the energy efficiency index presented in the paper, it is possible to study the impact of selected factors on energy consumption, including changes in the area of transport (e.g. the use of low-emission means of transport) or the technical parameters of the pellet combustion installations.
The test results can then be used to compare different pellet heating installations and transport solutions in terms of energy consumption throughout the pellet life cycle.
|
/*=========================================================================
Program: ParaView
Module: vtkCPFileGridBuilder.h
Copyright (c) Kitware, Inc.
All rights reserved.
See Copyright.txt or http://www.paraview.org/HTML/Copyright.html for details.
This software is distributed WITHOUT ANY WARRANTY; without even
the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the above copyright notice for more information.
=========================================================================*/
// .NAME vtkCPFileGridBuilder - Class for creating grids from a VTK file.
// .SECTION Description
// Class for creating grids from a VTK file.
#ifndef vtkCPFileGridBuilder_h
#define vtkCPFileGridBuilder_h
#include "vtkCPGridBuilder.h"
#include "vtkPVCatalystTestDriverModule.h" // needed for export macros
class vtkDataObject;
class vtkCPFieldBuilder;
class VTKPVCATALYSTTESTDRIVER_EXPORT vtkCPFileGridBuilder : public vtkCPGridBuilder
{
public:
vtkTypeMacro(vtkCPFileGridBuilder, vtkCPGridBuilder);
void PrintSelf(ostream& os, vtkIndent indent);
// Description:
// Return a grid. BuiltNewGrid is set to 0 if the grids
// that were returned were already built before.
// vtkCPFileGridBuilder will also delete the grid.
virtual vtkDataObject* GetGrid(unsigned long timeStep, double time,
int & builtNewGrid);
// Description:
// Set/get the FileName.
vtkGetStringMacro(FileName);
vtkSetStringMacro(FileName);
// Description:
// Set/get KeepPointData.
vtkGetMacro(KeepPointData, bool);
vtkSetMacro(KeepPointData, bool);
// Description:
// Set/get KeepCellData.
vtkGetMacro(KeepCellData, bool);
vtkSetMacro(KeepCellData, bool);
// Description:
// Get the current grid.
vtkDataObject* GetGrid();
protected:
vtkCPFileGridBuilder();
~vtkCPFileGridBuilder();
// Description:
// Function to set the grid and take care of the reference counting.
virtual void SetGrid(vtkDataObject*);
private:
vtkCPFileGridBuilder(const vtkCPFileGridBuilder&); // Not implemented
void operator=(const vtkCPFileGridBuilder&); // Not implemented
// Description:
// The name of the VTK file to be read.
char * FileName;
// Description:
// Flag to indicate whether the vtkPointData arrays that are set by the
// file reader are kept. If false, they are cleared out. By default this is true.
bool KeepPointData;
// Description:
// Flag to indicate whether the vtkCellData arrays that are set by the
// file reader are kept. If false, they are cleared out. By default this is true.
bool KeepCellData;
// Description:
// The grid that is returned.
vtkDataObject* Grid;
};
#endif
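// Illustrative usage (a sketch only; vtkCPFileGridBuilder is used through a
// concrete subclass, and the file name below is hypothetical):
//
//   builder->SetFileName("input.vtu");
//   int builtNewGrid = 0;
//   vtkDataObject* grid = builder->GetGrid(0 /*timeStep*/, 0.0 /*time*/, builtNewGrid);
//   // 'grid' remains owned by the builder; do not delete it.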
|
// ReadDir reads a set of tract ids.
func (m *Manager) ReadDir(d interface{}) ([]core.TractID, core.Error) {
op, err := m.schedule(context.TODO(), readdirRequest{d.(*os.File)})
if err == core.NoError {
return op.(readdirReply).tracts, err
}
return nil, err
}
|
Q. I inherited part of a TSP account and got a check, with tax already withheld. Do I still have to pay state tax as well? I’m from Pennsylvania, and the TSP account is from my uncle who lived in Florida.
A. You should review the notice at https://www.tsp.gov/PDF/formspubs/tsp-583.pdf and consult a Pennsylvania CPA for further guidance.
|
/*
$Id: $
Description: read test bench
$Log: $
*/
#define ALIGN_SZ 256
#define _FILE_OFFSET_BITS 64
//#define NDEBUG
#include <stdio.h>
#include <stdlib.h> /* malloc, free, exit, rand, atoi, atof, strtoul */
#include <ctype.h> /* toupper, tolower, isalpha, isspace, isdigit, isgraph */
#include <string.h> /* strcmp, strlen, memset, memcmp */
#include <math.h> /* pow, ceil */
#include <assert.h> /* assert */
#if defined(_OPENMP)
#include <omp.h>
#endif
#if defined(SYSTEMC)
#include <sys/time.h>
#endif
#include "fasta.h"
#include "path.h"
// Example main arguments
// #define MARGS "-e32Mi -l.60 -rsrr550.fa -w1Mi -h.90 -z.99"
// #define MARGS "-e32Mi -l.60 -c -w1Mi -h.90 -z.99" *
// #define MARGS "-p -e32Mi -l.50 -rsrr550.fa -w8Mi -h.90 -z.99"
// #define MARGS "-p -e32Mi -l.20 -rsrr_nr.fa -qsrr_sh.fa"
// #define MARGS "-p -e32Mi -l.20 -cd"
// #define MARGS "-p -e32Mi -l.20 -c -qsrr_sh.fa"
// #define MARGS "-e8Mi -ramr_cur.fa -qamr_cur.fa"
// #define MARGS "-e1K -k16 -rtestdb.fa -qtestqr.fa"
#include "lime.h"
#include "block_map.h"
#define DEFAULT_ENT 100000
#define DEFAULT_LOAD 0.95
#define DEFAULT_BLOCK_LSZ 10 /* log 2 size */
#define DEFAULT_KLEN 18
#define DEFAULT_HITR 0.98
#define DEFAULT_SKEW 0.0
#define DEFAULT_WORK 0
#define CFLAG 0x01
#define DFLAG 0x02
#define PFLAG 0x04
#define PROGRESS_COUNT 1000000
#if defined(ENABLE_PLOT)
#define PIPE 1 /* enable pipe of plot commands */
#define DIRC '/' /* directory separator character */
#define EXT_PLOT ".gpi"
#define EXT_DATA ".dat"
#define EXT_TERM ".png"
#endif // ENABLE_PLOT
typedef unsigned long long kmer_t;
#define KEY_SZ sizeof(kmer_t)
typedef unsigned int sid_t;
#define DAT_SZ sizeof(sid_t)
typedef unsigned long ulong_t;
int flags; /* argument flags */
unsigned earg = DEFAULT_ENT; /* maximum entries (keys) */
float larg = DEFAULT_LOAD; /* load factor */
unsigned barg = 1U<<DEFAULT_BLOCK_LSZ; /* buffer block length */
int karg = DEFAULT_KLEN; /* k-mer length */
float harg = DEFAULT_HITR; /* hit ratio */
float zarg = DEFAULT_SKEW; /* zipf skew */
char *qarg = NULL; /* query file name (in) */
char *rarg = NULL; /* reference file name (in) */
char *sarg = NULL; /* saved workload file name (out) */
unsigned warg = DEFAULT_WORK; /* workload length */
#if defined(ENABLE_PLOT)
int xarg = 0; /* plot x range */
char *garg = NULL; /* plot command file name (out) */
char *targ = NULL; /* plot terminal file name (out) */
#endif // ENABLE_PLOT
// TODO: find a better place for these globals
/* these are global to avoid overhead at runtime on the stack */
tick_t t0, t1;
tick_t start, finish;
unsigned long long tinsert, tlookup, toper, trun;
// TODO: consider using std::conditional<>::type for result type
#if defined(MULTIMAP)
typedef block_multimap<kmer_t,sid_t> kdb_t;
#if defined(USE_ACC)
typedef typename kdb_t::mapped_type* result_type;
inline bool is_valid(const result_type &v) {return v != nullptr;}
#else // USE_ACC
typedef std::pair<typename kdb_t::iterator, typename kdb_t::iterator> result_type;
inline bool is_valid(const result_type &v) {return v.first != typename kdb_t::iterator(nullptr);}
#endif // USE_ACC
#else // MULTIMAP
typedef block_map<kmer_t,sid_t> kdb_t;
#if 1 || defined(USE_ACC)
typedef typename kdb_t::mapped_type result_type;
inline bool is_valid(const result_type &v) {return v != 0;}
#else // USE_ACC
typedef typename kdb_t::iterator result_type;
inline bool is_valid(const result_type &v) {return v != typename kdb_t::iterator(nullptr);}
#endif // USE_ACC
#endif // MULTIMAP
typedef typename kdb_t::key_type key_type;
typedef typename kdb_t::mapped_type mapped_type;
typedef typename kdb_t::value_type value_type;
typedef typename kdb_t::size_type size_type;
typedef typename kdb_t::const_local_iterator const_local_iterator;
typedef std::pair<key_type, mapped_type> kvpair_type;
kdb_t kdb;
#define LOAD_COUNT kdb.size()
#define LOAD_MAX (kdb.bucket_count() * larg + 0.5)
#define MIN(a,b) ((a) < (b) ? (a) : (b))
#if 0
#define RAND_VAR \
key_type mask = (~(key_type)0 >> (sizeof(key_type)*8-karg*2))
#define RAND_SEED(n) \
srand(n)
#define RAND_GEN \
((((key_type)rand() << sizeof(int)*8) ^ rand()) & mask)
#else
#include <random>
#define RAND_VAR \
std::mt19937_64 generator; \
std::uniform_int_distribution<key_type> \
distribution(0,~(key_type)0 >> (sizeof(key_type)*8-karg*2))
#define RAND_SEED(n) \
generator.seed(n)
#define RAND_GEN \
distribution(generator)
#endif
struct {
size_type length; /* length of buffers */
size_type lcount; /* count of items in lookup buffers */
size_type acount; /* count of items in add buffer */
ulong_t hits; /* number of search hits */
key_type *keys;
result_type *result;
kvpair_type *kvpair;
void clear_counts(void)
{
hits = 0;
}
void init(size_type buf_len)
{
length = buf_len;
acount = 0;
lcount = 0;
keys = SP_NALLOC(key_type, length); /* used on lookup */
chk_alloc(keys, sizeof(key_type)*length, "SP_NALLOC keys in init()");
result = SP_NALLOC(result_type, length); /* used on lookup */
chk_alloc(result, sizeof(result_type)*length, "SP_NALLOC result in init()");
kvpair = SP_NALLOC(kvpair_type, length); /* used on add */
chk_alloc(kvpair, sizeof(kvpair_type)*length, "SP_NALLOC kvpair in init()");
}
#if defined(NO_BUFFER)
inline void lookup(key_type key)
{
tget(t0);
#if defined(MULTIMAP)
result_type res = kdb.equal_range(key);
#else
result_type res = kdb.find(key);
#endif
tget(t1);
tinc(tlookup, tdiff(t1,t0));
#if 1 // TODO: time without
if (is_valid(res)) hits++;
#endif
}
inline void add(key_type key, mapped_type mval)
{
tget(t0);
kdb.insert({key, mval});
tget(t1);
tinc(tinsert, tdiff(t1,t0));
}
#else
// TODO: use std::container to emplace
inline void lookup(key_type key)
{
if (lcount == length) flush_lookup();
keys[lcount++] = key;
}
// TODO: use std::container to emplace
inline void add(key_type key, mapped_type mval)
{
if (acount == length) flush_add();
kvpair[acount++] = {key, mval};
}
#endif // NO_BUFFER
void flush_lookup(void)
{
if (lcount == 0) return;
tget(t0);
#if defined(MULTIMAP)
kdb.equal_range(result, keys, lcount);
#else
kdb.find(result, keys, lcount);
#endif
tget(t1);
tinc(tlookup, tdiff(t1,t0));
#if 1 // TODO: time without
for (size_type i = 0; i < lcount; i++) {
if (is_valid(result[i])) hits++;
}
#endif
lcount = 0;
}
void flush_add(void)
{
if (acount == 0) return;
tget(t0);
kdb.insert(reinterpret_cast<typename kdb_t::const_pointer>(kvpair), acount);
tget(t1);
tinc(tinsert, tdiff(t1,t0));
acount = 0;
}
void block_lookup(key_type *keys, size_type blen)
{
for (size_type i = 0; i < blen; i += length) {
size_type count = MIN(length, blen-i);
tget(t0);
#if defined(MULTIMAP)
kdb.equal_range(result, keys+i, count);
#else
kdb.find(result, keys+i, count);
#endif
tget(t1);
tinc(tlookup, tdiff(t1,t0));
#if 1 // TODO: time without
for (size_type j = 0; j < count; j++) {
if (is_valid(result[j])) hits++;
}
#endif
}
}
} kbuf;
#if 0
static
void ktob(kmer_t kmer, int klen, char *buf)
{
int i, e = klen*2;
for (i = 0; i < e; i++) {
buf[e-i-1] = (kmer&1) ? '1' : '0';
kmer >>= 1;
}
buf[e] = '\0';
assert(kmer == 0);
}
#endif
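/* kton: convert a 2-bit packed k-mer of length klen back into an
A/C/G/T string (the inverse of the ENCODE packing defined below). */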
static
void kton(kmer_t kmer, int klen, char *buf)
{
int i;
char ktab[4] = {'A', 'C', 'G', 'T'};
for (i = 0; i < klen; i++) {
buf[klen-i-1] = ktab[kmer&3];
kmer >>= 2;
}
buf[klen] = '\0';
assert(kmer == 0);
}
#if 0
static
void kputs(kmer_t kmer, int klen)
{
char buf[64];
kton(kmer, klen, buf);
puts(buf);
}
#endif
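/* ENCODE maps a nucleotide character c to its 2-bit code t (A=0, C=1, G=2, T=3).
On any other (ambiguous) character it resets the contiguous-run counter k and
executes 'continue', so it must only be expanded inside a loop over the sequence. */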
#define ENCODE(t, c, k) \
switch (c) { \
case 'a': case 'A': t = 0; break; \
case 'c': case 'C': t = 1; break; \
case 'g': case 'G': t = 2; break; \
case 't': case 'T': t = 3; break; \
default: k = 0; continue; \
}
/*
Given a sequence of nucleotide characters,
break it into canonical k-mers in one pass.
Nucleotides are encoded with two bits in
the k-mer. Any k-mers with ambiguous characters
are skipped.
str: DNA sequence (read)
slen: DNA sequence length in nucleotides
klen: k-mer length in nucleotides
*/
static
int seq_lookup(char *str, int slen, int klen)
{
int kcnt = 0;
int j; /* position of last nucleotide in sequence */
int k = 0; /* count of contiguous valid characters */
int highbits = (klen-1)*2; /* shift needed to reach highest k-mer bits */
kmer_t mask = (~(kmer_t)0 >> (sizeof(kmer_t)*8-klen*2)); /* bits covering encoded k-mer */
kmer_t forward = 0; /* forward k-mer */
kmer_t reverse = 0; /* reverse k-mer */
kmer_t kmer; /* canonical k-mer */
// fprintf(stderr, "str: %s\n", str);
for (j = 0; j < slen; j++) {
register int t;
ENCODE(t, str[j], k);
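/* shift the new base into the forward k-mer and into its reverse complement */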
forward = ((forward << 2) | t) & mask;
reverse = ((kmer_t)(t^3) << highbits) | (reverse >> 2);
if (++k >= klen) {
kcnt++;
#if 0
{
char buf[64];
kton(forward, klen, buf);
fprintf(stderr, "forward: %s pos: %d\n", buf, j-klen+1);
kton(reverse, klen, buf);
fprintf(stderr, "reverse: %s pos: %d\n", buf, slen-j-1);
}
#endif
kmer = (forward < reverse) ? forward : reverse;
/* do k-mer lookup here... */
/* zero based position of forward k-mer is (j-klen+1) */
/* zero based position of reverse k-mer is (slen-j-1) */
kbuf.lookup(kmer);
}
}
return kcnt;
}
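/* seq_add mirrors seq_lookup, but inserts each canonical k-mer into the
database together with the sequence id sid instead of querying it. */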
static
int seq_add(char *str, int slen, int klen, int sid)
{
int kcnt = 0;
int j; /* position of last nucleotide in sequence */
int k = 0; /* count of contiguous valid characters */
int highbits = (klen-1)*2; /* shift needed to reach highest k-mer bits */
kmer_t mask = (~(kmer_t)0 >> (sizeof(kmer_t)*8-klen*2)); /* bits covering encoded k-mer */
kmer_t forward = 0; /* forward k-mer */
kmer_t reverse = 0; /* reverse k-mer */
kmer_t kmer; /* canonical k-mer */
// fprintf(stderr, "str: %s\n", str);
for (j = 0; j < slen; j++) {
register int t;
ENCODE(t, str[j], k);
forward = ((forward << 2) | t) & mask;
reverse = ((kmer_t)(t^3) << highbits) | (reverse >> 2);
if (++k >= klen) {
kcnt++;
#if 0
{
char buf[64];
kton(forward, klen, buf);
fprintf(stderr, "forward: %s pos: %d\n", buf, j-klen+1);
kton(reverse, klen, buf);
fprintf(stderr, "reverse: %s pos: %d\n", buf, slen-j-1);
}
#endif
kmer = (forward < reverse) ? forward : reverse;
/* do k-mer add here... */
/* zero based position of forward k-mer is (j-klen+1) */
/* zero based position of reverse k-mer is (slen-j-1) */
kbuf.add(kmer, sid);
}
}
return kcnt;
}
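/* atoulk: parse an unsigned integer with an optional K/M/G suffix; a trailing
'i' (e.g. 32Mi) selects binary (1024-based) instead of decimal multipliers. */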
static unsigned long atoulk(const char *s)
{
char *kptr;
unsigned long num = strtoul(s, &kptr, 0);
unsigned int k = (isalpha(kptr[0]) && toupper(kptr[1]) == 'I') ? 1024 : 1000;
switch (toupper(*kptr)) {
case 'K': num *= k; break;
case 'M': num *= k*k; break;
case 'G': num *= k*k*k; break;
}
return num;
}
int MAIN(int argc, char *argv[])
{
int nok = 0;
char *s;
#if defined(USE_ACC)
kdb.acc.wait(); /* wait for accelerator initialization */
#endif // USE_ACC
while (--argc > 0 && (*++argv)[0] == '-')
for (s = argv[0]+1; *s; s++)
switch (*s) {
/* * * * * parameters * * * * */
case 'e':
if (isdigit(s[1])) earg = atoulk(s+1);
else nok = 1;
s += strlen(s+1);
break;
case 'l':
if (s[1] == '.' || isdigit(s[1])) larg = atof(s+1);
else nok = 1;
s += strlen(s+1);
break;
case 'b':
if (isdigit(s[1])) barg = (1U << atoi(s+1));
else nok = 1;
s += strlen(s+1);
break;
case 'k':
if (isdigit(s[1])) karg = atoi(s+1);
else nok = 1;
s += strlen(s+1);
break;
case 'h':
if (s[1] == '.' || isdigit(s[1])) harg = atof(s+1);
else nok = 1;
s += strlen(s+1);
break;
case 'z':
if (s[1] == '.' || isdigit(s[1])) zarg = atof(s+1);
else nok = 1;
s += strlen(s+1);
break;
case 'p':
flags |= PFLAG;
break;
/* * * * * operations * * * * */
case 'c':
flags |= CFLAG;
break;
case 'd':
flags |= DFLAG;
break;
case 'q':
qarg = s+1;
s += strlen(s+1);
break;
case 'r':
rarg = s+1;
s += strlen(s+1);
break;
case 's':
sarg = s+1;
s += strlen(s+1);
break;
case 'w':
if (isdigit(s[1])) warg = atoulk(s+1);
else nok = 1;
s += strlen(s+1);
break;
/* * * * * output options * * * * */
#if defined(ENABLE_PLOT)
case 'g':
garg = s+1;
s += strlen(s+1);
break;
case 't':
targ = s+1;
s += strlen(s+1);
if (garg == NULL) garg = targ;
break;
case 'x':
if (isdigit(s[1])) xarg = atoi(s+1);
else nok = 1;
s += strlen(s+1);
break;
#endif // ENABLE_PLOT
default:
nok = 1;
fprintf(stderr, " -- not an option: %c\n", *s);
break;
}
if (barg < 2) {
fprintf(stderr, " -- block length must be 2 or greater.\n");
nok = 1;
}
if ((unsigned)karg*2 > sizeof(kmer_t)*8) {
fprintf(stderr, " -- k-mer length must be %lu or less.\n", (ulong_t)sizeof(kmer_t)*4);
nok = 1;
}
// TODO: bounds check larg and harg
if (nok || argc > 0) { // (argc > 0 && *argv[0] == '?')
fprintf(stderr, "Usage: rtb {-flag} {-option<arg>} (example: rtb -cp -e16Mi -l.20)\n");
fprintf(stderr, " ---------- parameters ----------\n");
fprintf(stderr, " -e max entries (keys) <int>, default %u\n", DEFAULT_ENT);
fprintf(stderr, " -l load factor <float>, default %.4f\n", DEFAULT_LOAD);
fprintf(stderr, " -b block length 2^n <int>, default: n=%u\n", DEFAULT_BLOCK_LSZ);
fprintf(stderr, " -k k-mer length <int>, default %d\n", DEFAULT_KLEN);
fprintf(stderr, " -h hit ratio <float>, default %.4f\n", DEFAULT_HITR);
fprintf(stderr, " -z zipf skew <float>, default %.4f\n", DEFAULT_SKEW);
fprintf(stderr, " -p show progress\n");
fprintf(stderr, " ---------- operations ----------\n");
fprintf(stderr, " -c add random keys\n");
fprintf(stderr, " -d lookup random keys\n");
fprintf(stderr, " -q query with FASTA file <in_file>\n");
fprintf(stderr, " -r read FASTA reference <in_file>\n");
fprintf(stderr, " -s save workload to FASTA file <out_file>\n");
fprintf(stderr, " -w workload length <int>, default %u\n", DEFAULT_WORK);
fprintf(stderr, " --------------------------------\n");
#if defined(ENABLE_PLOT)
fprintf(stderr, " -g plot command <out_file>\n");
fprintf(stderr, " -t plot terminal <out_file>\n");
fprintf(stderr, " -x plot x range <int>, default auto\n");
#endif // ENABLE_PLOT
exit(EXIT_FAILURE);
}
printf("########## RTB ##########\n");
#if defined(_OPENMP)
// to control the number of threads use: export OMP_NUM_THREADS=N
printf("threads: %d\n", omp_get_max_threads());
#endif
#if defined(USE_SP)
if (barg > SP_SIZE/sizeof(unsigned)/8) barg = SP_SIZE/sizeof(unsigned)/8; /* up to 1/8th of scratchpad */
#endif
printf("block length: %u\n", barg);
printf("k-mer length: %d\n", karg);
printf("max entries: %u\n", earg);
printf("key size: %lu data size: %lu\n", (ulong_t)KEY_SZ, (ulong_t)DAT_SZ);
#if defined(USE_ACC)
printf("slot size: %lu\n", (ulong_t)sizeof(kdb_t::slot_s));
#endif // USE_ACC
/* * * * * * * * * * Start Up * * * * * * * * * */
tget(start);
kdb.reserve(earg);
kbuf.init(barg);
tget(finish);
printf("Startup time: %f sec\n", tsec(finish, start));
if (flags & CFLAG || rarg != NULL) {
int klim = ceil(log(LOAD_MAX)/log(4));
/* test for k-mer length too small for specified load factor */
/* if k-mer length is too small, there are not enough permutations */
if (karg < klim) { /* 4^karg < LOAD_MAX */
fprintf(stderr, " -- k-mer length must be %u or greater.\n", klim);
exit(EXIT_FAILURE);
}
}
/* * * * * * * * * * Add Random Keys * * * * * * * * * */
if (flags & CFLAG) {
size_type pcount = PROGRESS_COUNT;
size_type acount = 0; /* add count */
size_type maxl = LOAD_MAX; /* max load count */
RAND_VAR;
if (flags & PFLAG) fprintf(stderr, "add");
RAND_SEED(42);
tinsert = tlookup = 0;
kbuf.clear_counts();
kdb.clear_time();
CLOCKS_EMULATE
CACHE_BARRIER(kdb.acc)
// TRACE_START
// STATS_START
tget(start);
/* Add k-mers until the specified load factor is reached */
/* The first test is faster, the second is more accurate. */
while (LOAD_COUNT < maxl || kdb.load_factor() < larg) {
kmer_t kmer = RAND_GEN;
sid_t sid = ++acount;
kbuf.add(kmer, sid);
if ((flags & PFLAG) && acount >= pcount) {
fputc('.', stderr);
pcount += PROGRESS_COUNT;
}
}
kbuf.flush_add();
CACHE_SEND_ALL(kdb.acc)
tget(finish);
CACHE_BARRIER(kdb.acc)
// STATS_STOP
// TRACE_STOP
CLOCKS_NORMAL
if (flags & PFLAG) fputc('\n', stderr);
trun = tdiff(finish, start);
toper = trun-tinsert-tlookup;
printf("Insert count: %lu\n", (ulong_t)acount);
printf("Insert rate: %f ops/sec\n", acount/tvesec(tinsert));
printf("Run time: %f sec\n", tvesec(trun));
printf("Oper. time: %f sec\n", tvesec(toper));
printf("Insert time: %f sec\n", tvesec(tinsert));
kdb.print_time();
// STATS_PRINT
}
/* * * * * * * * * * Read FASTA Reference * * * * * * * * * */
if (rarg != NULL) {
FILE *fin;
sequence entry;
int i, len;
size_type pcount = PROGRESS_COUNT;
size_type acount = 0; /* add count */
size_type tcount = 0; /* total bases count */
size_type maxl = LOAD_MAX; /* max load count */
if ((fin = fopen(rarg, "r")) == NULL) {
fprintf(stderr, " -- can't open file: %s\n", rarg);
exit(EXIT_FAILURE);
}
if (flags & PFLAG) fprintf(stderr, "read");
tinsert = tlookup = 0;
kbuf.clear_counts();
kdb.clear_time();
CLOCKS_EMULATE
CACHE_BARRIER(kdb.acc)
// TRACE_START
// STATS_START
tget(start);
for (i = 1; (len = Fasta_Read_Entry(fin, &entry)) > 0; i++) {
acount += seq_add(entry.str, len, karg, i);
tcount += len;
if ((flags & PFLAG) && acount >= pcount) {
fputc('.', stderr);
pcount += PROGRESS_COUNT;
}
free(entry.hdr);
free(entry.str);
/* Add k-mers until the specified load factor is reached */
/* The first test is faster, the second is more accurate. */
if (LOAD_COUNT >= maxl && kdb.load_factor() >= larg) break;
}
kbuf.flush_add();
CACHE_SEND_ALL(kdb.acc)
tget(finish);
CACHE_BARRIER(kdb.acc)
// STATS_STOP
// TRACE_STOP
CLOCKS_NORMAL
if (flags & PFLAG) fputc('\n', stderr);
trun = tdiff(finish, start);
toper = trun-tinsert-tlookup;
printf("Insert count: %lu\n", (ulong_t)acount);
printf("Insert rate: %f ops/sec\n", acount/tvesec(tinsert));
printf("Bases rate: %f bp/sec\n", tcount/tvesec(trun));
printf("Run time: %f sec\n", tvesec(trun));
printf("Oper. time: %f sec\n", tvesec(toper));
printf("Insert time: %f sec\n", tvesec(tinsert));
kdb.print_time();
// STATS_PRINT
fclose(fin);
if (kdb.load_factor() < larg) {
fprintf(stderr, " -- warning: did not reach load factor: %.4f\n", larg);
}
}
/* * * * * * * * * * Display Database Stats * * * * * * * * * */
if (flags & CFLAG || rarg != NULL) {
if (flags & PFLAG) fprintf(stderr, "gather stats...\n");
tget(start);
kdb.print_stats();
tget(finish);
SHOW_HEAP
printf("Stats time: %f sec\n", tsec(finish, start));
}
/* * * * * * * * * * Lookup Random Keys * * * * * * * * * */
if (flags & DFLAG) {
size_type pcount = PROGRESS_COUNT;
size_type lcount = 0; /* lookup count */
RAND_VAR;
if (flags & PFLAG) fprintf(stderr, "lookup");
RAND_SEED(42);
tinsert = tlookup = 0;
kbuf.clear_counts();
kdb.clear_time();
CLOCKS_EMULATE
CACHE_BARRIER(kdb.acc)
TRACE_START
STATS_START
tget(start);
while (lcount < kdb.size()) {
kmer_t kmer = RAND_GEN;
kbuf.lookup(kmer);
lcount++;
if ((flags & PFLAG) && lcount >= pcount) {
fputc('.', stderr);
pcount += PROGRESS_COUNT;
}
}
kbuf.flush_lookup();
tget(finish);
CACHE_BARRIER(kdb.acc)
STATS_STOP
TRACE_STOP
CLOCKS_NORMAL
if (flags & PFLAG) fputc('\n', stderr);
trun = tdiff(finish, start);
toper = trun-tinsert-tlookup;
printf("Lookup count: %lu\n", (ulong_t)lcount);
printf("Lookup hits: %lu %.2f%%\n", (ulong_t)kbuf.hits, (double)kbuf.hits/lcount*100.0);
printf("Lookup rate: %f ops/sec\n", lcount/tvesec(tlookup));
printf("Run time: %f sec\n", tvesec(trun));
printf("Oper. time: %f sec\n", tvesec(toper));
printf("Lookup time: %f sec\n", tvesec(tlookup));
kdb.print_time();
STATS_PRINT
}
/* * * * * * * * * * Query Database * * * * * * * * * */
if (qarg != NULL) {
FILE *fin;
sequence entry;
int len;
size_type pcount = PROGRESS_COUNT;
size_type lcount = 0; /* lookup count */
size_type tcount = 0; /* total bases count */
if ((fin = fopen(qarg, "r")) == NULL) {
fprintf(stderr, " -- can't open file: %s\n", qarg);
exit(EXIT_FAILURE);
}
if (flags & PFLAG) fprintf(stderr, "query");
tinsert = tlookup = 0;
kbuf.clear_counts();
kdb.clear_time();
CLOCKS_EMULATE
CACHE_BARRIER(kdb.acc)
TRACE_START
STATS_START
tget(start);
while ((len = Fasta_Read_Entry(fin, &entry)) > 0) {
lcount += seq_lookup(entry.str, len, karg);
tcount += len;
if ((flags & PFLAG) && lcount >= pcount) {
fputc('.', stderr);
pcount += PROGRESS_COUNT;
}
free(entry.hdr);
free(entry.str);
}
kbuf.flush_lookup();
tget(finish);
CACHE_BARRIER(kdb.acc)
STATS_STOP
TRACE_STOP
CLOCKS_NORMAL
if (flags & PFLAG) fputc('\n', stderr);
trun = tdiff(finish, start);
toper = trun-tinsert-tlookup;
printf("Lookup count: %lu\n", (ulong_t)lcount);
printf("Lookup hits: %lu %.2f%%\n", (ulong_t)kbuf.hits, (double)kbuf.hits/lcount*100.0);
printf("Lookup rate: %f ops/sec\n", lcount/tvesec(tlookup));
printf("Bases rate: %f bp/sec\n", tcount/tvesec(trun));
printf("Run time: %f sec\n", tvesec(trun));
printf("Oper. time: %f sec\n", tvesec(toper));
printf("Lookup time: %f sec\n", tvesec(tlookup));
kdb.print_time();
STATS_PRINT
fclose(fin);
}
/* * * * * * * * * * Workload * * * * * * * * * */
if (warg && kdb.size()) {
double zeta = 0.0;
size_type i, j, rank, samples;
key_type *wptr;
key_type temp;
size_type pcount = PROGRESS_COUNT;
size_type lcount = 0; /* lookup count */
RAND_VAR;
// TODO: optionally allocate from scratchpad
key_type *wload = NALLOC(key_type, warg);
chk_alloc(wload, sizeof(key_type)*warg, "NALLOC workload array");
RAND_SEED(43);
// When Zipf is active, the miss distribution is independent
// of the hit distribution. If a single distribution is wanted,
// random ranks could be selected until the required number of
// misses are generated. These ranks could be skipped during
// hit generation. Also a fully random approach could be used.
/* calculate zeta */
if (zarg != 0.0) {
// FIXME: for multimap, use key count
size_type zN = kdb.size(); /* Zipf N, dictionary size */
if (zN > 10000000U) zN = 10000000U;
if (flags & PFLAG) fprintf(stderr, "calc zeta... ");
for (i = 1; i <= zN; i++) zeta += 1.0/pow((double)i, zarg);
if (flags & PFLAG) fprintf(stderr, "%f\n", zeta);
}
/* generate misses by checking if random keys are in kdb */
if (flags & PFLAG) fprintf(stderr, "generate misses...\n");
wptr = wload;
samples = (1.0-harg)*warg+0.5;
if (samples > warg) samples = warg;
rank = 1;
for (i = 0; i < samples; i++) {
/* generate a random item */
do {
temp = RAND_GEN;
} while (kdb.count(temp));
wptr[i] = temp;
#if 1
/* repeat item */
if (zarg != 0.0) {
size_type count = ceil((1.0/(pow((double)rank, zarg)*zeta)) * samples);
if (samples-i < count) count = samples-i;
while (count > 1) {
wptr[++i] = temp;
count--;
}
rank++;
}
#endif
}
/* generate hits by randomly selecting items from kdb */
if (flags & PFLAG) fprintf(stderr, "generate hits...\n");
wptr = wload + i;
samples = warg - i;
rank = 1;
for (i = 0; i < samples; i++) {
size_type bucket, steps;
const_local_iterator clit;
/* pick a random bucket until a non-empty one is found */
do {
bucket = rand() % kdb.bucket_count();
clit = kdb.cbegin(bucket);
} while (clit == kdb.cend(bucket));
/* pick a random item from the bucket */
for (steps = rand() % kdb.bucket_size(bucket); steps; steps--) clit++;
wptr[i] = clit->first;
/* repeat item */
if (zarg != 0.0) {
size_type count = ceil((1.0/(pow((double)rank, zarg)*zeta)) * samples);
if (samples-i < count) count = samples-i;
while (count > 1) {
wptr[++i] = clit->first;
count--;
}
rank++;
}
}
/* shuffle keys */
if (flags & PFLAG) fprintf(stderr, "shuffle keys...\n");
/* Fisher-Yates shuffle or Knuth shuffle */
for (i = warg-1; i > 0; --i) {
j = rand() % (i+1);
temp = wload[i]; /* swap */
wload[i] = wload[j];
wload[j] = temp;
}
/* save workload */
if (sarg != NULL) {
FILE *fout;
char hdr[32];
char str[48];
sequence entry = {hdr, str};
size_type pcount = PROGRESS_COUNT-1;
size_type scount; /* save count */
if ((fout = fopen(sarg, "w")) == NULL) {
fprintf(stderr, " -- can't open file: %s\n", sarg);
exit(EXIT_FAILURE);
}
if (flags & PFLAG) fprintf(stderr, "save workload");
for (scount = 0; scount < warg; scount++) {
sprintf(entry.hdr, "%lu", (ulong_t)scount);
kton(wload[scount], karg, entry.str);
Fasta_Write_File(fout, &entry, 1, karg);
if ((flags & PFLAG) && scount >= pcount) {
fputc('.', stderr);
pcount += PROGRESS_COUNT;
}
}
if (flags & PFLAG) fputc('\n', stderr);
fclose(fout);
}
/* do lookup */
if (flags & PFLAG) fprintf(stderr, "workload");
tinsert = tlookup = 0;
kbuf.clear_counts();
kdb.clear_time();
CLOCKS_EMULATE
CACHE_BARRIER(kdb.acc)
TRACE_START
STATS_START
#if 1
(void)pcount; /* silence warning */
if (flags & PFLAG) fprintf(stderr, "...");
#if defined(SYSTEMC)
struct timeval rt0;
gettimeofday(&rt0,NULL);
#endif
tget(start);
kbuf.block_lookup(wload, warg);
lcount = warg;
tget(finish);
#if defined(SYSTEMC)
struct timeval rt1;
gettimeofday(&rt1,NULL);
#endif
#else
tget(start);
while (lcount < warg) {
kbuf.lookup(wload[lcount]);
lcount++;
if ((flags & PFLAG) && lcount >= pcount) {
fputc('.', stderr);
pcount += PROGRESS_COUNT;
}
}
kbuf.flush_lookup();
tget(finish);
#endif
CACHE_BARRIER(kdb.acc)
STATS_STOP
TRACE_STOP
CLOCKS_NORMAL
if (flags & PFLAG) fputc('\n', stderr);
trun = tdiff(finish, start);
toper = trun-tinsert-tlookup;
printf("Lookup count: %lu\n", (ulong_t)lcount);
printf("Lookup hits: %lu %.2f%%\n", (ulong_t)kbuf.hits, (double)kbuf.hits/lcount*100.0);
printf("Lookup zipf: %.2f\n", zarg);
printf("Lookup rate: %f ops/sec\n", lcount/tvesec(tlookup));
#if defined(SYSTEMC)
printf("Real time: %f sec\n",
(((unsigned long long)rt1.tv_sec*1000000UL+rt1.tv_usec) -
((unsigned long long)rt0.tv_sec*1000000UL+rt0.tv_usec)) /
(double)1000000UL);
#endif
printf("Run time: %f sec\n", tvesec(trun));
printf("Oper. time: %f sec\n", tvesec(toper));
printf("Lookup time: %f sec\n", tvesec(tlookup));
kdb.print_time();
STATS_PRINT
}
/* * * * * * * * * * Wrap Up * * * * * * * * * */
TRACE_CAP
return EXIT_SUCCESS;
}
/* TODO:
* Fix reaching load factor
* Verify uniform and Zipf distributions
*/
/* DONE:
*/
|
#pragma once
#include <pybind11/numpy.h>
#if __has_include(<mdspan>)
#include <mdspan>
PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE)
PYBIND11_NAMESPACE_BEGIN(detail)
using std::basic_mdspan;
using std::dynamic_extent;
using std::extents;
using std::layout_left;
using std::layout_right;
using std::layout_stride;
PYBIND11_NAMESPACE_END(detail)
PYBIND11_NAMESPACE_END(PYBIND11_NAMESPACE)
#elif __has_include(<experimental/mdspan>)
#include <experimental/mdspan>
PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE)
PYBIND11_NAMESPACE_BEGIN(detail)
using std::experimental::basic_mdspan;
using std::experimental::dynamic_extent;
using std::experimental::extents;
using std::experimental::layout_left;
using std::experimental::layout_right;
using std::experimental::layout_stride;
PYBIND11_NAMESPACE_END(detail)
PYBIND11_NAMESPACE_END(PYBIND11_NAMESPACE)
#else
#error "Could not find mdspan header!"
#endif
#include "log.h"
PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE)
PYBIND11_NAMESPACE_BEGIN(detail)
// Convert to fully-unconstrained numpy-compatible mapping
// TODO do this without triggering -Wunused-value
template<typename Extents>
struct fully_dynamic_extents;
template<ptrdiff_t... Extent>
struct fully_dynamic_extents<extents<Extent...>> {
using type = extents<(Extent, dynamic_extent)...>;
};
template<typename Extents>
struct fully_dynamic_layout;
template<ptrdiff_t... Extent>
struct fully_dynamic_layout<extents<Extent...>> {
using type = layout_stride<(Extent, dynamic_extent)...>;
};
// Converts a Numpy ndarray to a dynamic mdspan
template<typename Scalar, typename Extents, typename Layout, typename Access>
void ndarray_to_mdspan(array_t<Scalar>& arr, basic_mdspan<Scalar, Extents, Layout, Access>& ret) {
using Type = basic_mdspan<Scalar, Extents, Layout, Access>;
// Catch programmer errors
static_assert(Extents::rank() == Extents::rank_dynamic(),
"Extents must be fully dynamic");
static_assert(!Type::mapping_type::is_always_contiguous(),
"Layout must be fully strided");
// Arrays for ndarray shape + stride layout
std::array<ptrdiff_t, Extents::rank()> extents_array;
std::array<ptrdiff_t, Extents::rank()> strides_array;
for (size_t i = 0; i < Extents::rank(); i++) {
// TODO will this ever happen?
if (arr.strides(i) % sizeof(Scalar) != 0) {
PYMDSPAN_LOG("Bad stride(%ld)=%ld is not a multiple of elem size (%lu)\n",
arr.strides(i), extents_array[i], sizeof(Scalar));
exit(1);
}
extents_array[i] = arr.shape(i);
strides_array[i] = arr.strides(i) / sizeof(Scalar);
}
const auto extents = typename Type::extents_type(extents_array);
const auto mapping = typename Type::mapping_type(extents, strides_array);
ret = Type(arr.mutable_data(), mapping);
}
// NB we won't implement all conversions here, just the ones we care about.
// Specifically, from fully-dynamic to some level of static specialization.
template<typename SpanA, typename SpanB>
_MDSPAN_CONSTEXPR_14 bool convert_to(const SpanA&, SpanB&);
template<typename Span>
_MDSPAN_CONSTEXPR_14 bool convert_to(const Span& a, Span& b) {
b = a;
return true;
}
template<
typename Scalar, typename Extents, typename Access,
typename DynExtents, typename DynLayout
>
_MDSPAN_CONSTEXPR_14 bool convert_to(
const basic_mdspan<Scalar, DynExtents, DynLayout, Access> a,
basic_mdspan<Scalar, Extents, layout_right, Access>& b) {
using TypeA = basic_mdspan<Scalar, DynExtents, DynLayout, Access>;
using TypeB = basic_mdspan<Scalar, Extents, layout_right, Access>;
// Catch programmer errors
static_assert(DynExtents::rank() == DynExtents::rank_dynamic(),
"Extents must be fully dynamic");
static_assert(!TypeA::mapping_type::is_always_contiguous(),
"Layout must be fully strided");
static_assert(Extents::rank() == Extents::rank_dynamic(),
"Partial static extents not yet implemented :(");
typename TypeB::mapping_type map(a.extents());
for (size_t i = 0; i < Extents::rank(); i++) {
if (b.static_extent(i) != dynamic_extent &&
b.static_extent(i) != a.extent(i)) {
PYMDSPAN_LOG("Static extent does not match\n");
return false;
}
if (map.stride(i) != a.stride(i)) {
PYMDSPAN_LOG("Stride does not match (got %ld, expected %ld)\n", a.stride(i), b.stride(i));
return false;
}
}
b = TypeB(a.data(), map);
return true;
}
template<
typename Scalar, typename Extents, typename Layout, typename Access,
typename DynExtents, typename DynLayout,
typename = typename std::enable_if<
!std::is_same<
basic_mdspan<Scalar, DynExtents, DynLayout, Access>,
basic_mdspan<Scalar, Extents, Layout, Access>
>::value
&& Extents::rank() != Extents::rank_dynamic()
>::type
>
_MDSPAN_CONSTEXPR_14 bool convert_to(
const basic_mdspan<Scalar, DynExtents, DynLayout, Access> a,
basic_mdspan<Scalar, Extents, Layout, Access>& b) {
using TypeA = basic_mdspan<Scalar, DynExtents, DynLayout, Access>;
using TypeB = basic_mdspan<Scalar, Extents, Layout, Access>;
// Catch programmer errors
static_assert(DynExtents::rank() == DynExtents::rank_dynamic(),
"Extents must be fully dynamic");
static_assert(!TypeA::mapping_type::is_always_contiguous(),
"Layout must be fully strided");
std::array<ptrdiff_t, Extents::rank()> strides;
for (size_t i = 0; i < Extents::rank(); i++) {
strides[i] = a.stride(i);
}
typename TypeB::mapping_type map = {a.extents(), strides};
for (size_t i = 0; i < Extents::rank(); i++) {
if (b.static_extent(i) != dynamic_extent &&
b.static_extent(i) != a.extent(i)) {
PYMDSPAN_LOG("Static extent does not match\n");
return false;
}
}
b = TypeB(a.data(), map);
return true;
}
// The actual type caster - defined in terms of the above
// We trivially cast the ndarray to an underconstrained mdspan,
// and then check that the mdspan satisfies the type we actually want.
template<typename Scalar, typename Extents, typename Access, typename Layout>
struct type_caster<
basic_mdspan<Scalar, Extents, Layout, Access>
> {
private:
using Type = basic_mdspan<Scalar, Extents, Layout, Access>;
using Mapping = typename Type::mapping_type;
using DynExtents = typename fully_dynamic_extents<Extents>::type;
using DynLayout = typename fully_dynamic_layout <Extents>::type;
using DynType = basic_mdspan<Scalar, DynExtents, DynLayout, Access>;
using Array = array_t<Scalar, array::forcecast>;
Type ref;
public:
static constexpr auto name = _("mdspan-from-ndarray");
static constexpr bool need_writeable = !std::is_const<Scalar>::value;
bool load(handle src, bool /* TODO conversion not supported */) {
if (!isinstance<Array>(src)) {
PYMDSPAN_LOG("Not an instance of array<%s>\n", typeid(Scalar).name());
return false;
}
auto aref = reinterpret_borrow<Array>(src);
if (!aref || (need_writeable && !aref.writeable())) {
PYMDSPAN_LOG("Could not cast writeable array\n");
return false;
}
if (Extents::rank() != aref.ndim()) {
PYMDSPAN_LOG("Wrong rank (%ld vs %ld)\n", Extents::rank(), aref.ndim());
return false;
}
DynType dref;
ndarray_to_mdspan(aref, dref);
if (!convert_to(dref, ref)) {
PYMDSPAN_LOG("Could not specialize ndarray to specified mdspan\n");
return false;
}
return true;
}
operator Type*() { return &ref; }
operator Type&() { return ref; }
template<typename U>
using cast_op_type = pybind11::detail::cast_op_type<U>;
};
PYBIND11_NAMESPACE_END(detail)
PYBIND11_NAMESPACE_END(PYBIND11_NAMESPACE)
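// ---------------------------------------------------------------------------
// Usage sketch (illustrative only -- the function, module name and extent
// shapes below are hypothetical, and the defaulted layout/accessor template
// arguments are assumed to resolve as in the reference mdspan implementation):
//
//   double sum2d(basic_mdspan<double,
//                             extents<dynamic_extent, dynamic_extent>> m) {
//       double s = 0;
//       for (ptrdiff_t i = 0; i < m.extent(0); i++)
//           for (ptrdiff_t j = 0; j < m.extent(1); j++)
//               s += m(i, j);
//       return s;
//   }
//
//   PYBIND11_MODULE(example, mod) {
//       mod.def("sum2d", &sum2d);  // the caster above handles ndarray -> mdspan
//   }
//
// From Python, example.sum2d(np.ones((3, 4))) would then be routed through
// load() -> ndarray_to_mdspan() -> convert_to().
// ---------------------------------------------------------------------------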
|
// NewConsumer creates a new ConsumerActor from a consumer implementation
// and returns both a pointer to the ConsumerActor itself and the PID
// of the consumer. The pointer to the ConsumerActor can be used to subscribe
// to topics of the stream.
func NewConsumer(consumer Consumer) (*ConsumerActor, *quacktors.Pid) {
actor := &ConsumerActor{
Consumer: consumer,
}
pid := quacktors.SpawnStateful(actor)
return actor, pid
}
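// Usage sketch (illustrative; MyConsumer and the topic name are hypothetical,
// and a Subscribe method is assumed from the doc comment above rather than
// shown here):
//
//	actor, pid := NewConsumer(&MyConsumer{})
//	actor.Subscribe("metrics")       // subscribe the actor to a stream topic
//	quacktors.Send(pid, someMessage) // address the consumer through its PID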
|
/*
* Copyright 2018 JDCLOUD.COM
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
 * http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*
*
*
*
* Contact:
*
* NOTE: This class is auto generated by the jdcloud code generator program.
*/
package com.jdcloud.sdk.service.ag.model;
import java.util.List;
import java.util.ArrayList;
/**
 * Availability group details
*/
public class AvailabilityGroup implements java.io.Serializable {
private static final long serialVersionUID = 1L;
/**
 * Availability group ID
*/
private String id;
/**
 * Availability group name
*/
private String name;
/**
 * Description, length: 0-256
*/
private String description;
/**
 * Instance template ID
*/
private String instanceTemplateId;
/**
 * Availability zones
*/
private List<String> azs;
/**
 * Availability group type; supports vm
*/
private String agType;
/**
 * Creation time
*/
private String createTime;
/**
 * Number of instances in the availability group
*/
private Number count;
/**
 * Whether auto scaling is enabled
*/
private Boolean autoScaling;
/**
 * get availability group ID
*
* @return
*/
public String getId() {
return id;
}
/**
 * set availability group ID
*
* @param id
*/
public void setId(String id) {
this.id = id;
}
/**
 * get availability group name
*
* @return
*/
public String getName() {
return name;
}
/**
 * set availability group name
*
* @param name
*/
public void setName(String name) {
this.name = name;
}
/**
 * get description, length: 0-256
*
* @return
*/
public String getDescription() {
return description;
}
/**
 * set description, length: 0-256
*
* @param description
*/
public void setDescription(String description) {
this.description = description;
}
/**
 * get instance template ID
*
* @return
*/
public String getInstanceTemplateId() {
return instanceTemplateId;
}
/**
 * set instance template ID
*
* @param instanceTemplateId
*/
public void setInstanceTemplateId(String instanceTemplateId) {
this.instanceTemplateId = instanceTemplateId;
}
/**
 * get availability zones
*
* @return
*/
public List<String> getAzs() {
return azs;
}
/**
 * set availability zones
*
* @param azs
*/
public void setAzs(List<String> azs) {
this.azs = azs;
}
/**
 * get availability group type; supports vm
*
* @return
*/
public String getAgType() {
return agType;
}
/**
 * set availability group type; supports vm
*
* @param agType
*/
public void setAgType(String agType) {
this.agType = agType;
}
/**
 * get creation time
*
* @return
*/
public String getCreateTime() {
return createTime;
}
/**
 * set creation time
*
* @param createTime
*/
public void setCreateTime(String createTime) {
this.createTime = createTime;
}
/**
 * get number of instances in the availability group
*
* @return
*/
public Number getCount() {
return count;
}
/**
 * set number of instances in the availability group
*
* @param count
*/
public void setCount(Number count) {
this.count = count;
}
/**
 * get whether auto scaling is enabled
*
* @return
*/
public Boolean getAutoScaling() {
return autoScaling;
}
/**
 * set whether auto scaling is enabled
*
* @param autoScaling
*/
public void setAutoScaling(Boolean autoScaling) {
this.autoScaling = autoScaling;
}
/**
 * set availability group ID
*
* @param id
*/
public AvailabilityGroup id(String id) {
this.id = id;
return this;
}
/**
 * set availability group name
*
* @param name
*/
public AvailabilityGroup name(String name) {
this.name = name;
return this;
}
/**
 * set description, length: 0-256
*
* @param description
*/
public AvailabilityGroup description(String description) {
this.description = description;
return this;
}
/**
 * set instance template ID
*
* @param instanceTemplateId
*/
public AvailabilityGroup instanceTemplateId(String instanceTemplateId) {
this.instanceTemplateId = instanceTemplateId;
return this;
}
/**
 * set availability zones
*
* @param azs
*/
public AvailabilityGroup azs(List<String> azs) {
this.azs = azs;
return this;
}
/**
 * set availability group type; supports vm
*
* @param agType
*/
public AvailabilityGroup agType(String agType) {
this.agType = agType;
return this;
}
/**
 * set creation time
*
* @param createTime
*/
public AvailabilityGroup createTime(String createTime) {
this.createTime = createTime;
return this;
}
/**
 * set number of instances in the availability group
*
* @param count
*/
public AvailabilityGroup count(Number count) {
this.count = count;
return this;
}
/**
 * set whether auto scaling is enabled
*
* @param autoScaling
*/
public AvailabilityGroup autoScaling(Boolean autoScaling) {
this.autoScaling = autoScaling;
return this;
}
/**
 * add item to availability zones
*
* @param az
*/
public void addAz(String az) {
if (this.azs == null) {
this.azs = new ArrayList<>();
}
this.azs.add(az);
}
}
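// Usage sketch (illustrative; all values shown are hypothetical):
//
//   AvailabilityGroup group = new AvailabilityGroup()
//           .name("web-servers")
//           .description("front-end fleet")
//           .instanceTemplateId("it-12345")
//           .agType("vm")
//           .autoScaling(Boolean.TRUE);
//   group.addAz("cn-north-1a");
//   group.addAz("cn-north-1b");
//
// The chained setters return `this`, so the object can be built fluently,
// and addAz() lazily creates the underlying list on first use.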
|
# coding=utf-8
import psutil
from pyprobe.sensors import *
__author__ = '<NAME>'
class DiskSpaceSensor(BaseSensor):
KIND = u'diskspace'
def define(self, configuration):
        result = SensorDescription(u"Laufwerkskapazität", self.KIND)  # German: "Drive capacity"
        # German: "Monitors the free disk space of a drive"
        result.description = u"Monitort den freien Speicherplatz eines Laufwerks"
        group = GroupDescription(u"settings", u"Einstellungen")  # German: "Settings"
        field_drive = FieldDescription(u"device", u"Laufwerk")  # German: "Drive"
        # German: "Specify the drive whose free space should be monitored (e.g. /
        # or /mnt/drive). You can configure the desired limits in the sensor's channels."
        field_drive.help = u"Geben Sie das Laufwerk an, dessen freier Speicherplatz überwacht werden soll (z.B. / " \
                           u"oder /mnt/drive. Sie können die gewünschten Grenzwerte in den Kanälen des Sensors " \
                           u"konfigurieren."
field_drive.required = True
group.fields.append(field_drive)
result.groups.append(group)
return result
def execute(self, sensorid, host, parameters, configuration):
if not "device" in parameters:
raise ValueError("Parameter device is a mandatory parameter.")
device = parameters["device"]
usage = psutil.disk_usage(device)
result = SensorResult(sensorid)
free = max(100.0 - usage.percent, 0)
channel = SensorChannel(u"Freier Platz", ModeType.FLOAT, ValueType.PERCENT, free)
result.channel.append(channel)
channel = SensorChannel(u"Freie Bytes", ModeType.INTEGER, ValueType.BYTES_DISK, usage.free)
result.channel.append(channel)
return result
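# Usage sketch (illustrative; the sensor id, host and empty configuration are
# hypothetical -- in production the pyprobe framework drives define()/execute()):
#
#   sensor = DiskSpaceSensor()
#   description = sensor.define(configuration={})
#   result = sensor.execute(sensorid="1234", host="localhost",
#                           parameters={"device": "/"}, configuration={})
#   # result.channel now holds the free-space percentage and the free bytes.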
|
"If the main opposition considered themselves as people's representatives and servants, then it was its obligation to discharge its responsibilities by joining parliament and expressing views through constructive discussions in and outside parliament," he said.
The president was addressing the opening sitting of the New Year session of the House amid boycott by the BNP-led opposition MPs.
As was the case yesterday, the opposition lawmakers, belonging to the BNP, Jamaat-e-Islami and Bangladesh Jatiya Party, had also been absent when he addressed the Jatiya Sangsad in the last three New Year sessions.
They have already boycotted 283 of the total 337 sittings of this parliament in the last four years, setting a new record in the House boycott culture of Bangladesh.
Experts say the culture of prolonged boycott has been crippling the parliamentary system since its restoration in 1991 and increasing the confrontational culture in politics.
The Awami League while in the opposition bench also boycotted the proceedings of fifth [1991-1996] and eighth parliament [2001-2006].
The BNP, too, while in opposition boycotted many sittings of the seventh parliament [1996-2001].
The president in his address yesterday said keeping alive political conflict by clinging to any particular position or refraining from carrying out responsibilities bestowed by the people through the adoption of a rigid stance on parliament boycott does not conform to democratic behaviour.
"I therefore once again urge the opposition in this final year of the present parliament and government to please shun the path of igniting fire, violence and anarchy. Please place all your complaints, proposals, recommendations and opinions in parliament and help democracy flourish."
Citing his efforts to constitute the present Election Commission through holding talks with the political parties, the president said this was the first time in Bangladesh that an independent, neutral and powerful EC has been constituted by pursuing the process of dialogue.
"I am firmly optimistic that we shall be able to present a free, neutral and credible parliamentary election in future under the pragmatic leadership of the present Election Commission and through our combined efforts."
In his 148-page address, the president described in detail the current government's various measures and successes in different sectors. However, he did not say if there was any failure on the part of the government.
"The present government has taken all out initiatives to ensure transparency and accountability, open and tolerant conduct, respect toward human rights and rule of law and governance through discussions with all stakeholders," he said.
Zillur Rahman, the head of state, took only 10 minutes to speak about some key features of his written address, which was earlier approved by the cabinet, and the rest was considered as read out.
About the rule of BNP-Jamaat-led alliance, the president said the alliance government had started politics of vengeance immediately after assuming office in 2001 and continued this trend up to 2006. But the history of the world has repeatedly proved that the politics of terrorism and hatred cannot bring benefits for the society and the economy, he said.
"The destructive politics of BNP-Jamaat government had given rise to Bhaban-centric, anti-people activities… but the hardworking people-peasants-labourers, men, women and youths of this country did not tolerate this aberration and waywardness. They, therefore, created another history through the Jatiya Sangsad elections of December 2008," the president said.
"We should take lessons from the past."
|
Q:
Meditation - how to practice it?
Is it more important to develop one's own meditation skills or follow a prescribed method of a teacher? After all I think the Buddha worked it out for himself. And I have noticed in my own practice that following someone else's method can be confusing, if I don't understand properly what they teach. Is discovering it for yourself, such as meditation, the best way?
N.B. I don't have any technique; I just sit there and try not to hang onto my thoughts. I think if we make meditation out to be difficult we set up obstacles to it in our minds. When I think it's easy, I find it easy to do.
A:
Both. While it's pretty much impossible to make real progress in meditation on your own, under the guidance of any good teacher, you are still expected to figure things out for yourself. I know this seems contradictory, but a meditation master can only guide your progress. It's absolutely impossible for them to get into your head and show you how to practice. Think of sitting as a skill like wood working. While the master carpenter can give you instructions on how to cut a board, only through your own efforts and experience will you understand how much pressure to use on different wood types, how to work the grain to your advantage, etc. Likewise, on the cushion, while your teacher will tell you to watch your breath, it's up to you to figure out how firmly to apply your attention, where the best anchor point is for your mind, how much to relax, and how to use your posture to your advantage. Like any skill, mastery only comes through countless hours, many questions, ample feedback, and innumerable breaths.
A:
or follow a prescribed method of a teacher?
In Buddhism, when we are beginners, we generally follow methods of a teacher because generally the method of a teacher has a goal, has structure and can bring some preliminary results that give a taste of the goal.
After all I think the Buddha worked it out for himself.
The Buddha did work it out for himself; which is why he is so esteemed because working it out for yourself is very difficult, close to impossible, which is why it took the Buddha six years to work it out.
But you seem to have overlooked the fact that the Buddha did not actually know what he was trying to work out; apart from looking for real peace. He was searching or groping in the dark.
And I have noticed in my own practice
What practice? What exactly is your own practice? This has not been made clear.
that following someone else's method can be confusing, if I don't understand properly what they teach.
An effort is made to work out what they teach; such as asking questions about what they teach.
Is discovering it for yourself, such as meditation, the best way?
Discovering what for yourself? Respectfully, you haven't mentioned what you are trying to discover.
A:
First, the Buddha learned much from his teachers. See Ariyapariyesana Sutta: The Noble Search. Eventually he noticed that their approach didn't really reach the goal, so he discovered another approach.
So we can say it's more important to develop our own meditation skills than mindlessly follow a prescribed method.
However, proper methods are based on knowledge and experience. Experimenting by ourselves, without knowledge and experience, we can make very stupid mistakes.
On the other hand, not all teachers are very good, so they can use not the best methods for you or they might explain them not so well.
The practice can be confusing, if I don't understand properly what they teach. That's very true. It's dangerous to practise what you don't really understand, so don't do that. First try to understand well, then practice.
Without good guidance and correct attitude, some meditators harmed themselves very much.
Thus my answer is: definitely be attentive and let your practice be your own exploration. Only under that condition do people really develop talent. On the other hand, study from others. Otherwise you would hardly go far. It's like a tree and grass: exploring totally on our own, we grow to the height of grass. Exploring with the help of previous explorers, we grow to the height of a tree.
|
Comparison of data mining algorithms in remote sensing using Lidar data fusion and feature selection Application of data mining techniques defines the basis of land use classification. Even though multispectral images can be very accurate in classifying land cover categories, using spectral reflectivity alone sometimes fails to distinguish between land cover types that share similar spectral signatures, such as forest and wetlands. The problem is aggravated by the interpolation of neighbourhood pixel values. In this paper, we present a comparison of four classification and clustering algorithms and analyze their performance. These algorithms are applied both to spectral reflectivity values alone and to spectral reflectivity combined with Lidar data. Experiments were performed in Carlton County, Minnesota. Accuracy estimation was conducted for all models. Experiments indicate that accuracy increases when Lidar data is used to complement the spectral reflectivity values. Random Forest Classification and Support Vector Machines yield good results consistently, due to their ensemble learning methods and their ability to represent non-linear relationships in the dataset, respectively. Maximum likelihood shows significant improvement with Lidar data fusion, while the ISODATA clustering approach has the lowest accuracy rate.
|
Application of least square estimation technique for image restoration using signal-noise correlation constraint A linear degradation model for digital image restoration is considered. It is assumed that the noise is additive, uncorrelated and independent of the degradation process and also of original image. A new objective criterion is used for the least square restoration method proposed here. The criterion is to minimize the correlation between image and noise gray levels. The method can be implemented in the Fourier domain and is computationally attractive. The test results on a chromosome picture are demonstrated for different signal-to-noise power ratios.
|
/*
* Copyright (C) 2009 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.newpos.mposlib.bluetooth;
import android.annotation.SuppressLint;
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import android.bluetooth.BluetoothServerSocket;
import android.bluetooth.BluetoothSocket;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.text.TextUtils;
import android.util.Log;
import android.widget.Toast;
import com.newpos.mposlib.BuildConfig;
import com.newpos.mposlib.api.IBluetoothDevListener;
import com.newpos.mposlib.exception.EventBusContent;
import com.newpos.mposlib.exception.SDKException;
import com.newpos.mposlib.impl.AckRequestMgr;
import com.newpos.mposlib.impl.Command;
import com.newpos.mposlib.model.EventActionInfo;
import com.newpos.mposlib.model.Packet;
import com.newpos.mposlib.model.ResponseData;
import com.newpos.mposlib.util.EventUtil;
import com.newpos.mposlib.util.LogUtil;
import com.newpos.mposlib.util.StringUtil;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.Timer;
import java.util.TimerTask;
import java.util.UUID;
import java.util.concurrent.ConcurrentLinkedQueue;
@SuppressLint("NewApi")
public class BluetoothService {
// Debugging
private static final String TAG = "bts";
private static final UUID UUID_OTHER_DEVICE =
UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");
// Member fields
private final BluetoothAdapter mAdapter;
private ConnectThread mConnectThread;
private ConnectedThread mConnectedThread;
private ProcessMessageThread mProcessMessageThread;
private volatile int mState;
private boolean isAndroid = BluetoothState.DEVICE_ANDROID;
private final static int DELAY = 400; //ms
private final static int SLICE = 450;//256; //byte
    // Constructor. Prepares a new Bluetooth communication session.
    // Obtains the default adapter and starts in the disconnected state.
private BluetoothService() {
mAdapter = BluetoothAdapter.getDefaultAdapter();
mState = BluetoothState.STATE_NONE;
}
private static BluetoothService instance = null;
public static BluetoothService I() {
if(instance == null) {
synchronized (BluetoothService.class) {
if(instance == null) {
instance = new BluetoothService();
}
}
}
return instance;
}
// Set the current state of the chat connection
// state : An integer defining the current connection state
private synchronized void setBtState(int state) {
LogUtil.d(TAG, "setState() " + mState + " -> " + state);
mState = state;
// Give the new state to the Handler so the UI Activity can update
//mHandler.obtainMessage(BluetoothState.MESSAGE_STATE_CHANGE, state, -1).sendToTarget();
}
// Return the current connection state.
public synchronized int getBtState() {
return mState;
}
    // Start the ConnectThread to initiate a connection to a remote device
    // device : The BluetoothDevice to connect to
public synchronized void connect(BluetoothDevice device) {
// Cancel any thread attempting to make a connection
if (mState == BluetoothState.STATE_CONNECTING) {
if (mConnectThread != null) {
mConnectThread.cancel();
mConnectThread = null;
}
}
// Cancel any thread currently running a connection
if (mConnectedThread != null) {
mConnectedThread.cancel(false);
mConnectedThread = null;
}
// Cancel any thread currently running a connection
if (mProcessMessageThread != null) {
mProcessMessageThread.cancel();
mProcessMessageThread = null;
}
// Start the thread to connect with the given device
mConnectThread = new ConnectThread(device);
mConnectThread.start();
setBtState(BluetoothState.STATE_CONNECTING);
}
/**
* Start the ConnectedThread to begin managing a Bluetooth connection
* @param socket The BluetoothSocket on which the connection was made
* @param device The BluetoothDevice that has been connected
*/
public synchronized void connected(BluetoothSocket socket, BluetoothDevice
device, final String socketType) {
// Cancel the thread that completed the connection
if (mConnectThread != null) {
mConnectThread.cancel();
mConnectThread = null;
}
// Cancel any thread currently running a connection
if (mConnectedThread != null) {
mConnectedThread.cancel(false);
mConnectedThread = null;
}
// Cancel any thread currently running a connection
if (mProcessMessageThread != null) {
mProcessMessageThread.cancel();
mProcessMessageThread = null;
}
// Start the thread to manage the connection and perform transmissions
mConnectedThread = new ConnectedThread(socket, socketType);
mConnectedThread.start();
mProcessMessageThread = new ProcessMessageThread();
mProcessMessageThread.start();
}
// Stop all threads
public synchronized void stop(boolean needCallback) {
if (recvBuffer != null) {
recvBuffer.clear();
}
if (mConnectThread != null) {
mConnectThread.cancel();
mConnectThread = null;
}
if (mConnectedThread != null) {
mConnectedThread.cancel(needCallback);
mConnectedThread.interrupt();
mConnectedThread = null;
}
if (mProcessMessageThread != null) {
mProcessMessageThread.cancel();
mProcessMessageThread = null;
}
setBtState(BluetoothState.STATE_NONE);
}
// Write to the ConnectedThread in an unsynchronized manner
// out : The bytes to write
public void writeData(byte[] out) throws SDKException {
// Create temporary object
ConnectedThread r;
// Synchronize a copy of the ConnectedThread
synchronized (this) {
if (mState != BluetoothState.STATE_CONNECTED) {
throw new SDKException(SDKException.ERR_CODE_COMMUNICATE_ERROR);
}
r = mConnectedThread;
}
// Perform the write unsynchronized
r.write(out);
}
// Indicate that the connection attempt failed and notify the UI Activity
private void connectionFailed() {
setBtState(BluetoothState.STATE_NONE);
}
// Indicate that the connection was lost and notify the UI Activity
private void connectionLost() {
setBtState(BluetoothState.STATE_NONE);
}
// This thread runs while attempting to make an outgoing connection
    // with a device. It runs straight through until
    // the connection either succeeds or fails.
private class ConnectThread extends Thread {
private BluetoothSocket mmSocket;
private final BluetoothDevice mmDevice;
private String mSocketType;
private boolean isCancel;
public ConnectThread(BluetoothDevice device) {
mmDevice = device;
isCancel = false;
BluetoothSocket tmp = null;
// Get a BluetoothSocket for a connection with the
// given BluetoothDevice
try {
tmp = device.createInsecureRfcommSocketToServiceRecord(UUID_OTHER_DEVICE);
} catch (Throwable e) {
if (LogUtil.DEBUG) {
e.printStackTrace();
}
}
mmSocket = tmp;
}
public void run() {
// Always cancel discovery because it will slow down a connection
mAdapter.cancelDiscovery();
// Make a connection to the BluetoothSocket
try {
// This is a blocking call and will only return on a
// successful connection or an exception
mmSocket.connect();
} catch (Throwable e) {
if (isCancel) {
return;
}
try {
Thread.sleep(1000);
} catch (Exception e0) {
}
if (isCancel || isConnected()) {
return;
}
try {
mmSocket = mmDevice.createInsecureRfcommSocketToServiceRecord(UUID_OTHER_DEVICE);
mmSocket.connect();
} catch (Throwable e1) {
if (isCancel || isConnected()) {
return;
}
// Close the socket
try {
if (mmSocket != null) {
mmSocket.close();
}
} catch (Throwable e2) {
if (LogUtil.DEBUG) {
e2.printStackTrace();
}
}
connectionFailed();
if (mIBluetoothDevListener != null) {
mIBluetoothDevListener.onConnectedDevice(false);
}
return;
}
}
// Reset the ConnectThread because we're done
synchronized (BluetoothService.this) {
mConnectThread = null;
}
// Start the connected thread
connected(mmSocket, mmDevice, mSocketType);
}
public void cancel() {
try {
isCancel = true;
if (mmSocket != null) {
mmSocket.getInputStream().close();
}
} catch (Throwable e) {
if (LogUtil.DEBUG) {
e.printStackTrace();
}
}
try {
if (mmSocket != null) {
mmSocket.close();
}
} catch (Throwable e) {
if (LogUtil.DEBUG) {
e.printStackTrace();
}
}
}
}
private void doAckResponse(Packet packet) {
AckRequestMgr.RequestTask task = AckRequestMgr.I().getAndRemove(packet.packetID);
if (task != null) {
task.success(packet);
}
}
class ProcessMessageThread extends Thread {
/**
* thread cancel flag
*/
private volatile boolean isCancel = false;
public ProcessMessageThread() {
this.isCancel = false;
}
public void cancel() {
isCancel = true;
}
@Override
public void run() {
while (!isCancel) {
if (!recvBuffer.isEmpty()) {
byte[] data = recvBuffer.poll();
LogUtil.i("DP::poll_buffer << " + StringUtil.byte2HexStr(data));
final Packet packet = Command.parsePacket(data);
if (packet != null) {
if (packet.isAck()) {
doAckResponse(packet);
}
processResponse(packet);
}
}
try {
if (recvBuffer.isEmpty()) {
Thread.sleep(60);
}
} catch (InterruptedException e) {
if (LogUtil.DEBUG) {
e.printStackTrace();
}
}
}
}
    }
// This thread runs during a connection with a remote device.
// It handles all incoming and outgoing transmissions.
private class ConnectedThread extends Thread {
private final BluetoothSocket mmSocket;
private final InputStream mmInStream;
private final OutputStream mmOutStream;
private volatile boolean isCancel = false;
private volatile boolean needCallback = true;
public ConnectedThread(BluetoothSocket socket, String socketType) {
needCallback = true;
mmSocket = socket;
InputStream tmpIn = null;
OutputStream tmpOut = null;
// Get the BluetoothSocket input and output streams
try {
tmpIn = socket.getInputStream();
tmpOut = socket.getOutputStream();
} catch (IOException e) { }
mmInStream = tmpIn;
mmOutStream = tmpOut;
}
public void run() {
if (getBtState() != BluetoothState.STATE_CONNECTED) {
setBtState(BluetoothState.STATE_CONNECTED);
new Thread(new Runnable() {
@Override
public void run() {
if (mIBluetoothDevListener != null) {
mIBluetoothDevListener.onConnectedDevice(true);
}
}
}).start();
}
// Keep listening to the InputStream while connected
while (!isCancel) {
try {
//int data = mmInStream.read();
byte[] respBytes = null;
byte[] headBytes = new byte[1];
byte[] LenBytes = new byte[2];
while (true) {
// read until return some data
headBytes = readFixedBytes(1);
if (Packet.PACKET_HEAD == headBytes[0]) {
LenBytes = readFixedBytes(2);
int length = Integer.valueOf(StringUtil.byte2HexStr(LenBytes)).intValue() + 2;
respBytes = readFixedBytes(length);
if (Packet.PACKET_TAIL == respBytes[respBytes.length - 2]) {
// get one data
break;
} else {
if (BuildConfig.DEBUG) {
//LogUtil.e("recv error packet:" + StringUtil.byte2HexStr(respBytes));
}
}
} else {
//LogUtil.i("recv error packet:" + headBytes[0]);
}
}
if (respBytes != null) {
byte[] readData = new byte[(headBytes.length + LenBytes.length + respBytes.length)];
System.arraycopy(headBytes, 0, readData, 0, 1);
System.arraycopy(LenBytes, 0, readData, 1, 2);
System.arraycopy(respBytes, 0, readData, 3, respBytes.length);
if (LogUtil.DEBUG) {
LogUtil.i("<< " + StringUtil.byte2HexStr(readData));
}
recvBuffer.offer(readData);
}
} catch (IOException e) {
connectionLost();
LogUtil.e( "Exception during read\n" + e);
if (recvBuffer != null) {
recvBuffer.clear();
}
AckRequestMgr.I().clear();
Command.I().clear();
if (needCallback) {
if (mIBluetoothDevListener != null) {
mIBluetoothDevListener.onDisconnectedDevice();
}
}
break;
} catch (Throwable e) {
if (LogUtil.DEBUG) {
e.printStackTrace();
}
}
}
}
private byte[] readFixedBytes(int len) throws IOException {
byte[] result = new byte[len];
byte[] data = new byte[len];
int lens = 0x0;
int position = 0x0;
            while (len > 0) {
                lens = mmInStream.read(data);
                if (lens < 0) {
                    // End of stream: the remote side closed the connection
                    throw new IOException("Bluetooth stream closed");
                }
                len -= lens;
                System.arraycopy(data, 0x0, result, position, lens);
                position += lens;
                data = new byte[len];
            }
return result;
}
// Write to the connected OutStream.
// @param buffer The bytes to write
public void write(byte[] data) {
// final int perSise = SLICE;
// if (data.length > perSise) {
// int totalLen = data.length;
// int cnt = totalLen / perSise;
// int left = totalLen % perSise;
// for (int i = 0; i < cnt; i++) {
// byte[] toSend = new byte[perSise];
// System.arraycopy(data, i * perSise, toSend, 0, perSise);
// try {
// mmOutStream.write(toSend);
// } catch (Throwable e) {
// LogUtil.e("Exception during write\n" + e);
// setState(BluetoothState.STATE_NONE);
// }
//
// try {
// Thread.sleep(DELAY);
// } catch (InterruptedException e) {
// e.printStackTrace();
// }
// }
//
// if (left != 0) {
// byte[] toSend = new byte[left];
// System.arraycopy(data, cnt * perSise, toSend, 0, left);
// try {
// mmOutStream.write(toSend);
// } catch (Throwable e) {
// LogUtil.e("Exception during write\n" + e);
// setState(BluetoothState.STATE_NONE);
// }
// }
// } else {
try {
mmOutStream.write(data);
} catch (Throwable e) {
LogUtil.e("Exception during write\n" + e);
setBtState(BluetoothState.STATE_NONE);
}
// }
}
public synchronized void cancel(boolean needCallback) {
isCancel = true;
this.needCallback = needCallback;
try {
mmSocket.close();
} catch (IOException e) { }
}
}
/**
* receive buffer queue
*/
private final Queue<byte[]> recvBuffer = new ConcurrentLinkedQueue<>();
private List<BluetoothDevice> deviceList;
private IBluetoothDevListener mIBluetoothDevListener;
private Context mContext;
private Timer timer = new Timer();
private TimerTask searchTimerTask;
private BluetoothReceiver bluetoothReceiver;
private void registerBluetoothReceiver(Context context) {
mContext = context;
IntentFilter filter = new IntentFilter();
filter.addAction(BluetoothDevice.ACTION_FOUND);
filter.addAction(BluetoothAdapter.ACTION_DISCOVERY_FINISHED);
if((bluetoothReceiver == null) && (mContext != null)) {
bluetoothReceiver = new BluetoothReceiver();
mContext.registerReceiver(bluetoothReceiver, filter);
}
}
private void unregisterBluetoothReceiver() {
try {
if((bluetoothReceiver != null) && (mContext != null)) {
mContext.unregisterReceiver(bluetoothReceiver);
bluetoothReceiver = null;
return;
}
} catch(Exception ex) {
if (LogUtil.DEBUG) {
ex.printStackTrace();
}
}
}
/**
* search bluetooth device
* @param timeOut ms
*/
public void searchBluetoothDev(IBluetoothDevListener iBluetoothDevListener, Context context, int timeOut) {
LogUtil.d("searchBluetoothDev");
if (timeOut <= 0 || timeOut > StringUtil.MIN_MS) {
timeOut = StringUtil.MIN_MS;
}
mIBluetoothDevListener = iBluetoothDevListener;
// start timer to process timeout event
if (searchTimerTask != null) {
searchTimerTask.cancel();
}
searchTimerTask = new TimerTask() {
@Override
public void run() {
stopSearch();
}
};
timer.schedule(searchTimerTask, timeOut);
if (!mAdapter.isEnabled()) {
mAdapter.enable();
}
if (mAdapter.isDiscovering()) {
mAdapter.cancelDiscovery();
try {
Thread.sleep(600);
} catch (InterruptedException e) {
if (LogUtil.DEBUG) {
e.printStackTrace();
}
}
}
if (deviceList == null) {
deviceList = new ArrayList<>();
} else {
deviceList.clear();
}
registerBluetoothReceiver(context);
int tryCount = 3;
while (!mAdapter.startDiscovery() && tryCount > 0) {
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
if (LogUtil.DEBUG) {
e.printStackTrace();
}
}
tryCount--;
}
if (tryCount > 0) {
registerBluetoothReceiver(context);
}
}
/**
* bluetooth broadcast receiver
*/
class BluetoothReceiver extends BroadcastReceiver {
public BluetoothReceiver() {
}
public void onReceive(Context context, Intent intent) {
String action = intent.getAction();
LogUtil.d("action:" + action);
if (BluetoothDevice.ACTION_FOUND.equals(action)) {
BluetoothDevice bDevice = (BluetoothDevice) intent.getParcelableExtra(BluetoothDevice.EXTRA_DEVICE);
if (bDevice != null && bDevice.getName() != null) {
if (deviceList != null) {
boolean add = true;
for(BluetoothDevice dev : deviceList) {
if (dev.getAddress().equals(bDevice.getAddress())) {
add = false;
break;
}
}
if (add) {
deviceList.add(bDevice);
if (mIBluetoothDevListener != null) {
mIBluetoothDevListener.onSearchOneDevice(bDevice);
}
}
}
}
} else if(BluetoothAdapter.ACTION_DISCOVERY_FINISHED.equals(action)) {
unregisterBluetoothReceiver();
}
}
}
public synchronized void stopSearch() {
LogUtil.i("stopSearch");
try {
if (mAdapter.isDiscovering()) {
mAdapter.cancelDiscovery();
}
} catch (Throwable e) {
if (LogUtil.DEBUG) {
e.printStackTrace();
}
}
if (deviceList != null) {
deviceList.clear();
}
unregisterBluetoothReceiver();
}
public void setCallback(IBluetoothDevListener listener) {
mIBluetoothDevListener = listener;
}
    /**
     * Connect to a remote Bluetooth device by MAC address.
     * The result is reported asynchronously through
     * IBluetoothDevListener.onConnectedDevice().
     * @param macAddress remote device address
     */
public synchronized void connectDevice(String macAddress) {
LogUtil.d("connectDevice MAC:" + macAddress + " mState:" + mState);
if (searchTimerTask != null) {
searchTimerTask.cancel();
}
if (TextUtils.isEmpty(macAddress)) {
if (mIBluetoothDevListener != null) {
mIBluetoothDevListener.onConnectedDevice(false);
}
return;
}
if (isConnected()) {
if (mIBluetoothDevListener != null) {
mIBluetoothDevListener.onConnectedDevice(true);
}
return;
} else if (mState == BluetoothState.STATE_CONNECTING) {
LogUtil.d("STATE_CONNECTING");
return;
}
try {
BluetoothDevice bluetoothDevice = mAdapter.getRemoteDevice(macAddress);
connect(bluetoothDevice);
} catch(Throwable e) {
if (LogUtil.DEBUG) {
e.printStackTrace();
}
}
}
public boolean isConnected() {
return (getBtState() == BluetoothState.STATE_CONNECTED);
}
private void processResponse(Packet packet) {
int packetID = packet.getPacketID();
if (packetID == 1) {
packetID = 255;
} else {
packetID = packetID -1;
}
Command.RequestTask task = Command.I().getAndRemove(packet.getStrCommand());
if (task != null) {
LogUtil.i("DP::Get Task OK <<<<<<<<<<< ");
ResponseData responseData = new ResponseData(packet);
task.setResponse(responseData);
} else {
//if specific command, send tips to android
if(packet.getStrCommand().equals("A1FF")) {
byte[] result = packet.getParams();
if(result[0] == 0x01) {
LogUtil.i("DP::Get Pin Start Tips <<<<<<<<<<< ");
EventActionInfo info = new EventActionInfo(EventBusContent.EVENT_BUS_ONLINE_PIN_START);
//Send eventbus msg
EventUtil.post(info);
}else if(result[0] == 0x02) {
LogUtil.i("DP::Get Pin End Tips <<<<<<<<<<< ");
EventActionInfo info = new EventActionInfo(EventBusContent.EVENT_BUS_ONLINE_PIN_END);
//Send eventbus msg
EventUtil.post(info);
}
}
}
}
}
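// Usage sketch (illustrative; the listener, timeout and MAC address are
// hypothetical):
//
//   BluetoothService bt = BluetoothService.I();
//   bt.searchBluetoothDev(myListener, context, 10000); // scan up to 10 s;
//                                                      // devices arrive via onSearchOneDevice()
//   bt.stopSearch();
//   bt.connectDevice("00:11:22:33:44:55");             // result via onConnectedDevice()
//   if (bt.isConnected()) {
//       bt.writeData(packetBytes);                     // throws SDKException if not connected
//   }
//   bt.stop(false);                                    // tear down without a disconnect callback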
|
A CASE EXAMPLE FOR LASER DATA TREATMENT TO STUDY ROCKY FACES WITH PROTECTION BARRIERS LIDAR is a useful technique for natural and architectural metric surveying. Acquisition is the first step in laser surveying. The second step is the data treatment that is necessary to obtain a correct digital model of the object. This set of elaborations can be subdivided into preliminary data treatment and creation of the final model. The first operations provide a single noise-free point cloud in a specific reference system. The elaborations studied to reduce the noise in laser data caused by the protection barriers located on the rocky faces of the Longeborgne mountain (Switzerland) are explained here.
|
//
// Wiimote communication abstraction layer
//
#pragma once
#include <stddef.h>
#ifdef __cplusplus
extern "C" {
#endif
int wiimote_init();
int wiimote_shutdown();
// Handle to a Wiimote device
typedef struct wiimote_device *HWIIMOTE;
typedef void (*wiimote_device_found)(HWIIMOTE hDev, void *user);
struct wiimote_listener {
wiimote_device_found on_device_found;
};
// Initiates a scan for Wiimotes
// Calls wiimote_listener::on_device_found for every device found.
int wiimote_scan(struct wiimote_listener *l, void *user);
// Disconnect from a Wiimote
// This invalidates the handle
int wiimote_disconnect(HWIIMOTE hDev);
// Send a raw packet to the Wiimote
int wiimote_send(HWIIMOTE hDev, void const *data, size_t length);
// Receive a packet from the Wiimote
// Returns the length of the received packet, zero if no packet was
// received since the last call or -1 on error.
int wiimote_recv(HWIIMOTE hDev, void *data, size_t length);
#ifdef __cplusplus
}
#endif
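/*
 * Usage sketch (illustrative; on_found and the report payload are hypothetical):
 *
 *   static void on_found(HWIIMOTE h, void *user) {
 *       unsigned char report[] = { 0x10, 0x01 };  // example raw output report
 *       (void)user;
 *       wiimote_send(h, report, sizeof(report));
 *   }
 *
 *   int main(void) {
 *       struct wiimote_listener l = { .on_device_found = on_found };
 *       if (wiimote_init() != 0)
 *           return 1;
 *       wiimote_scan(&l, NULL);   // on_found fires for each device discovered
 *       wiimote_shutdown();
 *       return 0;
 *   }
 */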
|
/**************************************************************
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*
*************************************************************/
#ifndef _ANNOTATIONMENUBUTTON_HXX
#define _ANNOTATIONMENUBUTTON_HXX
#include <vcl/menubtn.hxx>
namespace sw { namespace sidebarwindows {
class SwSidebarWin;
} }
namespace sw { namespace annotation {
class AnnotationMenuButton : public MenuButton
{
public:
AnnotationMenuButton( sw::sidebarwindows::SwSidebarWin& rSidebarWin );
~AnnotationMenuButton();
// overloaded <MenuButton> methods
virtual void Select();
// overloaded <Window> methods
virtual void MouseButtonDown( const MouseEvent& rMEvt );
virtual void Paint( const Rectangle& rRect );
virtual void KeyInput( const KeyEvent& rKeyEvt );
private:
sw::sidebarwindows::SwSidebarWin& mrSidebarWin;
};
} } // end of namespace sw::annotation
#endif
|
import os

def create_ruby_src_file(basedir, daystr, sample_count):
filename = ''
if sample_count == 1:
filename = '../input/sample.txt'
else:
filename = '../input/sample1.txt'
with open(os.path.join(basedir, 'src', daystr + '.rb'), 'w') as file:
message = (
"#!/usr/bin/ruby\n"
"\n"
"require 'optparse'\n"
"\n"
"options = {\n"
" :part => 1,\n"
f" :filename => \"{filename}\"\n"
"}\n"
"\n"
"OptionParser.new do |opts|\n"
f" opts.banner = \"Usage: {daystr}.rb [options]\"\n"
" opts.on(\"-p PART\", \"--part=PART\", Integer, \"Part 1 or 2\")\n"
" opts.on(\"-f FILENAME\", \"--filename=FILENAME\", String, \"Which filename to use?\")\n"
"end.parse!(into: options)\n"
"\n"
"if options[:part] == 1\n"
" # do part 1\n"
"elsif options[:part] == 2\n"
" # do part 2\n"
"end\n"
)
file.write(message)
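# Usage sketch (illustrative; the base directory is hypothetical):
#
#   create_ruby_src_file('/home/user/aoc', 'day01', sample_count=2)
#   # -> writes /home/user/aoc/src/day01.rb, a Ruby skeleton whose
#   #    OptionParser defaults to part 1 and ../input/sample1.txt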
|
/*
* Copyright 2019, Cypress Semiconductor Corporation or a subsidiary of
* Cypress Semiconductor Corporation. All Rights Reserved.
*
* This software, associated documentation and materials ("Software"),
* is owned by Cypress Semiconductor Corporation
* or one of its subsidiaries ("Cypress") and is protected by and subject to
* worldwide patent protection (United States and foreign),
* United States copyright laws and international treaty provisions.
* Therefore, you may use this Software only as provided in the license
* agreement accompanying the software package from which you
* obtained this Software ("EULA").
* If no EULA applies, Cypress hereby grants you a personal, non-exclusive,
* non-transferable license to copy, modify, and compile the Software
* source code solely for use in connection with Cypress's
* integrated circuit products. Any reproduction, modification, translation,
* compilation, or representation of this Software except as specified
* above is prohibited without the express written permission of Cypress.
*
* Disclaimer: THIS SOFTWARE IS PROVIDED AS-IS, WITH NO WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, NONINFRINGEMENT, IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. Cypress
* reserves the right to make changes to the Software without notice. Cypress
* does not assume any liability arising out of the application or use of the
* Software or any product or circuit described in the Software. Cypress does
* not authorize its products for use in any products where a malfunction or
* failure of the Cypress product may reasonably be expected to result in
* significant property damage, injury or death ("High Risk Product"). By
* including Cypress's product in a High Risk Product, the manufacturer
* of such system or application assumes all risk of such use and in doing
* so agrees to indemnify Cypress against all liability.
*/
/** @file
* Implementation of wiced_rtos.c for ThreadX
*
* This is the ThreadX implementation of the Wiced RTOS
* abstraction layer.
* It provides Wiced with standard ways of using threads,
* semaphores and time functions
*
*/
#include "wwd_rtos.h"
#include <stdint.h>
#include "wwd_constants.h"
#include "RTOS/wwd_rtos_interface.h"
#include "wwd_assert.h"
#ifdef __GNUC__
#include <reent.h>
#endif /* #ifdef __GNUC__ */
#include <tx_api.h>
#include <tx_thread.h>
#include "platform/wwd_platform_interface.h"
#include "platform_config.h"
#ifdef __GNUC__
void __malloc_lock(struct _reent *ptr);
void __malloc_unlock(struct _reent *ptr);
#endif /* ifdef __GNUC__ */
/******************************************************
* Constants
******************************************************/
/******************************************************
* Variables
******************************************************/
#ifdef __GNUC__
static UINT malloc_mutex_inited = TX_FALSE;
static TX_MUTEX malloc_mutex;
#endif /* ifdef __GNUC__ */
/******************************************************
* Function definitions
******************************************************/
#ifdef DEBUG
static VOID wiced_threadx_stack_error_handler( TX_THREAD * thread );
#endif /* ifdef DEBUG */
static wwd_result_t host_rtos_create_thread_common( /*@out@*/ host_thread_type_t* thread, void(*entry_function)( uint32_t ), const char* name, /*@null@*/ void* stack, uint32_t stack_size, uint32_t priority, host_rtos_thread_config_type_t *config );
/**
* Creates a new thread
*
* @param thread : pointer to variable which will receive handle of created thread
* @param entry_function : main thread function
* @param name : a string thread name used for a debugger
*
* @returns WWD_SUCCESS on success, WICED_ERROR otherwise
*/
wwd_result_t host_rtos_create_thread( /*@out@*/ host_thread_type_t* thread, void(*entry_function)( uint32_t ), const char* name, /*@null@*/ void* stack, uint32_t stack_size, uint32_t priority )
{
host_rtos_thread_config_type_t config;
config.arg = 0;
config.time_slice = 0;
return host_rtos_create_thread_common( thread, entry_function, (char*) name, stack, (ULONG) stack_size, (UINT) priority, &config );
}
wwd_result_t host_rtos_create_thread_with_arg( /*@out@*/ host_thread_type_t* thread, void(*entry_function)( uint32_t ), const char* name, /*@null@*/ void* stack, uint32_t stack_size, uint32_t priority, uint32_t arg )
{
host_rtos_thread_config_type_t config;
config.arg = arg;
config.time_slice = 0;
return host_rtos_create_thread_common( thread, entry_function, (char*) name, stack, (ULONG) stack_size, (UINT) priority, &config );
}
/**
* Creates a new thread
*
* @param thread : pointer to variable which will receive handle of created thread
* @param entry_function : main thread function
* @param name : a string thread name used for a debugger
* @param config : os specific thread creation params
* @returns WWD_SUCCESS on success, WICED_ERROR otherwise
*/
wwd_result_t host_rtos_create_configed_thread( /*@out@*/ host_thread_type_t* thread, void(*entry_function)( uint32_t ), const char* name, /*@null@*/ void* stack, uint32_t stack_size, uint32_t priority, host_rtos_thread_config_type_t *config )
{
return host_rtos_create_thread_common( thread, entry_function, name, stack, stack_size, priority, config );
}
static wwd_result_t host_rtos_create_thread_common( /*@out@*/ host_thread_type_t* thread, void(*entry_function)( uint32_t ), const char* name, /*@null@*/ void* stack, uint32_t stack_size, uint32_t priority, host_rtos_thread_config_type_t *config )
{
UINT status;
ULONG time_slice = TX_NO_TIME_SLICE;
uint32_t arg = 0;
if ( stack == NULL )
{
wiced_assert("host_rtos_create_thread: stack is null\n", 0 != 0 );
return WWD_THREAD_STACK_NULL;
}
if ( config != NULL )
{
time_slice = config->time_slice;
arg = config->arg;
}
#ifdef DEBUG
tx_thread_stack_error_notify( wiced_threadx_stack_error_handler );
#endif /* ifdef DEBUG */
status = tx_thread_create( thread, (char*) name, (void(*)( ULONG )) entry_function, arg, stack, (ULONG) stack_size, (UINT) priority, (UINT) priority, time_slice, (UINT) TX_AUTO_START );
return ( status == TX_SUCCESS ) ? WWD_SUCCESS : WWD_THREAD_CREATE_FAILED;
}
/**
* Terminates the current thread
*
* This does nothing since ThreadX threads can exit by just returning
*
* @param thread : handle of the thread to terminate
*
* @returns WWD_SUCCESS on success, WICED_ERROR otherwise
*/
wwd_result_t host_rtos_finish_thread( host_thread_type_t* thread )
{
UINT status;
status = tx_thread_terminate( thread );
return ( status == TX_SUCCESS ) ? WWD_SUCCESS : WWD_THREAD_FINISH_FAIL;
}
/**
* Deletes a terminated thread
*
* ThreadX requires that another thread deletes any terminated threads
*
* @param thread : handle of the terminated thread to delete
*
* @returns WWD_SUCCESS on success, WICED_ERROR otherwise
*/
wwd_result_t host_rtos_delete_terminated_thread( host_thread_type_t* thread )
{
UINT status;
status = tx_thread_delete( thread );
return ( status == TX_SUCCESS )? WWD_SUCCESS : WWD_THREAD_DELETE_FAIL;
}
/**
* Blocks the current thread until the indicated thread is complete
*
* @param thread : handle of the thread to terminate
*
* @returns WWD_SUCCESS on success, WICED_ERROR otherwise
*/
wwd_result_t host_rtos_join_thread( host_thread_type_t* thread )
{
if (thread == NULL || thread->tx_thread_id != TX_THREAD_ID)
{
/*
* Invalid thread pointer.
*/
return WWD_BADARG;
}
while ( ( thread->tx_thread_state != TX_COMPLETED ) && ( thread->tx_thread_state != TX_TERMINATED ) )
{
host_rtos_delay_milliseconds( 10 );
}
return WWD_SUCCESS;
}
wwd_result_t host_rtos_init_mutex( host_mutex_type_t* mutex )
{
/* Mutex uses priority inheritance */
if ( tx_mutex_create( mutex, (CHAR*) "", TX_INHERIT ) != TX_SUCCESS )
{
return WWD_SEMAPHORE_ERROR;
}
return WWD_SUCCESS;
}
wwd_result_t host_rtos_lock_mutex( host_mutex_type_t* mutex )
{
if ( tx_mutex_get( mutex, TX_WAIT_FOREVER ) != TX_SUCCESS )
{
return WWD_SEMAPHORE_ERROR;
}
return WWD_SUCCESS;
}
wwd_result_t host_rtos_unlock_mutex( host_mutex_type_t* mutex )
{
if ( tx_mutex_put( mutex ) != TX_SUCCESS )
{
return WWD_SEMAPHORE_ERROR;
}
return WWD_SUCCESS;
}
wwd_result_t host_rtos_deinit_mutex( host_mutex_type_t* mutex )
{
if ( tx_mutex_delete( mutex ) != TX_SUCCESS )
{
return WWD_SEMAPHORE_ERROR;
}
return WWD_SUCCESS;
}
/**
* Creates a semaphore
*
* @param semaphore : pointer to variable which will receive handle of created semaphore
*
* @returns WWD_SUCCESS on success, WICED_ERROR otherwise
*/
wwd_result_t host_rtos_init_semaphore( /*@out@*/ host_semaphore_type_t* semaphore ) /*@modifies *semaphore@*/
{
return ( tx_semaphore_create( semaphore, (char*) "", 0 ) == TX_SUCCESS ) ? WWD_SUCCESS : WWD_SEMAPHORE_ERROR;
}
/**
* Gets a semaphore
*
* If value of semaphore is larger than zero, then the semaphore is decremented and function returns
* Else If value of semaphore is zero, then current thread is suspended until semaphore is set.
* Value of semaphore should never be below zero
*
* Must not be called from interrupt context, since it could block, and since an interrupt is not a
* normal thread, so could cause RTOS problems if it tries to suspend it.
*
* @param semaphore : Pointer to variable which will receive handle of created semaphore
* @param timeout_ms : Maximum period to block for. Can be passed NEVER_TIMEOUT to request no timeout
* @param will_set_in_isr : True if the semaphore will be set in an ISR. Currently only used for NoOS/NoNS
*
*/
wwd_result_t host_rtos_get_semaphore( host_semaphore_type_t* semaphore, uint32_t timeout_ms, wiced_bool_t will_set_in_isr )
{
UINT result;
UNUSED_PARAMETER( will_set_in_isr );
result = tx_semaphore_get( semaphore, ( timeout_ms == NEVER_TIMEOUT ) ? TX_WAIT_FOREVER : (ULONG) ( timeout_ms * SYSTICK_FREQUENCY / 1000 ) );
if ( result == TX_SUCCESS )
{
return WWD_SUCCESS;
}
else if ( result == TX_NO_INSTANCE )
{
return WWD_TIMEOUT;
}
else if ( result == TX_WAIT_ABORTED )
{
return WWD_WAIT_ABORTED;
}
else
{
wiced_assert( "semaphore error ", 0 );
return WWD_SEMAPHORE_ERROR;
}
}
/**
* Sets a semaphore
*
* If any threads are waiting on the semaphore, the first thread is resumed
* Else increment semaphore.
*
* Can be called from interrupt context, so must be able to handle resuming other
* threads from interrupt context.
*
* @param semaphore : Pointer to variable which will receive handle of created semaphore
* @param called_from_ISR : Value of WICED_TRUE indicates calling from interrupt context
* Value of WICED_FALSE indicates calling from normal thread context
*
* @return wwd_result_t : WWD_SUCCESS if semaphore was successfully set
* : WICED_ERROR if an error occurred
*
*/
wwd_result_t host_rtos_set_semaphore( host_semaphore_type_t* semaphore, wiced_bool_t called_from_ISR )
{
UNUSED_PARAMETER( called_from_ISR );
return ( tx_semaphore_put( semaphore ) == TX_SUCCESS )? WWD_SUCCESS : WWD_SEMAPHORE_ERROR;
}
/**
* Deletes a semaphore
*
* WICED uses this function to delete a semaphore.
*
* @param semaphore : Pointer to the semaphore handle
*
* @return wwd_result_t : WWD_SUCCESS if semaphore was successfully deleted
* : WICED_ERROR if an error occurred
*
*/
wwd_result_t host_rtos_deinit_semaphore( host_semaphore_type_t* semaphore )
{
return ( tx_semaphore_delete( semaphore ) == TX_SUCCESS )? WWD_SUCCESS : WWD_SEMAPHORE_ERROR;
}
/**
* Gets time in milliseconds since RTOS start
*
* @Note: since this is only 32 bits, it will roll over every 49 days, 17 hours.
*
* @returns Time in milliseconds since RTOS started.
*/
wwd_time_t host_rtos_get_time( void ) /*@modifies internalState@*/
{
return (wwd_time_t) ( tx_time_get( ) * ( 1000 / SYSTICK_FREQUENCY ) );
}
/**
* Delay for a number of milliseconds
*
* Processing of this function depends on the minimum sleep
* time resolution of the RTOS.
* The current thread sleeps for the longest period possible which
* is less than the delay required, then makes up the difference
* with a tight loop
*
* @return wwd_result_t : WWD_SUCCESS if delay was successful
* : WICED_ERROR if an error occurred
*
*/
wwd_result_t host_rtos_delay_milliseconds( uint32_t num_ms )
{
if ( ( num_ms * SYSTICK_FREQUENCY / 1000 ) != 0 )
{
if ( ( tx_thread_sleep( (ULONG) ( num_ms * SYSTICK_FREQUENCY / 1000 ) ) ) != TX_SUCCESS )
{
return WWD_SLEEP_ERROR;
}
}
else
{
uint32_t time_reference = host_platform_get_cycle_count( );
int32_t wait_time = (int32_t)num_ms * (int32_t)CPU_CLOCK_HZ / 1000;
while ( wait_time > 0 )
{
uint32_t current_time = host_platform_get_cycle_count( );
wait_time -= (int32_t)( current_time - time_reference );
time_reference = current_time;
}
}
return WWD_SUCCESS;
}
unsigned long host_rtos_get_tickrate( void )
{
return SYSTICK_FREQUENCY;
}
wwd_result_t host_rtos_init_queue( host_queue_type_t* queue, void* buffer, uint32_t buffer_size, uint32_t message_size )
{
if ( ( message_size % 4 ) > 0 )
{
wiced_assert("Unaligned message size\n", 0);
return WWD_QUEUE_ERROR;
}
if ( tx_queue_create( queue, (CHAR*) "queue", (UINT) ( message_size / 4 ), (VOID *) buffer, (ULONG) buffer_size ) != TX_SUCCESS )
{
return WWD_QUEUE_ERROR;
}
return WWD_SUCCESS;
}
wwd_result_t host_rtos_push_to_queue( host_queue_type_t* queue, void* message, uint32_t timeout_ms )
{
if ( tx_queue_send( queue, (VOID*) message, ( timeout_ms == NEVER_TIMEOUT ) ? TX_WAIT_FOREVER : (ULONG) ( timeout_ms * SYSTICK_FREQUENCY / 1000 ) ) != TX_SUCCESS )
{
return WWD_QUEUE_ERROR;
}
return WWD_SUCCESS;
}
wwd_result_t host_rtos_pop_from_queue( host_queue_type_t* queue, void* message, uint32_t timeout_ms )
{
UINT result = tx_queue_receive( queue, (VOID*)message, ( timeout_ms == NEVER_TIMEOUT ) ? TX_WAIT_FOREVER : (ULONG)( timeout_ms * SYSTICK_FREQUENCY / 1000 ) );
if ( result == TX_SUCCESS )
{
return WWD_SUCCESS;
}
else if ( result == TX_QUEUE_EMPTY )
{
return WWD_TIMEOUT;
}
else if ( result == TX_WAIT_ABORTED )
{
return WWD_WAIT_ABORTED;
}
else
{
wiced_assert( "queue error ", 0 );
return WWD_QUEUE_ERROR;
}
}
wwd_result_t host_rtos_deinit_queue( host_queue_type_t* queue )
{
if ( tx_queue_delete( queue ) != TX_SUCCESS )
{
return WWD_QUEUE_ERROR;
}
return WWD_SUCCESS;
}
#ifdef DEBUG
static VOID wiced_threadx_stack_error_handler( TX_THREAD * thread )
{
UNUSED_PARAMETER( thread );
wiced_assert( "ThreadX stack overflow", 0 != 0 );
}
#endif /* ifdef DEBUG */
#ifdef __GNUC__
void __malloc_lock(struct _reent *ptr)
{
UNUSED_PARAMETER( ptr );
if ( malloc_mutex_inited == TX_FALSE )
{
UINT status;
status = tx_mutex_create( &malloc_mutex, (CHAR*) "malloc_mutex", TX_FALSE );
wiced_assert( "Error creating mutex", status == TX_SUCCESS );
REFERENCE_DEBUG_ONLY_VARIABLE( status );
malloc_mutex_inited = TX_TRUE;
}
tx_mutex_get( &malloc_mutex, TX_WAIT_FOREVER );
}
void __malloc_unlock(struct _reent *ptr)
{
UNUSED_PARAMETER( ptr );
tx_mutex_put( &malloc_mutex );
}
#endif
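/*
 * Usage sketch (illustrative; the thread body, stack size and priority are
 * hypothetical -- real values come from the platform configuration):
 *
 *   static host_thread_type_t    worker;
 *   static host_semaphore_type_t ready;
 *   static char                  worker_stack[4096];
 *
 *   static void worker_entry( uint32_t arg )
 *   {
 *       UNUSED_PARAMETER( arg );
 *       host_rtos_set_semaphore( &ready, WICED_FALSE );  // signal startup done
 *   }
 *
 *   void start_worker( void )
 *   {
 *       host_rtos_init_semaphore( &ready );
 *       host_rtos_create_thread( &worker, worker_entry, "worker",
 *                                worker_stack, sizeof(worker_stack), 5 );
 *       host_rtos_get_semaphore( &ready, NEVER_TIMEOUT, WICED_FALSE );
 *   }
 */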
|
<commit_msg>Add primitive server option for easier debugging<commit_before>from common.bounty import *
from common.peers import *
from common import settings
def main():
settings.setup()
print "settings are:"
print settings.config
if __name__ == "__main__":
main()
<commit_after>from common.bounty import *
from common.peers import *
from common import settings
def main():
settings.setup()
print "settings are:"
print settings.config
if settings.config.get('server') is not True:
initializePeerConnections()
else:
listen()
if __name__ == "__main__":
main()
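# Illustrative note only, not part of the commit: the branch above keys off the
# 'server' entry in settings.config. Assuming settings.setup() loads a
# dict-like config, server mode could be forced for debugging with something
# like the following hypothetical override before the branch runs:
#
#   settings.config['server'] = True   # hypothetical; selects listen()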
|
Field of the Disclosure
The embodiments described herein relate to power charge connectors for downhole setting tools and methods of using the same.
Description of the Related Art
A downhole setting tool may use a power charge to set a device within a wellbore. The power charge is detonated to generate the force required to set the device. For example, the force from the detonated power charge may move a piston causing the setting of the device. The power charge of the downhole setting tool may be used to set various devices in a wellbore as would be appreciated by one of ordinary skill in the art. For example, a downhole setting tool with a power charge may be used to set bridge plugs, cement retainers, packers, and various other downhole devices.
An electrical signal is typically sent down a conduit to the setting tool to actuate a primary igniter in the firing head of the setting tool. The actuation of the primary igniter is used to detonate the power charge, which is typically located downhole from the primary igniter in a chamber connected to the firing head via a cartridge seat. The downhole setting tool may include a secondary igniter that is used to detonate the power charge upon the actuation of the primary igniter. The primary igniter often comprises black powder (e.g., gun powder, a mixture of sulfur, charcoal, and saltpeter) that is ignited by the electrical signal.
It has been recognized that it would be beneficial to increase the reliability with which the power charge of downhole setting tools detonates and sets the downhole device. For example, on Jan. 13, 2017, Applicant filed U.S. patent application Ser. No. 15/406,040 entitled “SETTING TOOL POWER CHARGE INITIATION” that is directed to devices and methods for initiating or setting off a power charge, which is incorporated by reference herein in its entirety.
FIG. 6 shows an embodiment of known conventional downhole setting tool 200. The setting tool 200 may be the E-4 packer setting device which is available commercially from Baker Hughes Incorporated of Houston, Tex. The setting tool 200 includes a firing head 210 connected to an adapter 230, which is also referred to as a cartridge seat. The adapter 230 houses the primary igniter 220. The E-4 packer setting device also includes a secondary igniter 235 housed within the adapter 230, which is ignited by the actuation or ignition of the primary igniter 220. The actuation of the primary igniter 220 pushes the secondary igniter 235 towards the power charge 250 as shown by secondary igniter 235′ shown in dash.
The power charge 250 includes an outer housing 255 and is positioned within a chamber 245 of a housing 240 connected to the firing head 210. The downhole side of the housing 240 is connected to a sub 280 that is connected to the device (not shown) to be set within the wellbore. The sub 280 provides communication with an actuation mechanism, such as a piston, configured to move and set the device upon the detonation of the power charge 250 as would be appreciated by one of ordinary skill in the art. The downhole end 252 of the power charge 250 is inserted into the chamber 245 of the housing 240 and the power charge 250 and the power charge 250 is pushed into the chamber 245 until the downhole end 252 is positioned within the sub 280. The housing 240 containing the power charge 250 is then connected to the firing head 210 and the adapter 230. The uphole end 251 of the power charge 250 includes an igniter 260 that helps to detonate the power charge 250 upon the ignition of the primary igniter 220 and the secondary igniter 235. As used herein, the uphole end refers to the end of an object that is closer to the opening of a wellbore at the surface in comparison to the other end of the object, referred to herein as the downhole end.
Conventional downhole setting tools that include power charges are very reliable and are used to set a large number of devices in a wellbore. However, even if conventional setting tools are 99% reliable, the removal of one setting tool and device out of one hundred from the wellbore is a potentially costly and time-consuming operation. As the downhole tool 200 is run into the wellbore, the power charge 250 may become misaligned with the primary igniter 220 and/or secondary igniter 235, potentially causing the power charge 250 to not detonate when the igniter(s) 220, 235 are actuated. For example, as the tool 200 traverses around a lateral in a wellbore, the secondary igniter 235 may fall into a cavity, as shown as 235′ in FIG. 6, becoming misaligned with the power charge 250. A secondary igniter 235′ in a misaligned position may fail to detonate the power charge 250. Other disadvantages may exist.
|
Stress-Induced Martensitic Transformation of Metastable Ti-Nb-Ge Alloys for Biomedical Application Pseudoelastic behavior of Ti-xNb-yGe alloys, where x=22~28 at.% and y=0.5~2.0 at.%, was investigated by controlling the martensite start temperature and the phase stability of the β phase. Cyclic tensile tests were carried out to display pseudoelastic behavior at room temperature. Determination of the martensitic transformation temperatures (Ms and Mf) and reverse transformation temperatures (As and Af) of the alloys was carried out using differential scanning calorimetry (DSC). Optical microscopy, X-ray diffraction (XRD) and DSC results revealed that Ge is a stronger β-phase stabilizer than Nb. XRD spectra of the deformed specimens confirmed that the crystal structure of the stress-induced martensite phase is orthorhombic. It is concluded that the pseudoelasticity of the present Ti-Nb-Ge alloys is closely associated with β-phase stability, and that a metastable β-phase is more effective in increasing pseudoelasticity than a stable one.
|
Richard Harvey
Early life and career
Born in London, Harvey soon became involved in music, learning the recorder when he was four years old, switching first to percussion and later playing clarinet in the British Youth Symphony Orchestra. By the time he graduated from London's Royal College of Music in 1972, he was accomplished on the recorder, flute, krumhorn, and other mediaeval and Renaissance-era instruments, as well as the mandolin and various keyboards. He could have joined the London Philharmonic Orchestra, but instead chose to work with Musica Reservata, an early music ensemble. He subsequently met another RCM graduate, Brian Gulland, and went on to form the progressive rock and folk band Gryphon. During that period, he also worked with other folk rock musicians such as Richard and Linda Thompson and Ashley Hutchings. When Gryphon wound down in the late 1970s, he became a session musician, playing on Kate Bush's Lionheart, Gerry Rafferty's Night Owl, Sweet's Level Headed and Gordon Giltrap's Fear of the Dark and The Peacock Party, among others. He also had a brief spell in New Wave outfit The Banned.
Film and television career
After working with film composer Maurice Jarre in the mid 1970s, he became involved in composing for film and television. His first work was to provide music for the television series Tales of the Unexpected in 1979. He has subsequently supplied scores to over 80 television and film projects.
Notable works include 1979's Martian Chronicles ending titles, the horror film House of the Long Shadows (1983), 1984's wistful Shroud for a Nightingale theme for the PD James detective series, the action sequel The Dirty Dozen: Next Mission (1985), British films such as The Assam Garden (1985), Steaming (1985), Defence of the Realm (1986) and Half Moon Street (1986), Alan Bleasdale's G.B.H in 1991, which he co-wrote with Elvis Costello (and which won them, jointly, a British Academy of Film and Television Arts award), Luther (2003) and, most recently, in 2006, Ron Howard's The Da Vinci Code and Gabriel Range's Death of a President.
In addition he has been a musician on such films as The Lion King, Enemy of the State and Harry Potter and the Prisoner of Azkaban.
In 1981, Harvey's "Exchange" and "Water Course", from his "Nifty Digits" release (KPM Library #1251), were featured in a popular Sesame Street segment filmed at the Binney and Smith Crayola crayon factory in Easton, PA.
Harvey also composed the theme song for TBS' World Championship Wrestling, called "Dynamics".
Harvey is also a prolific composer of production music and founding partner of West One Music Group along with Edwin Cox and Tony Prior. Arguably his most widely recognized piece is "Reach for the Stars," which has been used in numerous movie trailers, commercials, and television shows. For example, the piece is used as the theme for Powdered Toast Man in The Ren and Stimpy Show, Help Wanted in SpongeBob SquarePants, and promotional material for Disney.
Other projects
In 1984, he was a conductor on one of a series of classic rock albums by the London Symphony Orchestra. He has frequently toured and recorded with the guitarist John Williams on projects including the 2002 album Magic Box. He also played on the 2004 album The Opera Band by pop/classical crossover act Amici Forever, which reached #74 on the Billboard Top 200 albums and #2 on the Billboard Top Classical crossover chart. He worked with Elvis Costello on his 2006 album My Flame Burns Blue. A skilled multi-instrumentalist, he has a collection of over 700 different instruments from around the world.
Since 2005, "John Williams & Richard Harvey's World Tour" has appeared in many different countries, from Japan and China to Ireland and Luxembourg, with the duo playing a mixture of world and classical music spanning five continents and five centuries, featuring Chinese, African and European instruments.
Harvey's first recorder concerto (Concerto Incantato) enjoyed its world premiere on Michala Petri's CD English Recorder Concertos in March 2012, alongside works by Malcolm Arnold and Gordon Jacob.
|
<reponame>EsdrasAmora/blog<gh_stars>0
import { Field, InputType } from '@nestjs/graphql';
import { IsUUID } from 'class-validator';
@InputType()
export class DeletePostInput {
@IsUUID()
@Field()
postId!: string;
}
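// Illustrative only: a hypothetical resolver consuming this input type.
// `PostService` and the mutation shape are assumptions, not part of this repo.
//
// @Resolver()
// export class PostResolver {
//   constructor(private readonly postService: PostService) {}
//
//   @Mutation(() => Boolean)
//   async deletePost(@Args('input') input: DeletePostInput): Promise<boolean> {
//     return this.postService.delete(input.postId);
//   }
// }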
|
<reponame>dmgerman/camel<filename>components/camel-as2/camel-as2-api/src/main/java/org/apache/camel/component/as2/api/entity/EntityParser.java<gh_stars>0
begin_unit|revision:0.9.5;language:Java;cregit-version:0.0.1
begin_comment
comment|/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */
end_comment
begin_package
DECL|package|org.apache.camel.component.as2.api.entity
package|package
name|org
operator|.
name|apache
operator|.
name|camel
operator|.
name|component
operator|.
name|as2
operator|.
name|api
operator|.
name|entity
package|;
end_package
begin_import
import|import
name|java
operator|.
name|io
operator|.
name|ByteArrayInputStream
import|;
end_import
begin_import
import|import
name|java
operator|.
name|io
operator|.
name|IOException
import|;
end_import
begin_import
import|import
name|java
operator|.
name|io
operator|.
name|InputStream
import|;
end_import
begin_import
import|import
name|java
operator|.
name|nio
operator|.
name|charset
operator|.
name|Charset
import|;
end_import
begin_import
import|import
name|java
operator|.
name|nio
operator|.
name|charset
operator|.
name|CharsetDecoder
import|;
end_import
begin_import
import|import
name|java
operator|.
name|security
operator|.
name|PrivateKey
import|;
end_import
begin_import
import|import
name|java
operator|.
name|util
operator|.
name|ArrayList
import|;
end_import
begin_import
import|import
name|java
operator|.
name|util
operator|.
name|Collection
import|;
end_import
begin_import
import|import
name|java
operator|.
name|util
operator|.
name|Iterator
import|;
end_import
begin_import
import|import
name|java
operator|.
name|util
operator|.
name|List
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|camel
operator|.
name|component
operator|.
name|as2
operator|.
name|api
operator|.
name|AS2Charset
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|camel
operator|.
name|component
operator|.
name|as2
operator|.
name|api
operator|.
name|AS2Header
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|camel
operator|.
name|component
operator|.
name|as2
operator|.
name|api
operator|.
name|AS2MimeType
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|camel
operator|.
name|component
operator|.
name|as2
operator|.
name|api
operator|.
name|io
operator|.
name|AS2SessionInputBuffer
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|camel
operator|.
name|component
operator|.
name|as2
operator|.
name|api
operator|.
name|util
operator|.
name|AS2HeaderUtils
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|camel
operator|.
name|component
operator|.
name|as2
operator|.
name|api
operator|.
name|util
operator|.
name|ContentTypeUtils
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|camel
operator|.
name|component
operator|.
name|as2
operator|.
name|api
operator|.
name|util
operator|.
name|DispositionNotificationContentUtils
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|camel
operator|.
name|component
operator|.
name|as2
operator|.
name|api
operator|.
name|util
operator|.
name|EntityUtils
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|camel
operator|.
name|component
operator|.
name|as2
operator|.
name|api
operator|.
name|util
operator|.
name|HttpMessageUtils
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|http
operator|.
name|Header
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|http
operator|.
name|HttpEntity
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|http
operator|.
name|HttpException
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|http
operator|.
name|HttpMessage
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|http
operator|.
name|NameValuePair
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|http
operator|.
name|ParseException
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|http
operator|.
name|entity
operator|.
name|ContentType
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|http
operator|.
name|impl
operator|.
name|io
operator|.
name|AbstractMessageParser
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|http
operator|.
name|impl
operator|.
name|io
operator|.
name|HttpTransportMetricsImpl
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|http
operator|.
name|message
operator|.
name|BasicLineParser
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|http
operator|.
name|message
operator|.
name|BasicNameValuePair
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|http
operator|.
name|message
operator|.
name|LineParser
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|http
operator|.
name|message
operator|.
name|ParserCursor
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|http
operator|.
name|util
operator|.
name|Args
import|;
end_import
begin_import
import|import
name|org
operator|.
name|apache
operator|.
name|http
operator|.
name|util
operator|.
name|CharArrayBuffer
import|;
end_import
begin_import
import|import
name|org
operator|.
name|bouncycastle
operator|.
name|cms
operator|.
name|CMSCompressedData
import|;
end_import
begin_import
import|import
name|org
operator|.
name|bouncycastle
operator|.
name|cms
operator|.
name|CMSEnvelopedData
import|;
end_import
begin_import
import|import
name|org
operator|.
name|bouncycastle
operator|.
name|cms
operator|.
name|CMSException
import|;
end_import
begin_import
import|import
name|org
operator|.
name|bouncycastle
operator|.
name|cms
operator|.
name|Recipient
import|;
end_import
begin_import
import|import
name|org
operator|.
name|bouncycastle
operator|.
name|cms
operator|.
name|RecipientInformation
import|;
end_import
begin_import
import|import
name|org
operator|.
name|bouncycastle
operator|.
name|cms
operator|.
name|RecipientInformationStore
import|;
end_import
begin_import
import|import
name|org
operator|.
name|bouncycastle
operator|.
name|cms
operator|.
name|jcajce
operator|.
name|JceKeyTransEnvelopedRecipient
import|;
end_import
begin_import
import|import
name|org
operator|.
name|bouncycastle
operator|.
name|operator
operator|.
name|InputExpanderProvider
import|;
end_import
begin_class
DECL|class|EntityParser
specifier|public
specifier|final
class|class
name|EntityParser
block|{
DECL|field|DEFAULT_BUFFER_SIZE
specifier|private
specifier|static
specifier|final
name|int
name|DEFAULT_BUFFER_SIZE
init|=
literal|8
operator|*
literal|1024
decl_stmt|;
DECL|method|EntityParser ()
specifier|private
name|EntityParser
parameter_list|()
block|{ }
DECL|method|isBoundaryCloseDelimiter (final CharArrayBuffer buffer, ParserCursor cursor, String boundary)
specifier|public
specifier|static
name|boolean
name|isBoundaryCloseDelimiter
parameter_list|(
specifier|final
name|CharArrayBuffer
name|buffer
parameter_list|,
name|ParserCursor
name|cursor
parameter_list|,
name|String
name|boundary
parameter_list|)
block|{
name|Args
operator|.
name|notNull
argument_list|(
name|buffer
argument_list|,
literal|"Buffer"
argument_list|)
expr_stmt|;
name|Args
operator|.
name|notNull
argument_list|(
name|boundary
argument_list|,
literal|"Boundary"
argument_list|)
expr_stmt|;
name|String
name|boundaryCloseDelimiter
init|=
literal|"--"
operator|+
name|boundary
operator|+
literal|"--"
decl_stmt|;
comment|// boundary
comment|// close-delimiter
comment|// - RFC2046
comment|// 5.1.1
if|if
condition|(
name|cursor
operator|==
literal|null
condition|)
block|{
name|cursor
operator|=
operator|new
name|ParserCursor
argument_list|(
literal|0
argument_list|,
name|boundaryCloseDelimiter
operator|.
name|length
argument_list|()
argument_list|)
expr_stmt|;
block|}
name|int
name|indexFrom
init|=
name|cursor
operator|.
name|getPos
argument_list|()
decl_stmt|;
name|int
name|indexTo
init|=
name|cursor
operator|.
name|getUpperBound
argument_list|()
decl_stmt|;
if|if
condition|(
operator|(
name|indexFrom
operator|+
name|boundaryCloseDelimiter
operator|.
name|length
argument_list|()
operator|)
operator|>
name|indexTo
condition|)
block|{
return|return
literal|false
return|;
block|}
for|for
control|(
name|int
name|i
init|=
name|indexFrom
init|;
name|i
operator|<
name|indexTo
condition|;
operator|++
name|i
control|)
block|{
if|if
condition|(
name|buffer
operator|.
name|charAt
argument_list|(
name|i
argument_list|)
operator|!=
name|boundaryCloseDelimiter
operator|.
name|charAt
argument_list|(
name|i
argument_list|)
condition|)
block|{
return|return
literal|false
return|;
block|}
block|}
return|return
literal|true
return|;
block|}
DECL|method|isBoundaryDelimiter (final CharArrayBuffer buffer, ParserCursor cursor, String boundary)
specifier|public
specifier|static
name|boolean
name|isBoundaryDelimiter
parameter_list|(
specifier|final
name|CharArrayBuffer
name|buffer
parameter_list|,
name|ParserCursor
name|cursor
parameter_list|,
name|String
name|boundary
parameter_list|)
block|{
name|Args
operator|.
name|notNull
argument_list|(
name|buffer
argument_list|,
literal|"Buffer"
argument_list|)
expr_stmt|;
name|Args
operator|.
name|notNull
argument_list|(
name|boundary
argument_list|,
literal|"Boundary"
argument_list|)
expr_stmt|;
name|String
name|boundaryDelimiter
init|=
literal|"--"
operator|+
name|boundary
decl_stmt|;
comment|// boundary delimiter -
comment|// RFC2046 5.1.1
if|if
condition|(
name|cursor
operator|==
literal|null
condition|)
block|{
name|cursor
operator|=
operator|new
name|ParserCursor
argument_list|(
literal|0
argument_list|,
name|boundaryDelimiter
operator|.
name|length
argument_list|()
argument_list|)
expr_stmt|;
block|}
name|int
name|indexFrom
init|=
name|cursor
operator|.
name|getPos
argument_list|()
decl_stmt|;
name|int
name|indexTo
init|=
name|cursor
operator|.
name|getUpperBound
argument_list|()
decl_stmt|;
if|if
condition|(
operator|(
name|indexFrom
operator|+
name|boundaryDelimiter
operator|.
name|length
argument_list|()
operator|)
operator|>
name|indexTo
condition|)
block|{
return|return
literal|false
return|;
block|}
for|for
control|(
name|int
name|i
init|=
name|indexFrom
init|;
name|i
operator|<
name|indexTo
condition|;
operator|++
name|i
control|)
block|{
if|if
condition|(
name|buffer
operator|.
name|charAt
argument_list|(
name|i
argument_list|)
operator|!=
name|boundaryDelimiter
operator|.
name|charAt
argument_list|(
name|i
argument_list|)
condition|)
block|{
return|return
literal|false
return|;
block|}
block|}
return|return
literal|true
return|;
block|}
DECL|method|skipPreambleAndStartBoundary (AS2SessionInputBuffer inbuffer, String boundary)
specifier|public
specifier|static
name|void
name|skipPreambleAndStartBoundary
parameter_list|(
name|AS2SessionInputBuffer
name|inbuffer
parameter_list|,
name|String
name|boundary
parameter_list|)
throws|throws
name|HttpException
block|{
name|boolean
name|foundStartBoundary
decl_stmt|;
try|try
block|{
name|foundStartBoundary
operator|=
literal|false
expr_stmt|;
name|CharArrayBuffer
name|lineBuffer
init|=
operator|new
name|CharArrayBuffer
argument_list|(
literal|1024
argument_list|)
decl_stmt|;
while|while
condition|(
name|inbuffer
operator|.
name|readLine
argument_list|(
name|lineBuffer
argument_list|)
operator|!=
operator|-
literal|1
condition|)
block|{
specifier|final
name|ParserCursor
name|cursor
init|=
operator|new
name|ParserCursor
argument_list|(
literal|0
argument_list|,
name|lineBuffer
operator|.
name|length
argument_list|()
argument_list|)
decl_stmt|;
if|if
condition|(
name|isBoundaryDelimiter
argument_list|(
name|lineBuffer
argument_list|,
name|cursor
argument_list|,
name|boundary
argument_list|)
condition|)
block|{
name|foundStartBoundary
operator|=
literal|true
expr_stmt|;
break|break;
block|}
name|lineBuffer
operator|.
name|clear
argument_list|()
expr_stmt|;
block|}
block|}
catch|catch
parameter_list|(
name|Exception
name|e
parameter_list|)
block|{
throw|throw
operator|new
name|HttpException
argument_list|(
literal|"Failed to read start boundary for body part"
argument_list|,
name|e
argument_list|)
throw|;
block|}
if|if
condition|(
operator|!
name|foundStartBoundary
condition|)
block|{
throw|throw
operator|new
name|HttpException
argument_list|(
literal|"Failed to find start boundary for body part"
argument_list|)
throw|;
block|}
block|}
DECL|method|skipToBoundary (AS2SessionInputBuffer inbuffer, String boundary)
specifier|public
specifier|static
name|void
name|skipToBoundary
parameter_list|(
name|AS2SessionInputBuffer
name|inbuffer
parameter_list|,
name|String
name|boundary
parameter_list|)
throws|throws
name|HttpException
block|{
name|boolean
name|foundEndBoundary
decl_stmt|;
try|try
block|{
name|foundEndBoundary
operator|=
literal|false
expr_stmt|;
name|CharArrayBuffer
name|lineBuffer
init|=
operator|new
name|CharArrayBuffer
argument_list|(
literal|1024
argument_list|)
decl_stmt|;
while|while
condition|(
name|inbuffer
operator|.
name|readLine
argument_list|(
name|lineBuffer
argument_list|)
operator|!=
operator|-
literal|1
condition|)
block|{
specifier|final
name|ParserCursor
name|cursor
init|=
operator|new
name|ParserCursor
argument_list|(
literal|0
argument_list|,
name|lineBuffer
operator|.
name|length
argument_list|()
argument_list|)
decl_stmt|;
if|if
condition|(
name|isBoundaryDelimiter
argument_list|(
name|lineBuffer
argument_list|,
name|cursor
argument_list|,
name|boundary
argument_list|)
condition|)
block|{
name|foundEndBoundary
operator|=
literal|true
expr_stmt|;
break|break;
block|}
name|lineBuffer
operator|.
name|clear
argument_list|()
expr_stmt|;
block|}
block|}
catch|catch
parameter_list|(
name|Exception
name|e
parameter_list|)
block|{
throw|throw
operator|new
name|HttpException
argument_list|(
literal|"Failed to read start boundary for body part"
argument_list|,
name|e
argument_list|)
throw|;
block|}
if|if
condition|(
operator|!
name|foundEndBoundary
operator|&&
name|boundary
operator|!=
literal|null
condition|)
block|{
throw|throw
operator|new
name|HttpException
argument_list|(
literal|"Failed to find start boundary for body part"
argument_list|)
throw|;
block|}
block|}
DECL|method|parseCompressedEntity (byte[] compressedData, InputExpanderProvider expanderProvider)
specifier|public
specifier|static
name|MimeEntity
name|parseCompressedEntity
parameter_list|(
name|byte
index|[]
name|compressedData
parameter_list|,
name|InputExpanderProvider
name|expanderProvider
parameter_list|)
throws|throws
name|HttpException
block|{
name|byte
index|[]
name|uncompressedContent
init|=
name|uncompressData
argument_list|(
name|compressedData
argument_list|,
name|expanderProvider
argument_list|)
decl_stmt|;
return|return
name|parseEntity
argument_list|(
name|uncompressedContent
argument_list|)
return|;
block|}
DECL|method|parseEnvelopedEntity (byte[] envelopedContent, PrivateKey privateKey)
specifier|public
specifier|static
name|MimeEntity
name|parseEnvelopedEntity
parameter_list|(
name|byte
index|[]
name|envelopedContent
parameter_list|,
name|PrivateKey
name|privateKey
parameter_list|)
throws|throws
name|HttpException
block|{
name|byte
index|[]
name|decryptedContent
init|=
name|decryptData
argument_list|(
name|envelopedContent
argument_list|,
name|privateKey
argument_list|)
decl_stmt|;
return|return
name|parseEntity
argument_list|(
name|decryptedContent
argument_list|)
return|;
block|}
DECL|method|parseEntity (byte[] content)
specifier|public
specifier|static
name|MimeEntity
name|parseEntity
parameter_list|(
name|byte
index|[]
name|content
parameter_list|)
throws|throws
name|HttpException
block|{
try|try
block|{
name|InputStream
name|is
init|=
operator|new
name|ByteArrayInputStream
argument_list|(
name|content
argument_list|)
decl_stmt|;
name|AS2SessionInputBuffer
name|inbuffer
init|=
operator|new
name|AS2SessionInputBuffer
argument_list|(
operator|new
name|HttpTransportMetricsImpl
argument_list|()
argument_list|,
name|DEFAULT_BUFFER_SIZE
argument_list|)
decl_stmt|;
name|inbuffer
operator|.
name|bind
argument_list|(
name|is
argument_list|)
expr_stmt|;
comment|// Read Text Report Body Part Headers
name|Header
index|[]
name|headers
init|=
name|AbstractMessageParser
operator|.
name|parseHeaders
argument_list|(
name|inbuffer
argument_list|,
operator|-
literal|1
argument_list|,
operator|-
literal|1
argument_list|,
name|BasicLineParser
operator|.
name|INSTANCE
argument_list|,
operator|new
name|ArrayList
argument_list|<
name|CharArrayBuffer
argument_list|>
argument_list|()
argument_list|)
decl_stmt|;
comment|// Get Content-Type and Content-Transfer-Encoding
name|ContentType
name|entityContentType
init|=
literal|null
decl_stmt|;
name|String
name|entityContentTransferEncoding
init|=
literal|null
decl_stmt|;
for|for
control|(
name|Header
name|header
range|:
name|headers
control|)
block|{
if|if
condition|(
name|header
operator|.
name|getName
argument_list|()
operator|.
name|equalsIgnoreCase
argument_list|(
name|AS2Header
operator|.
name|CONTENT_TYPE
argument_list|)
condition|)
block|{
name|entityContentType
operator|=
name|ContentType
operator|.
name|parse
argument_list|(
name|header
operator|.
name|getValue
argument_list|()
argument_list|)
expr_stmt|;
block|}
elseif|else
if|if
condition|(
name|header
operator|.
name|getName
argument_list|()
operator|.
name|equalsIgnoreCase
argument_list|(
name|AS2Header
operator|.
name|CONTENT_TRANSFER_ENCODING
argument_list|)
condition|)
block|{
name|entityContentTransferEncoding
operator|=
name|header
operator|.
name|getValue
argument_list|()
expr_stmt|;
block|}
block|}
if|if
condition|(
name|entityContentType
operator|==
literal|null
condition|)
block|{
throw|throw
operator|new
name|HttpException
argument_list|(
literal|"Failed to find Content-Type header in enveloped entity"
argument_list|)
throw|;
block|}
name|MimeEntity
name|entity
init|=
name|parseEntityBody
argument_list|(
name|inbuffer
argument_list|,
literal|null
argument_list|,
name|entityContentType
argument_list|,
name|entityContentTransferEncoding
argument_list|,
name|headers
argument_list|)
decl_stmt|;
name|entity
operator|.
name|removeAllHeaders
argument_list|()
expr_stmt|;
name|entity
operator|.
name|setHeaders
argument_list|(
name|headers
argument_list|)
expr_stmt|;
return|return
name|entity
return|;
block|}
catch|catch
parameter_list|(
name|Exception
name|e
parameter_list|)
block|{
throw|throw
operator|new
name|HttpException
argument_list|(
literal|"Failed to parse entity"
argument_list|,
name|e
argument_list|)
throw|;
block|}
block|}
DECL|method|uncompressData (byte[] compressedData, InputExpanderProvider expanderProvider)
specifier|public
specifier|static
name|byte
index|[]
name|uncompressData
parameter_list|(
name|byte
index|[]
name|compressedData
parameter_list|,
name|InputExpanderProvider
name|expanderProvider
parameter_list|)
throws|throws
name|HttpException
block|{
try|try
block|{
name|CMSCompressedData
name|cmsCompressedData
init|=
operator|new
name|CMSCompressedData
argument_list|(
name|compressedData
argument_list|)
decl_stmt|;
return|return
name|cmsCompressedData
operator|.
name|getContent
argument_list|(
name|expanderProvider
argument_list|)
return|;
block|}
catch|catch
parameter_list|(
name|CMSException
name|e
parameter_list|)
block|{
throw|throw
operator|new
name|HttpException
argument_list|(
literal|"Failed to decompress data"
argument_list|,
name|e
argument_list|)
throw|;
block|}
block|}
DECL|method|decryptData (byte[] encryptedData, PrivateKey privateKey)
specifier|public
specifier|static
name|byte
index|[]
name|decryptData
parameter_list|(
name|byte
index|[]
name|encryptedData
parameter_list|,
name|PrivateKey
name|privateKey
parameter_list|)
throws|throws
name|HttpException
block|{
try|try
block|{
comment|// Create enveloped data from encrypted data
name|CMSEnvelopedData
name|cmsEnvelopedData
init|=
operator|new
name|CMSEnvelopedData
argument_list|(
name|encryptedData
argument_list|)
decl_stmt|;
comment|// Extract recipient information form enveloped data.
name|RecipientInformationStore
name|recipientsInformationStore
init|=
name|cmsEnvelopedData
operator|.
name|getRecipientInfos
argument_list|()
decl_stmt|;
name|Collection
argument_list|<
name|RecipientInformation
argument_list|>
name|recipients
init|=
name|recipientsInformationStore
operator|.
name|getRecipients
argument_list|()
decl_stmt|;
name|Iterator
argument_list|<
name|RecipientInformation
argument_list|>
name|it
init|=
name|recipients
operator|.
name|iterator
argument_list|()
decl_stmt|;
comment|// Decrypt if enveloped data contains recipient information
if|if
condition|(
name|it
operator|.
name|hasNext
argument_list|()
condition|)
block|{
comment|// Create recipient from private key.
name|Recipient
name|recipient
init|=
operator|new
name|JceKeyTransEnvelopedRecipient
argument_list|(
name|privateKey
argument_list|)
decl_stmt|;
comment|// Extract decrypted data from recipient information
name|RecipientInformation
name|recipientInfo
init|=
name|it
operator|.
name|next
argument_list|()
decl_stmt|;
return|return
name|recipientInfo
operator|.
name|getContent
argument_list|(
name|recipient
argument_list|)
return|;
block|}
block|}
catch|catch
parameter_list|(
name|CMSException
name|e
parameter_list|)
block|{
throw|throw
operator|new
name|HttpException
argument_list|(
literal|"Failed to decrypt data"
argument_list|,
name|e
argument_list|)
throw|;
block|}
throw|throw
operator|new
name|HttpException
argument_list|(
literal|"Failed to decrypt data: bno recipeint information"
argument_list|)
throw|;
block|}
DECL|method|parseApplicationPkcs7MimeCompressedEntity (HttpMessage message, AS2SessionInputBuffer inBuffer, ContentType contentType, String contentTransferEncoding)
specifier|private
specifier|static
name|void
name|parseApplicationPkcs7MimeCompressedEntity
parameter_list|(
name|HttpMessage
name|message
parameter_list|,
name|AS2SessionInputBuffer
name|inBuffer
parameter_list|,
name|ContentType
name|contentType
parameter_list|,
name|String
name|contentTransferEncoding
parameter_list|)
throws|throws
name|HttpException
block|{
name|ApplicationPkcs7MimeCompressedDataEntity
name|applicationPkcs7MimeCompressedDataEntity
init|=
literal|null
decl_stmt|;
name|Args
operator|.
name|notNull
argument_list|(
name|message
argument_list|,
literal|"message"
argument_list|)
expr_stmt|;
name|Args
operator|.
name|notNull
argument_list|(
name|inBuffer
argument_list|,
literal|"inBuffer"
argument_list|)
expr_stmt|;
name|HttpEntity
name|entity
init|=
name|Args
operator|.
name|notNull
argument_list|(
name|EntityUtils
operator|.
name|getMessageEntity
argument_list|(
name|message
argument_list|)
argument_list|,
literal|"message entity"
argument_list|)
decl_stmt|;
if|if
condition|(
name|entity
operator|instanceof
name|ApplicationPkcs7MimeCompressedDataEntity
condition|)
block|{
comment|// already parsed
return|return;
block|}
name|Args
operator|.
name|check
argument_list|(
name|entity
operator|.
name|isStreaming
argument_list|()
argument_list|,
literal|"Message entity can not be parsed: entity is not streaming"
argument_list|)
expr_stmt|;
try|try
block|{
name|applicationPkcs7MimeCompressedDataEntity
operator|=
name|parseApplicationPkcs7MimeCompressedDataEntityBody
argument_list|(
name|inBuffer
argument_list|,
literal|null
argument_list|,
name|contentType
argument_list|,
name|contentTransferEncoding
argument_list|)
expr_stmt|;
name|applicationPkcs7MimeCompressedDataEntity
operator|.
name|setMainBody
argument_list|(
literal|true
argument_list|)
expr_stmt|;
name|EntityUtils
operator|.
name|setMessageEntity
argument_list|(
name|message
argument_list|,
name|applicationPkcs7MimeCompressedDataEntity
argument_list|)
expr_stmt|;
block|}
catch|catch
parameter_list|(
name|Exception
name|e
parameter_list|)
block|{
throw|throw
operator|new
name|HttpException
argument_list|(
literal|"Failed to parse entity content"
argument_list|,
name|e
argument_list|)
throw|;
block|}
block|}
DECL|method|parseApplicationPkcs7MimeEnvelopedEntity (HttpMessage message, AS2SessionInputBuffer inBuffer, ContentType contentType, String contentTransferEncoding)
specifier|private
specifier|static
name|void
name|parseApplicationPkcs7MimeEnvelopedEntity
parameter_list|(
name|HttpMessage
name|message
parameter_list|,
name|AS2SessionInputBuffer
name|inBuffer
parameter_list|,
name|ContentType
name|contentType
parameter_list|,
name|String
name|contentTransferEncoding
parameter_list|)
throws|throws
name|HttpException
block|{
name|ApplicationPkcs7MimeEnvelopedDataEntity
name|applicationPkcs7MimeEnvelopedDataEntity
init|=
literal|null
decl_stmt|;
name|Args
operator|.
name|notNull
argument_list|(
name|message
argument_list|,
literal|"message"
argument_list|)
expr_stmt|;
name|Args
operator|.
name|notNull
argument_list|(
name|inBuffer
argument_list|,
literal|"inBuffer"
argument_list|)
expr_stmt|;
name|HttpEntity
name|entity
init|=
name|Args
operator|.
name|notNull
argument_list|(
name|EntityUtils
operator|.
name|getMessageEntity
argument_list|(
name|message
argument_list|)
argument_list|,
literal|"message entity"
argument_list|)
decl_stmt|;
if|if
condition|(
name|entity
operator|instanceof
name|ApplicationPkcs7MimeCompressedDataEntity
condition|)
block|{
comment|// already parsed
return|return;
block|}
name|Args
operator|.
name|check
argument_list|(
name|entity
operator|.
name|isStreaming
argument_list|()
argument_list|,
literal|"Message entity can not be parsed: entity is not streaming"
argument_list|)
expr_stmt|;
try|try
block|{
name|applicationPkcs7MimeEnvelopedDataEntity
operator|=
name|parseApplicationPkcs7MimeEnvelopedDataEntityBody
argument_list|(
name|inBuffer
argument_list|,
literal|null
argument_list|,
name|contentType
argument_list|,
name|contentTransferEncoding
argument_list|)
expr_stmt|;
name|applicationPkcs7MimeEnvelopedDataEntity
operator|.
name|setMainBody
argument_list|(
literal|true
argument_list|)
expr_stmt|;
name|EntityUtils
operator|.
name|setMessageEntity
argument_list|(
name|message
argument_list|,
name|applicationPkcs7MimeEnvelopedDataEntity
argument_list|)
expr_stmt|;
block|}
catch|catch
parameter_list|(
name|Exception
name|e
parameter_list|)
block|{
throw|throw
operator|new
name|HttpException
argument_list|(
literal|"Failed to parse entity content"
argument_list|,
name|e
argument_list|)
throw|;
block|}
block|}
DECL|method|parseMultipartSignedEntity (HttpMessage message, AS2SessionInputBuffer inBuffer, String boundary, String charsetName, String contentTransferEncoding)
specifier|private
specifier|static
name|void
name|parseMultipartSignedEntity
parameter_list|(
name|HttpMessage
name|message
parameter_list|,
name|AS2SessionInputBuffer
name|inBuffer
parameter_list|,
name|String
name|boundary
parameter_list|,
name|String
name|charsetName
parameter_list|,
name|String
name|contentTransferEncoding
parameter_list|)
throws|throws
name|HttpException
block|{
name|MultipartSignedEntity
name|multipartSignedEntity
init|=
literal|null
decl_stmt|;
name|Args
operator|.
name|notNull
argument_list|(
name|message
argument_list|,
literal|"message"
argument_list|)
expr_stmt|;
name|Args
operator|.
name|notNull
argument_list|(
name|inBuffer
argument_list|,
literal|"inBuffer"
argument_list|)
expr_stmt|;
name|Args
operator|.
name|notNull
argument_list|(
name|boundary
argument_list|,
literal|"boundary"
argument_list|)
expr_stmt|;
name|Args
operator|.
name|notNull
argument_list|(
name|charsetName
argument_list|,
literal|"charsetName"
argument_list|)
expr_stmt|;
name|HttpEntity
name|entity
init|=
name|Args
operator|.
name|notNull
argument_list|(
name|EntityUtils
operator|.
name|getMessageEntity
argument_list|(
name|message
argument_list|)
argument_list|,
literal|"message entity"
argument_list|)
decl_stmt|;
if|if
condition|(
name|entity
operator|instanceof
name|MultipartSignedEntity
condition|)
block|{
comment|// already parsed
return|return;
block|}
name|Args
operator|.
name|check
argument_list|(
name|entity
operator|.
name|isStreaming
argument_list|()
argument_list|,
literal|"Message entity can not be parsed: entity is not streaming"
argument_list|)
expr_stmt|;
try|try
block|{
comment|// Get Micalg Value
name|String
name|micalg
init|=
name|HttpMessageUtils
operator|.
name|getParameterValue
argument_list|(
name|message
argument_list|,
name|AS2Header
operator|.
name|CONTENT_TYPE
argument_list|,
literal|"micalg"
argument_list|)
decl_stmt|;
if|if
condition|(
name|micalg
operator|==
literal|null
condition|)
block|{
throw|throw
operator|new
name|HttpException
argument_list|(
literal|"Failed to retrieve 'micalg' parameter from content type header"
argument_list|)
throw|;
block|}
name|multipartSignedEntity
operator|=
name|parseMultipartSignedEntityBody
argument_list|(
name|inBuffer
argument_list|,
name|boundary
argument_list|,
name|micalg
argument_list|,
name|charsetName
argument_list|,
name|contentTransferEncoding
argument_list|)
expr_stmt|;
name|multipartSignedEntity
operator|.
name|setMainBody
argument_list|(
literal|true
argument_list|)
expr_stmt|;
name|EntityUtils
operator|.
name|setMessageEntity
argument_list|(
name|message
argument_list|,
name|multipartSignedEntity
argument_list|)
expr_stmt|;
block|}
catch|catch
parameter_list|(
name|HttpException
name|e
parameter_list|)
block|{
throw|throw
name|e
throw|;
block|}
catch|catch
parameter_list|(
name|Exception
name|e
parameter_list|)
block|{
throw|throw
operator|new
name|HttpException
argument_list|(
literal|"Failed to parse entity content"
argument_list|,
name|e
argument_list|)
throw|;
block|}
block|}
DECL|method|parseApplicationEDIEntity (HttpMessage message, AS2SessionInputBuffer inBuffer, ContentType contentType, String contentTransferEncoding)
specifier|private
specifier|static
name|void
name|parseApplicationEDIEntity
parameter_list|(
name|HttpMessage
name|message
parameter_list|,
name|AS2SessionInputBuffer
name|inBuffer
parameter_list|,
name|ContentType
name|contentType
parameter_list|,
name|String
name|contentTransferEncoding
parameter_list|)
throws|throws
name|HttpException
block|{
name|ApplicationEDIEntity
name|applicationEDIEntity
init|=
literal|null
decl_stmt|;
name|Args
operator|.
name|notNull
argument_list|(
name|message
argument_list|,
literal|"message"
argument_list|)
expr_stmt|;
name|Args
operator|.
name|notNull
argument_list|(
name|inBuffer
argument_list|,
literal|"inBuffer"
argument_list|)
expr_stmt|;
name|HttpEntity
name|entity
init|=
name|Args
operator|.
name|notNull
argument_list|(
name|EntityUtils
operator|.
name|getMessageEntity
argument_list|(
name|message
argument_list|)
argument_list|,
literal|"message entity"
argument_list|)
decl_stmt|;
if|if
condition|(
name|entity
operator|instanceof
name|ApplicationEDIEntity
condition|)
block|{
comment|// already parsed
return|return;
block|}
name|Args
operator|.
name|check
argument_list|(
name|entity
operator|.
name|isStreaming
argument_list|()
argument_list|,
literal|"Message entity can not be parsed: entity is not streaming"
argument_list|)
expr_stmt|;
try|try
block|{
name|applicationEDIEntity
operator|=
name|parseEDIEntityBody
argument_list|(
name|inBuffer
argument_list|,
literal|null
argument_list|,
name|contentType
argument_list|,
name|contentTransferEncoding
argument_list|)
expr_stmt|;
name|applicationEDIEntity
operator|.
name|setMainBody
argument_list|(
literal|true
argument_list|)
expr_stmt|;
name|EntityUtils
operator|.
name|setMessageEntity
argument_list|(
name|message
argument_list|,
name|applicationEDIEntity
argument_list|)
expr_stmt|;
block|}
catch|catch
parameter_list|(
name|Exception
name|e
parameter_list|)
block|{
throw|throw
operator|new
name|HttpException
argument_list|(
literal|"Failed to parse entity content"
argument_list|,
name|e
argument_list|)
throw|;
block|}
block|}
DECL|method|parseMessageDispositionNotificationReportEntity (HttpMessage message, AS2SessionInputBuffer inBuffer, String boundary, String charsetName, String contentTransferEncoding)
specifier|private
specifier|static
name|void
name|parseMessageDispositionNotificationReportEntity
parameter_list|(
name|HttpMessage
name|message
parameter_list|,
name|AS2SessionInputBuffer
name|inBuffer
parameter_list|,
name|String
name|boundary
parameter_list|,
name|String
name|charsetName
parameter_list|,
name|String
name|contentTransferEncoding
parameter_list|)
throws|throws
name|HttpException
block|{
name|DispositionNotificationMultipartReportEntity
name|dispositionNotificationMultipartReportEntity
init|=
literal|null
decl_stmt|;
name|Args
operator|.
name|notNull
argument_list|(
name|message
argument_list|,
literal|"message"
argument_list|)
expr_stmt|;
name|Args
operator|.
name|notNull
argument_list|(
name|inBuffer
argument_list|,
literal|"inBuffer"
argument_list|)
expr_stmt|;
name|Args
operator|.
name|notNull
argument_list|(
name|boundary
argument_list|,
literal|"boundary"
argument_list|)
expr_stmt|;
name|Args
operator|.
name|notNull
argument_list|(
name|charsetName
argument_list|,
literal|"charsetName"
argument_list|)
expr_stmt|;
name|HttpEntity
name|entity
init|=
name|Args
operator|.
name|notNull
argument_list|(
name|EntityUtils
operator|.
name|getMessageEntity
argument_list|(
name|message
argument_list|)
argument_list|,
literal|"message entity"
argument_list|)
decl_stmt|;
if|if
condition|(
name|entity
operator|instanceof
name|DispositionNotificationMultipartReportEntity
condition|)
block|{
comment|// already parsed
return|return;
block|}
name|Args
operator|.
name|check
argument_list|(
name|entity
operator|.
name|isStreaming
argument_list|()
argument_list|,
literal|"Message entity can not be parsed: entity is not streaming"
argument_list|)
expr_stmt|;
try|try
block|{
name|dispositionNotificationMultipartReportEntity
operator|=
name|parseMultipartReportEntityBody
argument_list|(
name|inBuffer
argument_list|,
name|boundary
argument_list|,
name|charsetName
argument_list|,
name|contentTransferEncoding
argument_list|)
expr_stmt|;
name|EntityUtils
operator|.
name|setMessageEntity
argument_list|(
name|message
argument_list|,
name|dispositionNotificationMultipartReportEntity
argument_list|)
expr_stmt|;
block|}
catch|catch
parameter_list|(
name|Exception
name|e
parameter_list|)
block|{
throw|throw
operator|new
name|HttpException
argument_list|(
literal|"Failed to parse entity content"
argument_list|,
name|e
argument_list|)
throw|;
block|}
block|}
comment|/** * Parses message's entity and replaces it with mime entity. * * @param message - message whose entity is parsed. * @throws HttpException when things go wrong. */
DECL|method|parseAS2MessageEntity (HttpMessage message)
specifier|public
specifier|static
name|void
name|parseAS2MessageEntity
parameter_list|(
name|HttpMessage
name|message
parameter_list|)
throws|throws
name|HttpException
block|{
if|if
condition|(
name|EntityUtils
operator|.
name|hasEntity
argument_list|(
name|message
argument_list|)
condition|)
block|{
name|HttpEntity
name|entity
init|=
name|Args
operator|.
name|notNull
argument_list|(
name|EntityUtils
operator|.
name|getMessageEntity
argument_list|(
name|message
argument_list|)
argument_list|,
literal|"message entity"
argument_list|)
decl_stmt|;
if|if
condition|(
name|entity
operator|instanceof
name|MimeEntity
condition|)
block|{
comment|// already parsed
return|return;
block|}
try|try
block|{
comment|// Determine Content Type of Message
name|String
name|contentTypeStr
init|=
name|HttpMessageUtils
operator|.
name|getHeaderValue
argument_list|(
name|message
argument_list|,
name|AS2Header
operator|.
name|CONTENT_TYPE
argument_list|)
decl_stmt|;
name|ContentType
name|contentType
init|=
name|ContentType
operator|.
name|parse
argument_list|(
name|contentTypeStr
argument_list|)
decl_stmt|;
comment|// Determine Charset
name|String
name|charsetName
init|=
name|AS2Charset
operator|.
name|US_ASCII
decl_stmt|;
name|Charset
name|charset
init|=
name|contentType
operator|.
name|getCharset
argument_list|()
decl_stmt|;
if|if
condition|(
name|charset
operator|!=
literal|null
condition|)
block|{
name|charsetName
operator|=
name|charset
operator|.
name|name
argument_list|()
expr_stmt|;
block|}
comment|// Get any Boundary Value
name|String
name|boundary
init|=
name|HttpMessageUtils
operator|.
name|getParameterValue
argument_list|(
name|message
argument_list|,
name|AS2Header
operator|.
name|CONTENT_TYPE
argument_list|,
literal|"boundary"
argument_list|)
decl_stmt|;
comment|// Determine content transfer encoding
name|String
name|contentTransferEncoding
init|=
name|HttpMessageUtils
operator|.
name|getHeaderValue
argument_list|(
name|message
argument_list|,
name|AS2Header
operator|.
name|CONTENT_TRANSFER_ENCODING
argument_list|)
decl_stmt|;
name|AS2SessionInputBuffer
name|inBuffer
init|=
operator|new
name|AS2SessionInputBuffer
argument_list|(
operator|new
name|HttpTransportMetricsImpl
argument_list|()
argument_list|,
literal|8
operator|*
literal|1024
argument_list|)
decl_stmt|;
name|inBuffer
operator|.
name|bind
argument_list|(
name|entity
operator|.
name|getContent
argument_list|()
argument_list|)
expr_stmt|;
switch|switch
condition|(
name|contentType
operator|.
name|getMimeType
argument_list|()
operator|.
name|toLowerCase
argument_list|()
condition|)
block|{
case|case
name|AS2MimeType
operator|.
name|APPLICATION_EDIFACT
case|:
case|case
name|AS2MimeType
operator|.
name|APPLICATION_EDI_X12
case|:
case|case
name|AS2MimeType
operator|.
name|APPLICATION_EDI_CONSENT
case|:
name|parseApplicationEDIEntity
argument_list|(
name|message
argument_list|,
name|inBuffer
argument_list|,
name|contentType
argument_list|,
name|contentTransferEncoding
argument_list|)
expr_stmt|;
break|break;
case|case
name|AS2MimeType
operator|.
name|MULTIPART_SIGNED
case|:
name|parseMultipartSignedEntity
argument_list|(
name|message
argument_list|,
name|inBuffer
argument_list|,
name|boundary
argument_list|,
name|charsetName
argument_list|,
name|contentTransferEncoding
argument_list|)
expr_stmt|;
break|break;
case|case
name|AS2MimeType
operator|.
name|APPLICATION_PKCS7_MIME
case|:
switch|switch
condition|(
name|contentType
operator|.
name|getParameter
argument_list|(
literal|"smime-type"
argument_list|)
condition|)
block|{
case|case
literal|"compressed-data"
case|:
name|parseApplicationPkcs7MimeCompressedEntity
argument_list|(
name|message
argument_list|,
name|inBuffer
argument_list|,
name|contentType
argument_list|,
name|contentTransferEncoding
argument_list|)
expr_stmt|;
break|break;
case|case
literal|"enveloped-data"
case|:
name|parseApplicationPkcs7MimeEnvelopedEntity
argument_list|(
name|message
argument_list|,
name|inBuffer
argument_list|,
name|contentType
argument_list|,
name|contentTransferEncoding
argument_list|)
expr_stmt|;
break|break;
default|default:
block|}
break|break;
case|case
name|AS2MimeType
operator|.
name|MULTIPART_REPORT
case|:
name|parseMessageDispositionNotificationReportEntity
argument_list|(
name|message
argument_list|,
name|inBuffer
argument_list|,
name|boundary
argument_list|,
name|charsetName
argument_list|,
name|contentTransferEncoding
argument_list|)
expr_stmt|;
break|break;
default|default:
break|break;
block|}
block|}
catch|catch
parameter_list|(
name|HttpException
name|e
parameter_list|)
block|{
throw|throw
name|e
throw|;
block|}
catch|catch
parameter_list|(
name|Exception
name|e
parameter_list|)
block|{
throw|throw
operator|new
name|HttpException
argument_list|(
literal|"Failed to parse entity content"
argument_list|,
name|e
argument_list|)
throw|;
block|}
block|}
block|}
DECL|method|parseMultipartSignedEntityBody (AS2SessionInputBuffer inbuffer, String boundary, String micalg, String charsetName, String contentTransferEncoding)
specifier|public
specifier|static
name|MultipartSignedEntity
name|parseMultipartSignedEntityBody
parameter_list|(
name|AS2SessionInputBuffer
name|inbuffer
parameter_list|,
name|String
name|boundary
parameter_list|,
name|String
name|micalg
parameter_list|,
name|String
name|charsetName
parameter_list|,
name|String
name|contentTransferEncoding
parameter_list|)
throws|throws
name|ParseException
block|{
name|CharsetDecoder
name|previousDecoder
init|=
name|inbuffer
operator|.
name|getCharsetDecoder
argument_list|()
decl_stmt|;
try|try
block|{
if|if
condition|(
name|charsetName
operator|==
literal|null
condition|)
block|{
name|charsetName
operator|=
name|AS2Charset
operator|.
name|US_ASCII
expr_stmt|;
block|}
name|Charset
name|charset
init|=
name|Charset
operator|.
name|forName
argument_list|(
name|charsetName
argument_list|)
decl_stmt|;
name|CharsetDecoder
name|charsetDecoder
init|=
name|charset
operator|.
name|newDecoder
argument_list|()
decl_stmt|;
name|inbuffer
operator|.
name|setCharsetDecoder
argument_list|(
name|charsetDecoder
argument_list|)
expr_stmt|;
name|MultipartSignedEntity
name|multipartSignedEntity
init|=
operator|new
name|MultipartSignedEntity
argument_list|(
name|boundary
argument_list|,
literal|false
argument_list|)
decl_stmt|;
comment|// Skip Preamble and Start Boundary line
name|skipPreambleAndStartBoundary
argument_list|(
name|inbuffer
argument_list|,
name|boundary
argument_list|)
expr_stmt|;
comment|//
comment|// Parse Signed Entity Part
comment|//
comment|// Read Text Report Body Part Headers
name|Header
index|[]
name|headers
init|=
name|AbstractMessageParser
operator|.
name|parseHeaders
argument_list|(
name|inbuffer
argument_list|,
operator|-
literal|1
argument_list|,
operator|-
literal|1
argument_list|,
name|BasicLineParser
operator|.
name|INSTANCE
argument_list|,
operator|new
name|ArrayList
argument_list|<
name|CharArrayBuffer
argument_list|>
argument_list|()
argument_list|)
decl_stmt|;
comment|// Get Content-Type and Content-Transfer-Encoding
name|ContentType
name|signedEntityContentType
init|=
literal|null
decl_stmt|;
name|String
name|signedEntityContentTransferEncoding
init|=
literal|null
decl_stmt|;
            for (Header header : headers) {
                if (header.getName().equalsIgnoreCase(AS2Header.CONTENT_TYPE)) {
                    signedEntityContentType = ContentType.parse(header.getValue());
                } else if (header.getName().equalsIgnoreCase(AS2Header.CONTENT_TRANSFER_ENCODING)) {
                    signedEntityContentTransferEncoding = header.getValue();
                }
            }
            if (signedEntityContentType == null) {
                throw new HttpException("Failed to find Content-Type header in signed entity body part");
            }

            MimeEntity signedEntity = parseEntityBody(inbuffer, boundary, signedEntityContentType,
                    signedEntityContentTransferEncoding, headers);
            signedEntity.removeAllHeaders();
            signedEntity.setHeaders(headers);
            multipartSignedEntity.addPart(signedEntity);
            //
            // End Signed Entity Part

            //
            // Parse Signature Body Part
            //

            // Read Signature Body Part Headers
            headers = AbstractMessageParser.parseHeaders(inbuffer, -1, -1, BasicLineParser.INSTANCE,
                    new ArrayList<CharArrayBuffer>());

            // Get Content-Type and Content-Transfer-Encoding
            ContentType signatureContentType = null;
            String signatureContentTransferEncoding = null;
            for (Header header : headers) {
                if (header.getName().equalsIgnoreCase(AS2Header.CONTENT_TYPE)) {
                    signatureContentType = ContentType.parse(header.getValue());
                } else if (header.getName().equalsIgnoreCase(AS2Header.CONTENT_TRANSFER_ENCODING)) {
                    signatureContentTransferEncoding = header.getValue();
                }
            }
            if (signatureContentType == null) {
                throw new HttpException("Failed to find Content-Type header in signature body part");
            }
            if (!ContentTypeUtils.isPkcs7SignatureType(signatureContentType)) {
                throw new HttpException(
                        "Invalid content type '" + signatureContentType.getMimeType() + "' for signature body part");
            }

            ApplicationPkcs7SignatureEntity applicationPkcs7SignatureEntity
                    = parseApplicationPkcs7SignatureEntityBody(inbuffer, boundary, signatureContentType,
                            signatureContentTransferEncoding);
            applicationPkcs7SignatureEntity.removeAllHeaders();
            applicationPkcs7SignatureEntity.setHeaders(headers);
            multipartSignedEntity.addPart(applicationPkcs7SignatureEntity);
            //
            // End Signature Body Part

            NameValuePair[] parameters = new NameValuePair[] {
                new BasicNameValuePair("protocol", AS2MimeType.APPLICATION_PKCS7_SIGNATURE),
                new BasicNameValuePair("boundary", boundary),
                new BasicNameValuePair("micalg", micalg),
                new BasicNameValuePair("charset", charsetName)
            };
            ContentType contentType = ContentType.create(AS2MimeType.MULTIPART_SIGNED, parameters);
            multipartSignedEntity.setContentType(contentType);
            multipartSignedEntity.setContentTransferEncoding(contentTransferEncoding);

            return multipartSignedEntity;
        } catch (Exception e) {
            ParseException parseException = new ParseException("failed to parse multipart signed entity");
            parseException.initCause(e);
            throw parseException;
        } finally {
            inbuffer.setCharsetDecoder(previousDecoder);
        }
    }

    public static DispositionNotificationMultipartReportEntity parseMultipartReportEntityBody(
            AS2SessionInputBuffer inbuffer, String boundary, String charsetName, String contentTransferEncoding)
            throws ParseException {
        CharsetDecoder previousDecoder = inbuffer.getCharsetDecoder();
        try {
            if (charsetName == null) {
                charsetName = AS2Charset.US_ASCII;
            }
            Charset charset = Charset.forName(charsetName);
            CharsetDecoder charsetDecoder = charset.newDecoder();
            inbuffer.setCharsetDecoder(charsetDecoder);

            DispositionNotificationMultipartReportEntity dispositionNotificationMultipartReportEntity
                    = new DispositionNotificationMultipartReportEntity(boundary, false);

            // Skip Preamble and Start Boundary line
            skipPreambleAndStartBoundary(inbuffer, boundary);

            //
            // Parse Text Report Body Part
            //

            // Read Text Report Body Part Headers
            Header[] headers = AbstractMessageParser.parseHeaders(inbuffer, -1, -1, BasicLineParser.INSTANCE,
                    new ArrayList<CharArrayBuffer>());

            // Get Content-Type and Content-Transfer-Encoding
            ContentType textReportContentType = null;
            String textReportContentTransferEncoding = null;
            for (Header header : headers) {
                if (header.getName().equalsIgnoreCase(AS2Header.CONTENT_TYPE)) {
                    textReportContentType = ContentType.parse(header.getValue());
                } else if (header.getName().equalsIgnoreCase(AS2Header.CONTENT_TRANSFER_ENCODING)) {
                    textReportContentTransferEncoding = header.getValue();
                }
            }
            if (textReportContentType == null) {
                throw new HttpException("Failed to find Content-Type header in text report body part");
            }
            if (!textReportContentType.getMimeType().equalsIgnoreCase(AS2MimeType.TEXT_PLAIN)) {
                throw new HttpException("Invalid content type '" + textReportContentType.getMimeType()
                        + "' for first body part of disposition notification");
            }

            String textReportCharsetName = textReportContentType.getCharset() == null
                    ? AS2Charset.US_ASCII : textReportContentType.getCharset().name();
            TextPlainEntity textReportEntity = parseTextPlainEntityBody(inbuffer, boundary, textReportCharsetName,
                    textReportContentTransferEncoding);
            textReportEntity.setHeaders(headers);
            dispositionNotificationMultipartReportEntity.addPart(textReportEntity);
            //
            // End Text Report Body Part

            //
            // Parse Disposition Notification Body Part
            //

            // Read Disposition Notification Body Part Headers
            headers = AbstractMessageParser.parseHeaders(inbuffer, -1, -1, BasicLineParser.INSTANCE,
                    new ArrayList<CharArrayBuffer>());

            // Get Content-Type and Content-Transfer-Encoding
            ContentType dispositionNotificationContentType = null;
            String dispositionNotificationContentTransferEncoding = null;
            for (Header header : headers) {
                if (header.getName().equalsIgnoreCase(AS2Header.CONTENT_TYPE)) {
                    dispositionNotificationContentType = ContentType.parse(header.getValue());
                } else if (header.getName().equalsIgnoreCase(AS2Header.CONTENT_TRANSFER_ENCODING)) {
                    dispositionNotificationContentTransferEncoding = header.getValue();
                }
            }
            if (dispositionNotificationContentType == null) {
                throw new HttpException("Failed to find Content-Type header in body part");
            }
            if (!dispositionNotificationContentType.getMimeType()
                    .equalsIgnoreCase(AS2MimeType.MESSAGE_DISPOSITION_NOTIFICATION)) {
                throw new HttpException("Invalid content type '" + dispositionNotificationContentType.getMimeType()
                        + "' for second body part of disposition notification");
            }

            String dispositionNotificationCharsetName = dispositionNotificationContentType.getCharset() == null
                    ? AS2Charset.US_ASCII : dispositionNotificationContentType.getCharset().name();
            AS2MessageDispositionNotificationEntity messageDispositionNotificationEntity
                    = parseMessageDispositionNotificationEntityBody(inbuffer, boundary,
                            dispositionNotificationCharsetName, dispositionNotificationContentTransferEncoding);
            messageDispositionNotificationEntity.setHeaders(headers);
            dispositionNotificationMultipartReportEntity.addPart(messageDispositionNotificationEntity);
            //
            // End Disposition Notification Body Part

            dispositionNotificationMultipartReportEntity.setContentTransferEncoding(contentTransferEncoding);

            return dispositionNotificationMultipartReportEntity;
        } catch (Exception e) {
            ParseException parseException = new ParseException("failed to parse multipart report entity");
            parseException.initCause(e);
            throw parseException;
        } finally {
            inbuffer.setCharsetDecoder(previousDecoder);
        }
    }

    public static TextPlainEntity parseTextPlainEntityBody(AS2SessionInputBuffer inbuffer, String boundary,
            String charsetName, String contentTransferEncoding) throws ParseException {
        CharsetDecoder previousDecoder = inbuffer.getCharsetDecoder();
        try {
            if (charsetName == null) {
                charsetName = AS2Charset.US_ASCII;
            }
            Charset charset = Charset.forName(charsetName);
            CharsetDecoder charsetDecoder = charset.newDecoder();
            inbuffer.setCharsetDecoder(charsetDecoder);

            String text = parseBodyPartText(inbuffer, boundary);
            if (contentTransferEncoding != null) {
                text = EntityUtils.decode(text, charset, contentTransferEncoding);
            }
            return new TextPlainEntity(text, charsetName, contentTransferEncoding, false);
        } catch (Exception e) {
            ParseException parseException = new ParseException("failed to parse text entity");
            parseException.initCause(e);
            throw parseException;
        } finally {
            inbuffer.setCharsetDecoder(previousDecoder);
        }
    }

    public static AS2MessageDispositionNotificationEntity parseMessageDispositionNotificationEntityBody(
            AS2SessionInputBuffer inbuffer, String boundary, String charsetName, String contentTransferEncoding)
            throws ParseException {
        CharsetDecoder previousDecoder = inbuffer.getCharsetDecoder();
        try {
            if (charsetName == null) {
                charsetName = AS2Charset.US_ASCII;
            }
            Charset charset = Charset.forName(charsetName);
            CharsetDecoder charsetDecoder = charset.newDecoder();
            inbuffer.setCharsetDecoder(charsetDecoder);

            List<CharArrayBuffer> dispositionNotificationFields = parseBodyPartFields(inbuffer, boundary,
                    BasicLineParser.INSTANCE, new ArrayList<CharArrayBuffer>());
            AS2MessageDispositionNotificationEntity as2MessageDispositionNotificationEntity
                    = DispositionNotificationContentUtils.parseDispositionNotification(dispositionNotificationFields);

            ContentType contentType = ContentType.create(AS2MimeType.MESSAGE_DISPOSITION_NOTIFICATION, charset);
            as2MessageDispositionNotificationEntity.setContentType(contentType);
            return as2MessageDispositionNotificationEntity;
        } catch (Exception e) {
            ParseException parseException = new ParseException("failed to parse MDN entity");
            parseException.initCause(e);
            throw parseException;
        } finally {
            inbuffer.setCharsetDecoder(previousDecoder);
        }
    }

    public static MimeEntity parseEntityBody(AS2SessionInputBuffer inbuffer, String boundary,
            ContentType entityContentType, String contentTransferEncoding, Header[] headers) throws ParseException {
        CharsetDecoder previousDecoder = inbuffer.getCharsetDecoder();
        try {
            Charset charset = entityContentType.getCharset();
            if (charset == null) {
                charset = Charset.forName(AS2Charset.US_ASCII);
            }
            CharsetDecoder charsetDecoder = charset.newDecoder();
            inbuffer.setCharsetDecoder(charsetDecoder);

            MimeEntity entity = null;
            switch (entityContentType.getMimeType().toLowerCase()) {
                case AS2MimeType.APPLICATION_EDIFACT:
                case AS2MimeType.APPLICATION_EDI_X12:
                case AS2MimeType.APPLICATION_EDI_CONSENT:
                    entity = parseEDIEntityBody(inbuffer, boundary, entityContentType, contentTransferEncoding);
                    break;
                case AS2MimeType.MULTIPART_SIGNED:
                    String multipartSignedBoundary = AS2HeaderUtils.getParameterValue(headers,
                            AS2Header.CONTENT_TYPE, "boundary");
                    String micalg = AS2HeaderUtils.getParameterValue(headers, AS2Header.CONTENT_TYPE, "micalg");
                    entity = parseMultipartSignedEntityBody(inbuffer, multipartSignedBoundary, micalg,
                            charset.name(), contentTransferEncoding);
                    skipToBoundary(inbuffer, boundary);
                    break;
                case AS2MimeType.MESSAGE_DISPOSITION_NOTIFICATION:
                    entity = parseMessageDispositionNotificationEntityBody(inbuffer, boundary, charset.name(),
                            contentTransferEncoding);
                    break;
                case AS2MimeType.MULTIPART_REPORT:
                    String multipartReportBoundary = AS2HeaderUtils.getParameterValue(headers,
                            AS2Header.CONTENT_TYPE, "boundary");
                    entity = parseMultipartReportEntityBody(inbuffer, multipartReportBoundary, charset.name(),
                            contentTransferEncoding);
                    skipToBoundary(inbuffer, boundary);
                    break;
                case AS2MimeType.TEXT_PLAIN:
                    entity = parseTextPlainEntityBody(inbuffer, boundary, charset.name(), contentTransferEncoding);
                    break;
                case AS2MimeType.APPLICATION_PKCS7_SIGNATURE:
                    entity = parseApplicationPkcs7SignatureEntityBody(inbuffer, boundary, entityContentType,
                            contentTransferEncoding);
                    break;
                case AS2MimeType.APPLICATION_PKCS7_MIME:
                    switch (entityContentType.getParameter("smime-type")) {
                        case "compressed-data":
                            entity = parseApplicationPkcs7MimeCompressedDataEntityBody(inbuffer, boundary,
                                    entityContentType, contentTransferEncoding);
                            break;
                        case "enveloped-data":
                            entity = parseApplicationPkcs7MimeEnvelopedDataEntityBody(inbuffer, boundary,
                                    entityContentType, contentTransferEncoding);
                            break;
                        default:
                            break;
                    }
                    break;
                default:
                    break;
            }
            return entity;
        } catch (Exception e) {
            ParseException parseException = new ParseException("failed to parse entity body");
            parseException.initCause(e);
            throw parseException;
        } finally {
            inbuffer.setCharsetDecoder(previousDecoder);
        }
    }

    public static ApplicationEDIEntity parseEDIEntityBody(AS2SessionInputBuffer inbuffer, String boundary,
            ContentType ediMessageContentType, String contentTransferEncoding) throws ParseException {
        CharsetDecoder previousDecoder = inbuffer.getCharsetDecoder();
        try {
            Charset charset = ediMessageContentType.getCharset();
            if (charset == null) {
                charset = Charset.forName(AS2Charset.US_ASCII);
            }
            CharsetDecoder charsetDecoder = charset.newDecoder();
            inbuffer.setCharsetDecoder(charsetDecoder);

            String ediMessageBodyPartContent = parseBodyPartText(inbuffer, boundary);
            if (contentTransferEncoding != null) {
                ediMessageBodyPartContent = EntityUtils.decode(ediMessageBodyPartContent, charset,
                        contentTransferEncoding);
            }
            ApplicationEDIEntity applicationEDIEntity = EntityUtils.createEDIEntity(ediMessageBodyPartContent,
                    ediMessageContentType, contentTransferEncoding, false);
            return applicationEDIEntity;
        } catch (Exception e) {
            ParseException parseException = new ParseException("failed to parse EDI entity");
            parseException.initCause(e);
            throw parseException;
        } finally {
            inbuffer.setCharsetDecoder(previousDecoder);
        }
    }

    public static ApplicationPkcs7SignatureEntity parseApplicationPkcs7SignatureEntityBody(
            AS2SessionInputBuffer inbuffer, String boundary, ContentType contentType,
            String contentTransferEncoding) throws ParseException {
        CharsetDecoder previousDecoder = inbuffer.getCharsetDecoder();
        try {
            Charset charset = contentType.getCharset();
            if (charset == null) {
                charset = Charset.forName(AS2Charset.US_ASCII);
            }
            CharsetDecoder charsetDecoder = charset.newDecoder();
            inbuffer.setCharsetDecoder(charsetDecoder);

            String pkcs7SignatureBodyContent = parseBodyPartText(inbuffer, boundary);
            byte[] signature = EntityUtils.decode(pkcs7SignatureBodyContent.getBytes(charset),
                    contentTransferEncoding);

            String charsetName = charset.toString();
            ApplicationPkcs7SignatureEntity applicationPkcs7SignatureEntity = new ApplicationPkcs7SignatureEntity(
                    signature, charsetName, contentTransferEncoding, false);
            return applicationPkcs7SignatureEntity;
        } catch (Exception e) {
            ParseException parseException = new ParseException("failed to parse PKCS7 Signature entity");
            parseException.initCause(e);
            throw parseException;
        } finally {
            inbuffer.setCharsetDecoder(previousDecoder);
        }
    }

    public static ApplicationPkcs7MimeEnvelopedDataEntity parseApplicationPkcs7MimeEnvelopedDataEntityBody(
            AS2SessionInputBuffer inbuffer, String boundary, ContentType contentType,
            String contentTransferEncoding) throws ParseException {
        CharsetDecoder previousDecoder = inbuffer.getCharsetDecoder();
        try {
            Charset charset = contentType.getCharset();
            if (charset == null) {
                charset = Charset.forName(AS2Charset.US_ASCII);
            }
            CharsetDecoder charsetDecoder = charset.newDecoder();
            inbuffer.setCharsetDecoder(charsetDecoder);

            String pkcs7EncryptedBodyContent = parseBodyPartText(inbuffer, boundary);
            byte[] encryptedContent = EntityUtils.decode(pkcs7EncryptedBodyContent.getBytes(charset),
                    contentTransferEncoding);

            ApplicationPkcs7MimeEnvelopedDataEntity applicationPkcs7MimeEntity
                    = new ApplicationPkcs7MimeEnvelopedDataEntity(encryptedContent, contentTransferEncoding, false);
            return applicationPkcs7MimeEntity;
        } catch (Exception e) {
            ParseException parseException = new ParseException("failed to parse PKCS7 Mime entity");
            parseException.initCause(e);
            throw parseException;
        } finally {
            inbuffer.setCharsetDecoder(previousDecoder);
        }
    }

    public static ApplicationPkcs7MimeCompressedDataEntity parseApplicationPkcs7MimeCompressedDataEntityBody(
            AS2SessionInputBuffer inbuffer, String boundary, ContentType contentType,
            String contentTransferEncoding) throws ParseException {
        CharsetDecoder previousDecoder = inbuffer.getCharsetDecoder();
        try {
            Charset charset = contentType.getCharset();
            if (charset == null) {
                charset = Charset.forName(AS2Charset.US_ASCII);
            }
            CharsetDecoder charsetDecoder = charset.newDecoder();
            inbuffer.setCharsetDecoder(charsetDecoder);

            String pkcs7CompressedBodyContent = parseBodyPartText(inbuffer, boundary);
            byte[] compressedContent = EntityUtils.decode(pkcs7CompressedBodyContent.getBytes(charset),
                    contentTransferEncoding);

            ApplicationPkcs7MimeCompressedDataEntity applicationPkcs7MimeEntity
                    = new ApplicationPkcs7MimeCompressedDataEntity(compressedContent, contentTransferEncoding, false);
            return applicationPkcs7MimeEntity;
        } catch (Exception e) {
            ParseException parseException = new ParseException("failed to parse PKCS7 Mime entity");
            parseException.initCause(e);
            throw parseException;
        } finally {
            inbuffer.setCharsetDecoder(previousDecoder);
        }
    }

    public static String parseBodyPartText(final AS2SessionInputBuffer inbuffer, final String boundary)
            throws IOException {
        CharArrayBuffer buffer = new CharArrayBuffer(DEFAULT_BUFFER_SIZE);
        CharArrayBuffer line = new CharArrayBuffer(DEFAULT_BUFFER_SIZE);
        while (true) {
            final int l = inbuffer.readLine(line);
            if (l == -1) {
                break;
            }
            if (boundary != null && isBoundaryDelimiter(line, null, boundary)) {
                // remove last CRLF from buffer which belongs to boundary
                int length = buffer.length();
                buffer.setLength(length - 2);
                break;
            }
            buffer.append(line);
            if (inbuffer.isLastLineReadTerminatedByLineFeed()) {
                buffer.append("\r\n");
            }
            line.clear();
        }
        return buffer.toString();
    }

    public static List<CharArrayBuffer> parseBodyPartFields(final AS2SessionInputBuffer inbuffer,
            final String boundary, final LineParser parser, final List<CharArrayBuffer> fields) throws IOException {
        Args.notNull(parser, "parser");
        Args.notNull(fields, "fields");
        CharArrayBuffer current = null;
        CharArrayBuffer previous = null;
        while (true) {
            if (current == null) {
                current = new CharArrayBuffer(64);
            }
            final int l = inbuffer.readLine(current);
            if (l == -1 || current.length() < 1) {
                break;
            }
            if (boundary != null && isBoundaryDelimiter(current, null, boundary)) {
                break;
            }
            // check if current line part of folded headers
            if ((current.charAt(0) == ' ' || current.charAt(0) == '\t') && previous != null) {
                // we have continuation of folded header : append value
                int i = 0;
                while (i < current.length()) {
                    final char ch = current.charAt(i);
                    if (ch != ' ' && ch != '\t') {
                        break;
                    }
                    i++;
                }
                // Just append current line to previous line
                previous.append(' ');
                previous.append(current, i, current.length() - i);
                // leave current line buffer for reuse for next header
                current.clear();
            } else {
                fields.add(current);
                previous = current;
                current = null;
            }
        }
        return fields;
    }
}
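// ---------------------------------------------------------------------
// Illustrative usage (not part of the original class): a minimal sketch
// of driving parseBodyPartText() over an in-memory MIME body part. The
// AS2SessionInputBuffer constructor and bind() call shown here are
// assumptions about the surrounding API, so the sketch is kept in
// comment form rather than as compiled code.
//
//     byte[] payload = ("payload line 1\r\n"
//             + "payload line 2\r\n"
//             + "--boundary--\r\n").getBytes(StandardCharsets.US_ASCII);
//     AS2SessionInputBuffer inbuffer =
//             new AS2SessionInputBuffer(new HttpTransportMetricsImpl(), 8 * 1024);
//     inbuffer.bind(new ByteArrayInputStream(payload));
//     // Consumes input up to the "--boundary" delimiter line and strips
//     // the trailing CRLF that belongs to the boundary.
//     String text = parseBodyPartText(inbuffer, "boundary");
// ---------------------------------------------------------------------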
|
Application of Passive Wireless RFID Asset Management in Warehousing of Cross-Border E-Commerce Enterprises As an important part of modern logistics, warehousing underpins the sustained stability of an enterprise and makes its production and transportation economical. With the advent of the information age, the traditional logistics management model has struggled to adapt to an increasingly fierce competitive market. Cross-border e-commerce enterprises must therefore establish a systematic, information-based logistics warehouse management system to improve the level of their logistics management. Passive wireless RFID asset management provides a new approach to building such a system. Through the application of RFID technology, the warehousing design scheme and processes of cross-border e-commerce enterprises can be optimized and controlled to the greatest extent, strengthening the optimization and integration of the supply chain and improving the market competitiveness of logistics enterprises. Based on passive wireless RFID asset management technology, this paper analyses the construction and optimization of the logistics warehouse management system of cross-border e-commerce enterprises. The simulation results show that after applying this technology, the warehouse operation efficiency of cross-border e-commerce enterprises improves and the employee utilization rate falls relative to throughput, while a substantial amount of manpower is saved and labour costs are reduced.
|
/**
* A base class for pieces that are big rectangles rather than a chain of
* 1x1 segments. These pieces are not intended for player control, nor to
* move around the board. Just to sit quietly and be large.
*/
public abstract class BigPiece extends Piece
{
/** A constructor used when unserializing. */
public BigPiece ()
{
}
/**
* Returns the bounds of this big piece. <em>Do not</em> modify the
* returned rectangle.
*/
public Rectangle getBounds ()
{
return _bounds;
}
/** Checks whether this piece is "tall," meaning that even air units cannot
* pass over it. */
public boolean isTall ()
{
return false;
}
/** Checks whether this piece is "penetrable," meaning that units can shoot
* through it.
*/
public boolean isPenetrable ()
{
return false;
}
@Override // documentation inherited
public int computeElevation (BangBoard board, int tx, int ty)
{
if (_bounds.width == 1 && _bounds.height == 1) {
return board.getWaterElevation(tx, ty);
}
int elevation = Integer.MIN_VALUE;
for (int y = ty, ymax = ty + _bounds.height; y < ymax; y++) {
for (int x = tx, xmax = tx + _bounds.width; x < xmax; x++) {
elevation = Math.max(elevation,
board.getWaterElevation(x, y));
}
}
return elevation;
}
@Override // documentation inherited
public boolean intersects (Rectangle bounds)
{
return _bounds.intersects(bounds);
}
@Override // documentation inherited
public boolean intersects (int tx, int ty)
{
return _bounds.contains(tx, ty);
}
@Override // documentation inherited
public boolean intersects (Piece other)
{
if (other instanceof BigPiece) {
return _bounds.intersects(((BigPiece)other).getBounds());
} else {
return intersects(other.x, other.y);
}
}
@Override // documentation inherited
public int getWidth ()
{
return _bounds.width;
}
@Override // documentation inherited
public int getLength ()
{
return _bounds.height;
}
/**
* Extends default behavior to initialize transient members.
*/
public void readObject (ObjectInputStream in)
throws IOException, ClassNotFoundException
{
in.defaultReadObject();
recomputeBounds();
}
@Override // documentation inherited
public Object clone ()
{
// make a deep copy of the bounds object
BigPiece piece = (BigPiece)super.clone();
piece._bounds = (Rectangle)_bounds.clone();
return piece;
}
@Override // documentation inherited
protected int computeOrientation (int nx, int ny)
{
// our orientation never changes
return orientation;
}
/** Require that our derived classes tell us how big they are (in the
* north/south orientation). */
protected BigPiece (int width, int length)
{
_width = width;
_length = length;
recomputeBounds();
}
@Override // documentation inherited
protected void recomputeBounds ()
{
if (orientation == NORTH || orientation == SOUTH) {
_bounds.setBounds(x, y, _width, _length);
} else {
_bounds.setBounds(x, y, _length, _width);
}
}
protected int _width, _length;
protected transient Rectangle _bounds = new Rectangle();
}
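
/**
 * Illustrative only, not part of the original source: a minimal sketch of
 * a hypothetical concrete subclass showing how derived classes declare
 * their footprint through the protected constructor. It assumes that
 * {@link Piece} declares no further abstract members that would need
 * implementing here.
 */
class ExampleBoulder extends BigPiece
{
    /** Declares a 2x3 footprint in the north/south orientation; the
     * bounds are recomputed automatically by the superclass. */
    public ExampleBoulder ()
    {
        super(2, 3);
    }
}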
|
Rethinking the Thin-Thick Distinction among Theories of Evil (and Then Rereading Arendt) According to a standard interpretation of Hannah Arendts remarks about evil, she had a psychologically thin conception of evil action. This paper has two aims. First, I argue that the distinction between psychological thinness and thickness is poorly conceived, at least as it commonly applies to theories of evil action. And second, I argue that, according to a better conception of the thin-thick distinction, Arendt is being misinterpreted.
|
<reponame>zhangxiaoxiaoshuai/algorithm-go
package main

import "fmt"

/*
Given the head of a linked list, return the k-th node from the end.
Counting starts at 1, i.e. the tail node is the 1st node from the end.
For example, a list of 6 nodes with values 1, 2, 3, 4, 5, 6 has the node
with value 4 as its 3rd node from the end.

Example:
Given the linked list 1->2->3->4->5 and k = 2,
return the suffix 4->5.
*/
type ListNode struct {
Val int
Next *ListNode
}
func getKthFromEnd(head *ListNode, k int) *ListNode {
slow, fast := head, head
i := 0
for fast != nil {
if i >= k {
slow = slow.Next
}
fast = fast.Next
i++
}
return slow
}
func main() {
	// Build the list 1->2->3->4->5 and fetch the 2nd node from the end.
	head := &ListNode{Val: 1}
	cur := head
	for v := 2; v <= 5; v++ {
		cur.Next = &ListNode{Val: v}
		cur = cur.Next
	}
	fmt.Println(getKthFromEnd(head, 2).Val) // prints 4
}
|
<gh_stars>0
package errwriter_test
import (
"bytes"
"fmt"
"testing"
errwriter "github.com/nasa9084/go-errwriter"
)
func TestErrWriter(t *testing.T) {
	testErrWriterSuccess(t)
	testErrWriterError(t)
}
func testErrWriterSuccess(t *testing.T) {
buf := &bytes.Buffer{}
ew := errwriter.New(buf)
ew.Write([]byte("foo"))
ew.Write([]byte("bar"))
if _, err := ew.Write(nil); err != nil {
		t.Errorf("error should not have occurred, but: %s\n", err)
return
}
if s := buf.String(); s != "foobar" {
t.Errorf("written string does not match: %s != foobar\n", s)
return
}
}
type ErrWriter struct {
n int
}
func (e *ErrWriter) Write([]byte) (int, error) {
e.n++
return -1, fmt.Errorf("something error %d", e.n)
}
func testErrWriterError(t *testing.T) {
ew := errwriter.New(&ErrWriter{})
ew.Write([]byte("foo"))
ew.Write([]byte("bar"))
_, err := ew.Write(nil)
if err == nil {
		t.Error("error should have occurred, but got nil")
return
}
if err.Error() != "something error 1" {
t.Errorf("error number does not match: %s != something error 1\n", err.Error())
return
}
}
|
"""
"""
import celery
import celeryconf
import uuid
app = celery.Celery(__name__)
app.config_from_object(celeryconf)
@app.task(bind=True)
def test_task(self, x):
return x
@app.task(bind=True)
def print_task(self, x):
    print(x)
return x
@app.task(bind=True)
def add_task(self, x, y):
return x+y
@app.task(bind=True)
def bad_task(self):
raise RuntimeError("intentional error")
@app.task(bind=True)
def chaining_task(self, add):
return celery.chain(add_task.s(*add), print_task.s()).apply_async()
# You can do this from a separate thread
from redbeat import RedBeatSchedulerEntry as Entry
e = Entry(
'thingo',
'cluster.chaining_task',
10,
args=([5, 6], ),
options={'schedule_id': 'testid'},
app=app)
e.save()
|
<reponame>bakesaled/moose-watch
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { SharedModule } from '../shared/shared.module';
import { MainSidenavContainerComponent } from './main-sidenav-container.component';
import { NavigationModule } from '../navigation/navigation.module';
import { RouterModule } from '@angular/router';
@NgModule({
imports: [CommonModule, SharedModule, NavigationModule, RouterModule],
declarations: [MainSidenavContainerComponent],
exports: [MainSidenavContainerComponent]
})
export class MainSidenavContainerModule {}
|
Thyroid Malignancy in Long Standing Multinodular Goitre A Case Series with Review of Literature

Clinical presentation of patients with multinodular goitre is variable. Fine-needle aspiration (FNA) has been widely accepted as an initial step in the management of thyroid nodules. However, the usefulness of FNA to assess the risk of malignancy in thyroid nodules occurring within a long standing multinodular goitre (MNG) has not been completely clarified. Moreover, an unresolved issue is whether MNG is significantly associated with malignancy. MNG had traditionally been thought to carry a low risk of malignancy compared to a solitary thyroid nodule (STN). However, review of the literature shows no statistical difference in the incidence of malignancy between MNG and solitary nodular goitre. The incidence of malignancy in multinodular goitres has been found to vary from 7.5% to 17%. The duration of the associated goitre varies, ranging from as short as one month to longer than 20 years. Against this background, we report a series of three cases of thyroid malignancy diagnosed on FNAC in patients with long standing goitre.

Introduction

Clinical presentation of patients with multinodular goitre is variable. Fine-needle aspiration (FNA) has been widely accepted as an initial step in the management of thyroid nodules. It is relied upon to distinguish benign from neoplastic thyroid nodules, thus influencing therapeutic decisions. However, the usefulness of FNA to assess the risk of malignancy in thyroid nodules occurring within a longstanding multinodular goitre (MNG) has not been completely clarified. Ultrasonography, the imaging study of choice for the thyroid gland, provides accurate measurements of nodular diameter for interval monitoring if needed. Additionally, it allows characterization of nodules by sonographic features such as solid appearance, increased vascularity, micro-calcifications and irregular margins, all of which suggest malignancy. MNG is one of the common presentations of various thyroid diseases. A long standing unresolved issue is whether MNG is significantly associated with malignancy. MNG had traditionally been thought to carry a low risk of malignancy compared to a solitary thyroid nodule (STN). However, review of the literature shows no statistical difference in the incidence of malignancy between MNG and solitary nodular goitre. Several studies have suggested that the frequency of carcinoma in nodular goitres is about 25-60% of that in solitary nodules, whereas the incidence of malignancy in multinodular goitres has been found to vary from 7.5% to 17%. The duration of the associated goitre varies, ranging from as short as one month to longer than 20 years. Against this background, we report a series of three cases with the aim of seeing whether FNAC in a patient with recently developed clinical symptoms can help in detecting malignancy in a long standing MNG.

Case Reports

Case 1: A 47-year-old female gave a history of thyroid swelling of 15 years' duration. She was a known case of thyrotoxicosis on treatment, which she had discontinued against medical advice some time earlier. She presented because of a sudden increase in the size of the longstanding thyroid swelling over the last 3 months. On examination, a 5 x 4 cm firm thyroid swelling was noted on the right side.
USG findings were a hyperechoic nodule in the right lobe of the thyroid with a central anechoic component (0.9 x 0.8 cm), suggestive of colloid goitre. USG guided FNAC was done from both the cystic and solid areas. Smears from the cystic area showed scanty colloid along with follicular cells, whereas smears from the surrounding solid area showed abundant cellularity comprising small groups and sheets of plasmacytoid cells scattered singly and also arranged in a follicular pattern. Anisonucleosis and binucleation were seen. Individual cells showed red cytoplasmic granules on May-Grünwald Giemsa (MGG) stain. Amorphous acellular hyaline material and dense amorphous clumps (suggesting amyloid-like material) surrounded by tumour cells were also seen (Figure 1). Our cytologic diagnosis was medullary carcinoma of thyroid. Thyroidectomy was done, which confirmed the cytologic diagnosis (Figure 2).

Case 2: The nodule was 4 x 3 cm, firm to hard, and moved with deglutition. Multiple firm to hard lymph nodes on the right side of the neck formed a mass of 7 x 6 cm. USG showed an enlarged left lobe of the thyroid gland with heterogeneous echotexture. Another well-defined hypoechoic lesion was seen on the right side of the midline extending into the superior mediastinum. All these features were suggestive of a neoplastic lesion in the left lobe of the thyroid with cervical lymphadenopathy. USG guided FNAC showed cellular smears comprising groups, sheets and singly scattered spindle-shaped and plasmacytoid cells with anisonucleosis and hyperchromasia. Patches of amyloid in the form of dense amorphous hyaline material surrounded by tumour cells were also seen (Figure 3). The diagnosis was given as medullary carcinoma of thyroid. Thyroidectomy was done, which confirmed the cytologic diagnosis (Figure 4).

Case 3: A 58-year-old male patient presented with an anterior neck swelling of 30 years' duration. He had recently noticed a change in voice of 1 month's duration. The swelling moved with deglutition and measured 12 x 11 x 8 cm. CT scan showed an ill-defined soft tissue lesion on the right side of the thyroid, and the left lobe could not be visualized separately. Calcification was seen within the lesion. The lesion extended into the superior mediastinum, displacing the neck muscles, larynx and pharynx. The CT impression was a malignant lesion of the right lobe of the thyroid with lung metastasis. USG guided FNAC smears were cellular and showed sheets and groups of round to oval bizarre cells having a high N:C ratio, irregular nuclear membranes and prominent punched-out nucleoli (Figure 5). Binucleate and multinucleate tumour giant cells were seen. Also seen were singly scattered keratinized squamous cells with dense pyknotic nuclei (Figure 6). The background showed keratinous and necrotic material. Our cytologic diagnosis was anaplastic carcinoma of thyroid with squamous differentiation. The patient was inoperable; hence, thyroidectomy was not done.

Discussion

FNAC under USG guidance is a useful diagnostic modality in the evaluation of thyroid nodules in patients with MNG. Because the risk of thyroid malignancy in these nodules is comparable to that which exists in solitary thyroid nodules, the possibility of thyroid malignancy should be considered in all patients with MNG. In our case series, all three patients had thyroid malignancy in a long standing multinodular goitre.
In the surgical series of McCall et al, a 17% incidence of malignancy was found in patients with solitary thyroid nodules compared to 13% in patients with nontoxic multinodular goitre, and this difference was not statistically significant. Franklyn et al reported an incidence of malignancy of 5-9% and 1-4% in patients with a solitary thyroid nodule and with multinodular goitre, respectively. The age range of our patients was 30 to 60 years. Hanumanthappa M.B et al reported the 3rd to 4th decade as the most common age group. The duration of the goitre in all three of our cases was between 15 and 30 years. K Sothy et al reported goitre durations ranging from as short as 5 years to longer than 20 years. All our cases were asymptomatic until their recent complaints of sudden increase in size, change in voice and development of lymphadenopathy. Any such recent change in signs and symptoms in a long standing goitre should raise the suspicion of a malignant change. Various studies conducted so far have reported papillary carcinoma as the most common malignancy in long standing goitre. None of our cases were papillary carcinomas; instead, the FNAC diagnosis was medullary carcinoma in two cases and anaplastic carcinoma in one. K Sothy et al reported two cases of medullary carcinoma and two of anaplastic carcinoma. Histopathological diagnosis was available in the first two cases of medullary carcinoma; in the remaining case it was not available, as the patient was inoperable. Review of the literature shows many studies conducted to observe, compare and detect thyroid malignancy in MNG and solitary thyroid nodule (STN) using FNA smears, later confirmed by histopathology, and some based on thyroidectomy specimens alone. Studies have also been conducted comparing USG guided FNAC findings in the preoperative assessment of thyroid nodules with the results of thyroidectomy specimens. Studies in both groups were conducted in many geographical regions (e.g., USA, Spain, France, Italy, Croatia, Turkey, Saudi Arabia, and India). In the first group, the occurrence of malignancy in MNG ranged from 5-36%; in the second group, it was 4.1-18.9%. Studies have also assessed FNAC results on dominant, suspicious and non-dominant nodules of MNG. Nadir et al, in their study of 42 cases of histopathologically proven MNGs, observed that 15 (35.7%) non-dominant nodules and 27 (64.2%) dominant nodules harboured malignancy. Therefore, the risk of malignancy in non-dominant nodules in MNG should not be underestimated, and attention should be paid to non-dominant nodules that appear suspicious on USG. On the contrary, there are studies documenting that nodules harbouring malignancy, or suspicious of malignancy, in MNG cannot be distinguished clinically or radiologically. In that scenario the decision to select a nodule for FNAC becomes difficult, which in turn makes the early detection of cancer in MNG a very difficult task. A high index of suspicion needs to be practiced while reporting FNAC smears.
However, there are studies suggesting that FNAC may not be adequate for sampling an associated malignancy, because of the problem of limited sampling or targeting of the lesion in all cases of MNG. This may lead to a false-negative diagnosis of an associated malignancy in MNG. To conclude, malignant association in a long-standing MNG is well recognized, and therefore a high index of suspicion while reporting FNAC smears helps in the detection and diagnosis of these malignancies, especially when there is any recent change in signs and symptoms. Funding: None.
|
Synchronization and Noise Performance of Two-Frequency Oscillators A nonlinear theory of multiple-frequency oscillators of the negative-resistance type is developed and applied to two-frequency Gunn- and IMPATT-diode oscillators. The following results, which were confirmed by measurements, are obtained: by a high-Q loading of the second circuit, a considerable improvement in the FM-noise spectrum at the fundamental frequency is achieved. The locking bandwidth is nearly twice that of a (sub)harmonically synchronized single-frequency oscillator. The same is true for the frequency range of FM-noise reduction.
|
Itsue Matsuoka Saiki, 88, of Honolulu died at home. She was born in Honolulu. She is survived by sons Wayne K., Lyle T. and Ryan H.; six grandchildren; and three great-grandchildren. Private services.
|
/*
* Copyright 2004-2019 H2 Group. Multiple-Licensed under the MPL 2.0,
* and the EPL 1.0 (http://h2database.com/html/license.html).
* Initial Developer: H2 Group
*/
package org.h2.test.todo;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.h2.tools.DeleteDbFiles;
/**
* The complete condition should be sent to a linked table, not just the index
* condition.
*/
public class TestLinkedTableFullCondition {
/**
* Run just this test.
*
* @param args ignored
*/
public static void main(String... args) throws Exception {
DeleteDbFiles.execute("data", null, true);
Class.forName("org.h2.Driver");
Connection conn;
conn = DriverManager.getConnection("jdbc:h2:data/test");
Statement stat = conn.createStatement();
stat.execute("create table test(id int primary key, name varchar)");
stat.execute("insert into test values(1, 'Hello')");
stat.execute("insert into test values(2, 'World')");
stat.execute("create linked table test_link" +
"('', 'jdbc:h2:data/test', '', '', 'TEST')");
stat.execute("set trace_level_system_out 2");
// the query sent to the linked database is
// SELECT * FROM PUBLIC.TEST T WHERE ID>=? AND ID<=? {1: 1, 2: 1};
// it should also include AND NAME='Hello'
stat.execute("select * from test_link " +
"where id = 1 and name = 'Hello'");
conn.close();
}
}
|
/**
* @internal
* Clear SAM
* SAM limits the communication to a single remote MAC address
* for 3 seconds. This operation implements the timeout callback.
*
* This is a callback for the scheduler. Arguments should fulfill
* pf_scheduler_timeout_ftn_t
*
* @param net InOut: The p-net stack instance
* @param arg In: Not used.
* @param current_time In: Not used.
*/
static void pf_dcp_clear_sam (pnet_t * net, void * arg, uint32_t current_time)
{
LOG_DEBUG (
PF_DCP_LOG,
"DCP(%d): SAM timeout. Clear stored remote MAC address.\n",
__LINE__);
net->dcp_sam = mac_nil;
pf_scheduler_reset_handle (&net->dcp_sam_timeout);
}
|
package org.opensha.refFaultParamDb.tests.dao.db;
import static org.junit.Assert.*;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.opensha.refFaultParamDb.dao.db.DB_AccessAPI;
import org.opensha.refFaultParamDb.dao.db.SiteRepresentationDB_DAO;
import org.opensha.refFaultParamDb.tests.AllTests;
import org.opensha.refFaultParamDb.vo.SiteRepresentation;
/**
* <p>Title: TestSiteRepresentationDB_DAO.java </p>
 * <p>Description: Tests the site representation DB DAO to check that database
 * operations are performed correctly.</p>
* <p>Copyright: Copyright (c) 2002</p>
* <p>Company: </p>
* @author not attributable
* @version 1.0
*/
public class TestSiteRepresentationDB_DAO {
private DB_AccessAPI dbConnection;
private SiteRepresentationDB_DAO siteRepresentationDB_DAO = null;
public TestSiteRepresentationDB_DAO() {
dbConnection = AllTests.dbConnection;
}
@Before
public void setUp() throws Exception {
siteRepresentationDB_DAO = new SiteRepresentationDB_DAO(dbConnection);
}
@After
public void tearDown() throws Exception {
siteRepresentationDB_DAO = null;
}
@Test
public void testSiteRepresentationDB_DAO() {
siteRepresentationDB_DAO = new SiteRepresentationDB_DAO(dbConnection);
assertNotNull("siteRepresentationDB_DAO object should not be null",siteRepresentationDB_DAO);
}
/**
* Get all the representations with which a site can be associated
*/
@Test
public void testGetAllSiteRepresentations() {
int numSiteRepresentations = siteRepresentationDB_DAO.getAllSiteRepresentations().size();
assertEquals("There are 4 poosible representations for a site in the database", 4, numSiteRepresentations);
}
/**
* Get a representation based on site representation Id
*/
@Test
public void testGetSiteRepresentationBasedOnId() {
SiteRepresentation siteRepresentation = this.siteRepresentationDB_DAO.getSiteRepresentation(6);
assertNull("There is no site representation with id =6",siteRepresentation);
siteRepresentation = siteRepresentationDB_DAO.getSiteRepresentation(1);
assertEquals("Entire Fault", siteRepresentation.getSiteRepresentationName());
siteRepresentation = siteRepresentationDB_DAO.getSiteRepresentation(2);
assertEquals("Most Significant Strand", siteRepresentation.getSiteRepresentationName());
siteRepresentation = siteRepresentationDB_DAO.getSiteRepresentation(3);
assertEquals("One of Several Strands", siteRepresentation.getSiteRepresentationName());
siteRepresentation = siteRepresentationDB_DAO.getSiteRepresentation(4);
assertEquals("Unknown", siteRepresentation.getSiteRepresentationName());
}
/**
* Get a representation based on site representation name
*
 */
@Test
public void testGetSiteRepresentationBasedOnName() {
SiteRepresentation siteRepresentation = this.siteRepresentationDB_DAO.getSiteRepresentation("abc");
assertNull("There is no site representation with name abc",siteRepresentation);
siteRepresentation = siteRepresentationDB_DAO.getSiteRepresentation("Entire Fault");
assertEquals(1, siteRepresentation.getSiteRepresentationId());
siteRepresentation = siteRepresentationDB_DAO.getSiteRepresentation("Most Significant Strand");
assertEquals(2, siteRepresentation.getSiteRepresentationId());
siteRepresentation = siteRepresentationDB_DAO.getSiteRepresentation("One of Several Strands");
assertEquals(3, siteRepresentation.getSiteRepresentationId());
siteRepresentation = siteRepresentationDB_DAO.getSiteRepresentation("Unknown");
assertEquals(4, siteRepresentation.getSiteRepresentationId());
}
}
|
# Reads n and k, then an array a. Greedily raises elements so that every
# adjacent pair sums to at least k, then prints the total amount added
# followed by the adjusted array.
inp = lambda: map(int, input().rstrip().split())

n, k = inp()
a = list(inp())

# b[0] stays a[0]; each later element is raised just enough to satisfy
# b[i-1] + b[i] >= k. Raising only the current element is safe because all
# earlier elements are already fixed at their minimum feasible values.
b = [a[0]]
for i in range(1, n):
    b.append(max(a[i], k - b[-1]))

print(sum(b) - sum(a))  # total number of unit increments
print(*b)               # the resulting array
|
Applying Cluster Analysis to Define a Typology of Chronic Fatigue Syndrome in a Medically-Evaluated, Random Community Sample This study involved a randomly selected, medically-evaluated, community-based sample of 166 individuals with chronic fatigue. Participants diagnosed with chronic fatigue syndrome and medically-explained chronic fatigue reported significantly more severe fatigue following exertion than the idiopathic chronic fatigue group, and participants with medically-explained chronic fatigue also reported significantly more severe fatigue following exertion than the psychiatrically-explained chronic fatigue group. A cluster analysis was performed to define a typology of chronic fatigue symptomatology for participants diagnosed with chronic fatigue syndrome. Three clusters emerged. Cluster 1 contained only one participant with chronic fatigue syndrome and was characterized by relatively low post-exertional fatigue. Cluster 2 contained a small proportion of individuals with chronic fatigue syndrome and was characterized by most severe post-exertional fatigue and most improvement in fatigue following rest. Cluster 3 contained the highest proportion of individuals with chronic fatigue syndrome, and was characterized by high post-exertional fatigue and fatigue not alleviated by rest.
|
/**
* Copyright 2015 Confluent Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except
* in compliance with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software distributed under the License
* is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
* or implied. See the License for the specific language governing permissions and limitations under
* the License.
**/
package io.confluent.connect.hdfs.partitioner;
import org.apache.hadoop.hive.metastore.api.FieldSchema;
import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory;
import org.apache.kafka.connect.sink.SinkRecord;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
public class DefaultPartitioner implements Partitioner {
private static final String partitionField = "partition";
private final List<FieldSchema> partitionFields = new ArrayList<>();
@Override
public void configure(Map<String, Object> config) {
partitionFields.add(new FieldSchema(partitionField, TypeInfoFactory.stringTypeInfo.toString(), ""));
}
@Override
public String encodePartition(SinkRecord sinkRecord) {
return partitionField + "-" + String.valueOf(sinkRecord.kafkaPartition());
}
@Override
public String generatePartitionedPath(String topic, String encodedPartition) {
return topic + "/" + encodedPartition;
}
@Override
public List<FieldSchema> partitionFields() {
return partitionFields;
}
}
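
// --------------------------------------------------------------------------
// Hypothetical usage sketch, not part of the Confluent source: it only shows
// what the two methods above produce. The topic name "logs", Kafka partition
// 3, and the null key/value arguments are assumptions chosen for
// illustration; partitioning here ignores the record's key and value.
class DefaultPartitionerSketch {
  public static void main(String[] args) {
    DefaultPartitioner partitioner = new DefaultPartitioner();
    partitioner.configure(new java.util.HashMap<String, Object>());
    SinkRecord record = new SinkRecord("logs", 3, null, null, null, null, 0L);
    String encoded = partitioner.encodePartition(record);
    System.out.println(encoded);                                              // partition-3
    System.out.println(partitioner.generatePartitionedPath("logs", encoded)); // logs/partition-3
  }
}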
|
/** Show Properties Dialog. */
class ShowCustomizationConsoleAction extends AbstractAction {
ShowCustomizationConsoleAction() {
super("Properties", createImageIcon("images/prop.png"));
setEnabled(true);
}
public void actionPerformed(ActionEvent e) {
actLsnShowPropDialogs.actionPerformed(e);
// firePropertyChange(Common.REQUEST_SHOW_CUSTOMIZATION_CONSOLE, 0,
// 1);
}
}
|
package ch.spacebase.mcprotocol.standard.packet;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import ch.spacebase.mcprotocol.net.Client;
import ch.spacebase.mcprotocol.net.ServerConnection;
import ch.spacebase.mcprotocol.packet.Packet;
public class PacketSpawnExpOrb extends Packet {
public int entityId;
public int x;
public int y;
public int z;
public short count;
public PacketSpawnExpOrb() {
}
public PacketSpawnExpOrb(int entityId, int x, int y, int z, short count) {
this.entityId = entityId;
this.x = x;
this.y = y;
this.z = z;
this.count = count;
}
@Override
public void read(DataInputStream in) throws IOException {
this.entityId = in.readInt();
this.x = in.readInt();
this.y = in.readInt();
this.z = in.readInt();
this.count = in.readShort();
}
@Override
public void write(DataOutputStream out) throws IOException {
out.writeInt(this.entityId);
out.writeInt(this.x);
out.writeInt(this.y);
out.writeInt(this.z);
out.writeShort(this.count);
}
@Override
public void handleClient(Client conn) {
}
@Override
public void handleServer(ServerConnection conn) {
}
@Override
public int getId() {
return 26;
}
}
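
// --------------------------------------------------------------------------
// Hypothetical round-trip sketch, not part of the original repository: it
// checks that read() and write() above are symmetric. The entity id,
// coordinates, and orb count below are assumptions chosen for illustration.
class PacketSpawnExpOrbSketch {
  public static void main(String[] args) throws IOException {
    PacketSpawnExpOrb sent = new PacketSpawnExpOrb(42, 100, 64, -200, (short) 5);
    // Serialize the packet body exactly as write() defines it.
    java.io.ByteArrayOutputStream buffer = new java.io.ByteArrayOutputStream();
    sent.write(new DataOutputStream(buffer));
    // Deserialize into a fresh packet; all fields should survive unchanged.
    PacketSpawnExpOrb received = new PacketSpawnExpOrb();
    received.read(new DataInputStream(new java.io.ByteArrayInputStream(buffer.toByteArray())));
    System.out.println(received.entityId + " " + received.count); // prints "42 5"
  }
}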
|
LOS ANGELES — The International Olympic Committee voted unanimously July 11 to award the 2024 and 2028 Summer Olympics simultaneously in September, meaning Los Angeles is certain to be awarded one set of games, likely in 2028, while Paris gets the other.
Los Angeles and Paris are the only two cities vying for the 2024 Games and both made formal presentations to the IOC in Switzerland. Boston, Rome, Budapest and Hamburg had earlier expressed interest in the games, but later pulled out of the competition.
Mayor Eric Garcetti headed the Los Angeles delegation while the one from Paris was led by French President Emmanuel Macron and included Paris Mayor Anne Hidalgo. Garcetti and Hidalgo both thanked the IOC after its vote, according to reports from Lausanne.
Now that the IOC has approved the simultaneous awarding of both games, the move is expected to all but secure the 2028 Games for Los Angeles: the city’s delegation has been receptive to the idea, while the Paris organizers have insisted on 2024 because their planned Olympic Village will not be available in 2028. Various media reports, citing unnamed sources, have said IOC officials favor Paris for 2024.
Despite the flexible statements from LA 2024 officials on 2028 over the last few months, Garcetti insisted the decision on which city will host in 2024 is not a done deal.
“I am not being coy, we don’t have it worked out sitting here, who goes when,” Garcetti said. “I just have the confidence that it will. Both cities have to assess now that the rules have changed and I’ve always said I can’t take a hypothetical.”
LA 2024 also praised the IOC’s decision to award both games.
“This is a proud day for Los Angeles and for the Olympic and Paralympic Movements in America,” said a statement from LA 2024. “We’re thrilled with the IOC’s decision today, which is a major step forward in making LA’s Olympic dream a reality.”
The L.A. delegation in Switzerland includes Garcetti, LA 2024 Chairman Casey Wasserman, CEO Gene Sykes and Vice Chairs Janet Evans and Candace Cable, among others. The group gave the IOC a 45-minute presentation July 11 on the city’s bid, including a 30-minute question-and-answer session.
By utilizing existing venues like Los Angeles Memorial Coliseum and ones already planned by private investors, LA 2024 presented a balanced $5.3 billion budget for the games.
IOC President Thomas Bach echoed a recent report by the IOC’s Evaluation Commission, which concluded that L.A. and Paris’ bids are in line with reforms the committee has been striving for over the last several years.
The Olympic Agenda 2020, which was approved in 2014, is aimed at fighting corruption while improving transparency and good governance.
“We are very impressed with how both cities have embraced the reforms of the Olympic Agenda 2020,” Bach said on the eve of the L.A. delegation’s presentation.
Garcetti sought to reinforce the point as he closed the city’s presentation.
“First, we’re a young city, full of fresh, new ideas,” the mayor said. “Second, we’re not focused on the last 100 years, we are focused on the next 100. The question every candidate city must answer is ‘What do we leave behind after the games are over — not only for our city, but for the movement?’”
Also in the L.A. delegation was four-time Olympian Allyson Felix, a Los Angeles native who stressed the United States is a fine choice as an Olympic Games venue despite a turbulent history.
The selection of the host city for 2024 and 2028 will take place in Lima, Peru, in September. If awarded one of the games, it will be the third time Los Angeles will have hosted the Olympics after previously hosting in 1932 and 1984.
President Donald Trump, meanwhile, tweeted that he’s “working hard” to bring the Summer Olympics to Los Angeles, but exactly what he’s doing wasn’t immediately clear.
Trump met in the Oval Office last month with Bach and “pledged his full support” for the Los Angeles bid, according to the White House.
|
package gomodule
import (
"context"
"io"
"time"
"golang.org/x/mod/module"
)
// Info represents metadata of a particular module version. See Service.
type Info struct {
Version string // version string
Time time.Time // commit time
}
// Service is a strongly-typed interface for the Go module proxy protocol https://golang.org/cmd/go/#hdr-Module_proxy_protocol.
type Service interface {
	// Returns a non-nil error e such that ErrorIsCode(e, NotFound) is true if the specified module version does not exist.
	// Implementors can create an error e such that ErrorIsCode(e, NotFound) is true by calling NewErrorf(NotFound, "...", ...).
	Info(ctx context.Context, moduleVersion *module.Version) (*Info, error)
	// Returns a non-nil error e such that ErrorIsCode(e, NotFound) is true if the specified module version does not exist.
	// Implementors can create an error e such that ErrorIsCode(e, NotFound) is true by calling NewErrorf(NotFound, "...", ...).
	Latest(ctx context.Context, modulePath string) (*Info, error)
	// List returns a ReadCloser whose byte stream is the concatenation of version+"\n" for each version of the specified module.
	List(ctx context.Context, modulePath string) (io.ReadCloser, error)
	// Zip returns a ReadCloser whose byte stream is a zip archive containing all relevant files of the specified module version.
	// Returns a non-nil error e such that ErrorIsCode(e, NotFound) is true if the specified module version does not exist.
	// Implementors can create an error e such that ErrorIsCode(e, NotFound) is true by calling NewErrorf(NotFound, "...", ...).
	Zip(ctx context.Context, moduleVersion *module.Version) (io.ReadCloser, error)
	// GoMod returns a ReadCloser whose byte stream is the go.mod file (UTF-8 encoded) of the specified module version.
	// Returns a non-nil error e such that ErrorIsCode(e, NotFound) is true if the specified module version does not exist.
	// Implementors can create an error e such that ErrorIsCode(e, NotFound) is true by calling NewErrorf(NotFound, "...", ...).
	GoMod(ctx context.Context, moduleVersion *module.Version) (io.ReadCloser, error)
}
|
Design of a linear magnetic refrigeration structure running with rotating bar-shaped magnets Magnetic refrigeration (MR) is a cooling technology based on the magnetocaloric effect (MCE). It has the potential to replace conventional refrigeration systems. Recent developments in magnetic refrigeration technology have encouraged researchers all over the world to think about new original systems. This paper describes a new magnetic refrigeration structure based on a rotating bar-shaped magnet configuration.
|
A spokeswoman for Mosinee's LogJam Festival said organizers would review use of Confederate flags, but defended the display as historically accurate.
MOSINEE - Mosinee's mayor says the city hasn't received any complaints about a festival drawing criticism for its display of Confederate flags.
Milwaukee folk duo Nickel&Rose, who performed at this year's LogJam Festival, took to Facebook last week to speak out against Confederate flags that flew on the festival's grounds. LogJam aims to celebrate the city's history and, among other attractions, featured a Civil War battle reenactment using cannons developed by a Confederate commander.
The nonprofit event, which is run by a volunteer board, took place at Mosinee's River Park from Aug. 10 through Aug. 12.
On Facebook, Nickel&Rose said Confederate flags represent the "murder and enslavement of Black people" and didn't belong at the festival. The Facebook post has since been removed.
"There's no room for displaying Confederate flags outside of a very specific and well-defined historical context, and a river history celebration in Northern Wisconsin lacked this context," they said.
In response, the organization apologized on Facebook, saying that "Friends of LogJam and its sponsors respect and appreciate everyone's opinion" and do not support those historical beliefs.
"Because we understand how Civil War re-enactment may be offensive to our guests, we have already initiated change in our program for future events," said the post, which also has been taken down. Organizers did not elaborate on what changes they would make.
The city of Mosinee previously served as a sponsor for LogJam, but removed itself as an event organizer after back-and-forth between Mayor Brent Jacobson and the City Council over whether to continue. The city still provides a donation using tourism dollars to assist with marketing. That donation was $5,000 for 2018, according to city administrator Jeff Gates.
Jacobson said the city hasn't received any complaints about the Confederate flags, but would review any that surface. He declined to comment further.
"We really can’t control it because it’s a private event," he said.
Deb Nelles, a spokeswoman for Friends of LogJam, said the flags were part of a display aiming to recreate a Confederate encampment. The group behind the encampment has been part of the festival for roughly nine years, she said, and the flags have been present before.
There was someone sitting at a table to take questions, Nelles said — something she wishes critics of the display had taken advantage of.
"It is nothing new, and it’s always been done as a historic context," she said.
LogJam displayed flags flown by Confederate factions during the Civil War, but Nelles noted that it did not include the Confederate battle flag, which became a symbol for pro-segregationists during the civil rights era, according to a CNN report.
"That’s a racist flag," Nelles said.
Critics, including Nickel&Rose, argued that the display lacked context about the Civil War's roots in slavery and that it ignored moral questions in favor of a display about the mechanics of cannon fire.
However, Nelles defended the inclusion of Confederate symbolism at LogJam.
"When we retell past history, we can’t bend that history," she said. "We can’t whitewash it. We can’t change it to suit the current politics of our time."
The festival's board will discuss the issue to determine how to proceed, Nelles said. But nothing has been decided yet, she said, including whether to allow the flags at future festivals.
"This is going to be a growing thing for everybody," she said.
|