I design my own outfits and buy a few pieces from different random shops in town.
It was a long evening dress made from recycled clothing. It looked hideous on me.
Before I became the fashion guru that I am now, I used to wear the wrong accessories and makeup. I look at those pictures now and wonder.
I eat healthy, work out and detoxify.
|
import re
from typing import NamedTuple, Optional

# State is inferred from the `_replace` calls below: an immutable record of
# the board plus the game outcome.
class State(NamedTuple):
    board: tuple                    # 9 cells, each 'X', 'O' or '.' (empty)
    winner: Optional[str] = None
    draw: bool = False

def find_winner_or_draw(state: State) -> State:
    # Each template marks one winning line (rows, columns, diagonals);
    # '.' is a regex wildcard, so only the three 'P' cells are checked.
    winning_boards = ['PPP......', '...PPP...', '......PPP', 'P..P..P..',
                      '.P..P..P.', '..P..P..P', 'P...P...P', '..P.P.P..']
    board = ''.join(state.board)
    for player in 'XO':
        for wb in winning_boards:
            player_wb = wb.replace('P', player)
            if re.match(player_wb, board):
                return state._replace(winner=player)
    if '.' not in board:
        return state._replace(draw=True)
    return state
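The regex-template trick can be exercised in isolation; a short self-contained sketch (the `winner_of` helper is hypothetical, added here only for illustration):

```python
import re

# One template per winning line; '.' is a regex wildcard matching any cell.
WINNING = ['PPP......', '...PPP...', '......PPP', 'P..P..P..',
           '.P..P..P.', '..P..P..P', 'P...P...P', '..P.P.P..']

def winner_of(board: str) -> str:
    """Return 'X' or 'O' if that player has a completed line, else ''."""
    for player in 'XO':
        for wb in WINNING:
            if re.match(wb.replace('P', player), board):
                return player
    return ''

print(winner_of('XXX.O.O..'))   # 'X'  (top row)
print(winner_of('X.O.X.O.X'))   # 'X'  (main diagonal)
print(winner_of('XOXOXOOXO'))   # ''   (full board, no winning line)
```

Because every template is exactly nine characters long, `re.match` anchored at the start effectively checks the whole board.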
|
Design and testing of a high-speed module-based memory system

An LCP flex-based interconnect and a mating area-array connector are used to increase the bandwidth of module-based memory systems. Simulations show that data rates in the range of 6.4 Gbps to 16.0 Gbps are possible depending on the memory system configuration. Tested prototype memory systems confirmed simulation-predicted data rates of 16 Gbps and 12.8 Gbps for flex interconnect lengths of 6" and 12", respectively.
|
// done reports whether the loop is done. If it does not converge
// after the maximum number of iterations, it returns an error.
func (l *loop) done(z *Decimal) (bool, error) {
l.c.Sub(l.delta, l.prevZ, z)
sign := l.delta.Sign()
if sign == 0 {
return true, nil
}
if sign < 0 {
l.delta.Neg(l.delta)
}
// Check the delta against the precision bound
//   eps = 10^(-precision + numDigits(z) + exponent(z))
// Examples:
//   p = 4
//   z = 12345.678 = 12345678 * 10^-3
//   eps = 10.000 = 10^(-4+8-3)
//   p = 3
//   z = 0.001234 = 1234 * 10^-6
//   eps = 0.00001 = 10^(-3+4-6)
eps := Decimal{Coeff: *bigOne, Exponent: -l.precision + int32(z.NumDigits()) + z.Exponent}
if l.delta.Cmp(&eps) <= 0 {
return true, nil
}
l.i++
if l.i == l.maxIterations {
return false, errors.Errorf(
"%s %s: did not converge after %d iterations; prev,last result %s,%s delta %s precision: %d",
l.name, l.arg.String(), l.maxIterations, z, l.prevZ, l.delta, l.precision,
)
}
l.prevZ.Set(z)
return false, nil
}
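The eps examples in the comments can be checked numerically; a quick Python sketch of the same bound (the `eps_exponent` helper is hypothetical, mirroring eps = 10^(-precision + numDigits(z) + exponent(z))):

```python
def eps_exponent(precision: int, num_digits: int, exponent: int) -> int:
    # eps = 10^(-precision + numDigits(z) + exponent(z))
    return -precision + num_digits + exponent

# z = 12345.678 = 12345678 * 10^-3 (8 digits), precision p = 4
print(10.0 ** eps_exponent(4, 8, -3))   # 10.0
# z = 0.001234 = 1234 * 10^-6 (4 digits), precision p = 3
print(10.0 ** eps_exponent(3, 4, -6))   # 1e-05
```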
|
1. Field of the Invention
The present invention relates to a ceramic catalytic converter disposed in an exhaust passage of an engine for a vehicle.
2. Related Art
In general, a ceramic catalytic converter is provided in an exhaust passage of an engine for a vehicle for purifying exhaust gas. Such a ceramic catalytic converter typically has a ceramic catalytic support carrying a catalyst layer on a surface thereof, a metallic cylindrical casing for holding the catalytic support therein, and a holding member disposed on a circumferential surface of the catalytic support in a gap between the catalytic support and the cylindrical casing.
The ceramic catalytic support is conventionally made of a cordierite ceramic material (2MgO·2Al2O3·5SiO2), which has a low thermal expansion coefficient and has been utilized for the ceramic catalytic support for years. The catalyst layer formed on the surface of the catalytic support includes noble metals such as platinum (Pt), rhodium (Rh), palladium (Pd) and the like for converting undesirable ingredients such as carbon monoxide (CO), hydrocarbons (HC), oxides of nitrogen (NOx) and the like included in the exhaust gas from the engine into harmless gases and water.
The holding member is utilized to prevent damage to the ceramic catalytic support, which does not have sufficient mechanical strength. The holding member generally has an elastic body made of various kinds of ceramic fibers. For example, as shown in FIG. 20A, a mat-like holding member 9 disclosed in JP-A-7-102961 is made of ceramic fibers which have a length of 30 mm or more and are oriented generally in one direction. The holding member 9 is disposed on a ceramic catalytic support 12 (shown in FIG. 20B) so that the orientation of the ceramic fibers is generally parallel to an axial direction of the ceramic catalytic support 12.
However, the ceramic fibers forming the holding member 9 are oriented in one direction and hardly intertwined with each other. Therefore, when a tensile force is applied to the holding member 9 perpendicular to the orientation of the ceramic fibers, the fibers are liable to separate from each other. A shearing force applied to the holding member 9 parallel to the fiber orientation can also easily cause such separation. The separation of the ceramic fibers loosens the holding member 9, resulting in a decrease of the holding force for holding the ceramic catalytic support 12 in the cylindrical casing. As a result, damage to the catalytic support 12 may occur.
The above-mentioned problem is more likely to arise in a high temperature atmosphere. In the high temperature atmosphere, the thermal expansion coefficients of the ceramic catalytic support 12 and the cylindrical casing are different from each other, so that the gap between the ceramic catalytic support 12 and the cylindrical casing increases. Accordingly, the ceramic catalytic support 12 is liable to rattle within the cylindrical casing. In addition, when manufacturing the holding member 9, it is technically difficult to arrange the ceramic fibers during lamination so that they are oriented in one direction.
|
State Repression, Hostel Capitalism, and Black Resistance in Italy

Resistance by the newly arrived black working class in Italy takes on forms that often leave little or no trace; the work of the historian must be to listen carefully to the silences in order to recover these acts. This contribution focuses on resistance within asylum seeker hostels. Beginning with an examination of the resistance and repression of Bobb Alagie, it explains the reasons for the increase in these acts of resistance following the implementation of the hotspot approach, as well as the important role that the resisting subject produced by this situation has played within the formation of the new Italian right. Finally, questions are raised about the potential for future progressive alliances in the new political context.
|
/**
* Autogenerated by Thrift for src/terse_write.thrift
*
* DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
* @generated @nocommit
*/
#pragma once
#include <thrift/lib/cpp2/visitation/visit_by_thrift_field_metadata.h>
#include "thrift/compiler/test/fixtures/terse_write/gen-cpp2/terse_write_metadata.h"
namespace apache {
namespace thrift {
namespace detail {
template <>
struct VisitByFieldId<::apache::thrift::test::MyStruct> {
template <typename F, typename T>
void operator()(FOLLY_MAYBE_UNUSED F&& f, int32_t fieldId, FOLLY_MAYBE_UNUSED T&& t) const {
switch (fieldId) {
default:
throwInvalidThriftId(fieldId, "::apache::thrift::test::MyStruct");
}
}
};
template <>
struct VisitByFieldId<::apache::thrift::test::StructLevelTerseStruct> {
template <typename F, typename T>
void operator()(FOLLY_MAYBE_UNUSED F&& f, int32_t fieldId, FOLLY_MAYBE_UNUSED T&& t) const {
switch (fieldId) {
case 1:
return f(0, static_cast<T&&>(t).bool_field_ref());
case 2:
return f(1, static_cast<T&&>(t).byte_field_ref());
case 3:
return f(2, static_cast<T&&>(t).short_field_ref());
case 4:
return f(3, static_cast<T&&>(t).int_field_ref());
case 5:
return f(4, static_cast<T&&>(t).long_field_ref());
case 6:
return f(5, static_cast<T&&>(t).float_field_ref());
case 7:
return f(6, static_cast<T&&>(t).double_field_ref());
case 8:
return f(7, static_cast<T&&>(t).string_field_ref());
case 9:
return f(8, static_cast<T&&>(t).binary_field_ref());
case 10:
return f(9, static_cast<T&&>(t).enum_field_ref());
case 11:
return f(10, static_cast<T&&>(t).list_field_ref());
case 12:
return f(11, static_cast<T&&>(t).set_field_ref());
case 13:
return f(12, static_cast<T&&>(t).map_field_ref());
case 14:
return f(13, static_cast<T&&>(t).struct_field_ref());
default:
throwInvalidThriftId(fieldId, "::apache::thrift::test::StructLevelTerseStruct");
}
}
};
template <>
struct VisitByFieldId<::apache::thrift::test::FieldLevelTerseStruct> {
template <typename F, typename T>
void operator()(FOLLY_MAYBE_UNUSED F&& f, int32_t fieldId, FOLLY_MAYBE_UNUSED T&& t) const {
switch (fieldId) {
case 1:
return f(0, static_cast<T&&>(t).terse_bool_field_ref());
case 2:
return f(1, static_cast<T&&>(t).terse_byte_field_ref());
case 3:
return f(2, static_cast<T&&>(t).terse_short_field_ref());
case 4:
return f(3, static_cast<T&&>(t).terse_int_field_ref());
case 5:
return f(4, static_cast<T&&>(t).terse_long_field_ref());
case 6:
return f(5, static_cast<T&&>(t).terse_float_field_ref());
case 7:
return f(6, static_cast<T&&>(t).terse_double_field_ref());
case 8:
return f(7, static_cast<T&&>(t).terse_string_field_ref());
case 9:
return f(8, static_cast<T&&>(t).terse_binary_field_ref());
case 10:
return f(9, static_cast<T&&>(t).terse_enum_field_ref());
case 11:
return f(10, static_cast<T&&>(t).terse_list_field_ref());
case 12:
return f(11, static_cast<T&&>(t).terse_set_field_ref());
case 13:
return f(12, static_cast<T&&>(t).terse_map_field_ref());
case 14:
return f(13, static_cast<T&&>(t).terse_struct_field_ref());
case 15:
return f(14, static_cast<T&&>(t).bool_field_ref());
case 16:
return f(15, static_cast<T&&>(t).byte_field_ref());
case 17:
return f(16, static_cast<T&&>(t).short_field_ref());
case 18:
return f(17, static_cast<T&&>(t).int_field_ref());
case 19:
return f(18, static_cast<T&&>(t).long_field_ref());
case 20:
return f(19, static_cast<T&&>(t).float_field_ref());
case 21:
return f(20, static_cast<T&&>(t).double_field_ref());
case 22:
return f(21, static_cast<T&&>(t).string_field_ref());
case 23:
return f(22, static_cast<T&&>(t).binary_field_ref());
case 24:
return f(23, static_cast<T&&>(t).enum_field_ref());
case 25:
return f(24, static_cast<T&&>(t).list_field_ref());
case 26:
return f(25, static_cast<T&&>(t).set_field_ref());
case 27:
return f(26, static_cast<T&&>(t).map_field_ref());
case 28:
return f(27, static_cast<T&&>(t).struct_field_ref());
default:
throwInvalidThriftId(fieldId, "::apache::thrift::test::FieldLevelTerseStruct");
}
}
};
} // namespace detail
} // namespace thrift
} // namespace apache
|
It's a known fact that US President Donald Trump isn't an expert when it comes to handshakes. Ever since Donald Trump took over as POTUS in January, his handshakes have been under much scrutiny. Several world leaders like Japanese Prime Minister Shinzo Abe have been subjected to Mr Trump's 'yank and pull' style handshake, making for rather uncomfortable pictures. But his latest, a group handshake at the Association of Southeast Asian Nations (ASEAN) summit in Manila, Philippines, is being labelled his most awkward yet.
In a departure from his usual aggressive handshake, Donald Trump had some trouble figuring out which hand to grasp in a choreographed cross-body linked handshake of the assembled heads of state at the ASEAN summit on Monday. While leaders crossed their arms to join hands with those next to them, Donald Trump awkwardly grabbed the hand of Vietnamese President Tran Dai Quang with both hands. With the US President briefly breaking the symbolic handshake, meant to denote unity, Philippine President Rodrigo Duterte stood around awkwardly with a spare hand.
Mr Trump eventually figured out his mistake but not before the unflattering images made it to Twitter.
Guess he never played Twister?
Donald Trump is in the Philippines for the last leg of his Asia tour which included stops in Japan, South Korea, China and Vietnam.
The US President's signature handshake style makes headlines during every high profile meeting. While Mr Trump tries his best to 'yank and pull', some bravehearts have defied Mr Trump's 'pull' and found a way around it, like Canadian Prime Minister Justin Trudeau who placed a hand on Mr Trump's shoulder to avoid getting pulled towards him.
|
/**
* Rank the attributes based on heuristics:
 * 1) Higher percent unique is better
 * 2) Lower percent missing is better
* @author fpanahi
*/
private static class AttrCombComparator implements Comparator<AttrComb> {
    @Override
    public int compare(AttrComb a, AttrComb b) {
double uniqueDiff = tableStats.getAttrStats(a).getPercentUnique() -
tableStats.getAttrStats(b).getPercentUnique();
if (uniqueDiff > 0) {
return -1;
} else if (uniqueDiff < 0) {
return 1;
} else {
double missingDiff = tableStats.getAttrStats(a).getPercentMissing() -
tableStats.getAttrStats(b).getPercentMissing();
if (missingDiff > 0) {
return 1;
} else if (missingDiff < 0) {
return -1;
}
}
return 0;
}
}
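The same two-level heuristic (percent unique descending, then percent missing ascending) can be expressed as a sort key; a hypothetical, self-contained Python sketch with invented attribute names and stats:

```python
def rank_attributes(stats):
    # stats: {attr: (percent_unique, percent_missing)}
    # Sort by percent_unique descending, then percent_missing ascending,
    # mirroring the comparator's two-level heuristic.
    return sorted(stats, key=lambda a: (-stats[a][0], stats[a][1]))

stats = {
    'email':  (99.0, 1.0),
    'phone':  (99.0, 5.0),   # same uniqueness, more missing -> ranked lower
    'gender': (0.1,  0.0),   # low uniqueness -> ranked last
}
print(rank_attributes(stats))   # ['email', 'phone', 'gender']
```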
|
public class Assignment2 {
}
|
/**
##########################################################################
# If not stated otherwise in this file or this component's LICENSE
# file the following copyright and licenses apply:
#
# Copyright 2019 RDK Management
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##########################################################################
**/
#ifndef _RMSPROTOCOLAPIVARIANT_H
#define _RMSPROTOCOLAPIVARIANT_H
//forward declarations
struct variant_t;
typedef variant_t *(*apiVariantCreate_f)();
typedef variant_t *(*apiVariantCopy_f)(variant_t *pSource);
typedef variant_t *(*apiVariantCreateRtmpRequest_f)(const char *pFunctionName);
typedef bool (*apiVariantAddRtmpRequestParameter_f)(variant_t *pRequest, variant_t **ppParameter, bool release);
typedef void (*apiVariantRelease_f)(variant_t *pVariant);
typedef void (*apiVariantReset_f)(variant_t *pVariant, int depth, ...);
typedef char* (*apiVariantGetXml_f)(variant_t *pVariant, char *pDest, uint32_t destLength, int depth, ...);
typedef bool(*apiVariantGetBool_f)(variant_t *pVariant, int depth, ...);
typedef int8_t(*apiVariantGetI8_f)(variant_t *pVariant, int depth, ...);
typedef int16_t(*apiVariantGetI16_f)(variant_t *pVariant, int depth, ...);
typedef int32_t(*apiVariantGetI32_f)(variant_t *pVariant, int depth, ...);
typedef int64_t(*apiVariantGetI64_f)(variant_t *pVariant, int depth, ...);
typedef uint8_t(*apiVariantGetUI8_f)(variant_t *pVariant, int depth, ...);
typedef uint16_t(*apiVariantGetUI16_f)(variant_t *pVariant, int depth, ...);
typedef uint32_t(*apiVariantGetUI32_f)(variant_t *pVariant, int depth, ...);
typedef uint64_t(*apiVariantGetUI64_f)(variant_t *pVariant, int depth, ...);
typedef double (*apiVariantGetDouble_f)(variant_t *pVariant, int depth, ...);
typedef char* (*apiVariantGetString_f)(variant_t *pVariant, char *pDest, uint32_t destLength, int depth, ...);
typedef void (*apiVariantSetBool_f)(variant_t *pVariant, bool value, int depth, ...);
typedef void (*apiVariantSetI8_f)(variant_t *pVariant, int8_t value, int depth, ...);
typedef void (*apiVariantSetI16_f)(variant_t *pVariant, int16_t value, int depth, ...);
typedef void (*apiVariantSetI32_f)(variant_t *pVariant, int32_t value, int depth, ...);
typedef void (*apiVariantSetI64_f)(variant_t *pVariant, int64_t value, int depth, ...);
typedef void (*apiVariantSetUI8_f)(variant_t *pVariant, uint8_t value, int depth, ...);
typedef void (*apiVariantSetUI16_f)(variant_t *pVariant, uint16_t value, int depth, ...);
typedef void (*apiVariantSetUI32_f)(variant_t *pVariant, uint32_t value, int depth, ...);
typedef void (*apiVariantSetUI64_f)(variant_t *pVariant, uint64_t value, int depth, ...);
typedef void (*apiVariantSetDouble_f)(variant_t *pVariant, double value, int depth, ...);
typedef void (*apiVariantSetString_f)(variant_t *pVariant, const char *pValue, int depth, ...);
typedef void (*apiVariantPushBool_f)(variant_t *pVariant, bool value, int depth, ...);
typedef void (*apiVariantPushI8_f)(variant_t *pVariant, int8_t value, int depth, ...);
typedef void (*apiVariantPushI16_f)(variant_t *pVariant, int16_t value, int depth, ...);
typedef void (*apiVariantPushI32_f)(variant_t *pVariant, int32_t value, int depth, ...);
typedef void (*apiVariantPushI64_f)(variant_t *pVariant, int64_t value, int depth, ...);
typedef void (*apiVariantPushUI8_f)(variant_t *pVariant, uint8_t value, int depth, ...);
typedef void (*apiVariantPushUI16_f)(variant_t *pVariant, uint16_t value, int depth, ...);
typedef void (*apiVariantPushUI32_f)(variant_t *pVariant, uint32_t value, int depth, ...);
typedef void (*apiVariantPushUI64_f)(variant_t *pVariant, uint64_t value, int depth, ...);
typedef void (*apiVariantPushDouble_f)(variant_t *pVariant, double value, int depth, ...);
typedef void (*apiVariantPushString_f)(variant_t *pVariant, const char *pValue, int depth, ...);
typedef struct apiVariant_t {
apiVariantCreate_f create;
apiVariantCopy_f copy;
apiVariantCreateRtmpRequest_f createRtmpRequest;
apiVariantAddRtmpRequestParameter_f addRtmpRequestParameter;
apiVariantRelease_f release;
apiVariantReset_f reset;
apiVariantGetXml_f getXml;
apiVariantGetBool_f getBool;
apiVariantGetI8_f getI8;
apiVariantGetI16_f getI16;
apiVariantGetI32_f getI32;
apiVariantGetI64_f getI64;
apiVariantGetUI8_f getUI8;
apiVariantGetUI16_f getUI16;
apiVariantGetUI32_f getUI32;
apiVariantGetUI64_f getUI64;
apiVariantGetDouble_f getDouble;
apiVariantGetString_f getString;
apiVariantSetBool_f setBool;
apiVariantSetI8_f setI8;
apiVariantSetI16_f setI16;
apiVariantSetI32_f setI32;
apiVariantSetI64_f setI64;
apiVariantSetUI8_f setUI8;
apiVariantSetUI16_f setUI16;
apiVariantSetUI32_f setUI32;
apiVariantSetUI64_f setUI64;
apiVariantSetDouble_f setDouble;
apiVariantSetString_f setString;
apiVariantPushBool_f pushBool;
apiVariantPushI8_f pushI8;
apiVariantPushI16_f pushI16;
apiVariantPushI32_f pushI32;
apiVariantPushI64_f pushI64;
apiVariantPushUI8_f pushUI8;
apiVariantPushUI16_f pushUI16;
apiVariantPushUI32_f pushUI32;
apiVariantPushUI64_f pushUI64;
apiVariantPushDouble_f pushDouble;
apiVariantPushString_f pushString;
} apiVariant_t;
#endif /* _RMSPROTOCOLAPIVARIANT_H */
|
/**
* Atomic load of 32-bit atomic variable
*
* @param atom Pointer to a 32-bit atomic variable
* @param mmodel Memory ordering associated with the load operation
*
* @return Value of the variable
*/
static inline uint32_t _odp_atomic_u32_load_mm(const odp_atomic_u32_t *atom,
_odp_memmodel_t mmodel)
{
return __atomic_load_n(&atom->v, mmodel);
}
|
// XTPChartTextDeviceCommand.h
//
// This file is a part of the XTREME TOOLKIT PRO MFC class library.
// (c)1998-2011 Codejock Software, All Rights Reserved.
//
// THIS SOURCE FILE IS THE PROPERTY OF CODEJOCK SOFTWARE AND IS NOT TO BE
// RE-DISTRIBUTED BY ANY MEANS WHATSOEVER WITHOUT THE EXPRESSED WRITTEN
// CONSENT OF CODEJOCK SOFTWARE.
//
// THIS SOURCE CODE CAN ONLY BE USED UNDER THE TERMS AND CONDITIONS OUTLINED
// IN THE XTREME TOOLKIT PRO LICENSE AGREEMENT. CODEJOCK SOFTWARE GRANTS TO
// YOU (ONE SOFTWARE DEVELOPER) THE LIMITED RIGHT TO USE THIS SOFTWARE ON A
// SINGLE COMPUTER.
//
// CONTACT INFORMATION:
// <EMAIL>
// http://www.codejock.com
//
/////////////////////////////////////////////////////////////////////////////
//{{AFX_CODEJOCK_PRIVATE
#if !defined(__XTPCHARTTEXTDEVICECOMMAND_H__)
#define __XTPCHARTTEXTDEVICECOMMAND_H__
//}}AFX_CODEJOCK_PRIVATE
#if _MSC_VER >= 1000
#pragma once
#endif // _MSC_VER >= 1000
#include "XTPChartDeviceCommand.h"
class CXTPMarkupUIElement;
//===========================================================================
// Summary:
// This class represents a text device command, which is a kind of CXTPChartDeviceCommand.
// It specifically handles the rendering of text in a chart.
// Remarks:
//===========================================================================
class _XTP_EXT_CLASS CXTPChartTextDeviceCommand : public CXTPChartDeviceCommand
{
public:
//-----------------------------------------------------------------------
// Summary:
// Constructs a CXTPChartTextDeviceCommand object.
// Parameters:
// strText - The text to be rendered.
// pFont - The font used to render.
// color - The color of the text.
// Remarks:
//-----------------------------------------------------------------------
CXTPChartTextDeviceCommand(const CXTPChartString& strText, CXTPChartFont* pFont, const CXTPChartColor& color);
protected:
//-----------------------------------------------------------------------
// Summary:
// This is a virtual function override of base class CXTPChartDeviceContext,
// acting polymorphically to do the actual drawing of the chart element using
// OpenGL, to which this device command is associated.
// Parameters:
// pDC - The device context of the chart.
// Remarks:
//-----------------------------------------------------------------------
virtual void ExecuteOverride(CXTPChartOpenGLDeviceContext* pDC);
//-----------------------------------------------------------------------
// Summary:
// This is a virtual function override of base class CXTPChartDeviceContext,
// acting polymorphically to do the actual drawing of the chart element, to which
// this device command is associated.
// Parameters:
// pDC - The device context of the chart.
// Remarks:
//-----------------------------------------------------------------------
virtual void ExecuteOverride(CXTPChartDeviceContext* pDC);
//-----------------------------------------------------------------------
// Summary:
// This function does the actual drawing of the chart element to which
// this device command is associated; here it renders the text.
// Parameters:
// pDC - The device context of the chart.
// Remarks:
//-----------------------------------------------------------------------
virtual void ExecuteInternal(CXTPChartDeviceContext* pDC);
protected:
CXTPChartString m_strText; //The string to be rendered.
CXTPChartFont* m_pFont; //The font used to render the text.
CXTPChartColor m_color; //The color of the text.
};
//===========================================================================
// Summary:
// This class represents a text device command, which is a kind of CXTPChartDeviceCommand.
// It handles the smoothing of text in a chart by using antialiasing.
// Remarks:
//===========================================================================
class _XTP_EXT_CLASS CXTPChartTextAntialiasingDeviceCommand : public CXTPChartDeviceCommand
{
public:
//-----------------------------------------------------------------------
// Summary:
// Constructs a CXTPChartTextAntialiasingDeviceCommand object.
// Parameters:
// bAntialias - TRUE if the antialiasing to be enabled, FALSE to not use
// the antialiasing.The default value is TRUE.
// Remarks:
//-----------------------------------------------------------------------
CXTPChartTextAntialiasingDeviceCommand(BOOL bAntialias = TRUE);
protected:
//-----------------------------------------------------------------------
// Summary:
// It is a virtual function override called just before the drawing operation.
// Here it does some initial processing to enable antialiasing for text.
// Parameters:
// pDC - The chart device context.
// Remarks:
//-----------------------------------------------------------------------
void BeforeExecute(CXTPChartDeviceContext* pDC);
//-----------------------------------------------------------------------
// Summary:
// It is a virtual function override called just after the drawing operation.
// Here it does some post-processing related to text antialiasing.
// Parameters:
// pDC - The chart device context.
// Remarks:
//-----------------------------------------------------------------------
void AfterExecute(CXTPChartDeviceContext* pDC);
protected:
BOOL m_bAntiAlias; //TRUE if antialiasing is enabled, FALSE if antialiasing is not used.
int m_nOldTextRenderingHint;
};
//===========================================================================
// Summary:
// This class represents a bounded text device command, which is a kind of
// CXTPChartDeviceCommand. It handles the rendering of bounded text in a chart.
// Remarks:
//===========================================================================
class _XTP_EXT_CLASS CXTPChartBoundedTextDeviceCommand : public CXTPChartTextDeviceCommand
{
public:
//-----------------------------------------------------------------------
// Summary:
// Constructs a CXTPChartBoundedTextDeviceCommand object.
// Parameters:
// strText - The text to be rendered.
// pFont - The font used to render.
// color - The color of the text.
// rectangle - The text bounds.
// Remarks:
//-----------------------------------------------------------------------
CXTPChartBoundedTextDeviceCommand(const CXTPChartString& strText, CXTPChartFont* pFont, const CXTPChartColor& color, const CXTPChartRectF& rectangle);
protected:
//-----------------------------------------------------------------------
// Summary:
// This function does the actual drawing of the chart element to which
// this device command is associated; here it renders the text inside
// a rectangle.
// Parameters:
// pDC - The device context of the chart.
// Remarks:
//-----------------------------------------------------------------------
virtual void ExecuteInternal(CXTPChartDeviceContext* pDC);
virtual CXTPChartElement* HitTest(CPoint point, CXTPChartElement* pParent) const;
protected:
CXTPChartRectF m_rect; //The bounding rectangle of the text.
};
class CXTPChartMarkupElementDeviceCommand : public CXTPChartDeviceCommand
{
public:
//-----------------------------------------------------------------------
// Summary:
// Constructs a CXTPChartMarkupElementDeviceCommand object.
// Parameters:
// strText - The text to be rendered.
// pFont - The font used to render.
// color - The color of the text.
// rectangle - The text bounds.
// Remarks:
//-----------------------------------------------------------------------
CXTPChartMarkupElementDeviceCommand(CXTPMarkupUIElement* pMarkupUIElement, CXTPChartFont* pFont, const CXTPChartColor& color, const CXTPChartRectF& rectangle);
~CXTPChartMarkupElementDeviceCommand();
public:
virtual void ExecuteOverride(CXTPChartDeviceContext* pDC);
static CXTPChartSizeF AFX_CDECL MeasureElement(CXTPChartDeviceContext* pDC, CXTPMarkupUIElement* pMarkupUIElement, CXTPChartFont* pFont);
CXTPChartElement* HitTest(CPoint point, CXTPChartElement* pParent) const;
protected:
CXTPChartRectF m_rect; //The bounding rectangle of the text.
CXTPMarkupUIElement* m_pMarkupUIElement;
CXTPChartColor m_color;
CXTPChartFont* m_pFont;
};
#endif //#if !defined(__XTPCHARTTEXTDEVICECOMMAND_H__)
|
// pubsub/subscribe.go
package pubsub

import "sync"

// NOTE: the package-level state below is not shown in the original snippet;
// minimal declarations are inferred from how Subscribe uses them.
type Message struct{ Data interface{} }

var (
	mu        sync.Mutex
	listeners = make(map[string][]chan Message)
)

// Subscribe registers and returns a new listener channel for the named channel.
func Subscribe(channel string) (listener chan Message) {
	listener = make(chan Message)
	mu.Lock()
	defer mu.Unlock()
	if listeners[channel] == nil {
		listeners[channel] = make([]chan Message, 0)
	}
	listeners[channel] = append(listeners[channel], listener)
	return
}
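The publish side is not shown in the snippet; the same registry pattern can be sketched self-containedly in Python (the `subscribe`/`publish` names are hypothetical, with queues standing in for Go channels):

```python
from collections import defaultdict
from queue import Queue
from threading import Lock

_lock = Lock()
_listeners = defaultdict(list)   # channel name -> list of subscriber queues

def subscribe(channel: str) -> Queue:
    # Mirror of Subscribe: register a fresh queue under the channel name.
    q = Queue()
    with _lock:
        _listeners[channel].append(q)
    return q

def publish(channel: str, message) -> None:
    # Deliver the message to every queue registered for the channel.
    with _lock:
        targets = list(_listeners[channel])
    for q in targets:
        q.put(message)

inbox = subscribe('news')
publish('news', 'hello')
print(inbox.get_nowait())   # hello
```

As in the Go version, the lock only guards the registry; delivery itself happens outside the critical section.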
|
/**
* An object input stream which updates the loaded plugins before deserializing
* classes which have fields containing Pipeline plugins. <P>
*
* The {@link NodeCommon NodeCommon} and {@link QueueJob QueueJob} classes both contain
* Action plugin instances which should be instantiated using the latest version of the
* plugin. This class performs this plugin update only once for each network request
 * regardless of how many instances of the Action plugin classes are contained in the
 * request. This single update also ensures that all instances of an Action within a
* single request are of the same loaded class.
*/
public
class PluginInputStream
extends ObjectInputStream
{
/*----------------------------------------------------------------------------------------*/
/* C O N S T R U C T O R */
/*----------------------------------------------------------------------------------------*/
/**
* Creates an ObjectInputStream that reads from the specified InputStream.
*/
public
PluginInputStream
(
InputStream in
)
throws IOException
{
super(in);
}
/*----------------------------------------------------------------------------------------*/
/* O B J E C T I N P U T S T R E A M O V E R R I D E S */
/*----------------------------------------------------------------------------------------*/
/**
* Load the local class equivalent of the specified stream class description.
*/
protected Class<?>
resolveClass
(
ObjectStreamClass desc
)
throws IOException, ClassNotFoundException
{
String name = desc.getName();
int wk;
for(wk=0; wk<sClassNames.length; wk++) {
if(name.equals(sClassNames[wk])) {
try {
PluginMgrClient.getInstance().update();
}
catch(PipelineException ex) {
LogMgr.getInstance().log
(LogMgr.Kind.Plg, LogMgr.Level.Warning,
ex.getMessage());
}
break;
}
}
return super.resolveClass(desc);
}
/*----------------------------------------------------------------------------------------*/
/* S T A T I C I N T E R N A L S */
/*----------------------------------------------------------------------------------------*/
private static final String sClassNames[] = {
"us.temerity.pipeline.Archive",
"us.temerity.pipeline.BaseKey",
"us.temerity.pipeline.NodeBundle",
"us.temerity.pipeline.NodeCommon",
"us.temerity.pipeline.NodeDetails",
"us.temerity.pipeline.MasterExtensionConfig",
"us.temerity.pipeline.QueueExtensionConfig",
"us.temerity.pipeline.QueueJob",
"us.temerity.pipeline.SuffixEditor",
"us.temerity.pipeline.message.NodeGetAnnotationRsp",
"us.temerity.pipeline.message.NodeGetAnnotationsRsp",
"us.temerity.pipeline.message.NodeAddAnnotationReq",
"us.temerity.pipeline.message.MiscArchiveReq",
"us.temerity.pipeline.message.FileArchiveReq",
"us.temerity.pipeline.message.MiscRestoreReq",
"us.temerity.pipeline.message.FileExtractReq",
"us.temerity.pipeline.message.JobEditAsReq"
};
}
|
Puducherry Chief Minister V Narayanasamy today lashed out at Prime Minister Narendra Modi and the Centre for "letting down farmers in Tamil Nadu and Puducherry by deliberately delaying formation of Cauvery Management Board (CMB)."
The Chief Minister charged that the prime minister had time to fly abroad and take up electioneering in poll-bound Karnataka but “has no time to concentrate on forming the Board as a solution to address the hardships of farmers in Tamil Nadu and Puducherry.” Referring to the “continued blocking of implementation of vital schemes including free rice scheme by Lt Governor Kiran Bedi,” he said the free rice scheme is a programme of the State government and its expenditure is being met by the territorial administration.
She later withdrew the order on the free rice scheme after her decision came in for criticism from various quarters. The Lt Governor had no authority to block implementation of schemes in rural areas by imposing “amusing conditions”, he alleged. He said the territorial government had earmarked Rs 140 crore for implementing the free rice scheme to benefit the poorer sections, members of scheduled castes and backward classes for a period of six months from April this year.
Bedi, however, intervened and blocked the implementation of the scheme by imposing a condition that unless the villages were open defecation-free and garbage-free it would be kept on hold, he alleged. He announced that financial assistance of Rs 1,500 would be provided to each student from Puducherry allotted NEET centres outside the Union Territory besides travelling allowance. Narayanasamy asked the candidates to contact his office for assistance even after appearing for the examination for admission to medical courses, which is scheduled to be held tomorrow.
|
# Reads N and a list A, then prints the maximum sum obtainable by choosing
# exactly N//2 pairwise non-adjacent elements of A (the problem is inferred
# from the code's structure, cf. AtCoder ABC 162 F "Select Half"; the
# original had no comments).
N=int(input())
A=list(map(int,input().split()))
if N==2:
    # pick 1 of 2 elements
    print(max(A))
elif N==3:
    # pick 1 of 3 elements
    print(max(A))
else:
    if N%2==1:
        # Odd N: pick (N-1)//2 elements, which leaves room for exactly two
        # "phase shifts" (extra skips) relative to taking every other element.
        # dp[i][k]: best sum on A[0..i] with k phase shifts still available;
        # state 2 follows indices 0,2,4,..., and each shift moves the picking
        # pattern one position to the right.
        dp=[[0 for i in range(0,3)] for i in range(0,N)]
        dp[0][2]=A[0]
        dp[0][1]=-float('inf')
        dp[0][0]=-float('inf')
        dp[1][2]=-float('inf')
        dp[1][1]=A[0]
        dp[1][0]=-float('inf')
        for i in range(2,N):
            if i==N-1:
                # the last element must be taken in this scenario
                dp[N-1][0]=dp[N-3][0]+A[N-1]
            else:
                dp[i][0]=max(dp[i-2][0]+A[i],dp[i-1][1])
                dp[i][1]=max(dp[i-2][1]+A[i],dp[i-1][2])
                dp[i][2]=dp[i-2][2]+A[i]
        test1=dp[N-1][0]    # scenario: both A[0] and A[N-1] taken
        # Scenario: A[0] skipped, A[N-1] taken (one shift already spent).
        dp=[[0 for i in range(0,2)] for i in range(0,N)]
        dp[0][1]=-float('inf')
        dp[0][0]=0
        dp[1][1]=A[1]
        dp[1][0]=-float('inf')
        for i in range(2,N):
            if i==N-1:
                dp[N-1][0]=dp[N-3][0]+A[N-1]
            else:
                dp[i][0]=max(dp[i-2][0]+A[i],dp[i-1][1])
                dp[i][1]=dp[i-2][1]+A[i]
        test2=dp[N-1][0]
        # Scenario: A[0] taken, A[N-1] skipped -- same DP on the reversed list.
        A=A[::-1]
        dp=[[0 for i in range(0,2)] for i in range(0,N)]
        dp[0][1]=-float('inf')
        dp[0][0]=0
        dp[1][1]=A[1]
        dp[1][0]=-float('inf')
        for i in range(2,N):
            if i==N-1:
                dp[N-1][0]=dp[N-3][0]+A[N-1]
            else:
                dp[i][0]=max(dp[i-2][0]+A[i],dp[i-1][1])
                dp[i][1]=dp[i-2][1]+A[i]
        test3=dp[N-1][0]
        # Scenario: both ends skipped -> take indices 1,3,...,N-2.
        test4=sum(A[i] for i in range(0,N) if i%2==1)
        print(max(test1,test2,test3,test4))
    else:
        # Even N: pick N/2 elements, which allows only one phase shift, so
        # the two ends cannot both be skipped.
        dp=[[0 for i in range(0,2)] for i in range(0,N)]
        dp[0][1]=A[0]
        dp[0][0]=-float('inf')
        dp[1][1]=-float('inf')
        dp[1][0]=A[0]
        for i in range(2,N):
            if i==N-1:
                dp[N-1][0]=dp[N-3][0]+A[N-1]
            else:
                dp[i][0]=max(dp[i-2][0]+A[i],dp[i-1][1])
                dp[i][1]=dp[i-2][1]+A[i]
        test1=dp[N-1][0]    # scenario: both A[0] and A[N-1] taken
        test3=sum(A[i] for i in range(0,N) if i%2==0)   # take first, skip last
        test4=sum(A[i] for i in range(0,N) if i%2==1)   # skip first, take last
        print(max(test1,test3,test4))
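The script above is an O(N) dynamic programme for what appears to be the "choose exactly ⌊N/2⌋ pairwise non-adjacent elements with maximum sum" problem. As a cross-check, a slower but much simpler O(N²) reference DP can be written as follows (the function name and structure are mine, not from the original):

```python
def select_half(a):
    """Maximum sum of exactly len(a)//2 elements of a, no two adjacent.

    A straightforward O(n^2) reference DP; the contest code above is
    (apparently) an optimised O(n) solution to the same problem, so this
    slower version is useful as a brute-force cross-check."""
    n, need = len(a), len(a) // 2
    NEG = float('-inf')
    # best[j][t]: best sum with j elements chosen so far,
    # t == 1 if the previous element was chosen, else 0.
    best = [[NEG, NEG] for _ in range(need + 1)]
    best[0][0] = 0
    for x in a:
        nxt = [[NEG, NEG] for _ in range(need + 1)]
        for j in range(need + 1):
            for t in (0, 1):
                cur = best[j][t]
                if cur == NEG:
                    continue
                # option 1: skip x
                if cur > nxt[j][0]:
                    nxt[j][0] = cur
                # option 2: take x (only legal if the previous was skipped)
                if t == 0 and j < need and cur + x > nxt[j + 1][1]:
                    nxt[j + 1][1] = cur + x
        best = nxt
    return max(best[need])
```

For example, `select_half([1, 2, 3, 4, 5, 6])` picks the three odd-indexed values 2, 4, 6 and returns 12, matching what the contest code prints for the same input.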
|
Physical object interaction in first responder mixed reality training

Virtual Reality (VR) systems can improve the training of first responders and soldiers in multi-domain operations in a number of ways. Realistic simulation of physical objects, however, is challenging. The huge variety of equipment pieces and other objects that specialists in first responder units - and in particular CBRN troops - interact with further increases the effort. In this paper, we present a novel and flexible Mixed Reality (MR) training system for first responders that enables the integration of physical objects by using Augmented Virtuality (AV) and "binary tracking". A Head Mounted Display (HMD) immerses the user in VR, while augmenting the visualization with 3D imagery of real objects captured by an RGB-D sensor. In addition, an RFID reader at the user's hand detects the presence or absence (binary response) of certain equipment items. Our proposed MR system fuses this information with data from an inertial motion capture suit into an approximate global object pose and distributes it. Our solution provides a wide range of options for physical object interaction and collaboration in a multi-user MR environment. In addition, we demonstrate the training capabilities of our proposed system with a multi-user training scenario simulating a CBRN crisis. Results from our technical and quantitative user evaluation with 13 experts in CBRN response from the Austrian Armed Forces (National Defense Academy and Competence Center NBC Defense) indicate strong applicability and user acceptance. Over 80% of the participants found it easy or very easy to interact with physical objects and liked the multi-user training much or very much.
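The pose-fusion step described in the abstract (a binary RFID reading plus the motion-capture hand pose yielding an approximate global object pose) can be sketched in a few lines. The class, the names, and the simple "object follows the hand while its tag is read" rule below are illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass


@dataclass
class Pose:
    x: float
    y: float
    z: float  # position only, for brevity; a real system would carry rotation too


class ObjectPoseFuser:
    """Toy fusion rule: while the hand-worn RFID reader sees an object's tag,
    assume the object is in that hand and publish the hand pose (from the
    inertial motion capture suit) as the object's approximate global pose;
    otherwise keep the object's last known pose."""

    def __init__(self):
        self.last_pose = {}  # tag id -> Pose

    def update(self, tag_present: bool, tag_id: str, hand_pose: Pose):
        if tag_present:  # binary RFID response: object currently in hand
            self.last_pose[tag_id] = hand_pose
        return self.last_pose.get(tag_id)
```

In a multi-user setup, the returned pose would then be broadcast to the other clients so every participant renders the object at the same approximate location.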
|
Evaluation of Nonnuclear Soil Moisture and Density Devices for Field Quality Control

When new transportation infrastructure is constructed or current infrastructure systems undergo maintenance, sufficient soil strength is critical to a successful construction effort. Currently, soil design specifications are given for a minimum soil density and a specified range of soil moisture content. Quality control is achieved by monitoring the soil density and moisture content throughout the construction process. The nuclear density gauge (NDG) is most commonly employed to determine soil density and moisture content because of its ease of use, speed of readings, and reliability of results. However, potential safety hazards and rigorous user certification requirements have led many agencies to seek alternative devices. This paper focuses on a portion of a much larger study that compared a wide range of compaction control devices; the paper also assesses the performance of devices that measure soil density and moisture content. Several new, commercially available alternatives for the measurement of soil density were tested on various soil types and conditions to determine which device performed best and most consistently. For the same soil types and conditions, several devices and techniques for the determination of soil moisture content were also tested. The combination of the TransTech soil density gauge and the heated frying pan/open flame field moisture content techniques represented the best alternative to the NDG.
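The heated frying-pan/open-flame techniques mentioned above are field versions of the standard gravimetric moisture test: dry the sample and compare the wet and dry masses. As a worked example (the formula is the standard gravimetric definition; the sample masses are made up):

```python
def gravimetric_moisture_content(mass_wet_g: float, mass_dry_g: float) -> float:
    """Moisture content w = (m_wet - m_dry) / m_dry, expressed in percent.
    This is the standard dry-mass definition that the field drying
    techniques (oven, pan, open flame) all estimate."""
    return 100.0 * (mass_wet_g - mass_dry_g) / mass_dry_g


# e.g. 250 g of moist soil pan-dried down to 220 g:
w = gravimetric_moisture_content(250.0, 220.0)  # about 13.6 %
```

Note that moisture content is referenced to the dry mass, not the wet mass, which is why values above 100 % are possible for very wet, organic soils.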
|
This invention relates generally to the art of making corrosion resistant valves, and more particularly, to a method of making a knife gate corrosion resistant valve.
The desirability of specially constructing corrosion resistant valves has long been recognized where valves are to be used for such things as highly active chemicals, acids, bases, solvents, gasoline and the like. A difficulty encountered in constructing such valves is that materials which are resistant to corrosion are often expensive and/or weak so that it is normally difficult and expensive to construct valve members out of such materials. One common practice in constructing corrosion resistant valves is to employ inexpensive frames or housings which have corrosion resistant linings covering those surfaces to be contacted by corrosive fluid materials.
For example, U.S. Pat. No. 3,545,480 describes a rather popular valve in which corrosion-resistant, sheet metal liners are first constructed and then a cast valve body is machined to fit the liner. A valve seat is then attached to the liner at the throat of the valve. The liner described in this patent is constructed in sections and is then assembled. It is easy to see that this construction, although saving money by not using expensive materials for the body, is still unduly expensive because of the time necessary for machining parts, assembling the liner members and attaching the valve seat. Further, this valve only works in one direction inasmuch as it only has a valve seat on one side of the gate. Still further, a problem with this valve is that, since the liner is constructed of elements which must be assembled, cracks are left between the liner elements which must be filled by welding or with plastics. Thus, it is an object of this invention to provide a method for making a corrosion resistant valve in which a liner is made of only one piece and in which the liner is combined with a body without an undue amount of work to adapt these two members to fit one another. It is also an object of this invention to provide such a method of making such a valve which produces a valve which can be used to cut off flow in either direction. Still further, it is an object of this invention to provide a method of making a valve in which a valve seat is produced simultaneously with producing the liner.
It has been suggested to use fluorocarbon resin liners for valves, and other conduit, bodies. Such suggestions are made in U.S. Pat. Nos. 3,206,530 to Boteler, 3,438,388 to Schenck and 3,026,899 to Mischanski. In each of these cases, the liner is intended to protect a body, or housing, from corrosive liquids passing through the valve body. Boteler (U.S. Pat. No. 3,206,530) mentions that the liner can be molded in the valve body housing or separately therefrom. Schenck (U.S. Pat. No. 3,438,388) heats a tube of fluorocarbon resin which he then pressurizes against the walls of a valve body using a pressurized gas. In these cases valving elements are combined with gaskets, valving-element liners and the like at the interface between the valving element and its seat. It is an object of this invention to provide a method of forming a corrosion-resistant valve in which a liner can be constructed to closely interface with a valving element in one step so that no gaskets, seats, valving-element liners or other sealing members are necessary in order to control flow through a valve flow passage. It is also an object of this invention to provide a method for making a corrosion resistant valve which works in two directions in one step.
It is yet a further object of this invention to provide a method of making a corrosion-resistant valve which is extremely inexpensive since it does not require undue machining of parts, but yet which provides excellent operation characteristics and an unusual amount of durability.
|
GM Earnings: Will Taxpayers Make Money on GM?
General Motors earned $2 billion in the third quarter, or $1.20 a share, after adjusting for a 3-for-1 stock split, as revenue rose 20 percent. Compared to the loss of 73 cents a share in the period of July 10, 2009 to Sept. 30, 2009, the results should make Uncle Sam happy--the government currently owns 61 percent of GM.
The results are encouraging, ahead of the company's plan to go public next week. In its IPO on November 18, GM will seek to raise $13 billion by selling common and preferred shares of stock. The majority of the proceeds will go to its existing shareholders, including Uncle Sam, not the company. The IPO is expected to reduce the US Treasury's stake from 61 percent to approximately 40 percent.
Here's the big question: will taxpayers make money on GM? That depends--like any investment, we need to see the average price at which we sell our stake. Treasury intends to space out the remaining sales of stock over the next few years. Analysts believe that if the price of the stock doubles from the IPO price, we just might break even on our investment, which would have to be considered a big win.
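The break-even reasoning above reduces to a one-line calculation: the remaining cost basis divided by the shares still held gives the average exit price needed. The figures in the example are placeholders, not actual Treasury numbers:

```python
def breakeven_price(remaining_cost_basis: float, shares_held: float) -> float:
    """Average price per share at which staged sales of the remaining
    stake would exactly recover the remaining cost basis."""
    return remaining_cost_basis / shares_held


# Hypothetical illustration only -- not the actual Treasury figures:
# $27 billion still to recover across 500 million shares means the
# staged sales must average $54 per share to break even.
price = breakeven_price(27e9, 500e6)
```

Because the sales are spaced out over years, it is this weighted average across all tranches, not the IPO price itself, that decides whether taxpayers come out ahead.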
|
Weiss
The sun was still low in the sky as Ruby and I reached the top of the city walls. It was just peeking over the top of Beacon's spires. We'd left early in the morning, eager to get to our new mission. Ruby seemed to see it as making amends for Blake's condition. I wanted something to take her mind off our injured comrades.
We gazed down at the forest from our place up on the wall. Two guards had come with us, with ropes to lower us down.
"So there was no sign of any gaps or cracks in the wall on the inside?" I asked them. They both shook their heads.
"No Ma'am. The wall was intact on the inside." One of them answered. I looked down at the base of the wall, wondering what there could be for us to find. If there was no crack on the inside, then why would there be anything wrong with the wall on the outside?
Ruby and I shared a thoughtful look; she was probably thinking the same thing. I let my gaze linger on Ruby a little while longer after she looked away. Ever since our… meeting… in the shower two days prior, I'd been struggling to hold myself away from Ruby. I didn't know what to do anymore. I wanted to hold myself back, I didn't know how she felt about me; I wanted to throw myself at her, and she was definitely sending me hints towards that direction. But I didn't know if she had the same feeling as me.
But she was definitely trying to get closer. Our evening in Blake's ward in the infirmary had proven that. She'd snuggled up to me for the entire time we were there, and I'd furiously fought the urge to wrap my arms around her the entire time. I didn't want her to realise my feelings if she didn't reciprocate them. But why would she be trying to get so close if she didn't feel the same way? I wanted to tear my hair out. All this second guessing was driving me crazy.
Just find a way to ask her, I told myself, you don't need to tell her your feelings. Just ask her how she's feeling. It was a good idea, but it would sound strange… "So Ruby, how do you feel about me?" I mentally shuddered. No, that would never work. "So Ruby, I feel like we've been getting closer…" Nope. No way in Hell. "Ruby, you know, if you ever need to talk about anything, you can talk to me." Maybe… Probably not, she had Yang for that. She'd have no reason to confide in me. Unless she does feel the same way…
"So where should we start?" Ruby asked. I nearly leapt out of my skin. I'd been so caught up in my thoughts that I'd almost forgotten she was there. I focussed on the situation at hand. We had a job to do. There's no time to be worrying about your relationship with Ruby right now.
"There's no sense in dilly dallying, we might as well just go down there and start looking around." I replied. Ruby nodded.
"Okay, we'll just drop down from here." She said to the guards. The guards hurried to attach the ropes to pulleys and get them ready for our descent. We hooked ourselves up as soon as they were ready and began rappelling down the wall.
We hit the ground at the base of the wall and drew our weapons. I watched our surroundings while Ruby unhooked herself, and she kept watch while I did the same. There were no signs of Grimm anywhere near us. I wasn't sure if that was strange or not. We hadn't been told how often Grimm came near the wall.
I looked in either direction along the wall, wondering which way to start searching. It all looked exactly the same. I looked at Ruby, who shrugged and headed off to our left, eyes on the face of the wall. I followed after her, keeping my eyes on the forest.
We walked for what seemed like hours, trekking through a never changing landscape of crimson trees. I stayed watchful as we walked, my eyes focussed on the trees around us, searching for any sign of movement. I looked over my shoulder now and again to look at Ruby, making sure she was still there. I hadn't heard a peep from her since we'd descended into the forest.
She was still there every time I looked, but she was always facing the wall. I couldn't help but wonder if she'd found anything. But at the same time I knew that if she had, she'd probably have told me. I did my best to suppress my curiosity, focussing on the forest around us.
We'd been walking for at least an hour, maybe even two. But there was still no sign of any movement in the forest around us. Something was definitely strange about this now. We'd spent hours in a forest known for being filled with Grimm, and we'd yet to even hear one in the distance. I had expected to at least have had an encounter with a few Beowolves or a Boarbatusk within the first hour.
I looked up, at the top of the wall. I could just make out the guards that had lowered us down at the top. They were following along the wall with us, ready to drop their ropes down to help us up at any moment. I sighed, turning back to Ruby.
"Ruby," I called out to her. "Should we turn back? We might be able to find something in the other direction." I turned to face the forest while waiting for her answer. She was silent for a moment before answering.
"Yeah, I guess so." I heard her call out. "There doesn't seem to be anything here…" There was something in her voice that worried me. She seemed distracted, conflicted. I stole a glance at her, checking to see if she was okay. She was gazing up at the wall, her back to me.
I glanced back at the forest one last time, turning to walk over to Ruby.
"Ruby…?" I called her name as I reached out to touch her on the shoulder. She jumped, spinning around and raising Crescent Rose. I ducked, bringing Myrtenaster up to guard my head. "Ruby! What're you doing?"
I heard Crescent Rose crash to the ground. I looked up at Ruby. She was hugging herself, tears streaming down her face. I watched in disbelief as she fell to her knees, sobbing. I stood up, taking a step towards her.
"Ruby…? What's the matter?" I reached a hand out towards her.
Ruby let out a piercing shriek, holding her hands over her ears. I stumbled back, covering my own. Her scream rang out, echoing through the forest behind me. Something must have heard that, I thought, examining the trees, something's got to come now. I pulled my hands away from my ears, turning back to Ruby. As I took a step towards her, she screamed again, somehow louder than before.
She was on her knees in the dirt, head tilted back, screaming towards the sky. I fell to my own knees, covering my ears once more. I crawled towards Ruby as her scream tore through the air. I grabbed onto Ruby's skirt, trying to pull her onto the ground. When she didn't move, I crawled further forward, tackling her.
We tumbled to the ground, Ruby's scream cutting off. She bucked, trying to throw me off. I pinned her on the ground, preventing her from moving. Despite my efforts, Ruby kept screaming. My ears were in agony, and I couldn't use my hands to block them, or Ruby would toss me off.
"Ruby!" I yelled, trying to reach her over the sounds of her screaming. She didn't stop, she didn't even react. She just kept screaming. I gritted my teeth, closing my eyes. Sorry Ruby. I let go of Ruby with one hand and hit her, hard. As my palm slapped into her cheek, her head jerked to the side, momentarily stopping her screaming.
I let out a sigh of relief, she'd finally calmed down. Before I could get off Ruby, I heard a sharp intake of breath from her, and she screamed again.
"Somebody stop the whistling!" she wailed, tears streaking her face. Whistling? What whistling? I put my hands over Ruby's ears, pressing my forehead to hers.
"Ruby! There's no whistling! Calm down!" I yelled.
"Noooooo! I can hear it!" Her words descended into inarticulate screaming. I felt tears welling up in my own eyes, what was happening to Ruby?
I picked up Myrtenaster, tears running down my face.
"I'm sorry Ruby…" I whispered. I swung Myrtenaster, slamming the pommel into the side of Ruby's head. She instantly fell silent, her head slumping to the side. After taking a second to check her breathing and heart rate, I gave Ruby a gentle kiss on the forehead. I shook with sobs, tears dripped onto Ruby's face, mixing with her own. I reached down, caressing Ruby's now peaceful face.
"Oh Ruby… What happened to you?" I whispered. Maybe the stress and the worry were getting to her, she was having a break down. I held Ruby in my arms, clinging to her.
Something growled in the trees behind me. My head jerked up. I let go of Ruby, spinning around to face the forest. A group of three Beowolves were emerging from the trees. I swore, readying Myrtenaster. I loaded a chamber of Burn Dust, poising Myrtenaster to strike.
The open field between the wall and the trees was an excellent battle ground, giving me all the room I needed to dive, jump, slash and burn my way through the Beowolves. These Beowolves were much smaller than the one that had injured Blake and I was fighting in a much more advantageous battleground than a dark factory. I dealt with the Beowolves with ease, leaving their bodies smoking on the edge of the forest.
I approached the wall, shouting up to the guards atop it to lower their ropes. I turned to watch the forest as they began lowering the ropes, watching for any other sign of movement. Surely the cries of the Beowolves would attract more Grimm. Keeping one eye on the forest, I hooked one of the ropes onto Ruby's belt, making sure it was securely fastened.
I heard a loud roar come from the forest and spun to face it. An Ursa was smashing its way through the trees towards Ruby and I. I sheathed Myrtenaster in a rush, tying the other rope to my own belt. I yanked down hard on the ropes, signalling the guards atop the wall to reel us in. I picked up Ruby, holding her in my arms as the ropes tightened. The Ursa had reached the end of the trees now and was charging across the open gap between the forest and the wall.
"Hurry up!" I screamed at the guards. When the Ursa was merely three metres away, the ropes jerked Ruby and I into the air. I tucked my legs up, jumping off the wall to get further away from the Ursa. The Ursa leapt, trying to chase us, swinging its paws madly. Its claws sliced through the dangling fabric of Ruby's cloak, shredding the end of it, but we were too high for it to reach us. I sighed with relief as the guards hauled us up, watching the Ursa smash against the sheer wall, bellowing in frustration.
I held Ruby to my chest tightly all through our ascent. What had happened? Why did she break down like that? I was worried about her, was she really losing it? I caressed her cheek gently and kissed her lightly on the forehead.
When I looked up, we were almost at the top of the wall, the guards sweating as they hauled us up. They collapsed on the ground as they hauled us over the lip, gasping for breath. I laid Ruby down gently and sat against one of the parapets along the wall. I leant my head back, closing my eyes. We're okay, I thought, for now at least. We still have to work out what's happening with Ruby. I looked over at her now peaceful face, wondering what had gone wrong.
"What the hell happened?" one of the guards gasped. I looked over at them, they were still lying on their backs, breathing heavily.
"I don't know," I replied. "She seemed to be on edge for a while, and then she just… Well, you heard."
"Yeah. Yeah we did. What was she screaming about?"
"Something about whistling. She wanted me to stop 'the whistling'."
"So she lost it, basically. She absolutely lost it." The other guard muttered, rolling over onto his side.
"She did not lose it, thank you very much. That's my team leader you're talking about, show her some respect!"
"Okay, okay, my apologies Ma'am. But, clearly something went wrong." He was right. I looked at Ruby again. Something had gone wrong. I felt sure it was just the stress of the last few days getting to her, so he was right in a way; she had lost it.
"Well, we won't know until she wakes up." I said, putting the matter to rest. "Send for some help, I want to get her back to Beacon as soon as possible."
I was sitting next to Ruby's bed in Beacon's infirmary when she woke. She looked around the room, squinting, looking confused. I watched as she tried to sit up, held a hand to her head and slumped back onto the bed.
"Uhhh… My head." She groaned. "Where am I?"
"You're in the infirmary at Beacon." I replied. Rather shortly. I was worried about Ruby, but I couldn't help but be a little angry. Once my initial concern had worn off, I remembered that the girl had nearly taken my head off!
"Weiss? Wha… What happened?"
"I don't know Ruby. You tell me. What the hell happened out there?" I sat back in my chair, crossing my arms. She looked up at me, blanching at my stern look.
"I… I don't know. The last thing I remember is getting to the bottom of the wall. What happened?"
"What happened? You went insane! That's what happened! You stopped talking for hours, and when I tried to see if you were okay, you nearly cut my head off!" I shrieked, losing my cool. Ruby looked up, a jovial half-smile on her face. She thought I was joking?
"Good one," she laughed. "Like I'd ever… Weiss?" Her face fell as she realised I was being serious. I glared at her as she turned pale, terror etching itself on her face. "W-Weiss… Did… Did I hurt you?"
"No, you dunce!" I yelled. I knew I should calm down, stop yelling, but I was too mad now. "But not from lack of trying! What the hell happened Ruby? First you try to kill me, and then you start screaming, about some whistling that no one else could hear, loud enough for all of Forever Fall to hear you." Ruby grew paler as I went on, looking down and rubbing her hands together nervously.
As I finished my tirade, she looked up at me, her face pale.
"Weiss…" She whispered. "Did that really happen?"
"Of course it did you dolt! Do you think I'd make this up? What do you take me for? How can you not remember?" I turned away from Ruby, refusing to meet her eyes.
"Weiss!" she yelled. I froze, turning to look at her. She was angry, looking at me sternly. "I'm sorry if I hurt you, but I really don't remember any of this. I don't remember any whistling, I don't remember attacking you. Like I said, the last thing I remember was getting to the bottom of the wall. Now, you look like your fine. You must be if you have the energy to get this angry at me. And if you're okay, then you don't have any reason to get mad in the first place!"
I slunk back in my chair, retreating before Ruby's anger. I hadn't seen her this angry at me since we'd argued in the Emerald Forest on our first day. And she was right. I knew I was being unreasonable.
"I… I know," I said apologetically. "I… I'm… I'm sorry Ruby. I don't know, it was just a very stressful morning." Ruby's face softened.
"I understand Weiss, but we're both okay right?"
"Yeah." I said, smiling at her.
"Then everything's okay." She said, returning the smile before wincing and clutching a hand to her head. "Urgh…" she groaned. "What happened to my head? It's like something's trying to get out."
"Uh… About that..." I said sheepishly. "I had to find a way to calm you down, to stop you from screaming, so I kind of… had to… knock you out."
"You what?" She asked, frowning. "So you're getting angry at me, while you're the one who nearly split my head open!"
"Uhh… Yes?" I cringed, bracing for another tirade.
Ruby took a deep breath, calming herself down.
"Okay, we're both alive, we're okay. There's no need for anyone to get angry." She smiled, showing that she'd calmed down. I smiled back, relieved. "So have you told Yang about what happened?" she asked.
"Yes, I have. She was here earlier, watching over you with me, but she's gone back to Blake's room now. She seems to spend all her time there."
"Well she can't go to classes or exercise, what else does she have to do?"
"I suppose you're right."
We sat in silence for a minute or two. We heard people passing down the hallway outside, people talking normally. Eventually Ruby broke the silence.
"Have you told Professor Ozpin?" she asked.
"No," I replied, just realizing that I hadn't. "We probably should, shouldn't we?"
"Yeah, he might have some idea about what happened."
"Should I go now, or wait until tomorrow? It's starting to get late." I said, looking out the window. "You were out for most of the day."
Ruby peered out the window thoughtfully.
"No, we can both go and see him tomorrow. I'm sure my head will feel better once I get a good night's sleep." She laid herself down, laying her head on the pillow.
"Hey, Weiss?" She whispered, looking up at me. "Can you do me a favour?"
"Sure, what is it?"
"Will you stay with me tonight?" she asked, blushing slightly. I smiled down at her, feeling my own cheeks grow warm.
"Of course I will."
|
def simplex(A, b, c):
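The `simplex` stub above has no body in the source. A minimal dense-tableau completion, assuming the standard form maximise c·x subject to A·x ≤ b (with b ≥ 0) and x ≥ 0, might look like the sketch below (no anti-cycling rule; illustrative only, not production code):

```python
def simplex(A, b, c):
    """Solve  max c.x  s.t.  A.x <= b, x >= 0  (b >= 0 assumed)
    with the tableau simplex method.  Returns (optimal value, x).
    Reconstruction of the missing body -- a minimal sketch only."""
    m, n = len(A), len(c)
    # Tableau: objective row [-c | 0 | 0] on top, then [A | I | b] rows
    # (one slack variable per constraint).
    T = [[-cj for cj in c] + [0.0] * m + [0.0]]
    for i in range(m):
        T.append([float(v) for v in A[i]]
                 + [1.0 if j == i else 0.0 for j in range(m)]
                 + [float(b[i])])
    basis = list(range(n, n + m))          # slack variables start basic
    while True:
        # Entering variable: most negative objective-row coefficient.
        col = min(range(n + m), key=lambda j: T[0][j])
        if T[0][col] >= -1e-9:
            break                          # optimal
        # Leaving variable: minimum ratio test over positive pivots.
        ratios = [(T[i][-1] / T[i][col], i)
                  for i in range(1, m + 1) if T[i][col] > 1e-9]
        if not ratios:
            raise ValueError("LP is unbounded")
        _, row = min(ratios)
        basis[row - 1] = col
        # Pivot on (row, col): normalise the row, eliminate the column.
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row and T[i][col]:
                f = T[i][col]
                T[i] = [T[i][k] - f * T[row][k] for k in range(n + m + 1)]
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = T[i + 1][-1]
    return T[0][-1], x
```

For instance, maximising 3x + 2y subject to x + y ≤ 4 and x + 3y ≤ 6 gives the vertex (4, 0) with objective value 12.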
|
This invention relates to a semiconductor module with a plurality of semiconductor devices, especially a memory module with a plurality of memory chips such as a dynamic random access memory.
Each memory chip of a memory module refers to a reference potential for deciding a logical value. Erroneous decisions by the memory chips are caused by a plurality of noises.
The reference potential of one of the memory chips is affected by first to third noises. The first noise is transmitted from other devices outside of the memory module. The second noise is generated by the memory chip itself. The third noise is generated by another one of the memory chips of the same memory module and is transmitted between the memory chips.
The first noise is suppressed by a memory module comprising a low pass filter (LPF) between the memory module and a system board. The memory module having the LPF is disclosed in U.S. Pat. No. 6,646,945.
The second noise is suppressed by a memory module comprising a common reference electrode and a decoupling capacitor. The common reference electrode extends in a plane parallel to a ground layer and supplies a reference potential to a plurality of memory chips directly. The decoupling capacitor is connected between the reference electrode and the ground layer.
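The low-pass filtering of the reference potential described above can be illustrated numerically with the first-order RC cutoff formula. The component values in the example are arbitrary, not taken from the patent:

```python
import math


def rc_cutoff_hz(r_ohms: float, c_farads: float) -> float:
    """-3 dB cutoff of a first-order RC low-pass: f_c = 1 / (2*pi*R*C).
    Noise on the reference line well above f_c is strongly attenuated."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)


# e.g. a 100 ohm series resistance with a 0.1 uF decoupling capacitor:
fc = rc_cutoff_hz(100.0, 0.1e-6)  # roughly 15.9 kHz
```

The design trade-off is that lowering the cutoff rejects more externally coupled noise but also slows the reference line's response to legitimate supply shifts.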
|
1. Field of the Invention
The present invention relates to a microscope examination method, an optical stimulation apparatus, and a microscope examination apparatus employed in optical marking. Optical marking utilizes a substance that produces fluorescence in response to an optical stimulus, such as a fluorescent protein or a caged compound, or that activates another fluorescent substance.
This application is based on Japanese Patent Application No. 2004-152994, the content of which is incorporated herein by reference.
2. Description of Related Art
A conventionally known microscope examination apparatus including this type of optical stimulation apparatus is the microscope examination apparatus disclosed, for example, in Atsushi Miyawaki, et al., “Special Review, Optical techniques using the new fluorescent protein kaede”, Cell Technology, Vol. 22, No. 3, 2003, pp 316-326 (hereinafter referred to as reference 1).
This microscope examination apparatus is an inverted-type incident-light fluorescence microscope having an observation light source formed of a xenon lamp and a fluorescence filter for carrying out fluoroscopy. In addition, this microscope is also provided with a xenon lamp, an excitation filter, and a field stop for forming a spot of ultraviolet light serving as an optical stimulus. The optical stimulus is made incident on the specimen along the same optical axis by means of a dichroic mirror disposed in the light path of the observation light source.
With this microscope examination apparatus, the optical system, including the field stop, the focusing lens, the objective lens, and so forth, can be precisely adjusted, thus making it possible to precisely position the spot of ultraviolet light at the center of the field of view used for fluoroscopy. Therefore, the location in the object under examination, such as a cell, where the optical stimulus is to be applied can be aligned with the center of the field of view and irradiated with the spot of ultraviolet light. Thus, the optical stimulus can be accurately applied to the target cell, which allows optical marking to be carried out.
In such a case, the optical stimulus location in the object under examination is restricted to a single point at the center of the field of view in the microscope apparatus in reference 1. Therefore, when an optical stimulus is to be accurately applied to that location, it is essential to shift the relative positional relationship between the objective lens and the object under examination in directions orthogonal to the optical axis.
When examining cells and so forth, in order to maintain the viability of the cells, it is customary to carry out examination of the cells while they are disposed in a predetermined amount of liquid, such as a culture medium or the like. However, one drawback with this technique is that moving the object under examination while keeping the objective lens fixed causes the cells to move around in the liquid, thus changing the examination conditions. Also, when moving the objective lens while keeping the object under examination fixed, it is necessary to move the entire optical system including the objective lens. In order to move it with accuracy, the apparatus inevitably becomes larger and the cost is also increased. This is another drawback.
Another possible method is that in which a field stop is moved in directions orthogonal to the optical axis. This method does not suffer from the drawbacks mentioned above. However, the irradiation position of the spot of ultraviolet light is arbitrarily moved in the optical field, which differs from the methods described above in which the spot is fixed at the center of the field of view. Therefore, this method suffers from a problem in that it is difficult to accurately specify the irradiation position.
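The field-stop-shifting alternative discussed above obeys a simple conjugate-plane relation: a lateral displacement of the stop maps to a displacement of the stimulation spot in the specimen plane, scaled by the magnification between the two planes. A small illustrative calculation (the values and the assumption that the stop plane is imaged onto the specimen demagnified by the objective magnification are mine, not from the text):

```python
def spot_shift_um(stop_shift_um: float, magnification: float) -> float:
    """Lateral spot displacement in the specimen plane for a given
    field-stop displacement, assuming the stop plane is conjugate to the
    specimen and demagnified by `magnification` (e.g. a 40x objective)."""
    return stop_shift_um / magnification


# moving the stop by 400 um under a 40x objective moves the spot by 10 um
shift = spot_shift_um(400.0, 40.0)
```

This scaling is also why accurately specifying the irradiation position is hard in practice: any calibration error in the stop position is divided down, but so is the usable travel, and off-axis aberrations grow as the spot leaves the field center.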
|
Quality of life, cognitive, physical and emotional function at diagnosis predicts head and neck cancer survival: analysis of cases from the Head and Neck 5000 study

Purpose
The aim of this paper is to determine whether health-related quality of life (HRQOL) at diagnosis of head and neck cancer (HNC) is associated with overall survival following treatment with curative intent after adjusting for other factors.

Methods
Data were collected from 5511 participants of the Head and Neck 5000 study (HN5000). HRQOL was measured using the EORTC QLQ-C30. Questionnaire and covariate data were available from 2171 participants diagnosed as follows: oral cavity, oropharynx HPV+ and HPV−, and larynx. On average, participants were followed up 3.2 years (SD 1.2) after diagnosis. Data were adjusted for age, gender, co-morbidity, intended treatment, education level, income from benefits, smoking status and alcohol consumption.

Results
There was a clinically meaningful association between global HRQOL scores at diagnosis and survival in both unadjusted and adjusted models. In analyses stratified by tumour site and HPV status, this association was similarly noted before adjustment and persisted after. There were some tumour sub-site variations: improved survival for people with laryngeal cancer reporting higher levels of physical, role or social functioning, and for people with oral cancer reporting higher levels of role or social functioning.

Conclusion
As survival is the main priority for most people diagnosed with cancer, pre-treatment HRQOL is an additional factor to be included in risk stratification and case-mix adjustments. There is merit in incorporating HRQOL into routine clinical care as this is a useful facet in patient-clinician decision making, prognostication and recovery.

Introduction
For people diagnosed with head and neck cancer (HNC) and their carers, survival is an important priority.
Different studies have shown the importance of individual, clinical, treatment, lifestyle and social factors in predicting survival in different cancer types, but the influence of pre-treatment health-related quality of life (HRQOL) in large HNC cohorts has not previously been reported. A meta-analysis of 30 randomised controlled trials (started between 1986 and 2004) from the European Organisation for Research and Treatment of Cancer (EORTC) included survival data for 10,108 patients with 11 different cancer sites. Although set in the context of clinical trials and not HNC specific, this study showed that baseline HRQOL gave additional prognostic information over and above that derived from sociodemographic and clinical measures. These authors also reported that, for people with HNC, emotional functioning, nausea and vomiting, and dyspnoea predicted survival. In a systematic review of the association between HRQOL and survival in patients with HNC across 19 different studies, 12 studies focused on all subscales of the EORTC questionnaire and 7 focused on selected subscales. There was strong evidence for a positive association between survival and both pre-treatment physical functioning and change in global QoL from pre-treatment to 6 months after treatment. These findings are at variance with other studies where there appeared to be some association between selected psycho-social factors and survival; however, this relationship was not strong. There is insufficient evidence for associations between survival and other pre-treatment HRQOL subscales (role functioning, emotional functioning, cognitive functioning, social functioning and mental HRQOL). Recent findings from a prospective study of 109 people with HNC reported an impact of HRQOL over a longer time frame, where higher levels of HRQOL at diagnosis (improved physical function and reduced sleep disturbance) predicted improved 10-year survival rates independent of clinical, individual and lifestyle factors. 
Although there are studies that investigate the impact of HRQOL on survival for people diagnosed with cancer, many are subject to limitations: those with large sample sizes are often carried out in cohorts of people with different cancers, and those that focus on HNC are usually restricted by small samples in which it is difficult to stratify for tumour site and other factors. The UK based Head and Neck 5000 study (HN5000) is a prospective study of over 5000 people diagnosed with HNC. This cohort provides a unique opportunity to explore factors at the time of diagnosis that may predict survival up to 3 years after diagnosis of HNC. The large sample size allows analyses to be stratified by tumour site and helps quantify the place of HRQOL as a predictor of survival after adjustment for other variables. The aim of this paper is to determine the effect of HRQOL in predicting overall survival for participants in the HN5000 cohort following treatment with curative intent, after adjusting for other associated factors. Methods Data were collected from participants in the Head and Neck 5000 prospective clinical cohort study (HN5000). Details on HN5000 have been published and a fully searchable data dictionary is available online (https://www.headandneck5000.org.uk/). Data were collected at diagnosis (baseline) and 4 months, 12 months and three years later, using self-report questionnaires and data capture forms (DCF) to record details from clinical records. Of the 5511 people consented into the study, 142 were subsequently found to be ineligible. The resultant study sample contained 5369 people with head and neck cancer. Ethics The study was approved by the National Research Ethics Committee. Inclusion criteria For this study we included people diagnosed with an oral cavity, oropharyngeal or laryngeal tumour defined using the following ICD codes: C01, C02.4, C05.1, C05.2, C05.8, C05.9, C09, C10. 
We excluded people who did not provide a blood sample or consent to storage of biosamples at the time of diagnosis and therefore did not have serum HPV status. We also excluded people on a palliative or supportive treatment pathway at diagnosis. This was because we expected the relationship between quality of life and survival to be different in this small group of people, compared with the majority of people who were on a curative treatment pathway. Questionnaire HRQOL at diagnosis was measured using the EORTC QLQ-C30 questionnaire. It comprises 30 questions combined into nine symptom scales, five functional domains and a global measure of HRQOL. For the purposes of this study we used the five functional domains (physical, role, emotional, cognitive and social functioning) and the global HRQOL as exposure variables. Scores were calculated according to EORTC guidelines resulting in a range of 0-100 for each domain. Clinically meaningful differences in HRQOL were considered to be evident when there was a 10-point difference in scores. Outcome The primary outcome was survival as of 1 April 2017. This was recorded via patient medical records and linkage to death certificate data through the UK Health and Social Care Information Centre (HSCIC). Confounders We included various demographic, clinical and health behaviour factors that may confound the association between HRQOL and survival. These were: age at diagnosis, gender, highest educational qualification and the proportion of household income that comes from benefits; clinical tumour, node and metastatic (TNM) stage, pre-treatment co-morbidity using the overall comorbidity score from the Adult Comorbidity Evaluation (ACE)-27 and intended treatment, categorised as: surgery only, surgery with adjunct therapy, chemoradiotherapy only, radiotherapy only. Health behaviours were smoking status (current, former or never smoker) and alcohol consumption. 
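The 0-100 transformation of the QLQ-C30 scales mentioned above can be sketched as follows. This is a simplified reading of the EORTC scoring rules, with the usual item ranges (1-4 for functional items, 1-7 for the two global items) stated here as assumptions; verify against the official scoring manual before reuse.

```python
# Sketch of EORTC QLQ-C30 scale scoring (simplified; item ranges assumed).
def raw_score(items):
    """Mean of the answered items of a scale."""
    return sum(items) / len(items)

def functional_score(items, item_range=3):
    # Functional scales: a higher transformed score means better functioning.
    rs = raw_score(items)
    return (1 - (rs - 1) / item_range) * 100

def global_qol_score(items, item_range=6):
    # Global health/QoL: two items answered 1-7, higher is better.
    rs = raw_score(items)
    return (rs - 1) / item_range * 100

# Example: all functional items answered "Not at all" (1) -> best score, 100.
print(functional_score([1, 1, 1, 1, 1]))   # 100.0
print(global_qol_score([7, 7]))            # 100.0
```

On this scale a 10-point difference is the clinically meaningful change used throughout the analysis.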
Alcohol consumption was converted into standard UK alcohol units per week. We categorised this into four categories of alcohol consumption: non-drinker, moderate use, harmful use and hazardous use. Serum HPV testing We tested blood samples for HPV status. The primary measure was serological response to HPV antibodies using a glutathione S-transferase multiplex assay carried out at the German Cancer Research Centre (DKFZ) in Heidelberg, Germany. We defined seropositivity as HPV16 E6 antibodies > 1000 Median Fluorescence Intensity units (MFI). Statistical analysis We compared the data for participants with complete versus incomplete data. For those with complete data we stratified all analyses by tumour site, with further stratification by serum HPV status for people with oropharyngeal cancer. We described the HRQOL, demographic and clinical characteristics, and health behaviours of people in these four groups and compared them using ANOVAs (Kruskal-Wallis test for skewed data) for continuous variables and Pearson's chi-square test for categorical variables. For the HRQOL measures we conducted further post-hoc pairwise comparisons using Dunn's test (with Bonferroni correction) for omnibus tests where p < 0.1. We used Cox proportional hazards models to investigate the effects of different patient and treatment factors on the risk of death. People alive at our latest date of follow-up were assigned as right censored at this date. We derived hazard ratios for a 10-point change in each of the QOL scales, which is considered to be a clinically meaningful difference using univariable Cox regression models. We then adjusted the Cox models for age, gender, comorbidity, intended treatment, education, income from benefits, smoking status and alcohol consumption. We tested the proportional hazards assumption and found that TNM stage was not proportional. We therefore stratified our analyses by TNM stage. 
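The hazard-ratio arithmetic for a 10-point change, and the Bonferroni correction described below, can be illustrated with a short sketch. The Cox coefficient here is hypothetical, purely to show the calculation; the study's actual estimates are in its Table 4.

```python
import math

# Hazard ratio implied by a Cox model coefficient for a 10-point (clinically
# meaningful) change on a 0-100 EORTC QLQ-C30 scale.
beta_per_point = -0.02          # hypothetical log-hazard change per scale point
hr_10pt = math.exp(10 * beta_per_point)
print(f"HR per 10-point increase: {hr_10pt:.3f}")

# Bonferroni-corrected significance level: six Cox models (global QoL plus
# five functional domains) form one 'family' of tests per tumour site.
alpha = 0.05 / 6
print(f"corrected alpha: {alpha:.4f}")
```

A hazard ratio below 1 for a 10-point increase corresponds to better survival with higher HRQOL.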
To control the family-wise error rate we considered a 'family' of statistical tests to be the Cox regression models within a specific tumour site (either unadjusted or adjusted). Using this definition, we applied a Bonferroni corrected significance level of 0.05/6 = 0.008. We considered all other results to be exploratory and significance levels for these tests were not adjusted. Results From the total HN5000 cohort confirmed as eligible to participate (N = 5511), 4323 people (80.5%) were diagnosed with either oral cavity (OC), oropharyngeal (OPC) or laryngeal cancer (LC). We excluded 88 people on a palliative treatment pathway and a further 674 people who did not have HPV serology available. Consequently, 3561 people met our inclusion criteria and comprised our study population. We analysed data from 2171 (61.0%) participants who had complete HRQOL, confounder and outcome data (see Fig. 1); the mean follow-up time was 3.2 years (SD 1.2). There were 440 deaths during the study period, with a total of 6874 person-years of follow-up. People with complete data differed from those without complete data across most exposures and confounders (Table 1). Most notably, a larger proportion of people with complete data had HPV-positive OPC tumours and were younger with fewer or less severe comorbidities. People with complete data were also more likely to have never smoked, but they did report higher alcohol consumption at the time of diagnosis. Differences in HRQOL between the complete and incomplete data groups were minor, as seen from the comparable medians and upper and lower quartiles, although p-values were small; this reflects small differences in the relative ranking between the groups. Variation in QoL by tumour site There were small differences in global HRQOL and in the functional domains between people with OC, OPC (HPV±) and LC tumours at diagnosis (Table 2), with people with OPC-HPV+ tumours having higher global HRQOL and higher physical functioning scores. 
Those with OPC-HPV+ and LC tumours had higher emotional functioning scores than those with OC or OPC-HPV− tumours. People with OC had lower cognitive function scores than those with OPC-HPV+ or LC tumours, and people with OPC-HPV− had lower social function scores than people with either OC or LC tumours (Table 3). In summary, people with OPC-HPV+ report better HRQOL in all categories than those with tumours in other oral sites. Global quality of life We found a clinically meaningful difference between Global HRQOL scores at diagnosis and survival in our unadjusted model when all tumour sites were analysed together, and this difference remained after adjustment (Table 4). In analyses stratified by tumour site and HPV status this association was similarly noted before adjustment and persisted after adjustment for those with OC. Functional domains A 10-point higher score in all but the cognitive and emotional functional domains at diagnosis was associated with improved survival when all tumour sites were analysed together (Table 4). Within the tumour sub-site analyses, global HRQOL for those with OPC tumours (irrespective of HPV status) was not associated with survival in the fully adjusted model. Discussion In this study we used the EORTC QLQ-C30 questionnaire to examine the association between baseline HRQOL (global score and functional domains) and survival for people from the HN5000 cohort with OC, OPC (HPV±) and LC tumours. After adjusting for demographic and clinical confounders, our findings show that people who report higher (better) levels of HRQOL at diagnosis have higher survival than those with lower self-reported HRQOL, which was associated with worse survival for all tumour sites. This is true for both the global HRQOL score and all functional domains except emotional and cognitive functioning. 
We also report tumour site-specific associations. As in previous studies, our findings show that survival is worse in people who reported low levels of HRQOL (globally and in specific HRQOL domains) when their cancer was diagnosed. Strengths and weaknesses This study has several strengths. First, data from the HN5000 cohort provided a large sample that enabled detailed comparison between people with different types of HNC tumours and those with OPC diagnosed as HPV±. This is an important consideration as OPC is increasing and survival differs depending on whether the tumour is HPV+ or HPV−. Second, our analyses were adjusted for a variety of relevant clinical, individual and lifestyle confounders, facilitating identification of the independent impact of HRQOL and so demonstrating the potential value of baseline HRQOL as a prognostic indicator. Third, our data were collected prospectively and enabled us to report on survival three years after diagnosis. This is an important timescale for people with HNC as most cancer-related deaths are likely to have occurred by this time. Fourth, the EORTC QLQ-C30 is a commonly used and well-validated cancer-related HRQOL questionnaire. Finally, participants in the HN5000 cohort are representative of standard care, reflected through recruitment from a wide range of hospitals, including district general hospitals, specialist centres and teaching hospitals, where they received routine care rather than being recruited into clinical trials. The study has several weaknesses. First, the response rate in the HN5000 study was satisfactory but, of those who were eligible for the study, only 61% provided complete data; we also acknowledge that people with poorer baseline HRQOL and those who were older at diagnosis were less likely to complete baseline questionnaires and are therefore under-represented in this analysis. 
Our complete data set also comprises a larger proportion of participants who are HPV+ than the incomplete data set; however, our findings suggest that, after comprehensive adjustment for relevant factors, HPV status has a negligible effect on the association between HRQOL and survival. Second, all of our hospitals were in the UK and so some may question the generalisability of our findings. However, we believe our data are generalisable to other countries because our demographic data are similar to those of smaller studies reporting HRQOL and survival. Implications Pre-treatment HRQOL is an additional factor that informs risk stratification and case-mix adjustments. With increasing accessibility for patients to complete patient-reported outcomes through electronic platforms, it is feasible to incorporate HRQOL into routine clinical care. These data can assist in patient-clinician decision making, prognostication and post-treatment recovery. By identifying patients with poorer HRQOL at baseline, and being cognisant of the associated clinical characteristics, it is feasible to enhance post-treatment care and to monitor longitudinal HRQOL more closely on an individual patient basis to facilitate early intervention. This could lead not only to better HRQOL but also to improved survival.
|
Range of cross reactivity of anti-GM1 IgG antibody in Guillain-Barré syndrome The cross reactivity of anti-GM1 IgG antibody with various gangliosides and asialo-GM1 in serum samples from 27 patients with Guillain-Barré syndrome was investigated. An enzyme linked immunosorbent assay (ELISA) absorption study showed that anti-GM1 IgG antibody cross reacted with asialo-GM1 in 52% of the patients, GM1b in 41%, GD1b in 22%, and GalNAc-GD1a in 19%, and that it did not cross react with GM2, GT1b, or GQ1b. The antibody that cross reacted with GD1b was associated with a high frequency of cranial nerve involvement and negative Campylobacter jejuni serology. Anti-GM1 IgG antibody has a broad range of cross reactivity, which may contribute to the various clinical variations of Guillain-Barré syndrome.
|
Controllability of the 3D Compressible Euler System The paper is devoted to the controllability problem for 3D compressible Euler system. The control is a finite-dimensional external force acting only on the velocity equation. We show that the velocity and density of the fluid are simultaneously controllable. In particular, the system is approximately controllable and exactly controllable in projections.
|
Design of Solar Heating System Coupled to A Ground Heat Storage And A Geothermal Heat Pump By 2020, all new buildings within the European Union should reach a nearly zero energy level. Their energy needs should be significantly covered by renewable energy sources. As a consequence, it is important to identify which combinations of technologies will be suitable in order to reach such objectives. Climate conditions, final energy and investment costs, technological maturity and stakeholders' service quality are key elements for the final choice. This paper presents some results of a more general survey, the purpose of which is to study different combinations of heating systems integrated in single-family dwellings under Belgian climate conditions. The heating system concerned here is a solar heating system with a ground heat storage and a geothermal heat pump that exhibits energy performance in agreement with such future building standards. This system works with solar collectors which produce heat for space heating (SH) and domestic hot water preparation (DHW). This thermal energy is distributed to the building - using a short-term storage - when it is needed, or stored in the DHW tank. If the production is in excess, the energy is stored in the ground using a borehole heat exchanger. This energy will then be used by a geothermal heat pump when the solar collectors cannot fulfil the building heating needs. A design procedure has been developed and tested on several buildings using simulation tools. The borehole depth ranges from 150 m to 350 m and the solar collector areas from 22 m2 to 46 m2, depending on the energy performance of the building. The procedure permits reaching a renewable coverage ratio higher than 80% (instead of 55% for solar heating coupled with a geothermal heat pump without storage). Results are presented for one building. 
The system performance is compared to that of a reference system (the one most likely to be used in a practical case): an air to water heat pump without any solar assistance.
|
Short range correlations in a one dimensional electron gas We use the SSTL (Singwi, Sjolander, Tosi, Land) approximation to investigate the short-range correlations in a one dimensional electron gas, for the first time. Although SSTL is introduced to better satisfy the compressibility sum rule in three dimensions, the widely used STLS (Singwi, Tosi, Land, Sjolander) approximation turns out to be more successful in the case of the one dimensional electron gas. Introduction The advances in fabrication technologies have made it possible to experimentally fabricate one dimensional electronic structures. This has naturally resulted in an increasing interest in theoretical investigations of such structures. The resulting one dimensional electron gas has also been subject to computational investigation. The extensive investigations of the three dimensional electron gas have clearly shown the importance of the short range correlations at lower densities in determining the pair distribution function g(r) at small r. In the high density limit, the long range correlations can be described well by the random phase approximation (RPA). One way to include the short range correlation effects beyond RPA is to use the powerful approach developed by STLS. It is well known that the STLS approach suffers from a compressibility inconsistency: the compressibility calculated from the small q limit of the dielectric function ε(q) does not agree with that calculated from the ground state energy. There have been several attempts to overcome this inconsistency. SSTL included the effect of screening on the effective interaction potential, as explained in the next section. In this work, we aim to check the performance of the SSTL approach and compare it with the STLS in a one dimensional electron gas. To the best of our knowledge, this is the first application of the SSTL approach in lower dimensions. 
The organization of the paper is as follows: the STLS and SSTL formalisms are given in section 2, and the results and discussion, concentrating on the comparison between STLS and SSTL performance on the compressibility issue, are presented in section 3. Formalism In the mean field approximation, a key quantity for the electron gas is the usual static structure factor S(q), which is related to the density-density response function χ(q, ω) through the fluctuation-dissipation theorem. The density-density response function χ(q, ω) is defined in terms of χ₀(q, ω), the zero-temperature susceptibility of a noninteracting electron gas; in one dimension χ₀ has particle-hole excitation boundaries at ω± = ħq²/(2m⋆) ± ħqk_F/m⋆. The Fermi wave vector k_F is related to the one dimensional electron density n via k_F = πn/2, and the dimensionless electron density parameter is defined as r_s = 1/(2na⋆_B). The self-consistent effective potential ψ(q) is related to the static structure factor through the local field correction G(q), where v(q) is the Fourier transform of the Coulomb interaction between two electrons in the lowest subband of the harmonic confinement in one dimension; v(q) involves ε₀, the background dielectric constant, q, the wave vector along the wire, and F(q), the form factor which takes into account the finite thickness of the wire. The form factor is expressed through K₀(x), the zeroth order modified Bessel function of the second kind, and b, the characteristic length of the harmonic potential; indeed, b is a measure of the effective radius of the quantum wire. In the STLS approximation the local field correction G(q) is an integral over S(q), and the resulting set of equations has to be solved self-consistently for G(q), ψ(q) and S(q). In RPA, G(q) = 0. The SSTL approximation differs from the STLS approximation in that the potential under the integral sign in the equation for G(q)
is screened by the static dielectric function ε(q). This is originally done to better satisfy the compressibility sum rule in the three dimensional electron gas. The ground state energy per particle in the one dimensional electron gas may be written as ε_g = ε_kin + ε_ex + ε_cor, where ε_kin is the kinetic energy per particle, simply ε_kin = π²/(48r_s²) in units of the effective Rydberg Ry⋆, ε_ex is the exchange energy per particle and ε_cor is the correlation energy per particle. The compressibility K may be calculated either by using the ground state energy per particle ε_g or by using the q → 0 limit of the static dielectric function: lim_{q→0} ε(q) = 1 + v(q)n²K. The compressibility sum rule requires that the compressibilities calculated by these two routes are the same. Results and Discussion In Fig. 1 the local field correction G(q) calculated self-consistently using the STLS and SSTL approximations is shown for fixed b = 5a⋆_B and r_s = 5. The most striking difference between the two curves is their large q limits: while the STLS G(q) approaches 1, the SSTL G(q) approaches a limit bigger than 1. It may be noted that this relation is satisfied in our calculations. This, of course, leads to different results when applied to physical properties of the one dimensional electron gas. The different large q behaviour of G(q) is expected to lead to different small r behaviour of the pair correlation function g(r), as shown in Fig. 2. As the radius of the wire gets smaller, the difference between the STLS and SSTL results becomes larger, as may be seen in Fig. 3. As the density becomes smaller (i.e., r_s becomes larger) the SSTL g(r) becomes negative for small r; this is the limit where the correlation effects become important. The SSTL ε(q)⁻¹ is given in Fig. 4; it is rather similar to the STLS or RPA ε(q)⁻¹, with a discontinuity in the derivative at q = 2k_F. The ground state energy per particle for our system is shown in Fig. 5; the STLS and SSTL curves are rather similar. 
The compressibility calculated by using the ground state energy in the three different approaches is presented in Fig. 6 as a function of r_s. The similarity in energies is also reflected in these curves. The negative compressibility values at higher r_s show that the system becomes unstable. The compressibility calculated by using the q → 0 limit of the static dielectric function is shown in Fig. 7. Here, the STLS and SSTL compressibilities calculated by the two routes differ as much as in RPA. This surprising result is a natural consequence of the small q behaviour of the local field correction, which is very small for SSTL at small q values. The compressibility in a one dimensional system was studied previously by Gold and Calmels using a three-sum-rule approach for the local field correction. The compressibility sum rule is better satisfied within this variant of STLS in one dimension, as can be seen from their figure 4; we find the same result in the present full STLS calculation. To conclude, the short-range correlations have been calculated using the SSTL approximation in a one dimensional electron gas, for the first time. The performance of the SSTL approximation is compared with the more widely used STLS approximation. SSTL compares reasonably well with STLS for the pair correlation function, the dielectric function and the ground state energy per particle, but fails totally when the compressibility sum rule is checked.
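The self-consistent cycle described in the Formalism section — update S(q) from G(q) via the fluctuation-dissipation step, then G(q) from S(q) via the STLS integral, and repeat until convergence — can be sketched as a mixed fixed-point iteration. The update functions below are simple smooth stand-ins, not the actual one-dimensional kernels of the paper; only the iteration structure is illustrated.

```python
import numpy as np

q = np.linspace(0.05, 10.0, 128)   # wave-vector grid (arbitrary units)

def update_S(G):
    # Stand-in for the fluctuation-dissipation step S(q) <- chi[G](q, w):
    # an RPA-like softening of the noninteracting structure factor.
    return q / np.sqrt(q**2 + (1.0 - G) / (1.0 + q**2))

def update_G(S):
    # Stand-in for the STLS integral that builds G(q) from S(q).
    return 1.0 - S / (1.0 + q)

def self_consistent(update_S, update_G, G0, tol=1e-10, mix=0.5, maxit=1000):
    """Iterate S <-> G with linear mixing until G stops changing."""
    G = G0
    for it in range(maxit):
        S = update_S(G)
        G_new = update_G(S)
        if np.max(np.abs(G_new - G)) < tol:
            return S, G_new, it
        G = mix * G_new + (1.0 - mix) * G   # mixing stabilises the cycle
    raise RuntimeError("STLS-style cycle did not converge")

S, G, its = self_consistent(update_S, update_G, np.zeros_like(q))
```

In the real calculation the same loop runs with the 1D χ₀, v(q)F(q) and the STLS (or screened SSTL) kernel in place of the stand-ins.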
|
The role of dry mouth in screening sleep apnea Purpose of the study Effective screening questionnaires are essential for early detection of obstructive sleep apnea (OSA). The STOP-Bang questionnaire has high sensitivity but low specificity. Dry mouth is a typical clinical sign of OSA. We hypothesised that adding dry mouth to the STOP-Bang questionnaire would improve its specificity. Study design A survey of the incidence of dry mouth was performed in a general population group and a suspected sleep apnea clinical population group. Patients with suspected OSA were assessed by laboratory polysomnography and the STOP-Bang questionnaire was performed. Adding the option of dry mouth to the OSA screening questionnaire resulted in a new questionnaire, for which the cut-off value, diagnostic efficacy and predictive parameters (sensitivity, specificity, positive predictive value and negative predictive value) were explored. Results In the 912-person general population group, the incidence of dry mouth in the snoring group (54.0%) was much higher than that in the non-snoring group (30.5%) (p<0.05). In 207 patients with suspected OSA, the incidence of dry mouth in the OSA group was much higher than that in the non-OSA group (p<0.05). The sensitivity and specificity of the STOP-Bang questionnaire were 88.8% and 23.7% for identifying OSA, and 92.2% and 23.1% for identifying moderate and severe OSA, respectively. Adding the option of dry mouth (dry mouth every morning) to the STOP-Bang questionnaire resulted in a new questionnaire (the STOP-Bang-dry-mouth questionnaire) with 9 items. Its sensitivity and specificity were 81.70% and 42.10% for identifying OSA, and 89.10% and 42.30% for identifying moderate and severe OSA, respectively. Conclusions The dry mouth symptom correlated with snoring and sleep apnea. The specificity of the STOP-Bang questionnaire can be improved by integrating dry mouth. 
The diagnostic accuracy of the STOP-Bang-dry mouth questionnaire is yet to be further verified in prospective studies.
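The predictive parameters quoted above follow from simple confusion-matrix arithmetic. The sketch below uses made-up counts (not the study's data) purely to show how sensitivity, specificity, PPV and NPV are computed from screening results against the polysomnography reference.

```python
# Hypothetical screening counts against a polysomnography reference standard.
tp, fn = 89, 20      # screen-positive / screen-negative among OSA patients
tn, fp = 42, 58      # screen-negative / screen-positive among non-OSA patients

sensitivity = tp / (tp + fn)   # fraction of true cases the screen catches
specificity = tn / (tn + fp)   # fraction of non-cases the screen clears
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

print(f"sensitivity={sensitivity:.1%} specificity={specificity:.1%} "
      f"PPV={ppv:.1%} NPV={npv:.1%}")
```

Raising specificity (as the dry-mouth item aims to do) means shrinking the false-positive count fp relative to tn.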
|
// fila-lista-encadeada/Main.java
class Main {
public static void main(String[] args) {
Fila f = new Fila();
f.inserir(1);
f.inserir(2);
f.inserir(3);
f.inserir(4);
f.inserir(5);
f.inserir(6);
f.inserir(7);
//f.retirar();
f.print();
}
}
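Main above depends on a Fila class that is not included in the snippet. A minimal linked-list implementation consistent with the calls made (inserir, retirar, print) might look like the following; the method behaviour is assumed from usage, not taken from the original repository.

```java
// Hypothetical linked-list queue matching the usage in Main: inserir enqueues
// at the tail, retirar dequeues from the head, print writes front to back.
public class Fila {
    private static final class No {
        final int valor;
        No prox;
        No(int valor) { this.valor = valor; }
    }

    private No inicio; // head: next element to leave the queue
    private No fim;    // tail: last element inserted

    public void inserir(int valor) {
        No novo = new No(valor);
        if (fim == null) inicio = novo;
        else fim.prox = novo;
        fim = novo;
    }

    public int retirar() {
        if (inicio == null) throw new IllegalStateException("fila vazia");
        int valor = inicio.valor;
        inicio = inicio.prox;
        if (inicio == null) fim = null;
        return valor;
    }

    public void print() {
        for (No n = inicio; n != null; n = n.prox) System.out.print(n.valor + " ");
        System.out.println();
    }

    public static void main(String[] args) {
        Fila f = new Fila();
        f.inserir(1);
        f.inserir(2);
        f.print();                       // prints: 1 2
        System.out.println(f.retirar()); // prints: 1
    }
}
```

With this class in place, the Main program compiles and prints the seven inserted values in insertion order.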
|
The Radical Egalitarian Case for Animal Rights Professor of philosophy at North Carolina State University and a leading animal rights advocate in the United States, Tom Regan is the author of several articles and books on moral philosophy, including The Case for Animal Rights. Regan disagrees with Singer's utilitarian program for animal liberation, for he rejects utilitarianism as lacking a notion of intrinsic worth. Regan's position is that animals and humans all have equal intrinsic value, on which their right to life and concern are based. Regan is revolutionary. He calls not for reform but for the total abolition of the use of animals in science, the total dissolution of the commercial animal agriculture system, and the total elimination of commercial and sport hunting and trapping. "The fundamental wrong is the system that allows us to view animals as our resources... Lab animals are not our tasters; we are not their kings. "
|
// tslint:disable:max-line-length
import React from 'react'
import BaseIcon from '_utils/icon'
import { BaseIconDefaultProps } from '_utils/icon/BaseIcon'
export const InfoIcon = (props: Icon) => (
<BaseIcon {...props}>
<g transform="translate(-1 -1)" fill="none" fillRule="evenodd">
<path d="M0 0h24v24H0z" />
<path
d="M12 22.065C6.441 22.065 1.935 17.56 1.935 12 1.935 6.441 6.44 1.935 12 1.935c5.559 0 10.065 4.506 10.065 10.065 0 5.559-4.506 10.065-10.065 10.065zm0-1a9.065 9.065 0 1 0 0-18.13 9.065 9.065 0 0 0 0 18.13z"
fill={props.iconColor}
fillRule="nonzero"
/>
<path
d="M10.26 11.63a.5.5 0 1 1 0-1H12a.5.5 0 0 1 .5.5v5.218a.5.5 0 1 1-1 0V11.63h-1.24zm0 5.218a.5.5 0 1 1 0-1h3.48a.5.5 0 1 1 0 1h-3.48z"
fill={props.iconColor}
fillRule="nonzero"
/>
<circle fill={props.iconColor} fillRule="nonzero" cx="12" cy="7.652" r="1" />
</g>
</BaseIcon>
)
InfoIcon.defaultProps = BaseIconDefaultProps
export default React.memo(InfoIcon)
|
// core/lib/ui/preload.ts
import { messageUI, internalDeskGap } from './bootstrap';
import asyncNode from './async-node';
export class UIDeskGap {
readonly platform = <'darwin' | 'win32'>internalDeskGap.platform;
readonly ipcRenderer = messageUI;
readonly messageUI = messageUI;
readonly asyncNode = asyncNode;
}
declare global {
interface Window {
deskgap: UIDeskGap;
}
}
window.deskgap = new UIDeskGap();
//Re-expose the messageReceived which is defined in message.ts
(<any>window).deskgap.__messageReceived = internalDeskGap.messageReceived;
|
A = list(map(int, input().split()))
# "bust" if the three numbers sum to 22 or more, otherwise "win"
if sum(A) >= 22:
    print('bust')
else:
    print('win')
|
// test/unit/math/prim/arr/err/check_nonnegative_test.cpp
#include <stan/math/prim/arr.hpp>
#include <gtest/gtest.h>
#include <vector>
#include <limits>
#include <string>
using stan::math::check_nonnegative;
TEST(ErrorHandlingScalar, CheckNonnegativeVectorized) {
int N = 5;
const char* function = "check_nonnegative";
std::vector<double> x(N);
x.assign(N, 0);
EXPECT_NO_THROW(check_nonnegative(function, "x", x))
<< "check_nonnegative(vector) should be true with finite x: " << x[0];
x.assign(N, std::numeric_limits<double>::infinity());
EXPECT_NO_THROW(check_nonnegative(function, "x", x))
<< "check_nonnegative(vector) should be true with x = Inf: " << x[0];
x.assign(N, -0.01);
EXPECT_THROW(check_nonnegative(function, "x", x), std::domain_error)
<< "check_nonnegative should throw exception with x = " << x[0];
x.assign(N, -std::numeric_limits<double>::infinity());
EXPECT_THROW(check_nonnegative(function, "x", x), std::domain_error)
<< "check_nonnegative(vector) should throw an exception with x = -Inf: "
<< x[0];
x.assign(N, std::numeric_limits<double>::quiet_NaN());
EXPECT_THROW(check_nonnegative(function, "x", x), std::domain_error)
<< "check_nonnegative(vector) should throw exception on NaN: " << x[0];
}
TEST(ErrorHandlingScalar, CheckNonnegativeVectorized_one_indexed_message) {
int N = 5;
const char* function = "check_nonnegative";
std::vector<double> x(N);
std::string message;
x.assign(N, 0);
x[2] = -1;
try {
check_nonnegative(function, "x", x);
FAIL() << "should have thrown";
} catch (std::domain_error& e) {
message = e.what();
} catch (...) {
FAIL() << "threw the wrong error";
}
EXPECT_NE(std::string::npos, message.find("[3]"));
}
TEST(ErrorHandlingScalar, CheckNonnegative_nan) {
const char* function = "check_nonnegative";
double nan = std::numeric_limits<double>::quiet_NaN();
std::vector<double> x = {1, 2, 3};
for (size_t i = 0; i < x.size(); i++) {
x[i] = nan;
EXPECT_THROW(check_nonnegative(function, "x", x), std::domain_error);
x[i] = i;
}
}
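The contract these tests pin down can be mirrored in a few lines of Python (illustrative only; the real implementation is Stan's C++):

```python
import math

def check_nonnegative(name: str, xs) -> None:
    # 0 and +inf are accepted; negatives, -inf, and NaN raise. The error
    # message indexes the offending element one-based (the C++ test
    # expects "[3]" for zero-based index 2).
    for i, v in enumerate(xs):
        if math.isnan(v) or v < 0:
            raise ValueError(f"{name}[{i + 1}] must be nonnegative, is {v}")

check_nonnegative("x", [0.0, math.inf])  # passes silently
try:
    check_nonnegative("x", [0.0, 0.0, -1.0])
except ValueError as e:
    assert "[3]" in str(e)
```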
|
// Project/SQKDataKit/GitDataImportOperation.h
//
// GitDataImportOperation.h
// SQKDataKit
//
// Created by <NAME> on 13/12/2013.
// Copyright (c) 2013 3Squared. All rights reserved.
//
#import <SQKDataKit/SQKCoreDataOperation.h>
@interface GitDataImportOperation : SQKCoreDataOperation
@property (nonatomic, readonly) NSDate *startDate;
- (instancetype)initWithContextManager:(SQKContextManager *)contextManager data:(id)data;
- (void)performWorkWithPrivateContext:(NSManagedObjectContext *)context usingData:(id)data;
@end
|
// repository: lokeshjindal15/pd-gem5_transformer
/*
* Copyright (c) 2000-2005 Silicon Graphics, Inc.
* All Rights Reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it would be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write the Free Software Foundation,
* Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "xfs.h"
#include "xfs_sysctl.h"
/*
 * Tunable XFS parameters. xfs_params is required even when CONFIG_SYSCTL=n,
 * since other XFS code uses these values. Times are measured in centisecs (i.e.
* 100ths of a second) with the exception of eofb_timer, which is measured in
* seconds.
*/
xfs_param_t xfs_params = {
/* MIN DFLT MAX */
.sgid_inherit = { 0, 0, 1 },
.symlink_mode = { 0, 0, 1 },
.panic_mask = { 0, 0, 255 },
.error_level = { 0, 3, 11 },
.syncd_timer = { 1*100, 30*100, 7200*100},
.stats_clear = { 0, 0, 1 },
.inherit_sync = { 0, 1, 1 },
.inherit_nodump = { 0, 1, 1 },
.inherit_noatim = { 0, 1, 1 },
.xfs_buf_timer = { 100/2, 1*100, 30*100 },
.xfs_buf_age = { 1*100, 15*100, 7200*100},
.inherit_nosym = { 0, 0, 1 },
.rotorstep = { 1, 1, 255 },
.inherit_nodfrg = { 0, 1, 1 },
.fstrm_timer = { 1, 30*100, 3600*100},
.eofb_timer = { 1, 300, 3600*24},
};
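The {MIN, DFLT, MAX} triples above define per-tunable ranges; most timers are in centiseconds, eofb_timer in seconds. A sketch of that convention (Python, values copied from the table; whether the kernel clamps or rejects out-of-range writes is outside this sketch, clamping is shown for illustration):

```python
# Each tunable carries a (min, default, max) triple.
XFS_PARAMS = {
    "syncd_timer": (1 * 100, 30 * 100, 7200 * 100),  # centiseconds
    "eofb_timer": (1, 300, 3600 * 24),               # seconds
}

def clamp(name: str, value: int) -> int:
    lo, _default, hi = XFS_PARAMS[name]
    return max(lo, min(value, hi))

assert clamp("syncd_timer", 0) == 100        # below MIN -> MIN (1 s)
assert clamp("eofb_timer", 10**9) == 86400   # above MAX -> MAX (1 day)
```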
|
package algorithmia
import (
"testing"
)
func TestGetParentAndBase(t *testing.T) {
assertPB := func(pref, parentHave, parentMust, baseHave, baseMust string) {
if parentHave != parentMust {
t.Fatal(pref+"got parent", `"`+parentHave+`"`, "instead of", `"`+parentMust+`"`)
}
if baseHave != baseMust {
t.Fatal(pref+"got base", `"`+baseHave+`"`, "instead of", `"`+baseMust+`"`)
}
}
p, b, err := getParentAndBase("a/b/c")
if err != nil {
t.Fatal(err)
}
assertPB("1:", p, "a/b", b, "c")
p, b, err = getParentAndBase("data://foo/bar")
if err != nil {
t.Fatal(err)
}
assertPB("2:", p, "data://foo", b, "bar")
p, b, err = getParentAndBase("data:///foo")
if err != nil {
t.Fatal(err)
}
assertPB("3:", p, "data:///", b, "foo")
p, b, err = getParentAndBase("data://foo")
if err != nil {
t.Fatal(err)
}
assertPB("4:", p, "data://", b, "foo")
if _, _, err := getParentAndBase("/"); err == nil {
t.Fatal("error expected")
}
if _, _, err := getParentAndBase(""); err == nil {
t.Fatal("error expected")
}
if _, _, err := getParentAndBase("a/"); err == nil {
t.Fatal("error expected")
}
if p := PathJoin("/a/b/c/", "d"); p != "/a/b/c/d" {
t.Fatal(`"/a/b/c/d" expected, got`, p)
}
if p := PathJoin("/a/b/c", "d"); p != "/a/b/c/d" {
t.Fatal(`"/a/b/c/d" expected, got`, p)
}
if p := PathJoin("/a//b/c//", "/d"); p != "/a//b/c///d" {
t.Fatal(`"/a//b/c///d" expected, got`, p)
}
}
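The PathJoin cases at the end pin down a simple rule: exactly one separator is inserted unless the first part already ends with '/', and no slash normalization happens. In Python terms (a sketch of the semantics the tests assert, not the library's code):

```python
def path_join(parent: str, base: str) -> str:
    # Insert exactly one '/' unless the parent already ends with one;
    # existing double slashes are left untouched.
    return parent + base if parent.endswith("/") else parent + "/" + base

assert path_join("/a/b/c/", "d") == "/a/b/c/d"
assert path_join("/a/b/c", "d") == "/a/b/c/d"
assert path_join("/a//b/c//", "/d") == "/a//b/c///d"
```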
|
Single coronary artery from the right coronary sinus with proximal origin of the left anterior descending coronary artery and left circumflex as distal continuation of the right coronary artery: a rare variant. A single coronary artery is a rare coronary anomaly. A 68-year-old male underwent coronary angiography for a recent inferior wall myocardial infarction. It revealed a common coronary trunk arising from the right sinus of Valsalva and bifurcating into the right coronary artery (RCA) and the left anterior descending coronary artery. The RCA, after its usual distribution in the right atrioventricular groove, continued as the left circumflex artery in the left atrioventricular groove. There were significant stenoses in the mid and distal RCA, which were treated percutaneously.
|
Provider-initiated Human Immunodeficiency Virus testing and counseling in children: our experience at the Federal Teaching Hospital Abakaliki, South Eastern Nigeria. Background: About half of human immunodeficiency virus (HIV) infected children in Africa die before their second birthday if not treated. The majority of HIV-infected children who die have never been diagnosed as HIV infected, or present too late to the health care facility with advanced disease. To avert this, interventions to ensure early diagnosis of HIV infection in children need to be expanded. Aims and Objectives: The present study was aimed at highlighting the usefulness of provider-initiated testing and counseling (PITC) for HIV in determining the prevalence of HIV exposure/infection among children (aged 0-18 years) who attended the children's out-patient clinic of Federal Medical Center Abakaliki (now Federal Teaching Hospital Abakaliki). Materials and Methods: This retrospective study involved all the children who presented at the children's out-patient clinic from March 2010 to February 2013, regardless of their presenting complaints, and who were referred for HIV antibody testing using the opt-out (provider-initiated HIV testing and counseling) model. Rapid tests were used to test for HIV. Results: A total of 13,652 children were tested within the 3-year period, out of which 279 (2%) tested positive for HIV. Among those who tested positive on the rapid test, 147 (52.7%) were males and 132 (47.3%) were females. Children <18 months of age (which, understandably, included both exposed and infected children) had the highest prevalence rate (1.4%), while children >18 months (infected) had a prevalence rate of 0.6%. There was a statistically significant association between age and prevalence of HIV in the children (P < 0.001).
Conclusion: Findings from this research suggest that PITC can be used to identify many previously undiagnosed HIV-infected children, many of whom may be asymptomatic and are at high risk of mortality.
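The headline prevalence and the sex split follow directly from the counts reported above:

```python
tested, positive = 13_652, 279
prevalence = positive / tested * 100  # percent of all children tested
assert round(prevalence, 1) == 2.0    # the reported 2%

boys, girls = 147, 132
assert boys + girls == positive
assert round(boys / positive * 100, 1) == 52.7
assert round(girls / positive * 100, 1) == 47.3
```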
|
Elevated urinary IL-18 levels at the time of ICU admission predict adverse clinical outcomes. BACKGROUND AND OBJECTIVES: Urine IL-18 (uIL-18) has demonstrated moderate capacity to predict acute kidney injury (AKI) and adverse outcomes in defined settings. Its ability to predict AKI and provide prognostic information in broadly selected, critically ill adults remains unknown. DESIGN, SETTING, PARTICIPANTS, & MEASUREMENTS: The study prospectively evaluated the capacity of uIL-18 measured within 24 hours of intensive care unit (ICU) admission to predict AKI, death, and receipt of acute dialysis in a large mixed-adult ICU population. RESULTS: Of 451 patients, 86 developed AKI within 48 hours of enrollment and had higher median uIL-18 levels [426 (interquartile range: 152 to 1183) pg/mg creatinine] compared with those without AKI. The area under the receiver operating characteristic curve for uIL-18 predicting subsequent AKI within 24 hours was 0.62 (95% CI: 0.54 to 0.69) and improved modestly to 0.67 (95% CI: 0.53 to 0.81) in patients whose enrollment eGFR was ≥75 ml/min per 1.73 m². The highest median uIL-18 levels were observed in patients with sepsis at enrollment and in those receiving acute dialysis or dying within 28 days of ascertainment. After adjustment for a priori selected clinical predictors, uIL-18 remained independently predictive of the composite outcome of death or acute dialysis within 28 days of ascertainment (odds ratio, 1.86). CONCLUSIONS: uIL-18 did not reliably predict AKI development, but did predict poor clinical outcomes in a broadly selected, critically ill adult population.
|
// This file is generated by rust-protobuf 2.25.2. Do not edit
// @generated
// https://github.com/rust-lang/rust-clippy/issues/702
#![allow(unknown_lints)]
#![allow(clippy::all)]
#![allow(unused_attributes)]
#![cfg_attr(rustfmt, rustfmt::skip)]
#![allow(box_pointers)]
#![allow(dead_code)]
#![allow(missing_docs)]
#![allow(non_camel_case_types)]
#![allow(non_snake_case)]
#![allow(non_upper_case_globals)]
#![allow(trivial_casts)]
#![allow(unused_imports)]
#![allow(unused_results)]
//! Generated file from `tendermint/version/types.proto`
/// Generated files are compatible only with the same version
/// of protobuf runtime.
// const _PROTOBUF_VERSION_CHECK: () = ::protobuf::VERSION_2_25_2;
#[derive(PartialEq,Clone,Default)]
pub struct App {
// message fields
pub protocol: u64,
pub software: ::std::string::String,
// special fields
pub unknown_fields: ::protobuf::UnknownFields,
pub cached_size: ::protobuf::CachedSize,
}
impl<'a> ::std::default::Default for &'a App {
fn default() -> &'a App {
<App as ::protobuf::Message>::default_instance()
}
}
impl App {
pub fn new() -> App {
::std::default::Default::default()
}
// uint64 protocol = 1;
pub fn get_protocol(&self) -> u64 {
self.protocol
}
pub fn clear_protocol(&mut self) {
self.protocol = 0;
}
// Param is passed by value, moved
pub fn set_protocol(&mut self, v: u64) {
self.protocol = v;
}
// string software = 2;
pub fn get_software(&self) -> &str {
&self.software
}
pub fn clear_software(&mut self) {
self.software.clear();
}
// Param is passed by value, moved
pub fn set_software(&mut self, v: ::std::string::String) {
self.software = v;
}
// Mutable pointer to the field.
// If field is not initialized, it is initialized with default value first.
pub fn mut_software(&mut self) -> &mut ::std::string::String {
&mut self.software
}
// Take field
pub fn take_software(&mut self) -> ::std::string::String {
::std::mem::replace(&mut self.software, ::std::string::String::new())
}
}
impl ::protobuf::Message for App {
fn is_initialized(&self) -> bool {
true
}
fn merge_from(&mut self, is: &mut ::protobuf::CodedInputStream<'_>) -> ::protobuf::ProtobufResult<()> {
while !is.eof()? {
let (field_number, wire_type) = is.read_tag_unpack()?;
match field_number {
1 => {
if wire_type != ::protobuf::wire_format::WireTypeVarint {
return ::std::result::Result::Err(::protobuf::rt::unexpected_wire_type(wire_type));
}
let tmp = is.read_uint64()?;
self.protocol = tmp;
},
2 => {
::protobuf::rt::read_singular_proto3_string_into(wire_type, is, &mut self.software)?;
},
_ => {
::protobuf::rt::read_unknown_or_skip_group(field_number, wire_type, is, self.mut_unknown_fields())?;
},
};
}
::std::result::Result::Ok(())
}
// Compute sizes of nested messages
#[allow(unused_variables)]
fn compute_size(&self) -> u32 {
let mut my_size = 0;
if self.protocol != 0 {
my_size += ::protobuf::rt::value_size(1, self.protocol, ::protobuf::wire_format::WireTypeVarint);
}
if !self.software.is_empty() {
my_size += ::protobuf::rt::string_size(2, &self.software);
}
my_size += ::protobuf::rt::unknown_fields_size(self.get_unknown_fields());
self.cached_size.set(my_size);
my_size
}
fn write_to_with_cached_sizes(&self, os: &mut ::protobuf::CodedOutputStream<'_>) -> ::protobuf::ProtobufResult<()> {
if self.protocol != 0 {
os.write_uint64(1, self.protocol)?;
}
if !self.software.is_empty() {
os.write_string(2, &self.software)?;
}
os.write_unknown_fields(self.get_unknown_fields())?;
::std::result::Result::Ok(())
}
fn get_cached_size(&self) -> u32 {
self.cached_size.get()
}
fn get_unknown_fields(&self) -> &::protobuf::UnknownFields {
&self.unknown_fields
}
fn mut_unknown_fields(&mut self) -> &mut ::protobuf::UnknownFields {
&mut self.unknown_fields
}
fn as_any(&self) -> &dyn (::std::any::Any) {
self as &dyn (::std::any::Any)
}
fn as_any_mut(&mut self) -> &mut dyn (::std::any::Any) {
self as &mut dyn (::std::any::Any)
}
fn into_any(self: ::std::boxed::Box<Self>) -> ::std::boxed::Box<dyn (::std::any::Any)> {
self
}
fn descriptor(&self) -> &'static ::protobuf::reflect::MessageDescriptor {
Self::descriptor_static()
}
fn new() -> App {
App::new()
}
fn descriptor_static() -> &'static ::protobuf::reflect::MessageDescriptor {
static descriptor: ::protobuf::rt::LazyV2<::protobuf::reflect::MessageDescriptor> = ::protobuf::rt::LazyV2::INIT;
descriptor.get(|| {
let mut fields = ::std::vec::Vec::new();
fields.push(::protobuf::reflect::accessor::make_simple_field_accessor::<_, ::protobuf::types::ProtobufTypeUint64>(
"protocol",
|m: &App| { &m.protocol },
|m: &mut App| { &mut m.protocol },
));
fields.push(::protobuf::reflect::accessor::make_simple_field_accessor::<_, ::protobuf::types::ProtobufTypeString>(
"software",
|m: &App| { &m.software },
|m: &mut App| { &mut m.software },
));
::protobuf::reflect::MessageDescriptor::new_pb_name::<App>(
"App",
fields,
file_descriptor_proto()
)
})
}
fn default_instance() -> &'static App {
static instance: ::protobuf::rt::LazyV2<App> = ::protobuf::rt::LazyV2::INIT;
instance.get(App::new)
}
}
impl ::protobuf::Clear for App {
fn clear(&mut self) {
self.protocol = 0;
self.software.clear();
self.unknown_fields.clear();
}
}
impl ::std::fmt::Debug for App {
fn fmt(&self, f: &mut ::std::fmt::Formatter<'_>) -> ::std::fmt::Result {
::protobuf::text_format::fmt(self, f)
}
}
impl ::protobuf::reflect::ProtobufValue for App {
fn as_ref(&self) -> ::protobuf::reflect::ReflectValueRef {
::protobuf::reflect::ReflectValueRef::Message(self)
}
}
#[derive(PartialEq,Clone,Default)]
pub struct Consensus {
// message fields
pub block: u64,
pub app: u64,
// special fields
pub unknown_fields: ::protobuf::UnknownFields,
pub cached_size: ::protobuf::CachedSize,
}
impl<'a> ::std::default::Default for &'a Consensus {
fn default() -> &'a Consensus {
<Consensus as ::protobuf::Message>::default_instance()
}
}
impl Consensus {
pub fn new() -> Consensus {
::std::default::Default::default()
}
// uint64 block = 1;
pub fn get_block(&self) -> u64 {
self.block
}
pub fn clear_block(&mut self) {
self.block = 0;
}
// Param is passed by value, moved
pub fn set_block(&mut self, v: u64) {
self.block = v;
}
// uint64 app = 2;
pub fn get_app(&self) -> u64 {
self.app
}
pub fn clear_app(&mut self) {
self.app = 0;
}
// Param is passed by value, moved
pub fn set_app(&mut self, v: u64) {
self.app = v;
}
}
impl ::protobuf::Message for Consensus {
fn is_initialized(&self) -> bool {
true
}
fn merge_from(&mut self, is: &mut ::protobuf::CodedInputStream<'_>) -> ::protobuf::ProtobufResult<()> {
while !is.eof()? {
let (field_number, wire_type) = is.read_tag_unpack()?;
match field_number {
1 => {
if wire_type != ::protobuf::wire_format::WireTypeVarint {
return ::std::result::Result::Err(::protobuf::rt::unexpected_wire_type(wire_type));
}
let tmp = is.read_uint64()?;
self.block = tmp;
},
2 => {
if wire_type != ::protobuf::wire_format::WireTypeVarint {
return ::std::result::Result::Err(::protobuf::rt::unexpected_wire_type(wire_type));
}
let tmp = is.read_uint64()?;
self.app = tmp;
},
_ => {
::protobuf::rt::read_unknown_or_skip_group(field_number, wire_type, is, self.mut_unknown_fields())?;
},
};
}
::std::result::Result::Ok(())
}
// Compute sizes of nested messages
#[allow(unused_variables)]
fn compute_size(&self) -> u32 {
let mut my_size = 0;
if self.block != 0 {
my_size += ::protobuf::rt::value_size(1, self.block, ::protobuf::wire_format::WireTypeVarint);
}
if self.app != 0 {
my_size += ::protobuf::rt::value_size(2, self.app, ::protobuf::wire_format::WireTypeVarint);
}
my_size += ::protobuf::rt::unknown_fields_size(self.get_unknown_fields());
self.cached_size.set(my_size);
my_size
}
fn write_to_with_cached_sizes(&self, os: &mut ::protobuf::CodedOutputStream<'_>) -> ::protobuf::ProtobufResult<()> {
if self.block != 0 {
os.write_uint64(1, self.block)?;
}
if self.app != 0 {
os.write_uint64(2, self.app)?;
}
os.write_unknown_fields(self.get_unknown_fields())?;
::std::result::Result::Ok(())
}
fn get_cached_size(&self) -> u32 {
self.cached_size.get()
}
fn get_unknown_fields(&self) -> &::protobuf::UnknownFields {
&self.unknown_fields
}
fn mut_unknown_fields(&mut self) -> &mut ::protobuf::UnknownFields {
&mut self.unknown_fields
}
fn as_any(&self) -> &dyn (::std::any::Any) {
self as &dyn (::std::any::Any)
}
fn as_any_mut(&mut self) -> &mut dyn (::std::any::Any) {
self as &mut dyn (::std::any::Any)
}
fn into_any(self: ::std::boxed::Box<Self>) -> ::std::boxed::Box<dyn (::std::any::Any)> {
self
}
fn descriptor(&self) -> &'static ::protobuf::reflect::MessageDescriptor {
Self::descriptor_static()
}
fn new() -> Consensus {
Consensus::new()
}
fn descriptor_static() -> &'static ::protobuf::reflect::MessageDescriptor {
static descriptor: ::protobuf::rt::LazyV2<::protobuf::reflect::MessageDescriptor> = ::protobuf::rt::LazyV2::INIT;
descriptor.get(|| {
let mut fields = ::std::vec::Vec::new();
fields.push(::protobuf::reflect::accessor::make_simple_field_accessor::<_, ::protobuf::types::ProtobufTypeUint64>(
"block",
|m: &Consensus| { &m.block },
|m: &mut Consensus| { &mut m.block },
));
fields.push(::protobuf::reflect::accessor::make_simple_field_accessor::<_, ::protobuf::types::ProtobufTypeUint64>(
"app",
|m: &Consensus| { &m.app },
|m: &mut Consensus| { &mut m.app },
));
::protobuf::reflect::MessageDescriptor::new_pb_name::<Consensus>(
"Consensus",
fields,
file_descriptor_proto()
)
})
}
fn default_instance() -> &'static Consensus {
static instance: ::protobuf::rt::LazyV2<Consensus> = ::protobuf::rt::LazyV2::INIT;
instance.get(Consensus::new)
}
}
impl ::protobuf::Clear for Consensus {
fn clear(&mut self) {
self.block = 0;
self.app = 0;
self.unknown_fields.clear();
}
}
impl ::std::fmt::Debug for Consensus {
fn fmt(&self, f: &mut ::std::fmt::Formatter<'_>) -> ::std::fmt::Result {
::protobuf::text_format::fmt(self, f)
}
}
impl ::protobuf::reflect::ProtobufValue for Consensus {
fn as_ref(&self) -> ::protobuf::reflect::ReflectValueRef {
::protobuf::reflect::ReflectValueRef::Message(self)
}
}
static file_descriptor_proto_data: &'static [u8] = b"\
\n\x1etendermint/version/types.proto\x12\x12tendermint.version\x1a\x14go\
goproto/gogo.proto\"=\n\x03App\x12\x1a\n\x08protocol\x18\x01\x20\x01(\
\x04R\x08protocol\x12\x1a\n\x08software\x18\x02\x20\x01(\tR\x08software\
\"9\n\tConsensus\x12\x14\n\x05block\x18\x01\x20\x01(\x04R\x05block\x12\
\x10\n\x03app\x18\x02\x20\x01(\x04R\x03app:\x04\xe8\xa0\x1f\x01B;Z9githu\
b.com/tendermint/tendermint/proto/tendermint/versionb\x06proto3\
";
static file_descriptor_proto_lazy: ::protobuf::rt::LazyV2<::protobuf::descriptor::FileDescriptorProto> = ::protobuf::rt::LazyV2::INIT;
fn parse_descriptor_proto() -> ::protobuf::descriptor::FileDescriptorProto {
::protobuf::Message::parse_from_bytes(file_descriptor_proto_data).unwrap()
}
pub fn file_descriptor_proto() -> &'static ::protobuf::descriptor::FileDescriptorProto {
file_descriptor_proto_lazy.get(|| {
parse_descriptor_proto()
})
}
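The proto3 wire layout that the generated merge_from/write_to pair implements for `App` can be sketched by hand (illustrative Python, not the rust-protobuf API): field 1 is a varint, field 2 is length-delimited.

```python
def encode_varint(n: int) -> bytes:
    # Base-128 varint: low 7 bits per byte, MSB set while more bytes follow.
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def encode_app(protocol: int, software: str) -> bytes:
    # Field 1 (protocol): tag 0x08 = (1 << 3) | wire type 0 (varint).
    # Field 2 (software): tag 0x12 = (2 << 3) | wire type 2 (length-delimited).
    # Proto3 skips fields at their default values, as write_to does above.
    out = b""
    if protocol:
        out += b"\x08" + encode_varint(protocol)
    if software:
        data = software.encode("utf-8")
        out += b"\x12" + encode_varint(len(data)) + data
    return out

assert encode_app(1, "v1") == b"\x08\x01\x12\x02v1"
assert encode_varint(300) == b"\xac\x02"
```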
|
Functions of ocular surface mucins in health and disease Purpose of reviewThe purpose of the present review is to describe new concepts on the role of mucins in the protection of corneal and conjunctival epithelia and to identify alterations of mucins in ocular surface diseases. Recent findingsNew evidence indicates that gel-forming and cell surface-associated mucins contribute differently to the protection of the ocular surface against allergens, pathogens, extracellular molecules, abrasive stress, and drying. SummaryMucins are high-molecular weight glycoproteins characterized by their extensive O-glycosylation. Major mucins expressed by the ocular surface epithelia include cell surface-associated mucins MUC1, MUC4, MUC16, and the gel-forming mucin MUC5AC. Recent advances using functional assays have allowed the examination of their roles in the protection of corneal and conjunctival epithelia. Alterations in mucin and mucin O-glycan biosynthesis in ocular surface disorders, including allergy, nonautoimmune dry eye, autoimmune dry eye, and infection, are presented.
|
// repository: LyuWeihua/ReadyWork
/**
*
* Original work Copyright 2017-2019 CodingApi
* Modified work Copyright (c) 2020 <NAME> [ready.work]
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
package work.ready.cloud.transaction.core.message.service;
import work.ready.cloud.cluster.Cloud;
import work.ready.cloud.transaction.core.corelog.aspect.AspectLog;
import work.ready.cloud.transaction.core.corelog.aspect.AspectLogHelper;
import work.ready.cloud.transaction.common.exception.TxCommunicationException;
import work.ready.cloud.transaction.core.interceptor.TransactionInfo;
import work.ready.cloud.transaction.common.serializer.SerializerContext;
import work.ready.cloud.transaction.core.message.CmdExecuteService;
import work.ready.cloud.transaction.core.message.TransactionCmd;
import work.ready.cloud.transaction.common.message.params.GetAspectLogParams;
import java.io.Serializable;
import java.util.Objects;
public class GetAspectLogService implements CmdExecuteService {
private final AspectLogHelper aspectLogHelper;
public GetAspectLogService() {
this.aspectLogHelper = Cloud.getTransactionManager().getAspectLogHelper();
}
@Override
public Serializable execute(TransactionCmd transactionCmd) throws TxCommunicationException {
try {
GetAspectLogParams getAspectLogParams = transactionCmd.getMessage().loadBean(GetAspectLogParams.class);
AspectLog txLog = aspectLogHelper.getTxLog(getAspectLogParams.getGroupId(), getAspectLogParams.getUnitId());
if (Objects.isNull(txLog)) {
throw new TxCommunicationException("aspect log does not exist.");
}
TransactionInfo transactionInfo = (TransactionInfo)SerializerContext.getInstance().deserialize(txLog.getBytes());
return transactionInfo;
} catch (Exception e) {
throw new TxCommunicationException(e);
}
}
}
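The execute() flow above is: load the request parameters, look up the logged transaction record, deserialize it, and wrap any failure in a domain exception. A minimal sketch of that pattern (names here are illustrative, not the ReadyWork API):

```python
import pickle

class TxCommunicationException(Exception):
    pass

# Stand-in for the aspect-log store, keyed by (groupId, unitId).
LOG = {("g1", "u1"): pickle.dumps({"unit": "u1", "state": "prepared"})}

def get_aspect_log(group_id: str, unit_id: str) -> dict:
    raw = LOG.get((group_id, unit_id))
    if raw is None:
        raise TxCommunicationException("aspect log does not exist.")
    try:
        return pickle.loads(raw)
    except Exception as e:
        raise TxCommunicationException(e)

assert get_aspect_log("g1", "u1")["state"] == "prepared"
```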
|
# RadauD.py
from __future__ import division, print_function, absolute_import
# import sys
# plist = sys.path
# ll = ['C:/Users/tjcze/Desktop/PythonProjects/butcher', 'C:/Users/tjcze/Desktop/PythonProjects/decimalfunctions']
# for i in ll:
# if i in plist:
# pass
# else:
# sys.path.append(r'{}'.format(i))
# sys.path.append('C:/Users/tjcze/Desktop/PythonProjects/butcher')
# sys.path.append('C:/Users/tjcze/Desktop/PythonProjects/decimalfunctions')
exec(open(r'C:\Users\tjcze\Desktop\PythonProjects\RadauD\__init__.py').read())
import butchertableau as bt
# import matrixfunctions
import decimalfunctions as df
import numpy as np
# from scipy.linalg import lu_factor, lu_solve
from scipy.sparse import csc_matrix, issparse, eye
# from scipy.sparse.linalg import splu
from scipy.optimize._numdiff import group_columns
from common import (validate_max_step, validate_tol, select_initial_step, norm, num_jac, EPS,
warn_extraneous, validate_first_step, OdeSolution)
from base import OdeSolver, DenseOutput
# from itertools import groupby
# from warnings import warn
import decimal
import mpmath as mps
from scipy.integrate import solve_ivp
from scipy.optimize import OptimizeResult
import inspect
# from mpmath import *
flatten = lambda l: [item for sublist in l for item in sublist]
PREC = [int(75)]
mps.dps = PREC[0]
mps.prec = PREC[0]
# prec = 25
S6 = 6 ** 0.5
Eb = [-13 - 7 * S6, -13 + 7 * S6, -1]
E = [ei/3.0 for ei in Eb]
Ef = df.DF(E, PREC[0]).decfuncl(E)
order = 5
# X = bt.butcher(order, 25)
# A, B, C = X.radau()
# Ainv = X.inv(A)
# T, TI = X.Tmat(Ainv)
# P = X.P(C)
# D = df.DF(A, 25)
# EIGS = X.eigs(Ainv)
MU_REAL = 3.637834252744502
MU_COMPLEX = 2.6810828736277488 - 3.050430199247423j
# print(T)
# print(TI)
# print(P)
# print(A)
# print(Ainv)
# print(B)
# print(C)
A = [[0.19681547722366058, -0.06553542585019845, 0.023770974348220134],
[0.39442431473908746, 0.29207341166522777, -0.04154875212599763],
[0.376403062700466, 0.512485826188424, 0.111111111111110]]
AI = [[3.22474487139158, 1.16784008469041, -0.253197264742181],
[-3.56784008469039, 0.775255128608406, 1.05319726474218],
[5.53197264742194, -7.53197264742187, 5.00000000000001]]
B = [0.376403062700466, 0.512485826188424, 0.111111111111110]
C = [0.15505102572168228, 0.6449489742783174, 1.0]
T = [[0.0944387624889745, -0.141255295020953, 0.0300291941051473],
[0.250213122965334, 0.204129352293798, -0.38294211275726],
[1.0, 1.0, 0.0]]
TD = df.DF(T, PREC[0]).decfunc()
TI = [[4.178718591551935, 0.32768282076106514, 0.5233764454994487],
[-4.178718591551935, -0.3276828207610649, 0.476623554500551],
[0.5028726349458223, -2.571926949855616, 0.5960392048282263]]
TID = df.DF(TI, PREC[0]).decfunc()
# print(TID)
P = [[10.048809399827414, -25.62959144707665, 15.580782047249254],
[-1.38214273316075, 10.29625811374331, -8.914115380582556],
[0.3333333333333328, -2.6666666666666616, 3.333333333333328]]
PD = df.DF(P, PREC[0]).decfunc()
# print(PD)
NEWTON_MAXITER = 6 # Maximum number of Newton iterations.
MIN_FACTOR = 0.2 # Minimum allowed decrease in a step size.
MAX_FACTOR = 10 # Maximum allowed increase in a step size.
TI_REAL = TI[0]
TI_COMPLEX = [TI[1][i] + 1j * TI[2][i] for i in range(len(T[0]))]
# print(TI_COMPLEX)
# print(D.matrix_multiplyd(TI, AI, prec))
def CP(L):
# print(len(L), len(L[0]))
total = []
current = 1
for p in flatten(L):
current *= p
total.append(current)
final = total[-1]
return final
def Tile(l, x, y=None):
til = []
tf = []
for i in range(x):
for li in l:
til.append(li)
if y is None or y == 0 or y == 0.0:
return til
else:
for yi in range(y):
tf.append(til)
return tf
def dm(v, pr):
De = decimal.Decimal
val_i = De(v)
aa = '{}:.{}f{}'.format('{', pr,'}')
aa1 = '{}'.format(aa)
aa2 = str(aa1)
aa3 = str(aa2.format(val_i))
aa4 = De(aa3)
return aa4
class ComplexDecimal(object):
def __init__(self, value, prec):
self.real = dm(value.real, prec)
self.imag = dm(value.imag, prec)
self.vi = dm(value.imag, prec)
self.prec = prec
if self.vi >= 0 or self.vi >= 0.0:
self.sign = '+'
elif self.vi <= 0 or self.vi <= 0.0:
self.sign = '-'
def __add__(self, other):
    # ComplexDecimal.__init__ takes (value, prec); adding signed imaginary
    # parts directly makes the sign-branch above unnecessary.
    result = ComplexDecimal(self, self.prec)
    result.real += dm(other.real, self.prec)
    result.imag += dm(other.imag, self.prec)
    return result
__radd__ = __add__
# vi = value.imag
# if vi >= 0 or vi >= 0.0:
def __str__(self):
return f'({self.real} {self.sign} {abs(self.imag)}j)'
# elif vi <= 0 or vi <= 0.0:
# def __str__(self):
# return f'({str(self.real)} - {str(self.imag)}j)'
# def __str__(self):
# return f'({str(self.real)} + {str(self.imag)}j)'
# def __strs__(self):
# return f'({str(self.real)} - {str(self.imag)}j)'
def sqrt(self):
result = ComplexDecimal(self, self.prec)
if self.imag:
raise NotImplementedError
elif self.real > 0:
result.real = self.real.sqrt()
return result
else:
result.imag = (-self.real).sqrt()
result.real = dm(0, self.prec)
return result
def real(self):
result = ComplexDecimal(self, self.prec)
result.real = self.real
return result
def imag(self):
result = ComplexDecimal(self, self.prec)
result.imag = self.imag
return result
def DCM(MUD, MUD2, prec):
M1R = ComplexDecimal(MUD, prec).real
M1C = ComplexDecimal(MUD, prec).imag
M2R = ComplexDecimal(MUD2, prec).real
M2C = ComplexDecimal(MUD2, prec).imag
vv = M1R*M2R
vv2 = M1C*M2R
vv3 = M1R*M2C
vv4 = M1C*M2C
vvi = vv - vv4
vvj = vv3 + vv2
return ComplexDecimal(complex(vvi, vvj), prec)
def DCA(MUD, MUD2, prec):
M1R = ComplexDecimal(MUD, prec).real
M1C = ComplexDecimal(MUD, prec).imag
M2R = ComplexDecimal(MUD2, prec).real
M2C = ComplexDecimal(MUD2, prec).imag
vv = M1R + M2R
vv2 = M1C + M2C
return ComplexDecimal(complex(vv, vv2), prec)
def DCS(MUD, MUD2, prec):
M1R = ComplexDecimal(MUD, prec).real
M1C = ComplexDecimal(MUD, prec).imag
M2R = ComplexDecimal(MUD2, prec).real
M2C = ComplexDecimal(MUD2, prec).imag
vv = M1R - M2R
vv2 = M1C - M2C
return ComplexDecimal(complex(vv, vv2), prec)
def DCD(MUD, MUD2, prec):
M3R = ComplexDecimal(MUD2, prec).real
M3C = -ComplexDecimal(MUD2, prec).imag
bot = DCM(MUD2, complex(M3R, M3C), prec).real
top = DCM(MUD, complex(M3R, M3C), prec)
TR = top.real / bot
TC = top.imag / bot
return ComplexDecimal(complex(TR, TC), prec)
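As a quick sanity check on the component-wise arithmetic above, the product identity that DCM implements can be exercised directly with the decimal module (self-contained sketch, independent of the classes in this file):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50

def cmul(ar, ai, br, bi):
    # (a+bi)(c+di) = (ac - bd) + (ad + bc)i, each part kept as a Decimal.
    ar, ai, br, bi = (Decimal(v) for v in (ar, ai, br, bi))
    return ar * br - ai * bi, ar * bi + ai * br

re_, im_ = cmul("1.5", "3.0", "1.4", "5.0")
assert re_ == Decimal("-12.9") and im_ == Decimal("11.7")
```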
def normd(A):
rows = len(A)
cols = len(A[0])
vals = []
for i in range(rows):
for j in range(cols):
vi = dm(abs(A[i][j]), PREC[0])**dm(2, PREC[0])
vals.append(vi)
vf = dm(sum(vals), PREC[0])**dm(1/2, PREC[0])
return vf
# a = np.arange(9) - 4
# print(normd([[-4, -3, -2], [-1, 0, 1], [2, 3, 4]]))
# print(7.745966692414834)
v1 = ComplexDecimal(TI_COMPLEX[0], PREC[0])
v2 = ComplexDecimal(TI_COMPLEX[1], PREC[0])
v3 = ComplexDecimal(TI_COMPLEX[2], PREC[0])
TI_COMPLEXb = [v1, v2, v3]
# print(TI_COMPLEXb[0])
# for i in TI_COMPLEX:
# vv = ComplexDecimal(i, 25)
# print(vv)
# TI_COMPLEXb.append(vv)
# print(v1, v2, v3)
MU_REALd = dm(3.637834252744502, PREC[0])
MU_COMPLEXd = ComplexDecimal(2.6810828736277488 - 3.050430199247423j, PREC[0])
# print(complex(2.6810828736277488 , -3.050430199247423) / complex(1.4 , -5.0))
# DC2 = DCD(complex(2.6810828736277488 , -3.050430199247423), complex(1.4 , -5.0), 25)
# print(DC2)
# MUD = complex(1.5, 3.0)
# MUD2 = complex(1.4, 5.0)
# print(MUD2)
# vv = MUD.real*MUD2.real
# vv2 = MUD.imag*MUD2.real
# vv3 = MUD.real*MUD2.imag
# vv4 = MUD.imag*MUD2.imag
# vvi = vv - vv4
# vvj = vv3+vv2
# print(vvi, vvj)
def solve_collocation_system(fun, t, y, h, Z0, scale, tol,
LU_real, LU_complex, solve_lu, prec=9):
"""Solve the collocation system.
Parameters
----------
fun : callable
Right-hand side of the system.
t : float
Current time.
y : ndarray, shape (n,)
Current state.
h : float
Step to try.
Z0 : ndarray, shape (3, n)
Initial guess for the solution. It determines new values of `y` at
``t + h * C`` as ``y + Z0``, where ``C`` is the Radau method constants.
scale : float
Problem tolerance scale, i.e. ``rtol * abs(y) + atol``.
tol : float
Tolerance to which solve the system. This value is compared with
the normalized by `scale` error.
LU_real, LU_complex
LU decompositions of the system Jacobians.
solve_lu : callable
Callable which solves a linear system given a LU decomposition. The
signature is ``solve_lu(LU, b)``.
Returns
-------
converged : bool
Whether iterations converged.
n_iter : int
Number of completed iterations.
Z : ndarray, shape (3, n)
Found solution.
rate : float
The rate of convergence.
"""
n = len([y])
M_real = MU_REAL / h
M_complex = MU_COMPLEX / float(h) #DCD(MU_COMPLEX, h, prec)
ch = [dm(h*i, prec) for i in range(len(C))]
# X = df.DF(TI, prec)
# ZZ = flatten(Z0)
# print(Z0)
# Z0b = np.array([flatten(Z0) for i in range(n)]).T
ZP = np.zeros((3, n), dtype=float).tolist()
for i in range(3):
for j in range(n):
# print(fun(t + float(h)*float(C[i]), dm(y[0], prec)))
try:
ZP[i][j] = fun(dm(t + float(h)*float(C[i]), prec), dm(y, prec) + (dm(Z0[i][j], prec))).tolist()
except:
# print(fun(t + float(h)*float(C[i]), dm(y[0], prec) + (dm(Z0[i][j], prec))).tolist()[0])
ZP[i][j] = [dm(fun(dm(t + float(h)*float(C[i]), prec), dm(y[0], prec) + (dm(Z0[i][j], prec))).tolist()[0], prec)]
# ZP[i][j] = (fun(t + float(h)*float(C[i]), dm(y[0], prec) + (dm(Z0[i][j], prec)))).tolist()
Z0P = np.array(flatten(ZP))
# print(Z0P, ZP)
# Wb = np.array(TID).dot(Z0)
# W = Wb.dot(np.array(Z0b).T)
# print(W)
# print(flatten(Z0))
Zd = df.DF(Z0P, prec).decfunc()
# print(Zd)
ZD = np.array(df.DF(Z0P, prec).decfunc())
# print(ZD)
# print(np.array(ZD).T)
Ti = np.array(TI)
try:
# W = Ti.dot(Z0P)
W = np.array(np.array(TID).dot(Zd))
except:
W = np.array(np.array(TID).dot(Zd)) #, dtype=float
    Z = Z0
    F = np.empty((3, n))
    dW = np.empty_like(W)
    dW_norm_old = None
    converged = False
    rate = None
for k in range(NEWTON_MAXITER):
        for i in range(3):
            # Evaluate the RHS at each node, coping with scalar vs. length-1
            # array forms of y and of fun's return value.
            try:
                F[i] = dm(fun(dm(t, prec) + ch[i], dm(y, prec) + dm(Z[i][0], prec)), prec)
            except Exception:
                try:
                    F[i] = dm(fun(dm(t, prec) + ch[i], dm(y, prec) + dm(Z[i][0], prec))[0], prec)
                except Exception:
                    F[i] = dm(fun(dm(t, prec) + ch[i], dm(y[0], prec) + dm(Z[i][0], prec))[0], prec)
if not np.all(np.isfinite(F)):
break
        # Real part of the transformed system: TI_REAL.F - M_real * W[0].
        # The fallbacks cope with scalar vs. length-1 shapes of M_real and W.
        try:
            f_real = [dm(F.T.dot(TI_REAL), prec) - dm(M_real[0], prec) * dm(W[0][0], prec)]
        except Exception:
            try:
                f_real = [dm(F.T.dot(TI_REAL)[0], prec) - dm(M_real, prec) * dm(W[0][0], prec)]
            except Exception:
                f_real = [dm(F.T.dot(TI_REAL)[0], prec) - dm(M_real[0], prec) * dm(W[0], prec)]
        # Complex part of the transformed system.
        f_complex = F.T.dot(TI_COMPLEX) - M_complex * complex(W[1], W[2])
        dW_real = solve_lu(LU_real[0], LU_real[1], [f_real])
        dW_complex = solve_lu(LU_complex[0], LU_complex[1], [f_complex])
        dW[0] = dm(dW_real[0].real, prec)
        dW[1] = dm(dW_complex[0].real, prec)
        dW[2] = dm(dW_complex[0].imag, prec)
dW_norm = normd(dW/scale)
if dW_norm_old is not None:
rate = dW_norm / dW_norm_old
if (rate is not None and (rate >= 1 or
rate ** (NEWTON_MAXITER - k) / (1 - rate) * dW_norm > tol)):
break
        W += dW
        Z = np.array(TD).dot(W)
if (dW_norm == 0 or
rate is not None and rate / (1 - rate) * dW_norm < tol):
converged = True
break
dW_norm_old = dW_norm
return converged, k + 1, Z, rate
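# The helper ``dm(x, prec)`` used throughout solve_collocation_system is
# defined elsewhere in this module. The standalone sketch below is an
# assumption about its behaviour (round a value to ``prec`` significant
# digits as a Decimal), not the actual implementation; ``dm_sketch`` is a
# hypothetical name.
from decimal import Decimal, getcontext

def dm_sketch(x, prec):
    getcontext().prec = prec
    # Unary plus applies the current context precision to the value.
    return +Decimal(str(x))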
def predict_factor(h_abs, h_abs_old, error_norm, error_norm_old, prec):
"""Predict by which factor to increase/decrease the step size.
The algorithm is described in [1]_.
Parameters
----------
h_abs, h_abs_old : float
Current and previous values of the step size, `h_abs_old` can be None
(see Notes).
error_norm, error_norm_old : float
Current and previous values of the error norm, `error_norm_old` can
be None (see Notes).
Returns
-------
factor : float
Predicted factor.
Notes
-----
If `h_abs_old` and `error_norm_old` are both not None then a two-step
algorithm is used, otherwise a one-step algorithm is used.
References
----------
.. [1] <NAME>, <NAME> <NAME>, "Solving Ordinary Differential
Equations II: Stiff and Differential-Algebraic Problems", Sec. IV.8.
"""
if error_norm_old is None or h_abs_old is None or error_norm == 0:
multiplier = 1
else:
multiplier = float(h_abs) / float(h_abs_old) * (float(error_norm_old) / float(error_norm)) ** 0.25
with np.errstate(divide='ignore'):
factor = min(1, multiplier) * error_norm ** -0.25
return factor
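# A minimal standalone sanity check of the step-size prediction above
# (hypothetical values; mirrors the one-step/two-step controller in
# predict_factor without the decimal machinery).
import numpy as np

def _predict_factor_demo(h_abs, h_abs_old, error_norm, error_norm_old):
    if error_norm_old is None or h_abs_old is None or error_norm == 0:
        multiplier = 1
    else:
        # Two-step prediction: scale by the ratio of step sizes and the
        # fourth root of the error ratio.
        multiplier = h_abs / h_abs_old * (error_norm_old / error_norm) ** 0.25
    with np.errstate(divide='ignore'):
        return min(1, multiplier) * error_norm ** -0.25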
class RadauD(OdeSolver):
"""Implicit Runge-Kutta method of Radau IIA family of order 5.
The implementation follows [1]_. The error is controlled with a
third-order accurate embedded formula. A cubic polynomial which satisfies
the collocation conditions is used for the dense output.
Parameters
----------
fun : callable
Right-hand side of the system. The calling signature is ``fun(t, y)``.
Here ``t`` is a scalar, and there are two options for the ndarray ``y``:
It can either have shape (n,); then ``fun`` must return array_like with
shape (n,). Alternatively it can have shape (n, k); then ``fun``
must return an array_like with shape (n, k), i.e. each column
corresponds to a single column in ``y``. The choice between the two
options is determined by `vectorized` argument (see below). The
vectorized implementation allows a faster approximation of the Jacobian
by finite differences (required for this solver).
t0 : float
Initial time.
y0 : array_like, shape (n,)
Initial state.
t_bound : float
Boundary time - the integration won't continue beyond it. It also
determines the direction of the integration.
first_step : float or None, optional
Initial step size. Default is ``None`` which means that the algorithm
should choose.
max_step : float, optional
Maximum allowed step size. Default is np.inf, i.e. the step size is not
bounded and determined solely by the solver.
rtol, atol : float and array_like, optional
Relative and absolute tolerances. The solver keeps the local error
estimates less than ``atol + rtol * abs(y)``. Here `rtol` controls a
relative accuracy (number of correct digits). But if a component of `y`
is approximately below `atol`, the error only needs to fall within
the same `atol` threshold, and the number of correct digits is not
guaranteed. If components of y have different scales, it might be
beneficial to set different `atol` values for different components by
passing array_like with shape (n,) for `atol`. Default values are
1e-3 for `rtol` and 1e-6 for `atol`.
jac : {None, array_like, sparse_matrix, callable}, optional
Jacobian matrix of the right-hand side of the system with respect to
y, required by this method. The Jacobian matrix has shape (n, n) and
its element (i, j) is equal to ``d f_i / d y_j``.
There are three ways to define the Jacobian:
* If array_like or sparse_matrix, the Jacobian is assumed to
be constant.
* If callable, the Jacobian is assumed to depend on both
t and y; it will be called as ``jac(t, y)`` as necessary.
For the 'Radau' and 'BDF' methods, the return value might be a
sparse matrix.
* If None (default), the Jacobian will be approximated by
finite differences.
It is generally recommended to provide the Jacobian rather than
relying on a finite-difference approximation.
jac_sparsity : {None, array_like, sparse matrix}, optional
Defines a sparsity structure of the Jacobian matrix for a
finite-difference approximation. Its shape must be (n, n). This argument
is ignored if `jac` is not `None`. If the Jacobian has only few non-zero
elements in *each* row, providing the sparsity structure will greatly
speed up the computations [2]_. A zero entry means that a corresponding
element in the Jacobian is always zero. If None (default), the Jacobian
is assumed to be dense.
vectorized : bool, optional
Whether `fun` is implemented in a vectorized fashion. Default is False.
Attributes
----------
n : int
Number of equations.
status : string
Current status of the solver: 'running', 'finished' or 'failed'.
t_bound : float
Boundary time.
direction : float
Integration direction: +1 or -1.
t : float
Current time.
y : ndarray
Current state.
t_old : float
Previous time. None if no steps were made yet.
step_size : float
Size of the last successful step. None if no steps were made yet.
nfev : int
Number of evaluations of the right-hand side.
njev : int
Number of evaluations of the Jacobian.
nlu : int
Number of LU decompositions.
References
----------
.. [1] <NAME>, <NAME>, "Solving Ordinary Differential Equations II:
Stiff and Differential-Algebraic Problems", Sec. IV.8.
.. [2] <NAME>, <NAME>, and <NAME>, "On the estimation of
sparse Jacobian matrices", Journal of the Institute of Mathematics
and its Applications, 13, pp. 117-120, 1974.
"""
def __init__(self, fun, t0, y0, t_bound, max_step=np.inf,
rtol=1e-3, atol=1e-6, jac=None, jac_sparsity=None,
vectorized=False, first_step=None, prec=25, **extraneous):
warn_extraneous(extraneous)
super(RadauD, self).__init__(fun, t0, y0, t_bound, vectorized)
self.y_old = None
self.prec = prec
self.max_step = validate_max_step(max_step)
self.rtol, self.atol = validate_tol(rtol, atol, self.n)
# self.yn = [dm(i, self.prec) for i in self.y]
self.f = self.fun(self.t, self.y)
# Select initial step assuming the same order which is used to control
# the error.
if first_step is None:
self.h_abs = select_initial_step(self.fun, self.t, self.y, self.f, self.direction, 3, self.rtol, self.atol)
else:
self.h_abs = validate_first_step(first_step, t0, t_bound)
self.h_abs_old = None
self.error_norm_old = None
self.newton_tol = max(10 * EPS / rtol, min(0.03, rtol ** 0.5))
self.sol = None
self.jac_factor = None
self.jac, self.J = self._validate_jac(jac, jac_sparsity)
        def lu(A):
            self.nlu += 1
            L, U = df.DF(A, self.prec).LU_decompositiond()
            return L, U
        def solve_lu(L, U, b):
            return df.DF([L, U], self.prec).lu_solved(L, U, b)
        if issparse(self.J):
            I = eye(self.n, format='csc')
        else:
            I = np.identity(self.n)
self.lu = lu
self.solve_lu = solve_lu
self.I = I
self.current_jac = True
self.LU_real = None
self.LU_complex = None
self.Z = None
def _validate_jac(self, jac, sparsity):
t0 = self.t
y0 = self.y
if jac is None:
if sparsity is not None:
if issparse(sparsity):
sparsity = csc_matrix(sparsity)
groups = group_columns(sparsity)
sparsity = (sparsity, groups)
def jac_wrapped(t, y, f):
self.njev += 1
J, self.jac_factor = num_jac(self.fun_vectorized, t, y, f,
self.atol, self.jac_factor,
sparsity)
return J
J = jac_wrapped(t0, y0, self.f)
elif callable(jac):
J = jac(t0, y0)
self.njev = 1
if issparse(J):
J = csc_matrix(J)
def jac_wrapped(t, y, _=None):
self.njev += 1
return csc_matrix(jac(t, y), dtype=float)
else:
J = np.asarray(J, dtype=float)
def jac_wrapped(t, y, _=None):
self.njev += 1
return np.asarray(jac(t, y), dtype=float)
if J.shape != (self.n, self.n):
raise ValueError("`jac` is expected to have shape {}, but "
"actually has {}."
.format((self.n, self.n), J.shape))
else:
if issparse(jac):
J = csc_matrix(jac)
else:
J = np.asarray(jac, dtype=float)
if J.shape != (self.n, self.n):
raise ValueError("`jac` is expected to have shape {}, but "
"actually has {}."
.format((self.n, self.n), J.shape))
jac_wrapped = None
return jac_wrapped, J
def _step_impl(self):
t = self.t
y = self.y
f = self.f
max_step = self.max_step
atol = self.atol
rtol = self.rtol
min_step = 10 * np.abs(np.nextafter(float(t), self.direction * np.inf) - float(t))
if self.h_abs > max_step:
h_abs = max_step
h_abs_old = None
error_norm_old = None
elif self.h_abs < min_step:
h_abs = float(min_step)
h_abs_old = None
error_norm_old = None
else:
h_abs = float(self.h_abs)
h_abs_old = self.h_abs_old
error_norm_old = self.error_norm_old
J = self.J
LU_real = self.LU_real
LU_complex = self.LU_complex
current_jac = self.current_jac
jac = self.jac
rejected = False
step_accepted = False
message = None
while not step_accepted:
if h_abs < min_step:
return False, self.TOO_SMALL_STEP
h = float(h_abs) * self.direction
t_new = float(t) + float(h)
if self.direction * (float(t_new) - float(self.t_bound)) > 0:
t_new = self.t_bound
h = float(t_new) - float(t)
h_abs = np.abs(h)
            if self.sol is None:
                Z0 = np.zeros((3, y.shape[0]))
            else:
                # Evaluate the previous dense output at the new collocation
                # nodes and subtract y to form the initial guess for Z.
                CH = np.array([t + h * i for i in C])
                Z1 = self.sol(CH).T
                Z2 = [dm(i[0], self.prec) for i in Z1.tolist()]
                Y = [dm(i, self.prec) for i in [y]]
                Z0 = np.array([[Z2[i] - Y[j] for j in range(len(Y))]
                               for i in range(len(Z2))])
            try:
                yv = y[0]
            except (TypeError, IndexError):
                yv = y
            scale = dm(dm(float(atol), self.prec) + dm(yv, self.prec) * dm(rtol, self.prec), self.prec)
converged = False
while not converged:
                if LU_real is None or LU_complex is None:
                    Lr, Ur = self.lu(MU_REAL / h * self.I - J)
                    LU_real = [Lr, Ur]
                    Lc, Uc = self.lu(MU_COMPLEX / h * self.I - J)
                    LU_complex = [Lc, Uc]
converged, n_iter, Z, rate = solve_collocation_system(
self.fun, t, y, h, Z0, scale, self.newton_tol,
LU_real, LU_complex, self.solve_lu, self.prec)
if not converged:
if current_jac:
break
J = self.jac(t, y, f)
current_jac = True
LU_real = None
LU_complex = None
if not converged:
h_abs *= 0.5
LU_real = None
LU_complex = None
continue
            try:
                y_new = dm(y[0], self.prec) + dm(Z[-1][0], self.prec)
            except (TypeError, IndexError):
                y_new = dm(y, self.prec) + dm(Z[-1][0], self.prec)
            # Embedded error estimate: solve the real LU system against
            # f + Z.T.dot(E) / h.
            ZE = np.array(Z).T.dot(np.array(Ef)) / dm(h, self.prec)
            error = self.solve_lu(LU_real[0], LU_real[1],
                                  [dm(f[0], self.prec) + dm(ZE[0], self.prec)])
            scale = atol + float(np.maximum(np.abs(y), np.abs(y_new))) * rtol
            error_norm = norm(error[0].real / scale)
            safety = 0.9 * (2 * NEWTON_MAXITER + 1) / (2 * NEWTON_MAXITER + n_iter)
            if rejected and error_norm > 1:
                error = df.DF(LU_real, self.prec).solve(
                    LU_real, self.fun(t, float(y[0]) + float(error[0].real)) + float(ZE))
                error_norm = norm(float(error[0]) / scale)
if error_norm > 1:
factor = predict_factor(h_abs, h_abs_old,
error_norm, error_norm_old, self.prec)
h_abs *= max(MIN_FACTOR, safety * factor)
LU_real = None
LU_complex = None
rejected = True
else:
step_accepted = True
recompute_jac = jac is not None and n_iter > 2 and rate > 1e-3
factor = predict_factor(h_abs, h_abs_old, error_norm, error_norm_old, self.prec)
factor = min(MAX_FACTOR, safety * factor)
if not recompute_jac and factor < 1.2:
factor = 1
else:
LU_real = None
LU_complex = None
f_new = self.fun(t_new, y_new)
if recompute_jac:
J = jac(t_new, y_new, f_new)
current_jac = True
elif jac is not None:
current_jac = False
self.h_abs_old = self.h_abs
self.error_norm_old = error_norm
self.h_abs = h_abs * factor
self.y_old = y
self.t = t_new
self.y = y_new
self.f = f_new
self.Z = Z
self.LU_real = LU_real
self.LU_complex = LU_complex
self.current_jac = current_jac
self.J = J
self.t_old = t
self.sol = self._compute_dense_output()
return step_accepted, message
    def _compute_dense_output(self):
        Qb = df.DF(self.Z, self.prec)
        Q = Qb.matrix_multiplyd(df.DF(self.Z, self.prec).TD(),
                                df.DF(P, self.prec).decfunc(), self.prec)
        return RadauDenseOutput(self.t_old, self.t, self.y_old, Q, self.prec)
def _dense_output_impl(self):
return self.sol
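# Sketch of how the dense output evaluates the collocation polynomial:
# y(t_old + x*h) = y_old + Q @ [x, x**2, ..., x**(order+1)], where the
# powers come from a cumulative product.  Q, y_old, and the function name
# below are hypothetical illustration values, not module state.
import numpy as np

def _dense_eval_sketch(Q, y_old, x):
    p = np.cumprod(np.full(Q.shape[1], float(x)))  # [x, x**2, x**3, ...]
    return y_old + Q.dot(p)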
class RadauDenseOutput(DenseOutput):
def __init__(self, t_old, t, y_old, Q, prec):
super(RadauDenseOutput, self).__init__(t_old, t)
self.h = t - float(t_old)
self.Q = Q
self.order = np.array(Q).shape[1] - 1
self.y_old = y_old
self.prec = prec
    def _call_impl(self, t):
        x = (np.array(t, dtype=float) - np.array(self.t_old, dtype=float)) / float(self.h)
        xd = [dm(xi, self.prec) for xi in np.atleast_1d(x).tolist()]
        if t.ndim == 0:
            p = np.cumprod(np.tile(np.array(xd), self.order + 1))
        else:
            p = np.cumprod(np.tile(np.array(xd), (self.order + 1, 1)), axis=0)
        # Here we don't multiply by h, not a mistake.
        y = np.dot(self.Q, p)
        yv = y.tolist()[0]
        if y.ndim == 2:
            # Add self.y_old back in, coping with scalar vs. length-1 array
            # forms of both y and y_old.
            try:
                yvb = [dm(i, self.prec) + dm(self.y_old[0], self.prec) for i in yv]
            except Exception:
                try:
                    yvb = [dm(i[0], self.prec) + dm(self.y_old, self.prec) for i in yv]
                except Exception:
                    try:
                        yvb = [dm(i, self.prec) + dm(self.y_old, self.prec) for i in yv]
                    except Exception:
                        yvb = [dm(i[0], self.prec) + dm(self.y_old[0], self.prec) for i in yv]
        else:
            # Scalar case: wrap in a one-element row so the return shape
            # matches the 2-D branch.
            yvb = [dm(yv, self.prec) + dm(self.y_old, self.prec)]
        return np.array([yvb])
def exponential_decay(t, y):
    """Test right-hand side: dy/dt = -0.5 * y, evaluated in decimal."""
    try:
        return [dm(-0.5, PREC[0]) * dm(i, PREC[0]) for i in y]
    except TypeError:
        # y was a scalar rather than a sequence.
        return [dm(-0.5, PREC[0]) * dm(y, PREC[0])]
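# Standalone, stdlib-only sketch of what the decay right-hand side computes
# (an assumption: plain Decimal arithmetic stands in for dm/PREC; the name
# below is hypothetical).
from decimal import Decimal

def _decay_rhs_sketch(t, y):
    # t is unused, matching exponential_decay's signature.
    return [Decimal('-0.5') * Decimal(str(yi)) for yi in y]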
METHODS = {'RadauD' : RadauD}
MESSAGES = {0: "The solver successfully reached the end of the integration interval.",
1: "A termination event occurred."}
class OdeResult(OptimizeResult):
pass
def prepare_events(events):
"""Standardize event functions and extract is_terminal and direction."""
if callable(events):
events = (events,)
if events is not None:
is_terminal = np.empty(len(events), dtype=bool)
direction = np.empty(len(events))
for i, event in enumerate(events):
try:
is_terminal[i] = event.terminal
except AttributeError:
is_terminal[i] = False
try:
direction[i] = event.direction
except AttributeError:
direction[i] = 0
else:
is_terminal = None
direction = None
return events, is_terminal, direction
def solve_event_equation(event, sol, t_old, t):
"""Solve an equation corresponding to an ODE event.
The equation is ``event(t, y(t)) = 0``, here ``y(t)`` is known from an
ODE solver using some sort of interpolation. It is solved by
`scipy.optimize.brentq` with xtol=atol=4*EPS.
Parameters
----------
event : callable
Function ``event(t, y)``.
sol : callable
Function ``sol(t)`` which evaluates an ODE solution between `t_old`
and `t`.
t_old, t : float
Previous and new values of time. They will be used as a bracketing
interval.
Returns
-------
root : float
Found solution.
"""
from scipy.optimize import brentq
return brentq(lambda t: event(t, sol(t)), t_old, t,
xtol=4 * EPS, rtol=4 * EPS)
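# Dependency-free sketch of the event-root idea above: bracket the event on
# [t_old, t] and drive event(t, sol(t)) to zero.  The real code uses
# scipy.optimize.brentq; plain bisection stands in here, and all names below
# are hypothetical.
def _event_root_sketch(event, sol, t_old, t, tol=1e-12):
    a, b = t_old, t
    fa = event(a, sol(a))
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = event(m, sol(m))
        if fm == 0 or (b - a) < tol:
            return m
        if (fa < 0) == (fm < 0):
            a, fa = m, fm  # root lies in the upper half
        else:
            b = m          # root lies in the lower half
    return 0.5 * (a + b)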
def handle_events(sol, events, active_events, is_terminal, t_old, t):
"""Helper function to handle events.
Parameters
----------
sol : DenseOutput
Function ``sol(t)`` which evaluates an ODE solution between `t_old`
and `t`.
events : list of callables, length n_events
Event functions with signatures ``event(t, y)``.
active_events : ndarray
Indices of events which occurred.
is_terminal : ndarray, shape (n_events,)
Which events are terminal.
t_old, t : float
Previous and new values of time.
Returns
-------
root_indices : ndarray
Indices of events which take zero between `t_old` and `t` and before
a possible termination.
roots : ndarray
Values of t at which events occurred.
terminate : bool
Whether a terminal event occurred.
"""
roots = [solve_event_equation(events[event_index], sol, t_old, t)
for event_index in active_events]
roots = np.asarray(roots)
if np.any(is_terminal[active_events]):
if t > t_old:
order = np.argsort(roots)
else:
order = np.argsort(-roots)
active_events = active_events[order]
roots = roots[order]
t = np.nonzero(is_terminal[active_events])[0][0]
active_events = active_events[:t + 1]
roots = roots[:t + 1]
terminate = True
else:
terminate = False
return active_events, roots, terminate
def find_active_events(g, g_new, direction):
"""Find which event occurred during an integration step.
Parameters
----------
g, g_new : array_like, shape (n_events,)
Values of event functions at a current and next points.
direction : ndarray, shape (n_events,)
Event "direction" according to the definition in `solve_ivp`.
Returns
-------
active_events : ndarray
Indices of events which occurred during the step.
"""
g, g_new = np.asarray(g), np.asarray(g_new)
up = (g <= 0) & (g_new >= 0)
down = (g >= 0) & (g_new <= 0)
either = up | down
mask = (up & (direction > 0) |
down & (direction < 0) |
either & (direction == 0))
return np.nonzero(mask)[0]
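# Quick self-check of the sign-change mask logic above, with hypothetical
# event values: event 0 crosses upward (direction +1), event 1 crosses
# downward (direction -1), event 2 never crosses.
import numpy as np

def _active_events_sketch(g, g_new, direction):
    g, g_new = np.asarray(g), np.asarray(g_new)
    up = (g <= 0) & (g_new >= 0)
    down = (g >= 0) & (g_new <= 0)
    either = up | down
    mask = (up & (direction > 0)
            | down & (direction < 0)
            | either & (direction == 0))
    return np.nonzero(mask)[0]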
def solve_ivpd(fun, t_span, y0, method='RK45', t_eval=None, dense_output=False,
events=None, vectorized=False, args=None, **options):
"""Solve an initial value problem for a system of ODEs.
This function numerically integrates a system of ordinary differential
equations given an initial value::
dy / dt = f(t, y)
y(t0) = y0
Here t is a one-dimensional independent variable (time), y(t) is an
n-dimensional vector-valued function (state), and an n-dimensional
vector-valued function f(t, y) determines the differential equations.
The goal is to find y(t) approximately satisfying the differential
equations, given an initial value y(t0)=y0.
Some of the solvers support integration in the complex domain, but note
that for stiff ODE solvers, the right-hand side must be
complex-differentiable (satisfy Cauchy-Riemann equations [11]_).
To solve a problem in the complex domain, pass y0 with a complex data type.
Another option always available is to rewrite your problem for real and
imaginary parts separately.
Parameters
----------
fun : callable
Right-hand side of the system. The calling signature is ``fun(t, y)``.
Here `t` is a scalar, and there are two options for the ndarray `y`:
It can either have shape (n,); then `fun` must return array_like with
shape (n,). Alternatively it can have shape (n, k); then `fun`
must return an array_like with shape (n, k), i.e. each column
corresponds to a single column in `y`. The choice between the two
options is determined by `vectorized` argument (see below). The
vectorized implementation allows a faster approximation of the Jacobian
by finite differences (required for stiff solvers).
t_span : 2-tuple of floats
Interval of integration (t0, tf). The solver starts with t=t0 and
integrates until it reaches t=tf.
y0 : array_like, shape (n,)
Initial state. For problems in the complex domain, pass `y0` with a
complex data type (even if the initial value is purely real).
method : string or `OdeSolver`, optional
Integration method to use:
* 'RK45' (default): Explicit Runge-Kutta method of order 5(4) [1]_.
The error is controlled assuming accuracy of the fourth-order
method, but steps are taken using the fifth-order accurate
formula (local extrapolation is done). A quartic interpolation
polynomial is used for the dense output [2]_. Can be applied in
the complex domain.
* 'RK23': Explicit Runge-Kutta method of order 3(2) [3]_. The error
is controlled assuming accuracy of the second-order method, but
steps are taken using the third-order accurate formula (local
extrapolation is done). A cubic Hermite polynomial is used for the
dense output. Can be applied in the complex domain.
* 'DOP853': Explicit Runge-Kutta method of order 8 [13]_.
Python implementation of the "DOP853" algorithm originally
written in Fortran [14]_. A 7-th order interpolation polynomial
accurate to 7-th order is used for the dense output.
Can be applied in the complex domain.
* 'Radau': Implicit Runge-Kutta method of the Radau IIA family of
order 5 [4]_. The error is controlled with a third-order accurate
embedded formula. A cubic polynomial which satisfies the
collocation conditions is used for the dense output.
* 'BDF': Implicit multi-step variable-order (1 to 5) method based
on a backward differentiation formula for the derivative
approximation [5]_. The implementation follows the one described
in [6]_. A quasi-constant step scheme is used and accuracy is
enhanced using the NDF modification. Can be applied in the
complex domain.
* 'LSODA': Adams/BDF method with automatic stiffness detection and
switching [7]_, [8]_. This is a wrapper of the Fortran solver
from ODEPACK.
Explicit Runge-Kutta methods ('RK23', 'RK45', 'DOP853') should be used
for non-stiff problems and implicit methods ('Radau', 'BDF') for
stiff problems [9]_. Among Runge-Kutta methods, 'DOP853' is recommended
for solving with high precision (low values of `rtol` and `atol`).
If not sure, first try to run 'RK45'. If it makes unusually many
iterations, diverges, or fails, your problem is likely to be stiff and
you should use 'Radau' or 'BDF'. 'LSODA' can also be a good universal
choice, but it might be somewhat less convenient to work with as it
wraps old Fortran code.
You can also pass an arbitrary class derived from `OdeSolver` which
implements the solver.
t_eval : array_like or None, optional
Times at which to store the computed solution, must be sorted and lie
within `t_span`. If None (default), use points selected by the solver.
dense_output : bool, optional
Whether to compute a continuous solution. Default is False.
events : callable, or list of callables, optional
Events to track. If None (default), no events will be tracked.
Each event occurs at the zeros of a continuous function of time and
state. Each function must have the signature ``event(t, y)`` and return
a float. The solver will find an accurate value of `t` at which
``event(t, y(t)) = 0`` using a root-finding algorithm. By default, all
zeros will be found. The solver looks for a sign change over each step,
so if multiple zero crossings occur within one step, events may be
missed. Additionally each `event` function might have the following
attributes:
terminal: bool, optional
Whether to terminate integration if this event occurs.
Implicitly False if not assigned.
direction: float, optional
Direction of a zero crossing. If `direction` is positive,
`event` will only trigger when going from negative to positive,
and vice versa if `direction` is negative. If 0, then either
direction will trigger event. Implicitly 0 if not assigned.
You can assign attributes like ``event.terminal = True`` to any
function in Python.
vectorized : bool, optional
Whether `fun` is implemented in a vectorized fashion. Default is False.
args : tuple, optional
Additional arguments to pass to the user-defined functions. If given,
the additional arguments are passed to all user-defined functions.
So if, for example, `fun` has the signature ``fun(t, y, a, b, c)``,
then `jac` (if given) and any event functions must have the same
signature, and `args` must be a tuple of length 3.
options
Options passed to a chosen solver. All options available for already
implemented solvers are listed below.
first_step : float or None, optional
Initial step size. Default is `None` which means that the algorithm
should choose.
max_step : float, optional
Maximum allowed step size. Default is np.inf, i.e. the step size is not
bounded and determined solely by the solver.
rtol, atol : float or array_like, optional
Relative and absolute tolerances. The solver keeps the local error
estimates less than ``atol + rtol * abs(y)``. Here `rtol` controls a
relative accuracy (number of correct digits). But if a component of `y`
is approximately below `atol`, the error only needs to fall within
the same `atol` threshold, and the number of correct digits is not
guaranteed. If components of y have different scales, it might be
beneficial to set different `atol` values for different components by
passing array_like with shape (n,) for `atol`. Default values are
1e-3 for `rtol` and 1e-6 for `atol`.
jac : array_like, sparse_matrix, callable or None, optional
Jacobian matrix of the right-hand side of the system with respect
to y, required by the 'Radau', 'BDF' and 'LSODA' method. The
Jacobian matrix has shape (n, n) and its element (i, j) is equal to
``d f_i / d y_j``. There are three ways to define the Jacobian:
* If array_like or sparse_matrix, the Jacobian is assumed to
be constant. Not supported by 'LSODA'.
* If callable, the Jacobian is assumed to depend on both
t and y; it will be called as ``jac(t, y)`` as necessary.
For 'Radau' and 'BDF' methods, the return value might be a
sparse matrix.
* If None (default), the Jacobian will be approximated by
finite differences.
It is generally recommended to provide the Jacobian rather than
relying on a finite-difference approximation.
jac_sparsity : array_like, sparse matrix or None, optional
Defines a sparsity structure of the Jacobian matrix for a finite-
difference approximation. Its shape must be (n, n). This argument
is ignored if `jac` is not `None`. If the Jacobian has only few
non-zero elements in *each* row, providing the sparsity structure
will greatly speed up the computations [10]_. A zero entry means that
a corresponding element in the Jacobian is always zero. If None
(default), the Jacobian is assumed to be dense.
Not supported by 'LSODA', see `lband` and `uband` instead.
lband, uband : int or None, optional
Parameters defining the bandwidth of the Jacobian for the 'LSODA'
method, i.e., ``jac[i, j] != 0 only for i - lband <= j <= i + uband``.
Default is None. Setting these requires your jac routine to return the
Jacobian in the packed format: the returned array must have ``n``
columns and ``uband + lband + 1`` rows in which Jacobian diagonals are
written. Specifically ``jac_packed[uband + i - j , j] = jac[i, j]``.
The same format is used in `scipy.linalg.solve_banded` (check for an
illustration). These parameters can be also used with ``jac=None`` to
reduce the number of Jacobian elements estimated by finite differences.
min_step : float, optional
The minimum allowed step size for 'LSODA' method.
By default `min_step` is zero.
Returns
-------
Bunch object with the following fields defined:
t : ndarray, shape (n_points,)
Time points.
y : ndarray, shape (n, n_points)
Values of the solution at `t`.
sol : `OdeSolution` or None
Found solution as `OdeSolution` instance; None if `dense_output` was
set to False.
t_events : list of ndarray or None
Contains for each event type a list of arrays at which an event of
that type was detected. None if `events` was None.
y_events : list of ndarray or None
For each value of `t_events`, the corresponding value of the solution.
None if `events` was None.
nfev : int
Number of evaluations of the right-hand side.
njev : int
Number of evaluations of the Jacobian.
nlu : int
Number of LU decompositions.
status : int
Reason for algorithm termination:
* -1: Integration step failed.
* 0: The solver successfully reached the end of `tspan`.
* 1: A termination event occurred.
message : string
Human-readable description of the termination reason.
success : bool
True if the solver reached the interval end or a termination event
occurred (``status >= 0``).
References
----------
.. [1] <NAME>, <NAME>, "A family of embedded Runge-Kutta
formulae", Journal of Computational and Applied Mathematics, Vol. 6,
No. 1, pp. 19-26, 1980.
.. [2] <NAME>, "Some Practical Runge-Kutta Formulas", Mathematics
of Computation, Vol. 46, No. 173, pp. 135-150, 1986.
.. [3] <NAME>, <NAME>, "A 3(2) Pair of Runge-Kutta Formulas",
Appl. Math. Lett. Vol. 2, No. 4. pp. 321-325, 1989.
.. [4] <NAME>, <NAME>, "Solving Ordinary Differential Equations II:
Stiff and Differential-Algebraic Problems", Sec. IV.8.
.. [5] `Backward Differentiation Formula
<https://en.wikipedia.org/wiki/Backward_differentiation_formula>`_
on Wikipedia.
.. [6] <NAME>, <NAME>, "THE MATLAB ODE SUITE", SIAM J. SCI.
COMPUT., Vol. 18, No. 1, pp. 1-22, January 1997.
.. [7] <NAME>, "ODEPACK, A Systematized Collection of ODE
Solvers," IMACS Transactions on Scientific Computation, Vol 1.,
pp. 55-64, 1983.
.. [8] <NAME>, "Automatic selection of methods for solving stiff and
nonstiff systems of ordinary differential equations", SIAM Journal
on Scientific and Statistical Computing, Vol. 4, No. 1, pp. 136-148,
1983.
.. [9] `Stiff equation <https://en.wikipedia.org/wiki/Stiff_equation>`_ on
Wikipedia.
.. [10] <NAME>, <NAME>, and <NAME>, "On the estimation of
sparse Jacobian matrices", Journal of the Institute of Mathematics
and its Applications, 13, pp. 117-120, 1974.
.. [11] `Cauchy-Riemann equations
<https://en.wikipedia.org/wiki/Cauchy-Riemann_equations>`_ on
Wikipedia.
.. [12] `Lotka-Volterra equations
<https://en.wikipedia.org/wiki/Lotka%E2%80%93Volterra_equations>`_
on Wikipedia.
.. [13] <NAME>, <NAME>. <NAME>, "Solving Ordinary Differential
Equations I: Nonstiff Problems", Sec. II.
.. [14] `Page with original Fortran code of DOP853
<http://www.unige.ch/~hairer/software.html>`_.
Examples
--------
Basic exponential decay showing automatically chosen time points.
>>> from scipy.integrate import solve_ivp
>>> def exponential_decay(t, y): return -0.5 * y
>>> sol = solve_ivp(exponential_decay, [0, 10], [2, 4, 8])
>>> print(sol.t)
[ 0. 0.11487653 1.26364188 3.06061781 4.81611105 6.57445806
8.33328988 10. ]
>>> print(sol.y)
[[2. 1.88836035 1.06327177 0.43319312 0.18017253 0.07483045
0.03107158 0.01350781]
[4. 3.7767207 2.12654355 0.86638624 0.36034507 0.14966091
0.06214316 0.02701561]
[8. 7.5534414 4.25308709 1.73277247 0.72069014 0.29932181
0.12428631 0.05403123]]
Specifying points where the solution is desired.
>>> sol = solve_ivp(exponential_decay, [0, 10], [2, 4, 8],
... t_eval=[0, 1, 2, 4, 10])
>>> print(sol.t)
[ 0 1 2 4 10]
>>> print(sol.y)
[[2. 1.21305369 0.73534021 0.27066736 0.01350938]
[4. 2.42610739 1.47068043 0.54133472 0.02701876]
[8. 4.85221478 2.94136085 1.08266944 0.05403753]]
Cannon fired upward with terminal event upon impact. The ``terminal`` and
``direction`` fields of an event are applied by monkey patching a function.
Here ``y[0]`` is position and ``y[1]`` is velocity. The projectile starts
at position 0 with velocity +10. Note that the integration never reaches
t=100 because the event is terminal.
>>> def upward_cannon(t, y): return [y[1], -0.5]
>>> def hit_ground(t, y): return y[0]
>>> hit_ground.terminal = True
>>> hit_ground.direction = -1
>>> sol = solve_ivp(upward_cannon, [0, 100], [0, 10], events=hit_ground)
>>> print(sol.t_events)
[array([40.])]
>>> print(sol.t)
[0.00000000e+00 9.99900010e-05 1.09989001e-03 1.10988901e-02
1.11088891e-01 1.11098890e+00 1.11099890e+01 4.00000000e+01]
Use `dense_output` and `events` to find position, which is 100, at the apex
of the cannonball's trajectory. Apex is not defined as terminal, so both
apex and hit_ground are found. There is no information at t=20, so the sol
attribute is used to evaluate the solution. The sol attribute is returned
by setting ``dense_output=True``. Alternatively, the `y_events` attribute
can be used to access the solution at the time of the event.
>>> def apex(t, y): return y[1]
>>> sol = solve_ivp(upward_cannon, [0, 100], [0, 10],
... events=(hit_ground, apex), dense_output=True)
>>> print(sol.t_events)
[array([40.]), array([20.])]
>>> print(sol.t)
[0.00000000e+00 9.99900010e-05 1.09989001e-03 1.10988901e-02
1.11088891e-01 1.11098890e+00 1.11099890e+01 4.00000000e+01]
>>> print(sol.sol(sol.t_events[1][0]))
[100. 0.]
>>> print(sol.y_events)
[array([[-5.68434189e-14, -1.00000000e+01]]), array([[1.00000000e+02, 1.77635684e-15]])]
As an example of a system with additional parameters, we'll implement
the Lotka-Volterra equations [12]_.
>>> def lotkavolterra(t, z, a, b, c, d):
... x, y = z
... return [a*x - b*x*y, -c*y + d*x*y]
...
We pass in the parameter values a=1.5, b=1, c=3 and d=1 with the `args`
argument.
>>> sol = solve_ivp(lotkavolterra, [0, 15], [10, 5], args=(1.5, 1, 3, 1),
... dense_output=True)
Compute a dense solution and plot it.
>>> t = np.linspace(0, 15, 300)
>>> z = sol.sol(t)
>>> import matplotlib.pyplot as plt
>>> plt.plot(t, z.T)
>>> plt.xlabel('t')
>>> plt.legend(['x', 'y'], shadow=True)
>>> plt.title('Lotka-Volterra System')
>>> plt.show()
"""
if method not in METHODS and not (
inspect.isclass(method) and issubclass(method, OdeSolver)):
raise ValueError("`method` must be one of {} or OdeSolver class."
.format(METHODS))
t0, tf = float(t_span[0]), float(t_span[1])
if args is not None:
# Wrap the user's fun (and jac, if given) in lambdas to hide the
# additional parameters. Pass in the original fun as a keyword
# argument to keep it in the scope of the lambda.
fun = lambda t, x, fun=fun: fun(t, x, *args)
jac = options.get('jac')
if callable(jac):
options['jac'] = lambda t, x: jac(t, x, *args)
if t_eval is not None:
t_eval = np.asarray(t_eval)
if t_eval.ndim != 1:
raise ValueError("`t_eval` must be 1-dimensional.")
if np.any(t_eval < min(t0, tf)) or np.any(t_eval > max(t0, tf)):
raise ValueError("Values in `t_eval` are not within `t_span`.")
d = np.diff(t_eval)
if tf > t0 and np.any(d <= 0) or tf < t0 and np.any(d >= 0):
raise ValueError("Values in `t_eval` are not properly sorted.")
if tf > t0:
t_eval_i = 0
else:
# Make order of t_eval decreasing to use np.searchsorted.
t_eval = t_eval[::-1]
# This will be an upper bound for slices.
t_eval_i = t_eval.shape[0]
if method in METHODS:
method = METHODS[method]
solver = method(fun, t0, y0, tf, vectorized=vectorized, **options)
if t_eval is None:
ts = [t0]
ys = [dm(y0, PREC[0])]
elif t_eval is not None and dense_output:
ts = []
ti = [t0]
ys = []
else:
ts = []
ys = []
interpolants = []
events, is_terminal, event_dir = prepare_events(events)
if events is not None:
if args is not None:
# Wrap user functions in lambdas to hide the additional parameters.
# The original event function is passed as a keyword argument to the
# lambda to keep the original function in scope (i.e. avoid the
# late binding closure "gotcha").
events = [lambda t, x, event=event: event(t, x, *args)
for event in events]
g = [event(t0, y0) for event in events]
t_events = [[] for _ in range(len(events))]
y_events = [[] for _ in range(len(events))]
else:
t_events = None
y_events = None
status = None
while status is None:
message = solver.step()
if solver.status == 'finished':
status = 0
elif solver.status == 'failed':
status = -1
break
t_old = solver.t_old
t = solver.t
y = solver.y
if dense_output:
sol = solver.dense_output()
interpolants.append(sol)
else:
sol = None
if events is not None:
g_new = [event(t, y) for event in events]
active_events = find_active_events(g, g_new, event_dir)
if active_events.size > 0:
if sol is None:
sol = solver.dense_output()
root_indices, roots, terminate = handle_events(
sol, events, active_events, is_terminal, t_old, t)
for e, te in zip(root_indices, roots):
t_events[e].append(te)
y_events[e].append(sol(te))
if terminate:
status = 1
t = roots[-1]
y = sol(t)
g = g_new
if t_eval is None:
ts.append(t)
ys.append(dm(y, PREC[0]))
else:
# The value in t_eval equal to t will be included.
if solver.direction > 0:
t_eval_i_new = np.searchsorted(t_eval, t, side='right')
t_eval_step = t_eval[t_eval_i:t_eval_i_new]
else:
t_eval_i_new = np.searchsorted(t_eval, t, side='left')
# It has to be done with two slice operations, because
# you can't slice to 0-th element inclusive using backward
# slicing.
t_eval_step = t_eval[t_eval_i_new:t_eval_i][::-1]
if t_eval_step.size > 0:
if sol is None:
sol = solver.dense_output()
ts.append(t_eval_step)
ys.append(sol(t_eval_step).tolist()[-1])
t_eval_i = t_eval_i_new
if t_eval is not None and dense_output:
ti.append(t)
message = MESSAGES.get(status, message)
if t_events is not None:
t_events = [np.asarray(te) for te in t_events]
y_events = [np.asarray(ye) for ye in y_events]
if t_eval is None:
ts = np.array(ts)
ys = df.DF(np.array(ys).tolist(), PREC[0]).TD()
else:
ts = np.hstack(ts)
ys = df.DF([np.hstack(ys)], PREC[0]).decfunc()
if dense_output:
if t_eval is None:
sol = OdeSolution(ts, interpolants)
else:
sol = OdeSolution(ti, interpolants)
else:
sol = None
return OdeResult(t=ts, y=ys, sol=sol, t_events=t_events, y_events=y_events,
nfev=solver.nfev, njev=solver.njev, nlu=solver.nlu,
status=status, message=message, success=status >= 0)
solb = solve_ivp(exponential_decay, [0.0, 2.0], [0.5], method='Radau', jac=[[-0.5]])
print(solb.y[0])
tevs = solb.t.tolist()
sol = solve_ivpd(exponential_decay, [0.0, 2.0], [dm(0.5, PREC[0])], t_eval=tevs, method='RadauD', jac=[[dm(-0.5, PREC[0])]], first_step=0.1, max_step=0.25, prec=PREC[0])
print(sol.y[0])
|
It's a fascinating thesis. The world has seen historic developments in the last few decades: the internet, China's opening up, the rise of emerging markets, fast and cheap travel…all of these trends led to a massive acceleration in global trade.
But have those trends peaked? Could the next big invention, say, 3-D printers, end the need for more and more trade? Imagine a world where you need a new faucet in your restroom. Instead of going to the local store that sells faucets made in China (which contributes to global trade), you now just print out your own faucet, sitting at home or at a local store. Are people also getting more interested in local products than in global brands?
Joshua Cooper Ramo points out in an essay in Fortune that localism is on the rise – local banking, local manufacturing, and even local sourcing for food and restaurants. Is this simply a pause or could it be more than that? The answer will depend on politics.
The last time the world saw a consistent period where the growth of global trade lagged behind global growth was in the 1920s, 30s, and 40s. One factor was the rise in protectionist policies - as a response in many cases to the Great Depression and the disruption of the gold standard. At one point, under what was known as the Smoot-Hawley tariff, the United States government began imposing import duties of around 60 percent. The move was aimed at protecting domestic farmers, but instead, it exacerbated the depression. It led to a steep drop in trade, and a wave of counter protectionist measures by other countries.
The world has learned its lessons from the Great Depression. But perhaps not as well as it should have.
According to the independent think tank Global Trade Alert, we’re in the midst of a great rise in protectionism. In the 12 months preceding May 2013, governments around the world imposed three times as many protectionist measures as moves to open up. Anti-trade policies are at their highest point since the 2008 financial crisis. According to the Peterson Institute, the rise of these measures cost global trade 93 billion dollars in 2010.
There might be some good news on this front. Last month, the World Trade Organization passed a deal to cut red tape in customs. It’s a small start, and there is a lot more to accomplish. Globalization and trade have produced huge benefits for people, especially the poor, who have been able to make their way out of poverty in a faster growing and more connected global economy. But globalization won’t continue by accident or stealth – politicians will have to help make it happen.
I don't think that trade will ever completely stop. In America we look for the cheapest goods, so that's why we wouldn't stop trading. Also, when you are in business you look for the cheapest way to make your good, so if you let the Chinese make your good and get it for 50 cents a piece, it's better than staying local and paying American workers 75 cents to a dollar a piece. One thing that concerned me is when it said "the last time the world saw a consistent period... was in the 1920s, 30s, and 40s." I don't know a lot about globalization, but I know a lot about history, and in history those years were the years of the Great Depression.
|
// src/lib/enums/dataStatus.ts
export enum DataStatus {
Init,
Loading,
Loaded,
Error,
}
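A typical consumer of this enum tracks the lifecycle of a data fetch. The sketch below is a hypothetical usage example (the `FetchState` container and the helper names are assumptions, not part of this module); the enum is re-declared so the snippet is self-contained:

```typescript
// Hypothetical consumer of DataStatus: a tiny fetch-state container.
enum DataStatus {
  Init,
  Loading,
  Loaded,
  Error,
}

interface FetchState<T> {
  status: DataStatus;
  data?: T;
  error?: string;
}

// Illustrative transition helpers; names are assumptions, not project API.
function loading<T>(): FetchState<T> {
  return { status: DataStatus.Loading };
}

function loaded<T>(data: T): FetchState<T> {
  return { status: DataStatus.Loaded, data };
}

function failed<T>(error: string): FetchState<T> {
  return { status: DataStatus.Error, error };
}
```

Keeping the status in a single enum field (rather than separate `isLoading`/`hasError` booleans) makes impossible states unrepresentable.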
|
package lcd
import (
ctypes "github.com/tendermint/tendermint/rpc/core/types"
"github.com/cosmos/cosmos-sdk/codec"
)
var cdc = codec.New()
func init() {
ctypes.RegisterAmino(cdc)
}
|
package com.crossoverjie.leetcode.easy;
import org.junit.Assert;
import org.junit.Test;
public class RemoveDuplicatesSortedArrayTest {
@Test
public void removeDuplicates() throws Exception {
int[] nums = new int[]{1,2,2,3,4,5} ;
RemoveDuplicatesSortedArray array = new RemoveDuplicatesSortedArray() ;
int i = array.removeDuplicates(nums);
Assert.assertEquals(5,i);
}
@Test
public void removeDuplicates2() throws Exception {
int[] nums = new int[]{2,2,2,3,4,5} ;
RemoveDuplicatesSortedArray array = new RemoveDuplicatesSortedArray() ;
int i = array.removeDuplicates(nums);
Assert.assertEquals(4,i);
}
@Test
public void removeDuplicates3() throws Exception {
int[] nums = new int[]{2,2,2,3,5,6} ;
RemoveDuplicatesSortedArray array = new RemoveDuplicatesSortedArray() ;
int i = array.removeDuplicates(nums);
Assert.assertEquals(4,i);
}
@Test
public void removeDuplicates4() throws Exception {
int[] nums = new int[]{2,2,2,3,5,5} ;
RemoveDuplicatesSortedArray array = new RemoveDuplicatesSortedArray() ;
int i = array.removeDuplicates(nums);
Assert.assertEquals(3,i);
}
@Test
public void removeDuplicates5() throws Exception {
int[] nums = new int[]{3,5,5} ;
RemoveDuplicatesSortedArray array = new RemoveDuplicatesSortedArray() ;
int i = array.removeDuplicates(nums);
Assert.assertEquals(2,i);
}
@Test
public void removeDuplicates6() throws Exception {
int[] nums = new int[]{1,1,2} ;
RemoveDuplicatesSortedArray array = new RemoveDuplicatesSortedArray() ;
int i = array.removeDuplicates(nums);
Assert.assertEquals(2,i);
}
@Test
public void removeDuplicates7() throws Exception {
int[] nums = new int[]{1,1,1} ;
RemoveDuplicatesSortedArray array = new RemoveDuplicatesSortedArray() ;
int i = array.removeDuplicates(nums);
Assert.assertEquals(1,i);
}
@Test
public void removeDuplicates8() throws Exception {
int[] nums = new int[]{1,1,1,2,2,3,3,4,4} ;
RemoveDuplicatesSortedArray array = new RemoveDuplicatesSortedArray() ;
int i = array.removeDuplicates(nums);
Assert.assertEquals(4,i);
}
@Test
public void removeDuplicates9() throws Exception {
RemoveDuplicatesSortedArray array = new RemoveDuplicatesSortedArray() ;
int i = array.removeDuplicates(null);
Assert.assertEquals(0,i);
}
@Test
public void removeDuplicates10() throws Exception {
RemoveDuplicatesSortedArray array = new RemoveDuplicatesSortedArray() ;
int i = array.removeDuplicates(new int[]{});
Assert.assertEquals(0,i);
}
}
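The class under test is not included in this file; a minimal two-pointer implementation consistent with the assertions above could look like the following sketch (an assumption, not necessarily the project's original code):

```java
// Sketch: in-place removal of duplicates from a sorted array, returning
// the new logical length. Handles the null/empty cases exercised by
// removeDuplicates9/10 above.
class RemoveDuplicatesSortedArray {
    public int removeDuplicates(int[] nums) {
        if (nums == null || nums.length == 0) {
            return 0;
        }
        int slow = 0; // index of the last unique element kept so far
        for (int fast = 1; fast < nums.length; fast++) {
            if (nums[fast] != nums[slow]) {
                slow++;
                nums[slow] = nums[fast];
            }
        }
        return slow + 1;
    }
}
```

Because the input is sorted, a single pass with two indices suffices: `fast` scans every element while `slow` marks the end of the deduplicated prefix.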
|
// sgholamian/log-aware-clone-detection
//,temp,TestLoadJob.java,59,66,temp,TestSleepJob.java,94,101
//,2
public class xxx {
@Test (timeout=600000)
public void testSerialSubmit() throws Exception {
// set policy
policy = GridmixJobSubmissionPolicy.SERIAL;
LOG.info("Serial started at " + System.currentTimeMillis());
doSubmission(JobCreator.SLEEPJOB.name(), false);
LOG.info("Serial ended at " + System.currentTimeMillis());
}
};
|
/**
* Get the document scroll height, this means the actual height of the page from the top to the bottom of the DOM
*/
export default function getDocumentScrollHeight(): number {
const viewPortHeight = Math.max(document.documentElement.clientHeight, window.innerHeight || 0);
const scrollHeight = document.documentElement.scrollHeight;
const bodyScrollHeight = document.body.scrollHeight;
// In some situations the default scrollheight can be equal to the viewport height
// but the body scroll height can be different, then return that one
if ((viewPortHeight === scrollHeight) && (bodyScrollHeight > scrollHeight)) {
return bodyScrollHeight;
}
// In some cases we can have a challenge determining the height of the page
// due to for example a `vh` property on the body element.
// If that is the case we need to walk over all the elements and determine the highest element
// this is a very time consuming thing, so our last hope :(
let pageHeight = 0;
let largestNodeElement = document.querySelector('body');
if (bodyScrollHeight === scrollHeight && bodyScrollHeight === viewPortHeight) {
findHighestNode(document.documentElement.childNodes);
// There could be some elements above this largest element,
// add that on top
/* istanbul ignore next */
return pageHeight + largestNodeElement.getBoundingClientRect().top;
}
// The scrollHeight is good enough
return scrollHeight;
/**
* Find the largest html element on the page
* @param nodesList
*/
function findHighestNode(nodesList: any) {
for (let i = nodesList.length - 1; i >= 0; i--) {
const currentNode = nodesList[i];
/* istanbul ignore next */
if (currentNode.scrollHeight && currentNode.clientHeight) {
const elHeight = Math.max(currentNode.scrollHeight, currentNode.clientHeight);
pageHeight = Math.max(elHeight, pageHeight);
if (elHeight === pageHeight) {
largestNodeElement = currentNode;
}
}
if (currentNode.childNodes.length) {
findHighestNode(currentNode.childNodes);
}
}
}
}
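The first branch's decision can be pulled into a pure helper so it is unit-testable without a DOM (a hypothetical refactor; `pickScrollHeight` is not part of the original module):

```typescript
// Pure version of the first decision in getDocumentScrollHeight:
// when documentElement.scrollHeight collapsed to the viewport height
// but the body reports a larger scroll height, trust the body.
function pickScrollHeight(
  viewPortHeight: number,
  scrollHeight: number,
  bodyScrollHeight: number,
): number {
  if (viewPortHeight === scrollHeight && bodyScrollHeight > scrollHeight) {
    return bodyScrollHeight;
  }
  return scrollHeight;
}
```

Extracting DOM-free logic like this keeps the slow `findHighestNode` walk as the only part that genuinely needs a browser to test.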
|
# -*-coding:Utf-8 -*
# Copyright (c) 2010-2017 <NAME>
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# * Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software
# without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
# OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
"""SuivreCap objective."""
from primaires.vehicule.vecteur import get_direction
from secondaires.navigation.constantes import *
from secondaires.navigation.equipage.objectifs.rejoindre import Rejoindre
# Constants
DISTANCE_MIN = 3
class SuivreCap(Rejoindre):
"""SuivreCap objective.
This objective, based on the Rejoindre objective, asks a crew
to follow an assigned course. It is responsible for finding the
given destination and the distance separating the ship from it,
for using the required speed, and for changing course if needed
when the ship reaches its destination.
"""
cle = "suivre_cap"
def __init__(self, equipage, vitesse_max=None):
Rejoindre.__init__(self, equipage)
self.vitesse_max = vitesse_max
def trouver_cap(self):
"""Find the course (x, y, speed).
This method finds the course as recorded by the crew.
"""
equipage = self.equipage
navire = self.navire
commandants = equipage.get_matelots_au_poste("commandant", False)
commandant = commandants[0].personnage if commandants else None
if equipage.destination and not self.doit_reculer:
self.x, self.y = equipage.destination
distance = self.get_distance()
norme = distance.norme
vitesse = get_vitesse_noeuds(norme)
# Check the tighter distance first (otherwise the < 15 branch is
# unreachable) and guard against vitesse_max being None.
if norme < 15:
vitesse = 0.2
elif norme < 40:
vitesse = 0.6
elif self.vitesse_max is not None and vitesse > self.vitesse_max:
vitesse = self.vitesse_max
self.vitesse = vitesse
direction = round(distance.direction)
if distance.norme <= DISTANCE_MIN:
# Look for the next point on the chart
try:
cle = equipage.caps[0]
except IndexError:
suivant = None
else:
trajet = importeur.navigation.trajets[cle]
suivant = trajet.points.get(tuple(equipage.destination))
if suivant is None:
del equipage.caps[0]
try:
cle = equipage.caps[0]
except IndexError:
# The course is not circular
equipage.destination = None
self.vitesse = 0
equipage.demander("relacher_rames",
personnage=commandant)
equipage.demander("relacher_gouvernail",
personnage=commandant)
equipage.demander("plier_voiles", None,
personnage=commandant)
equipage.demander("jeter_ancre", personnage=commandant)
equipage.objectifs.remove(self)
equipage.retirer_controle("vitesse")
return
else:
trajet = importeur.navigation.trajets[cle]
suivant = trajet.point_depart
equipage.destination = suivant
# Hand the controls back
Rejoindre.trouver_cap(self)
|
Empirical analysis of piezoelectric stacks composed of plates with different parameters and excited with different frequencies Piezoelectric materials offer an ability to exchange energy between electrical and mechanical systems with fair ease, by using the simple and inverse piezoelectric effect. Piezoelectric transceivers, sensors, microphones, actuators, or active acoustic noise cancellation devices are some of many applications for piezoelectric materials. Due to the natural ability of the material to convert electricity and mechanical strain, solutions based on piezoelectric materials are often compact and allow applications on micro scales. One of the forms of piezoelectric actuation involves the use of piezoelectric stacks. The stack is composed of multiple piezoelectric plates layered by thin dielectric sheets, with electrodes attached along the sides. The main aim of piezoelectric stacks is the increase in maximum displacement by multiplying the number of piezoelectric plates that make up the stack. Stacks are composed of plates with the same material properties and the same dimensions. This study aims to investigate the idea of composing piezoelectric stacks of plates that have separate control circuits inducing different carrier frequencies or plates with differing properties and dimensions, in search for new applications for piezoelectric stacks. The main point of interest is the investigation of the ability to use piezoelectric stacks to generate complex vibration spectrums composed of multiple frequencies, resulting from the use of different piezoelectric plates in the stack or different carrier frequencies that stimulate each plate. To achieve this, a stack composed of two piezoelectric plates, each controlled by its own circuit, will be measured by a laser vibrometer, to check the complexity of the output vibration pattern.
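The core claim above — that a stack whose plates are driven at different carrier frequencies produces a vibration spectrum with multiple distinct peaks — can be illustrated with a toy signal model. The pure-sinusoid superposition and the frequency values below are illustrative assumptions, not measured data:

```python
import cmath
import math

def dft_magnitude(signal, k):
    """Magnitude of the k-th DFT bin of a real-valued signal."""
    n = len(signal)
    total = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
    return abs(total)

# Two plates in the stack, each driven by its own circuit at a different
# carrier frequency (values are arbitrary for illustration); the combined
# displacement is modeled as a superposition of the two responses.
N = 512            # samples in one observation window
F1, F2 = 8, 21     # carrier frequencies, in cycles per window
signal = [math.sin(2 * math.pi * F1 * t / N)
          + 0.5 * math.sin(2 * math.pi * F2 * t / N)
          for t in range(N)]

# The resulting spectrum has distinct peaks at both carriers and
# essentially nothing at unexcited frequencies.
peak1 = dft_magnitude(signal, F1)        # ~N/2 = 256
peak2 = dft_magnitude(signal, F2)        # ~0.5 * N/2 = 128
background = dft_magnitude(signal, 5)    # ~0
```

A laser-vibrometer measurement of the real stack would of course include harmonics, coupling, and noise, but the same peak-detection idea applies.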
|
// israelite6/tellit-api
import { Test, TestingModule } from '@nestjs/testing';
import { QueueWorkerController } from './queue-worker.controller';
import { QueueWorkerService } from './queue-worker.service';
describe('QueueWorkerController', () => {
let queueWorkerController: QueueWorkerController;
beforeEach(async () => {
const app: TestingModule = await Test.createTestingModule({
controllers: [QueueWorkerController],
providers: [QueueWorkerService],
}).compile();
queueWorkerController = app.get<QueueWorkerController>(
QueueWorkerController,
);
});
describe('root', () => {
it('should return "Hello World!"', () => {
expect(queueWorkerController.getHello()).toBe('Hello World!');
});
});
});
|
package main
import (
"github.com/pseudomuto/protokit"
"html/template"
"os"
"testing"
)
var str = `
<h1 id={{.Name}}>{{.Name}}</h1>
<p>{{.Comments.Leading}}</p>
<ul>
{{range .Apis}}
<li><a href={{.Path}}>{{.Path}}</a></li>
{{end}}
</ul>
`
type htmlObject struct {
Name string
Comments *protokit.Comment
Apis []*api
}
func TestA(t *testing.T) {
ht, err := template.New("doc").Parse(str)
if err != nil {
t.Fatal(err)
}
f, err := os.Create("./test.html")
if err != nil {
t.Fatal(err)
}
defer f.Close()
err = ht.Execute(f, htmlObject{
Name: "fff",
Comments: &protokit.Comment{
Leading: "ffff",
},
Apis: []*api{{Path: "ffff"}},
})
if err != nil {
t.Error(err)
}
}
|
There were times earlier this season when David Barrett wouldn’t have minded a little silence, a little peace and quiet. The furnace-like verbal barrage he faced on a daily basis in the form of Jets defensive coordinator Donnie Henderson’s mouth had shattered his eardrums and his confidence, pulverized his pride and echoed late into the darkness of his training-camp bedroom.
Signed as a free agent cornerback in the offseason after four years with the Cardinals, Barrett was beginning to wonder what exactly he’d gotten himself into. But all football players know that when the coaches stop yelling is when it’s time to worry.
“Because he understands what we’re expecting of him, he understands what we’re gonna try to give him to be a good player,” Edwards added.
So they rode him. They rode him plenty hard, and the result has been Barrett’s transformation from verbal punching bag to defensive standout.
Barrett will be a focal point tomorrow against the Seahawks, a West Coast offense that has never been shy about throwing the ball. Henderson and Edwards will confidently deploy him against Seattle’s leading receiver, Darrell Jackson and his 74 catches.
In the past three weeks, Barrett has emerged as the Jets’ top corner. In the past three games, he has two interceptions and forced a fumble covering the opposition’s top receiving threat. He has 64 tackles on the season.
Jets are 5-1 at home this season, a dramatic change from Edwards’ teams that went 13-11 at the Meadowlands the last three years.
DE John Abraham (knee) is out . . . LG Pete Kendall (knee) and OLB Victor Hobson (ankle) are listed as probable.
|
Abundant bacteria shaped by deterministic processes have a high abundance of potential antibiotic resistance genes in a plateau river sediment
Recent research on abundant and rare bacteria has expanded our understanding of bacterial community assembly. However, the relationships of abundant and rare bacteria with antibiotic resistance genes (ARGs) remain largely unclear. Here, we investigated the biogeographical patterns and assembly processes of the abundant and rare bacteria from river sediment at high altitudes (Lhasa River, China) and their potential association with the ARGs. The results showed that the abundant bacteria were dominated by Proteobacteria (55.4%) and Cyanobacteria (13.9%), while the Proteobacteria (33.6%) and Bacteroidetes (18.8%) were the main components of rare bacteria. Rare bacteria with a large taxonomic pool can provide function insurance in bacterial communities. Spatial distribution of persistent abundant and rare bacteria also exhibited striking differences. Strong selection of environmental heterogeneity may lead to deterministic processes, which were the main assembly processes of abundant bacteria. In contrast, the assembly processes of rare bacteria affected by latitude were dominated by stochastic processes. Abundant bacteria had the highest abundance of metabolic pathways of potential drug resistance in all predicted functional genes and a high abundance of potential ARGs. There was a strong potential connection between these ARGs and mobile genetic elements, which could increase the ecological risk of abundant taxa and human disease. These results provide insights into sedimental bacterial communities and ARGs in river ecosystems.
Introduction
Rivers play a vital and irreplaceable role in the process of human civilization and global biogeochemical cycling. Due to the development of human society and rapid economic growth, river pollution has become a critical challenge.
Antibiotic resistance genes (ARGs) and antibiotic-resistant bacteria (ARB) are recognized as emerging contaminants (Cosgrove, 2006). Environmental pollution due to factors such as heavy metals may accelerate the enrichment and evolution of ARB and ARGs and increase the risk of transmission of the environmental resistome to humans. Bacterial community composition is a vital factor affecting the distribution of ARGs. For example, Cyanobacteria blooms promote the diversity of ARGs in aquatic ecosystems. The change in the bacterial community promotes the improvement of ARGs in the chlorination process of drinking water. These studies on the correlation between bacterial community and ARGs were conducted at the overall level of the community. However, microbial communities in nature are comprised of a large number of species, while few of these species are abundant, and a large number of species are often called the "rare biosphere". To date, we still know little about how spatial variation in ARG composition relates to bacterial taxonomic composition (i.e., abundant bacteria or rare bacteria) in a river continuum. Abundant and rare bacteria in sediments are major participants in the biogeochemical cycle of rivers. It is the core goal of community ecology to reveal the basic mechanism of the generation and maintenance of river microbial community diversity, and some interesting patterns have been discovered. For example, the physicochemical properties (such as pH, heavy metals content, and nutritional status) and spatial distribution (such as horizontal geographic distribution and vertical altitude distribution) were important drivers of the unique biogeographic patterns of microbial communities. However, there has been little consistency in the studies so far due to the heterogeneity of the river ecosystem.
In recent years, pollutants from industry and daily life have entered the Lhasa River and degraded its water quality to a certain degree. Changes in microbial community diversity and structure can directly and/or indirectly affect aquatic ecological function and are a comprehensive and sensitive index of environmental quality in aquatic ecosystems (). The main objective of this study was to examine the biogeographical patterns and assembly processes of the abundant and rare bacteria and their potential association with ARGs in the sediment of the Lhasa River. Therefore, 16S rRNA gene sequencing and qPCR were used to analyze the sediments of the Lhasa River to determine the adaptation mechanisms of microorganisms and resistance genes in the sediments. We hope this study can provide new insights into sedimental bacterial communities and ARGs in river ecosystems. Materials and methods Study sites and sample collection The Lhasa River (90.08-93.33°E, 29.33-31.25°N), known as the "Water Tower of Asia," is located on the Qinghai-Tibetan Plateau and is one of the highest rivers in the world (). The Lhasa River basin is about 568 km long from east to west, and its altitude ranges between 3,570 and 5,200 m above sea level. More than 70% of the population of the Lhasa River Basin is concentrated between Mozhugongka county and Qushui county. Therefore, we set up 10 sampling sites along the Lhasa River from Mozhugongka to Qushui county; detailed geographic information on the sampling sites is given in Supplementary Table S1 (Supplementary materials). Surface sediment (0-5 cm) was collected from each site in September 2019 using a stainless-steel core sampler. Three sub-samples were collected from each site, mixed as one sample, kept in a car refrigerator, transported to the laboratory, and stored at −80°C before 16S rRNA gene sequencing.
The contents of Cr, Co, Cu, Zn, As, Cd, Hg, and Pb in sediment were determined by inductively coupled plasma mass spectrometry (ICP-MS, X Series 2, Germany). Detailed data and measurement methods are shown in Supplementary Table S2. Detailed information about sediment physicochemical properties (temperature, pH, salinity, and conductivity) and nutrients is shown in Supplementary Table S3. 16S rRNA gene sequencing Genomic DNA of the bacterial community from each site was extracted using a bacterial DNA Extraction Kit (Tiangen Biotech, Inc., Beijing, China) according to the manufacturer's protocols (a). The DNA served as a template for PCR amplification of the V4 region of the 16S rRNA gene using the primer set 515F/806R (;). The sequencing library was constructed after the 16S rRNA amplicons were purified, and sequencing was performed on a Thermo Fisher Ion S5™ XL platform. The raw fastq data were quality-filtered to remove low-quality regions and chimeric sequences and obtain clean reads (Martin, 2011;). The clean reads were clustered into operational taxonomic units (OTUs) at the 97% similarity level using Uparse. Since this study focused only on bacteria, we deleted all OTUs that did not belong to bacteria. The MUSCLE method and the SSU rRNA database of SILVA 132 were used for species annotation (;). We followed a previously reported method () and applied Tax4Fun to infer the function and functional redundancy index (FRI) of the sequenced bacterial genomes. Analysis of ARGs in the sediment from the Lhasa River A total of 23 ARGs and the 16S rRNA gene were selected to investigate the distribution of ARGs in the sediment from the Lhasa River. Herein, representative environmental ARGs and clinically important ARGs were taken into account based on their potential ecological risks and threats to human health (a;b).
The 23 ARGs covered seven major classes of antibiotics: colistin (mcr-1, mcr-3, and mcr-7), beta-lactam (bla CTX-M-32, bla NMD-1, bla CMY, bla CTX-M, and bla TEM ), aminoglycoside (aadA, strB, and armA), macrolide (ereA, ereB, and mphA), quinolone (qnrA, qnrB, and qnrS), sulfonamide (sul1, sul2, and sul3), and tetracycline (tetA, tetM, and tetX) resistance genes. Besides, the transposase gene (tnpA) and the class 1 integron-integrase gene (intI1) were selected to investigate the transfer or propagation of ARGs in the Lhasa River sediment. Detailed information on the primers and their corresponding target genes is given in Supplementary Table S4. The qPCR reaction in a 10 μl reaction volume was performed with an initial denaturation at 95°C for 30 s, followed by 40 thermal cycles of 10 s at 95°C, 30 s annealing at 55°C, and 1 min extension at 72°C. The relative abundances of ARGs and mobile genetic elements (MGEs) were calculated using the 2^−ΔCt method. Statistical analysis Previous studies have generally defined OTUs at the regional level with average relative abundances >0.10% as "abundant," those with average relative abundances <0.01% as "rare," and those in between as "intermediate" (Jiao and Lu, 2020a;a;b;Zhang Y. et al., 2021). However, this definition is not suitable for our data because the total abundance of these abundant bacterial OTUs in some samples was lower than 50% of that sample's total reads. Similarly, some previous studies have defined OTUs at the regional level with average relative abundances >0.05% as "abundant" (;). Thus, across all sediments, OTUs with an average relative abundance above 0.05% were defined as abundant bacteria, while OTUs with an average relative abundance below 0.01% were regarded as rare bacteria. The remaining OTUs (0.01-0.05%) were deemed "intermediate".
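The abundance-threshold classification described above can be sketched as follows. This is a minimal illustration, assuming an OTU table of raw read counts per sample; the OTU names and counts are invented for demonstration.

```python
# Sketch of the abundant/rare classification: mean relative abundance across
# samples > 0.05% -> "abundant", < 0.01% -> "rare", otherwise "intermediate".

def classify_otus(counts):
    """counts: dict {otu_id: [reads per sample]} -> dict {otu_id: category}."""
    n_samples = len(next(iter(counts.values())))
    # per-sample read totals, used to convert counts to relative abundances
    totals = [sum(row[j] for row in counts.values()) for j in range(n_samples)]
    categories = {}
    for otu, row in counts.items():
        mean_rel = sum(row[j] / totals[j] for j in range(n_samples)) / n_samples
        if mean_rel > 0.0005:          # > 0.05 %
            categories[otu] = "abundant"
        elif mean_rel < 0.0001:        # < 0.01 %
            categories[otu] = "rare"
        else:
            categories[otu] = "intermediate"
    return categories

table = {
    "OTU_1": [600, 550, 700],          # clearly abundant
    "OTU_2": [1, 0, 1],                # rare
    "OTU_3": [2, 3, 2],                # in between -> intermediate
    "OTU_bg": [10000, 10000, 10000],   # large background OTU for realistic totals
}
print(classify_otus(table))
```

The same thresholds are applied to every OTU relative to each sample's own sequencing depth, which is why the per-sample totals are computed first.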
The community similarity (1 − Bray-Curtis distance) and phylogenetic similarity (1 − MNTD) of abundant and rare bacteria were calculated based on taxonomic distance and phylogenetic distance, respectively. Then, the distance-decay relationship (DDR) was used to reveal the responses of community similarity and phylogenetic similarity to horizontal (geographic distance) and vertical (altitude distance) spatial distribution and to environmental heterogeneity (Bray-Curtis distance). The network was constructed by Spearman correlation and visualized via Gephi software (0.9.1; Gephi, WebAtlas, France). We identified the contributions of the different assembly processes of abundant and rare bacteria in the Lhasa River sediments by applying the null model analysis of Stegen et al.. Composition and distribution of abundant and rare bacteria The relative abundance of abundant bacteria (mean = 69.6%) was higher than that of rare ones (10.5%; Figure 1A). Conversely, the Chao1 richness (381.6), Shannon diversity (5.08), and Pielou evenness (0.86) of abundant bacteria were lower than those of the rare ones (1937.5, 6.88, and 0.95, respectively; Figure 1A). At the phylum level, abundant bacteria were dominated by Proteobacteria (55.1%), Cyanobacteria (13.8%), and Bacteroidetes (11.6%), while Proteobacteria (36.1%), Bacteroidetes (19.3%), and Actinobacteria (7.17%) were the main components of rare bacteria (Figure 1B). Abundance-occupancy relationships showed that rare bacteria possessed stronger positive correlations than abundant bacteria (Figure 1C). Meanwhile, abundant bacterial taxa had a wider distribution than rare bacterial taxa. The petal diagram showed that abundant bacteria had 325 OTUs that persisted in all sediments, while rare bacteria had only 28 OTUs (Figure 1D). Even these persistent abundant and rare bacteria showed obvious differences in spatial distribution.
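The distance-decay relationship used above can be illustrated with a minimal sketch: community similarity is computed as 1 − Bray-Curtis dissimilarity for every pair of sites and regressed against their pairwise distance. The site positions and abundance vectors below are invented for illustration only.

```python
def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    num = sum(abs(a - b) for a, b in zip(x, y))
    den = sum(a + b for a, b in zip(x, y))
    return num / den

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Toy communities along a gradient: composition drifts with position (km).
sites = {0.0: [10, 5, 1], 50.0: [8, 6, 3], 100.0: [5, 7, 6], 150.0: [2, 8, 9]}
positions = sorted(sites)
dists, sims = [], []
for i in range(len(positions)):
    for j in range(i + 1, len(positions)):
        dists.append(positions[j] - positions[i])
        sims.append(1.0 - bray_curtis(sites[positions[i]], sites[positions[j]]))
print("DDR slope:", slope(dists, sims))  # negative: similarity decays with distance
```

A more negative slope corresponds to faster community turnover with distance, which is how the Slope values reported for abundant and rare bacteria are interpreted in the results.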
The community similarity (Figure 2A) and phylogenetic similarity (Figure 2E) of abundant bacteria were higher than those of rare bacteria, indicating that rare bacteria had more taxonomic and phylogenetic variation than abundant bacteria. Furthermore, the community similarity of abundant and rare bacteria had significantly positive correlations with the corresponding phylogenetic similarity, and the correlations for rare bacteria were stronger than those for abundant bacteria (Supplementary Figure S1), indicating that the phylogeny of abundant and rare bacteria had different sensitivities to environmental changes. The DDR showed that the community similarity of both abundant and rare bacteria significantly decreased with increasing geographical distance (Figure 2B). Interestingly, the effect of geographical distance on the community composition of rare bacteria (R² = 0.15) was greater than that on abundant bacteria (R² = 0.12), whereas the community composition of abundant bacteria (Slope = −0.083) showed more community turnover with increasing geographical distance. Besides, the composition of rare bacteria was also significantly affected by altitude in biogeographic patterns, and the community similarity significantly decreased with increasing altitude distance (Figure 2C). Similarly, the effect of environmental heterogeneity on rare bacteria (R² = 0.35) was greater than that on abundant bacteria (R² = 0.18). Rare bacteria (Slope = −0.609) showed more community turnover with increasing environmental change (Figure 2D). Specifically, the taxonomic composition of rare bacteria was significantly affected by heavy metals, such as Cu, Zn, Cd, and As, whereas the composition of abundant bacteria was significantly affected only by Cu (Supplementary Table S5). In particular, Cu had more influence on the taxonomic composition of rare bacteria than on that of abundant bacteria (Supplementary Table S5).
However, the phylogenetic similarity of abundant and rare bacteria did not decrease significantly with increasing geographical distance or altitude distance (Figures 2F,G). Only the phylogenetic similarity of rare bacteria was significantly affected by environmental heterogeneity (Figure 2H). The different responses of abundant and rare bacteria to geographic and environmental factors in taxonomic and phylogenetic composition may indicate distinct community ecological assembly processes. Community assembly processes of abundant and rare bacteria in the sediment Although the niche width of abundant bacteria (mean = 5.58) was higher than that of rare bacteria (2.80), the niches of abundant bacteria showed higher differentiation (Figure 3A). Results from the null model showed that differentiating processes dominated both abundant (99.8%) and rare bacteria (88.9%) assembly, while homogenizing processes (4.44%) had little impact on rare bacteria assembly (Figure 3B). Additionally, the stochastic process (62.2%) was the main assembly pattern of rare bacteria in the sediment of the Lhasa River, followed by the deterministic process (37.8%). The deterministic process (53.3%) dominated the assembly of abundant bacteria, followed by the stochastic process (46.7%). These results show that the contributions of stochastic and deterministic processes to the assembly of abundant and rare bacteria in the sediment of the Lhasa River were different. Mantel tests suggested that the NTI values of both abundant and rare bacteria had a significant correlation with the geospatial factor (latitude; Supplementary Table S6), indicating that the community assembly of abundant and rare bacteria may be affected by latitude. Furthermore, the NTI values of abundant bacteria significantly correlated with pH, conductivity, and heavy metal (Cd; Supplementary Table S6).
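The stochastic/deterministic partitioning reported above follows the two-step null-model framework of Stegen et al. A sketch of the conventional decision rules is given below; the βNTI and RCbray input values are hypothetical, and the |βNTI| = 2 and |RCbray| = 0.95 cutoffs are the commonly used thresholds of that framework, not values specific to this study.

```python
def assembly_process(beta_nti, rc_bray):
    """Classify one pairwise community comparison into an ecological assembly
    process (Stegen et al. two-step null-model framework)."""
    if beta_nti > 2:
        return "variable selection"       # deterministic
    if beta_nti < -2:
        return "homogeneous selection"    # deterministic
    # phylogenetic turnover not different from null -> check taxonomic turnover
    if rc_bray > 0.95:
        return "dispersal limitation"     # stochastic
    if rc_bray < -0.95:
        return "homogenizing dispersal"   # stochastic
    return "undominated"                  # stochastic / drift

pairs = [(3.1, 0.2), (-2.6, 0.1), (0.4, 0.97), (1.1, -0.99), (0.0, 0.3)]
print([assembly_process(b, r) for b, r in pairs])
```

Tallying these labels across all pairwise comparisons yields the percentages reported in the results (e.g., variable selection 53.3% for abundant bacteria, dispersal limitation 51.1% for rare bacteria).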
The results of the null model further suggested that variable selection (53.3%) was the dominant assembly process of abundant bacteria, whereas dispersal limitation (51.1%) was the dominant assembly process of rare bacteria (Figure 3C). These results suggested that the assembly of abundant bacteria was susceptible to environmental selection, while the assembly of rare bacteria was susceptible to geospatial factors. Co-occurrence patterns of abundant and rare bacteria A metacommunity network was constructed based on strong (|r| > 0.8) and significant (p < 0.01) Spearman correlations to explore the co-occurrence patterns of the sedimental microbial communities of the Lhasa River (Figure 4A). The network consisted of 2,699 nodes linked by 40,344 edges. Degree, betweenness centrality, closeness centrality, and eigenvector centrality within abundant bacteria were significantly higher than within rare bacteria (Figure 4B), indicating that abundant bacteria played an important role in maintaining community structure. Abundant bacteria interacted more with other bacterial taxa than among themselves (Figure 4A). Although the number of positive correlation edges (59.1%) was higher than that of negative correlation edges (40.9%) in the co-occurrence network of the whole sediment bacterial community, the proportion of negative correlations differed within and between the different bacterial taxa. For example, the proportion of negative correlations within bacterial taxa was lower than that between these bacterial taxa, suggesting there may be stronger competition between different bacterial taxa than within them. Further, the proportion of negative correlations within rare bacteria (40.1%) was higher than within abundant bacteria (38.9%), indicating that there may be stronger competition within rare bacteria than within abundant bacteria.
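The network-construction rule described above (edges only for strong Spearman correlations, |r| > 0.8) can be sketched as follows. The p < 0.01 significance filter applied in the study is omitted here for brevity, and the OTU abundance vectors are invented for illustration.

```python
def rank(values):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho = Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

otus = {
    "A": [1, 2, 3, 4, 5, 6],
    "B": [2, 3, 4, 5, 6, 7],   # tracks A -> strong positive edge
    "C": [6, 5, 4, 3, 2, 1],   # mirrors A -> strong negative edge
    "D": [4, 1, 5, 2, 6, 3],   # unrelated -> no edge
}
names = sorted(otus)
edges = []
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        r = spearman(otus[names[i]], otus[names[j]])
        if abs(r) > 0.8:  # a p < 0.01 test would also be applied in practice
            edges.append((names[i], names[j], round(r, 2)))
print(edges)
```

Positive-edge versus negative-edge counts over such a network are what the 59.1% / 40.9% proportions in the results refer to.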
Potential function analysis of abundant and rare bacteria When compared to rare bacteria, abundant bacteria not only had the highest abundance of metabolic pathways of potential drug resistance (such as antimicrobial and antineoplastic resistance), chemical structure transformation maps, cell growth and death, cell motility, and xenobiotics biodegradation and metabolism among all predicted functional genes, but also had greater pathogenic potential for human disease (such as infectious diseases; Figure 5A). However, abundant taxa had a weaker representation of global and overview maps (such as biosynthesis of antibiotics and carbon metabolism), carbohydrate metabolism, metabolism of terpenoids and polyketides, nucleotide metabolism, glycan biosynthesis and metabolism, and biosynthesis of other secondary metabolites. The functional redundancy index (FRI) of abundant bacteria was lower than that of rare bacteria (8,878; Figure 5B), indicating that the probability of potential function loss of rare bacteria after a disturbance was lower than that of abundant bacteria. FIGURE 2 Beta-diversity patterns of taxonomy and phylogeny for both abundant and rare bacteria in sediment from the Lhasa River. (A) Community similarity (1-Bray-Curtis distance) of abundant and rare bacteria. (B-D) Relationship of community similarity for both abundant and rare bacteria with geographical distance, altitude distance, and environmental heterogeneity, respectively. (E) Phylogenetic similarity (1-MNTD) of abundant and rare bacteria. (F-H) Relationship of phylogenetic similarity for both abundant and rare bacteria with geographical distance, altitude distance, and environmental heterogeneity, respectively. Asterisks denote significance (*p < 0.05; **p < 0.01; and ***p < 0.001).
Composition and distribution of ARGs A total of 20 ARGs were detected in the sediment samples of the Lhasa River, which included colistin (mcr-1, mcr-3, and mcr-7), beta-lactam (bla CTX-M-32, bla CMY, bla CTX-M, and bla TEM ), aminoglycoside (aadA and strB), macrolide (ereA, ereB, and mphA), quinolone (qnrA, qnrB, and qnrS), sulfonamide (sul1, sul2, and sul3), and tetracycline (tetM and tetX) resistance genes (Figure 6A). However, bla NMD-1, armA, and tetA were not detected in any sediment sample. The total relative abundance of ARGs ranged from 4.60 × 10⁻³ to 1.72 copies per 16S rRNA, indicating that ARGs were widely distributed in the sediment of the Lhasa River. bla TEM was detected in all sediments and was the most abundant ARG (mean relative abundance of 1.92 × 10⁻¹ copies per 16S rRNA), followed by tetM, sul1, and aadA. Besides, aadA, strB, sul1, sul2, and tetM were also detected in all sediments. Among the 10 sediment samples, the total relative abundance of ARGs at S6 was significantly higher than at the other locations. The total relative abundance of ARGs downstream of Lhasa (from S6 to S10) was significantly higher than upstream of Lhasa (from S1 to S5), suggesting that human activities may promote the accumulation of ARGs in the sediments of the Lhasa River (Figure 6B). Potential hosts and co-occurrence patterns of ARGs in sediment Network analysis showed that more members of the rare bacteria (36.5%) were potential hosts of ARGs (Figure 7A). A single ARG may have multiple potential hosts; for example, the potential hosts of mcr-7 belonged to both abundant and rare bacteria. However, network topology features showed that abundant bacteria rather than rare bacteria had stronger connectivity and centrality, indicating that abundant bacteria may be the potential hosts of more ARGs (Supplementary Figure S2).
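The relative abundances above (copies per 16S rRNA) come from qPCR Ct values via the 2^−ΔCt normalization described in the methods: each ARG's Ct is compared against the 16S rRNA gene Ct of the same sample. A minimal sketch, with made-up Ct values for illustration:

```python
def relative_abundance(ct_arg, ct_16s):
    """2^-(Ct_ARG - Ct_16S): copies of the ARG per copy of 16S rRNA."""
    return 2.0 ** (-(ct_arg - ct_16s))

# Hypothetical Ct values for one sediment sample (not data from the study).
sample = {"16S": 15.0, "blaTEM": 17.4, "sul1": 21.0, "tetM": 19.0}
for gene in ("blaTEM", "sul1", "tetM"):
    print(gene, round(relative_abundance(sample[gene], sample["16S"]), 4))
```

A smaller ΔCt (the target amplifies only slightly later than 16S) corresponds to a higher relative abundance, which is why a heavily contaminated site such as S6 shows up as lower ARG Ct values relative to its 16S signal.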
Besides, the relative abundance of abundant bacteria (15.5%) in the whole bacterial community was higher than that of rare bacteria (0.57%; Supplementary Figure S3). This also suggested that the abundant bacteria were the main potential hosts of ARGs. The relative abundance of ARGs and their potential hosts downstream was higher than upstream, suggesting that urbanization may promote the occurrence of ARGs and their potential hosts. FIGURE 3 Niche width (A) and community assembly processes (B,C) of abundant and rare bacteria in sediment from the Lhasa River. Stochastic = Dispersal limitation + Homogenizing dispersal + Undominated processes; Deterministic = Variable selection + Homogeneous selection; Homogenizing = Homogeneous selection + Homogenizing dispersal; and Differentiating = Variable selection + Dispersal limitation. Furthermore, the network results showed coexistence patterns among some ARGs; for example, bla TEM had significant correlations with strB, mphA, sul3, and tetM. More importantly, the transposase gene tnpA had significant correlations with aadA, strB, ereA, ereB, qnrS, sul1, sul2, and tetX. intI1 had significant correlations with aadA, strB, ereB, qnrS, sul1, sul2, and tetX. These results suggested that some ARGs (aadA, strB, ereB, qnrS, sul1, sul2, and tetX) in the sediment of the Lhasa River may co-exist on MGEs, which may increase the risk of transmission of these ARGs in aquatic ecosystems. Discussion Bacterial communities are the foundation of every ecosystem on Earth. They are often composed of abundant bacterial taxa with fewer species and rare bacteria with more species (Pedrós-Alió, 2012; ;b). Research on abundant and rare bacteria has expanded our understanding of bacterial community structure, but the relationships of abundant and rare bacteria with ARGs remain largely unclear.
Revealing the dominant host bacteria (e.g., abundant or rare bacteria) of ARGs and their assembly processes provides a good handle on the potential risks of ARGs. Therefore, we investigated the composition of abundant and rare bacteria and their relationship with ARGs. We also characterized the ecological assembly mechanisms of sedimental abundant and rare bacteria in the Lhasa River, China. Rare taxa can serve as function insurance of sediment bacterial communities In this study, rare bacteria with low relative abundance were found to have high richness and diversity (Figure 1A), which is consistent with findings of studies on the sediment of Erhai Lake (b) and Hangzhou Bay (). Although rare bacteria did not dominate the taxonomic community, they may still play an important role in maintaining the stability of the bacterial community in the Lhasa River sediment because of their large taxonomic pool: the more members the rare bacteria comprise, the stronger the buffering effect of their functional composition against environmental variation (). Previous studies showed that functional redundancy could protect microbial communities by maintaining ecosystem function homeostasis (). Our study found that rare bacteria had stronger functional redundancy than abundant ones (Figure 5B). FIGURE 4 Co-occurrence network of abundant and rare bacteria in the sediment of the Lhasa River. (A) The network analysis showed the intra-associations within each bacterial taxon and the inter-associations between different bacterial taxa. OTUs that occurred in more than half of the samples were used for network analysis. A connection was based on a strong (|r| > 0.8) and significant (p < 0.01) Spearman correlation. The size of each node is proportional to its degree. Numbers outside and inside parentheses represent total edge numbers, negative edge numbers, and their ratio, respectively. Besides, rare bacteria with high functional redundancy show a stronger adaptation to anthropogenic
disturbances. Thus, rare bacteria can serve as an insurance source for the function of sediment bacterial communities in the Lhasa River during external disturbance. Biogeographical patterns of abundant and rare bacteria in the sediment Our study found obvious differences in diversity, taxonomic composition, and phylogenetic composition between abundant and rare bacteria. Even the persistent abundant and rare bacteria showed different biogeographical patterns. The number of spatially persistent OTUs among abundant bacteria far exceeded that among rare ones, which is consistent with the spatial persistence pattern of rare and abundant bacteria in the sediment of Erhai Lake (b). Furthermore, this study found that the community similarity of abundant bacteria was higher than that of rare bacteria (Figure 2A), suggesting that the species composition of rare bacteria was more susceptible to geographical and environmental filters. Some studies found that the stronger the spatial variation within a microbial community, the more susceptible it is to environmental change (b). This may also foreshadow that rare bacteria in sediments from the Lhasa River were more susceptible to environmental changes. Geographic distance and environmental heterogeneity are abiotic factors that govern bacterial community assemblage (;Langenheder and Lindström, 2019). This study found that abundant and rare bacteria had complicated responses to geographical and environmental differences. FIGURE 5 Comparison of functional differences (A) and functional redundancy (B) between rich and rare groups in sediments of the Lhasa River. FIGURE 6 Main composition of the sedimental ARGs of the Lhasa River. Upriver includes the sediment samples from sampling sites S1 to S5, and downriver includes the sediment samples from sampling sites S6 to S10. "*" indicates a significant difference at the 0.05 level.
The spatial turnover of bacterial communities has been reported to be related to dispersal limitation (;). Our study found that both rare and abundant bacteria showed significant variation in horizontal spatial distribution, because their community similarity decreased with increasing geographical distance, which is similar to the biogeographical pattern of river microorganisms reported previously (a,b;. This study also found that altitude was another important spatial factor affecting the community turnover of rare bacteria in the sediment. The R² values of the DDR showed that geographical distance, altitude distance, and environmental heterogeneity had greater effects on rare bacteria than on abundant bacteria, showing that rare bacteria rather than abundant bacteria were more susceptible to environmental changes. The slopes of the DDR showed that the effects of altitude and environmental heterogeneity on the spatial turnover of rare bacteria surpassed those on abundant bacteria. These results indicate that geographic and environmental factors together shaped the unique biogeographic patterns of sediment abundant and rare bacteria in the Lhasa River. This also means that the greater impact of geographic and environmental factors on rare bacteria resulted in the community similarity of rare bacteria being far below that of abundant bacteria. Potential associations of bacterial communities Association among microbes is an essential biotic factor in the assembly processes of microbial communities, in addition to abiotic factors (geographic and environmental selection; ;b). In community ecological assembly processes, network analysis can provide new insights into the associations within an individual bacterial taxon and the linkages between different bacterial taxa (;b). Nodes in networks with high connectivity may play a crucial role in protecting the structural stability of the bacterial community ().
This study found that connectivity within the abundant bacteria was significantly higher than within rare bacteria, indicating that abundant bacteria may play an irreplaceable role in maintaining bacterial community structure. Furthermore, the positive interaction links in the network are mainly considered cooperative relationships among microbial members, while the negative interaction links are mainly considered competitive relationships among microbial members (). FIGURE 7 Co-occurrence patterns of ARGs and their potential hosts in the sediment of the Lhasa River. (A) Network analysis showed the co-occurrence patterns of ARGs and their potential hosts. The percentages are the shares of these taxa's OTUs or genes in the total OTUs or genes in the networks. OTUs and ARGs that occurred in more than half of the samples were used for network analysis. A connection was based on a strong (r > 0.8) and significant (p < 0.01) Spearman correlation. (B) Major network topological properties of co-occurrence patterns of ARGs and their potential hosts. Asterisks denote significance (NS, p ≥ 0.05; *p < 0.05; **p < 0.01; and ***p < 0.001). Cooperation among bacterial members helps improve the resilience of bacterial communities in responding to changing environments (). Cooperation among the abundant bacteria was greater than among rare bacteria, which may be an important reason for the widespread distribution of abundant bacteria in the sediment of the Lhasa River. Deterministic process was the dominant assembly process in abundant bacterial taxa Traditional niche theory generally holds that deterministically mediated community structure is governed by species interactions (e.g., competition and mutualisms, etc.)
and environmental variables (e.g., pH and temperature, etc.; ;Zhou and Ning, 2017), whereas neutral theory assumes that community structure is shaped by limited dispersal and random fluctuations in species abundance (e.g., birth, death, and extinction, etc.; Chave, 2004;Zhou and Ning, 2017). Although deterministic and stochastic processes are generally accepted to occur simultaneously in community assembly, their relative contributions to regulating community structure and biogeography remain debatable (Zhou and Ning, 2017). This study showed that both deterministic and stochastic processes occurred during the assembly of abundant and rare bacteria, which is consistent with previous studies (Zhou and Ning, 2017;Jiao and Lu, 2020b;b;b). Among them, the deterministic process was the dominant assembly mechanism of abundant bacterial taxa, while the stochastic process was the dominant assembly mechanism of rare bacteria. One possible reason is that the high species diversity of rare bacteria allows them to occupy various ecological niches, while more rare species undergo spatial turnover in their biogeographic distribution, which leads to the strong influence of stochastic processes on the assembly of rare bacterial taxa (). Similarly, more persistent species from the abundant bacterial taxa were detected in the Lhasa River sediment, which may be one reason why the assembly processes of abundant bacteria were more inclined toward deterministic processes. Furthermore, environmental and spatial variables also seemed to control the biogeographic patterns and assembly of abundant and rare bacteria. Null model results showed that variable selection was the dominant assembly process of abundant bacteria, followed by dispersal limitation. Conversely, dispersal limitation was the main assembly process of rare bacteria, followed by variable selection.
Moreover, the Mantel tests of geospatial and environmental factors against NTI values also suggested that the NTI values of abundant bacteria had significant correlations with more geospatial and environmental factors (such as latitude, pH, conductivity, TN to TC ratio, and Cd), whereas the NTI of rare bacterial taxa had a significant correlation only with latitude. This may be a decisive reason why abundant bacteria were more influenced by variable selection than rare bacteria. Urbanization increased the occurrence of ARGs in the Lhasa River sediment Antibiotic resistance genes were widely distributed in the sediment of the Lhasa River, among which bla TEM, aadA, strB, sul1, sul2, and tetM were detected at 100% frequency. Notably, bla TEM was detected in all sediment samples with the highest abundance, consistent with bla TEM in the surface sediments of Danjiangkou Reservoir (b). bla TEM is a clinically relevant ARG that can be used as an indicator gene for ARG contamination caused by human activities (;). This also indicates that sediments of the Lhasa River were contaminated by ARGs related to human activities. Human activities increase correspondingly with decreasing altitude along the Lhasa River, which also leads to an increase in the abundance of ARGs in the sediments. In contrast to a global survey that found that urbanization was strongly associated with lower rates of antibiotic resistance (), other studies have reported that urbanization could promote the development of bacterial resistance to antibiotics in rivers (;). This study found that urbanization promoted the enrichment of ARGs, which is consistent with the results found in the Yarlung Tsangpo River. Therefore, more attention should be paid to the pollution of ARGs caused by urbanization at the watershed scale.
Abundant bacterial taxa were the main potential hosts of ARGs Bacterial antibiotic resistance is one of the most serious global threats to environmental safety and human health (;;). Cyanobacteria were found to be a reservoir and source of ARGs (Wang Z. et al., 2020), which contributes to the increase in ARG diversity in aquatic ecosystems. In our study, the relative abundance of Cyanobacteria was second only to that of Proteobacteria among the abundant bacterial taxa, but not among the rare bacterial taxa. Previous studies also found that Proteobacteria, Bacteroidetes, and Actinobacteria are often antibiotic producers or have the ability to transform/metabolize antibiotics. Proteobacteria, Bacteroidetes, and Actinobacteria were the dominant phyla in both abundant and rare bacteria. Furthermore, abundant bacteria account for the highest proportion of the whole bacterial community, indicating that abundant bacteria may be the main hosts of ARGs in the sediments from the Lhasa River. Function prediction results show that abundant bacteria not only had a strong pathogenic potential for human diseases but also a strong potential for drug resistance. Environmental ARGs could threaten human health by increasing pathogenic ARB, leading to inefficient or ineffective use of therapeutic antibiotics in humans (;a). To date, we still know little about how spatial variation in ARG composition relates to bacterial taxonomic composition (i.e., abundant bacteria or rare bacteria) in a river continuum. Therefore, we further explored the relationship of the ARGs with abundant and rare bacteria based on an understanding of the biogeographic patterns of ARGs. The co-occurrence network is also widely used as a tool to explore the interactions between ARGs and their potential hosts (;). Important nodes in a network can be identified by central network location and high connectivity ().
In this study, the network analysis showed that the connectedness between the abundant bacteria and ARGs was higher than that of the rare ones (Figure 7B), indicating that the abundant bacteria may be the potential hosts of more ARGs. The relative abundance of these potential hosts belonging to abundant bacteria was also higher than that of rare bacteria. This may also be an important reason for the strong potential drug resistance of the abundant bacterial taxa. More importantly, ARGs in the environment lead to a rapid increase in the spread and number of ARB through horizontal gene transfer, which will also make antibiotic resistance an important and unavoidable global health problem (Yadav and Kapley, 2021). This study found that abundant bacteria not only had strong potential drug resistance but also had a high abundance of potential ARGs. There was a strong and significant correlation between these ARGs and MGEs, which could increase the ecological risk of abundant taxa and the potential for human disease. Some potential limitations merit further discussion. Our analyses focused on the abundant and rare bacterial taxa level, and we did not know the exact host bacteria for each ARG at the species level. Network results on the co-occurrence patterns between ARGs and bacterial taxa indicated possible host information for ARGs. The high abundance of potential drug-resistance metabolic pathways among all predicted functional genes in abundant bacteria may be due to their higher relative abundance in the whole community; it does not mean that abundant taxa contain more ARGs than rare taxa. Therefore, further studies are needed to verify the ARG bacterial hosts at the species level.
Conclusion

In this study, we investigated the biogeographical patterns and assembly mechanisms of rare and abundant bacteria and revealed the potential associations of ARGs with abundant and rare bacteria in Lhasa River sediment. The different and complex responses of abundant and rare bacteria to geospatial and environmental changes may be influenced mainly by deterministic and stochastic processes, respectively. Rare taxa can serve as functional insurance for bacterial communities in the Lhasa River sediment. This provides novel insights into the assembly and biogeographical patterns of abundant and rare bacteria in the sediment. To our knowledge, this study is the first to reveal that abundant bacteria harbor a high abundance of potential ARGs in plateau rivers, with strong pathogenic potential for human diseases. In particular, abundant bacteria with potential ARGs may also be the main potential hosts of MGEs, which may increase the ecological risks of abundant bacteria. These results provide new insights into understanding the association of ARGs with abundant and rare bacteria in plateau river sediment. Given the importance of ARGs to the health of aquatic ecosystems, the findings of this study should be validated experimentally at the bacterial species level in more diverse freshwater and marine ecosystems.

Data availability statement

The data presented in the study are deposited in the Sequence Read Archive (SRA) repository (https://submit.ncbi.nlm.nih.gov/subs/sra/), accession number PRJNA681935.
|
/*
* Copyright (C) 2012 The Guava Authors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.google.common.io;
import static com.google.common.base.Charsets.UTF_8;
import static com.google.common.base.Preconditions.checkNotNull;

import java.io.FilterWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;
/** @author <NAME> */
public class TestWriter extends FilterWriter {
private final TestOutputStream out;
public TestWriter(TestOption... options) throws IOException {
this(new TestOutputStream(ByteStreams.nullOutputStream(), options));
}
public TestWriter(TestOutputStream out) {
super(new OutputStreamWriter(checkNotNull(out), UTF_8));
this.out = out;
}
@Override
public void write(int c) throws IOException {
super.write(c);
flush(); // flush write to TestOutputStream to get its behavior
}
@Override
public void write(char[] cbuf, int off, int len) throws IOException {
super.write(cbuf, off, len);
flush();
}
@Override
public void write(String str, int off, int len) throws IOException {
super.write(str, off, len);
flush();
}
public boolean closed() {
return out.closed();
}
}
|
New erythropoiesis-stimulating agents and new iron formulations. Today, erythropoiesis-stimulating agents (ESAs), together with iron supplementation, are the main tools for anemia correction in chronic kidney disease patients. Over the past decades, a number of attempts have been made to modify the erythropoietin molecule in order to improve its pharmacokinetic and pharmacodynamic properties. More recently, small peptides, which are unrelated to erythropoietin but bind to the same receptor, have been developed. In addition, other strategies to stimulate erythropoiesis have been pursued, such as activin inhibition or stabilization of hypoxia-inducible transcription factors. Interestingly, the latter have the advantage of being administered orally. New iron molecules, such as ferumoxytol, ferric carboxymaltose and iron isomaltoside 1000, have recently been marketed. These new agents can be administered at high doses while releasing minimal free iron. Their safety profile is good, but long-term post-marketing data are still needed to evaluate the occurrence of rare adverse events.
|
Earphone showdown, part one: the Etymotic ER4, Audio-Technica ATH-IM70, and LKER i1. Even for dedicated audiophiles, the musical experience nowadays is frequently delivered by earphones. Happily, there has been an explosion in earphone ingenuity, with many models offering stunning sound reproduction and remarkable value. If you're still using the earbuds that came with your smartphone, it's time to trade up. I recently auditioned a sample of this new earphone wave. Here I'll consider a couple of new models from Etymotic Research, a brand I've admired for years, along with a Chinese product that stunned me with its performance/price ratio. In a follow-up article, I'll consider three other surprising, audiophile-grade Chinese earphones. Etymotic Research got its start in 1983 producing earphones for industrial, professional, medical, and scientific uses. The company had a big breakthrough in the consumer market in 1991, when it offered the ER-4S, which it claims were the first noise-isolating earphones designed to be inserted into the ear canal. The ER4 line remains Etymotic's flagship, so I was eager to check out its latest update.
|
Proposed Syrian peace talks set for next week in Geneva are likely to be cancelled over a row about whether Kurdish groups will be represented among rebel ranks.
Russia is demanding that the PYD, the political arm of the main Kurdish militia fighting in the north-east of the country, be invited to the talks as part of the rebel delegation.
However, that is being adamantly opposed by the opposition and the Turks who say their presence is a “red line”.
“The PYD is not real opposition,” Ahmet Davutoglu, the Turkish prime minister, said in an interview. He said it would be a “mistake” for the international community to insist they sit with rebels in the talks.
“Russia is insisting. They can be there on the regime side, but on the opposition side - no.”
The planned talks, the third set of Geneva negotiations since the start of the five-year-long conflict in Syria, were announced with much fanfare after the Russians confirmed it had joined the war on the regime’s side in October.
Following behind-the-scenes discussions between the international backers of the main participants, a date was set for next Monday, January 25. A wide cross-section of both political and armed opposition met last month in Riyadh to choose representatives who will meet the regime.
However, the Kurdish faction was not represented. The PYD's military wing has fought a series of battles within the conflict against both mainstream and jihadist rebel groups, but the party says it is also opposed to the Assad regime.
Their secular, Left-wing politics and quest for greater self-determination puts them at odds ideologically with both the regime and "moderate opposition", who are Syrian or Arab nationalist, and Islamists.
The rebels and their backers say that the PYD have “co-ordinated with” the regime on numerous occasions - and the regime maintains a small but unthreatened presence in areas now effectively under autonomous control of the Kurds in the north-east of the country.
The row is the most visible part of a series of conflicts that lead diplomats and analysts to say they believe the talks are unlikely to go ahead. The other main sticking point is the fierce war of words that has erupted between Iran, the regime's key backer, and Saudi Arabia, which supports the rebels.
No invitations have yet been issued, and the United Nations itself, which is supposed to oversee them, said it was still waiting to find out who would take part.
"At this stage, the UN will proceed with issuing invitations when the countries spearheading the international Syrian Support Group process come to an understanding on who among the opposition should be invited," a spokesman said.
Saudi officials also believe that Russia is deliberately trying to stall the talks, as Assad regime troops are currently making progress on the ground thanks largely to its air strikes. Although supposedly aimed at Islamic State of Iraq and the Levant, most have targeted non-Isil rebel groups.
Under Russian air cover, regime troops have retaken rebel-held areas in Latakia and Aleppo provinces in the north and north-west, a key air base near Damascus, and towns in the south near the Jordanian border.
The longer the peace talks are delayed, the stronger the regime’s negotiating position.
Turkey has strongly backed the rebels, but has an even stronger interest in preventing the Kurds extending their sway on the northern border of Syria. Ankara is fighting a renewed insurgency on the other side of the border against the Kurdish PKK guerrilla organisation, which is formally proscribed as a terrorist group by the United States and Britain.
The PYD is seen as not only allied to but a structural arm of the PKK. “The PYD is an extension of the PKK,” Mr Davutoglu said. “They are collaborating with the terrorist groups inside Turkey.”
Michael Stephens, who is monitoring the “peace process” for the Royal United Services Institute, said that not being at the talks was to an extent beneficial for the Kurds too, who were creating “facts on the ground” as they carve out autonomous territory between the regime, Isil and non-Isil rebels.
He also said that by taking a hardline position Saudi Arabia and Turkey were both standing in the way of their allies in the West, who were committed to supporting both the rebels against the regime and the Kurds against Isil.
“They have Russian political backing and American political backing,” he said. “They are in a strong position.”
|
def ignores(self, absolute_file_path: builtins.str) -> builtins.bool:
    """Return whether the given absolute file path is matched by this object's ignore patterns (delegated to the underlying jsii runtime)."""
    return typing.cast(builtins.bool, jsii.invoke(self, "ignores", [absolute_file_path]))
|
Freedom of Expression and Limits in the European Convention on Human Rights. Freedom of expression is like an umbrella that encompasses many other areas of freedom. Numerous derivative freedoms, such as obtaining information, sharing information, and organizing and holding meetings, play pioneering roles in the development of democratic states governed by the rule of law. However, freedom of expression should not be interpreted as an absolute freedom; it is natural for democratic countries to place limits on freedom of expression. These limits are defined in national and international legislation, and are also shaped by the decisions of judicial bodies. In our study, we evaluate the scope of freedom of expression, its limits, and the framework for interventions in this area of freedom. In this context, the regulations in the European Convention on Human Rights and the decisions of the European Court of Human Rights will be taken into consideration.
|
/**
* Reads an encrypted private key in PKCS#8 or OpenSSL "traditional" format
* from a file into a {@link PrivateKey} object. Both DER and PEM encoded
* keys are supported.
*
* @param file
* Private key file.
* @param password
* Password to decrypt private key.
*
* @return Private key containing data read from file.
*
* @throws CryptException
* On key format errors.
* @throws IOException
* On key read errors.
*/
public PrivateKey read(final File file, final char[] password)
    throws IOException, CryptException {
  final byte[] data;
  // Close the stream after reading instead of suppressing the resource warning.
  try (FileInputStream in = new FileInputStream(file)) {
    data = IOHelper.read(in.getChannel());
  }
  return decode(decryptKey(data, password));
}
|
A dynamic hybrid model based on wavelets and fuzzy regression for time series estimation

In the present paper, a fuzzy-logic-based method is combined with wavelet decomposition to develop a step-by-step dynamic hybrid model for the estimation of financial time series. Empirical tests of fuzzy regression, wavelet decomposition and the new hybrid model are conducted on the well-known SP 500 financial time series. The empirical tests show the efficiency of the hybrid model.

Introduction

The study of time series is an interesting task, especially in financial contexts such as modeling, estimation, approximation and prediction. It requires a precise and deep understanding of the series' characteristics for a suitable choice of the model to be applied. The estimation process guarantees the detection of the causes of past dysfunctions and therefore helps in taking possible precautions at the right time. A fine, preventive analysis guarantees good preparation for the future and robust prediction in the face of random breaks and unanticipated changes. Financial time series, for example, are characterized by very specific stylized facts, which an estimation method must respect to be efficient. Observing the distribution tail in leptokurtic cases, always evaluated by the kurtosis, series values far from the mean appear with probabilities that exceed those of the normal distribution. In the financial case, studies have shown that the tail of the distribution exhibits a kurtosis exceeding the normal case. Furthermore, observing the volatility clustering, financial time series are characterized by complex combinations of components with high frequencies. These facts are partly due to the random or stochastic behavior of the markets. Besides, the market may be characterized by infinite volatility, allowing long-memory processes.
This induces the appearance of a scaling-law invariance in the volatility. Indeed, Walter notes that reconciling the absence of long memory in returns with its presence in volatility is a financial modeling problem (see also Walter, 2001). Due to these facts, some classical methods have been shown to be incapable of analyzing financial series. ARCH and GARCH models do not take into account the kurtosis of the series. Furthermore, the ARCH model and its variants have reached their limits in the field of financial modeling because the scaling law in volatility is not included in the model. For this reason, researchers in the field of financial time series have introduced other methods that may yield more efficient models and help in understanding aspects such as non-stationarity, auto-regression, filtering, support vector machine models and prediction, and neural network models and prediction. See (Chang et al, 2001), (Chen et al, 2006), (Klir et al, 1995), (He et al, 2007), (Khashei et al, 2008), (Kim et al, 1996), (Mitra et al, 2004), (Podobnik et al, 2004), (Tanaka et al, 1982), (Tseng et al, 1999), (Tseng et al, 2001), (Wang et al, 2000), (Wu et al, 2002), (Zopoundis et al, 2001). In the present paper, one aim is to apply wavelet theory and fuzzy logic to develop an estimation model for financial series. We seek to judge the efficiency of fuzzy regression for estimating financial series. Next, we apply the discrete wavelet decomposition, which improves in particular the study of the local behavior of the series. Comparing the two estimation methods, we found that a hybrid model combining wavelet estimation with fuzzy logic estimation is possible. We then developed such a model, which takes into account the non-stationary behavior of the series as well as its local fluctuations and its fuzzy characteristics. The model combines wavelet decomposition with fuzzy regression.
Next, an empirical study based on the well-known SP 500 index is provided in order to support the theoretical parts. The present paper is organized as follows. The first section is devoted to the presentation of the series' characteristics. Section 2 is devoted to the development of the fuzzy regression model for the estimation of financial time series. In Section 3, a wavelet analysis of the series is provided. In Section 4, the hybrid model obtained by combining fuzzy logic with wavelet decomposition is developed. Finally, an empirical study of the SP 500 index is presented in Section 5, leading to a comparison between the different models and demonstrating the impact of the hybrid scheme.

The Data Description

In the present paper, we propose to study the behavior of the well-known financial index SP 500, a stock index describing the fluctuations of the stock capitalization of the 500 largest companies of the American stock market. It is composed of 380 industrial firms, 73 financial companies, 37 public service firms, and 10 transport ones. The choice of such an index is motivated essentially by its central role as a measure of the American economy's performance. Besides, international financial integration is ever increasing, which forces internationally exchanged products to be strongly related. As the American market is the center of international transactions, any variation of one of its indices, such as the SP 500, immediately affects external markets. Furthermore, the study of a US market index is of particular interest nowadays due to the international financial crisis, which started in this market and subsequently affected markets worldwide. So, searching for a good way to understand the crisis is a priority. The data basis consists of SP 500 monthly index values during the period from August 1998 to March 2009, giving a basis of size N = 128 = 2^7.
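The stylized facts invoked above (leptokurtosis, skewness) are standardized central moments of the sample; as a small illustrative helper (not the authors' code), they can be computed for a series such as the log-values of the index:

```python
def moments(series):
    """Return (mean, std, skewness, kurtosis) of a numeric series.

    Kurtosis here is the raw standardized fourth moment, so the normal
    reference value is 3; values above 3 indicate a leptokurtic series."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    std = var ** 0.5
    skew = sum((x - mean) ** 3 for x in series) / (n * std ** 3)
    m4 = sum((x - mean) ** 4 for x in series) / n
    kurt = m4 / var ** 2
    return mean, std, skew, kurt
```

A negative skewness indicates data spread more to the left of the mean, matching the description of the SP 500 log-series given below.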
We applied the log-values of the series in order to reduce its range. The statistical characteristics of the series are summarized in the following table. We notice a kurtosis value exceeding the normal value 3, which means that the series is leptokurtic. The skewness of the series is negative, which means that the data are spread out more to the left of the mean than to the right. The following figure represents the original series S(t) = log(SP(t)), where SP(t) is the value of the SP 500 index at month t.

A fuzzy regression model

The choice of fuzzy regression for modeling financial series has several justifications. Firstly, financial series always carry an ambiguous relation between the dependent variable and the independent one (here, time). Such ambiguity is not taken into account by most statistical methods; on the contrary, they assume that the behavior is always definite. Furthermore, financial series such as the SP 500 fluctuate with unpredictable behavior. This makes the future values of the series fuzzy and/or imprecise. Fuzzy regression has already been applied as a privileged method for the estimation of uncertain and imprecise data. See (He et al, 2007), (Khashei et al, 2008), (Kim et al, 1996), (Sanchez et al, 2003), (Tseng et al, 1999), (Tseng et al, 2001), (Wu et al, 2002), (Zopoundis et al, 2001). In this section, a fuzzy regression model is applied to estimate the SP 500 index series. The model due to Watada (1992), reviewed hereafter, is applied here. It is based on a fuzzy linear programming problem. The problem is solved using the LINGO9 software, resulting in the following triangular fuzzy coefficients: a_0 = (6.887995, 0) and a_1 = (0.01066521, 0.06912872).
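With triangular fuzzy coefficients of the form (center, spread), the fitted model yields, at each time point, a central estimate and symmetric lower/upper bounds. A sketch evaluating this envelope with the coefficients reported above, where the 0.5 scaling of t follows the paper's fitted equation Y_i = a_0 + a_1 * 0.5 * t_i (illustrative only, not the authors' implementation):

```python
def fuzzy_estimate(t, a0=(6.887995, 0.0), a1=(0.01066521, 0.06912872)):
    """Evaluate the fuzzy linear regression Y = a0 + a1 * x at x = 0.5 * t.

    a0 and a1 are triangular fuzzy numbers given as (center, spread).
    Returns (lower, center, upper) bounds of the fuzzy estimate."""
    x = 0.5 * t
    center = a0[0] + a1[0] * x
    spread = a0[1] + a1[1] * abs(x)   # half-width of the envelope
    return center - spread, center, center + spread
```

The upper and lower curves produced this way are the affine boundaries shown in Figure 2 of the paper; their width grows linearly with t, which is why the plain fuzzy regression cannot track local fluctuations.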
As a result, the lower and upper estimations of the index series are provided, resulting in the following fuzzy regression equation: Y_i = (6.887995, 0) + (0.01066521, 0.06912872) * 0.5 * t_i. The original series together with its fuzzy estimation is shown in Figure 2 below. We notice that although the fuzzy regression model takes into account the uncertain behavior of the information, it does not fit the tendency of the series well; it assumes a monotone behavior, which means that it ignores the fluctuations characterizing the data. Besides, the estimation error is large. As a conclusion, fuzzy regression has proved incapable of a robust, least-error estimation for the series considered. It must be corrected to fit the fluctuations, and thus the random behavior, of the series. An analysis permitting the localization of these fluctuations is therefore necessary: wavelet analysis, which is developed in the next section.

Wavelet analysis of the series

Wavelet analysis is often applied to show how volatile a series is, and thus to detect eventual fluctuations. Wavelet analysis also permits the representation of strongly fluctuating series without requiring knowledge of the explicit functional dependence. Such a capacity is of great use especially for financial time series, where this dependence is always unknown. We propose hereafter to conduct a wavelet analysis of the SP 500 series in order to better localize its fluctuations. A maximum decomposition level J = 6 is fixed, allowing a projection on the approximation space V_6 relative to a Daubechies DB4 multi-resolution analysis, using the Matlab7 software.
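The paper performs this decomposition with Daubechies DB4 in Matlab; as a self-contained stand-in, the same multilevel scheme can be illustrated with the simpler orthonormal Haar filter (a hypothetical sketch whose coefficients are not equivalent to DB4 output):

```python
def haar_dwt(signal, levels):
    """Multilevel Haar wavelet decomposition (illustrative stand-in for
    the Daubechies DB4 analysis used in the paper).

    Returns (approximation coefficients at the final level,
             [detail coefficients for levels 1..levels])."""
    approx = list(signal)
    details = []
    s = 2 ** 0.5
    for _ in range(levels):
        if len(approx) % 2:
            raise ValueError("signal length must be divisible by 2 at each level")
        a = [(approx[2 * i] + approx[2 * i + 1]) / s for i in range(len(approx) // 2)]
        d = [(approx[2 * i] - approx[2 * i + 1]) / s for i in range(len(approx) // 2)]
        details.append(d)
        approx = a
    return approx, details
```

Each pass halves the signal length, so a series of length N = 2^7 = 128 supports the J = 6 levels used in the paper; because the Haar transform is orthonormal, the signal energy is preserved across the approximation and detail coefficients.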
This yields the decomposition S(t) = A_6(t) + D_6(t) + D_5(t) + D_4(t) + D_3(t) + D_2(t) + D_1(t), where A_6 is the global form of S(t) at level 6, also called the trend or tendency, and D_1, D_2, D_3, D_4, D_5 and D_6 are the detail components of S(t) obtained by projecting the series on the detail spaces W_1, W_2, W_3, W_4, W_5 and W_6. These components are represented hereafter. We easily notice from these figures the locations of the fluctuations of the series. The component A_6 shows the low-frequency fluctuations. The components D_i, i = 1, 2,..., 6 represent the high-frequency behavior. We remark that the series is more fluctuated at detail levels D_4 and D_3 than at D_5 and D_6. The volatile aspect of the series is clearly observed from D_1 and D_2.

Hybrid estimation model

Having localized the fluctuations of the series, we return to the fuzzy regression model and correct it by developing a dynamic fuzzy regression that takes into account both the fluctuations and the uncertain aspect of the series. Denote by S(t) the financial time series of the SP 500 index introduced previously. The proposed hybrid model is described by the following steps. We easily remark that the proposed model fits the piecewise monotonicity of the time series. On each interval where the series is monotone, the fuzzy regression is applied with corresponding fuzzy numbers. The results of this model are shown in the following figures. As we see, the new estimation due to the hybrid model fits the original series better as the detail level decreases. Here, we stress the fact that Daubechies wavelets in the Matlab7 software use the frequency index −j, contrary to the theoretical definition of wavelet bases, which uses the index j instead. So, we expect an increase in the detail approximation as j decreases. Indeed, the estimation relative to D_6 is somewhat misleading (see Figure 10). This is due to the fact that this component does not contain a significant number of extremum points or fluctuations.
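The extracted text references the hybrid model's steps without listing them; based on the surrounding description (fuzzy regression refitted on each interval where the series is monotone), a hypothetical sketch of the segmentation step could be:

```python
def monotone_segments(series):
    """Split a series into index ranges over which it is monotone
    (non-increasing or non-decreasing). A fuzzy regression can then be
    fitted on each segment, as the hybrid model described above suggests.

    Returns a list of (start, end) index pairs, end inclusive; adjacent
    segments share their turning-point index."""
    if len(series) < 2:
        return [(0, max(len(series) - 1, 0))]
    segments = []
    start = 0
    direction = 0                      # 0 = direction not yet established
    for i in range(1, len(series)):
        step = series[i] - series[i - 1]
        d = (step > 0) - (step < 0)    # sign of the step: -1, 0, or +1
        if direction == 0:
            direction = d
        elif d != 0 and d != direction:
            segments.append((start, i - 1))  # close segment at the extremum
            start = i - 1
            direction = d
    segments.append((start, len(series) - 1))
    return segments
```

Flat steps (zero difference) do not break a segment, so plateaus stay attached to the surrounding monotone run; each returned range would then receive its own pair of triangular fuzzy coefficients.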
The estimation becomes more efficient when using D_5 (see Figure 10). Next, Figures 12, 13, 14 and 15 show an increasing fit between the original series and the one estimated by the hybrid model. This is due to the fact that the hybrid model follows the fluctuations of the series well. To finish with this model, we provide in the following table the error estimates corresponding to the details D_i, i = 1, 2,..., 6.

Table 2: Error estimates

Conclusion

In the present paper, a fuzzy regression estimation is applied to estimate financial time series. This estimation is shown to be inefficient: it gives an estimation with affine boundaries that does not follow the fluctuations well. As financial time series are very volatile, a wavelet decomposition is applied next to localize the fluctuations and thus prepare for a more sophisticated fuzzy model taking the fluctuations into account. As a result, a hybrid model combining fuzzy regression and wavelet decomposition is developed. Finally, the different models are tested on the well-known financial time series of the SP 500 index. The empirical tests show the efficiency of the hybrid model. We intend in the future to apply the hybrid method, or modified versions of it, to other time series and for prediction purposes.
|
"I had to replace [Giancarlo] Stanton today, so maybe I should at least hit one."
— Miami's Ichiro Suzuki, 43, after hitting his second homer of the year. It's the first time he's hit more than one in a season since 2013.
Innings needed for Max Scherzer to reach 2,000 strikeouts, third fewest behind Pedro Martinez and Randy Johnson.
Calls overturned at first base during the four-game Mets-Braves series.
Number of his 601 career homers Albert Pujols has hit vs. the Astros, his most against any opponent.
The replacement for injured Mike Trout is batting .318 (14-for-44) in 13 games since the two-time AL MVP went on the disabled list, including a tiebreaking three-run homer in a six-run fifth inning Sunday vs. Houston. The Angels are 7-6 in Trout's absence.
|
An Approach to the De Novo Synthesis of Life

Conspectus

As the remit of chemistry expands beyond molecules to systems, new synthetic targets appear on the horizon. Among these, life represents perhaps the ultimate synthetic challenge. Building on an increasingly detailed understanding of the inner workings of living systems and advances in organic synthesis and supramolecular chemistry, the de novo synthesis of life (i.e., the construction of a new form of life based on completely synthetic components) is coming within reach. This Account presents our first steps in the journey toward this long-term goal. The synthesis of life requires the functional integration of different subsystems that harbor the different characteristics that are deemed essential to life. The most important of these are self-replication, metabolism, and compartmentalization. Integrating these features into a single system, maintaining this system out of equilibrium, and allowing it to undergo Darwinian evolution should ideally result in the emergence of life. Our journey toward de novo life started with the serendipitous discovery of a new mechanism of self-replication. We found that self-assembly in a mixture of interconverting oligomers is a general way of achieving self-replication, where the assembly process drives the synthesis of the very molecules that assemble. Mechanically induced breakage of the growing replicating assemblies resulted in their exponential growth, which is an important enabler for achieving Darwinian evolution. Through this mechanism, the self-replication of compounds containing peptides, nucleobases, and fully synthetic molecules was achieved. Several examples of evolutionary dynamics have been observed in these systems, including the spontaneous diversification of replicators allowing them to specialize on different food sets, history dependence of replicator composition, and the spontaneous emergence of parasitic behavior.
Peptide-based replicator assemblies were found to organize their peptide units in space in a manner that, inadvertently, gives rise to microenvironments that are capable of catalysis of chemical reactions or binding-induced activation of cofactors. Among the reactions that can be catalyzed by the replicators are ones that produce the precursors from which these replicators grow, amounting to the first examples of the assimilation of a proto-metabolism. Operating these replicators in a chemically fueled out-of-equilibrium replication-destruction regime was found to promote an increase in their molecular complexity. Fueling counteracts the inherent tendency of replicators to evolve toward lower complexity (caused by the fact that smaller replicators tend to replicate faster). Among the remaining steps on the road to de novo life are now to assimilate compartmentalization and achieve open-ended evolution of the resulting system. Success in the synthesis of de novo life, once obtained, will have far-reaching implications for our understanding of what life is, for the search for extraterrestrial life, for how life may have originated on earth, and for every-day life by opening up new vistas in the form of living technology and materials.

■ KEY REFERENCES

1. This work introduces a new mechanism for self-replication based on self-assembly of specific oligomers in a dynamic mixture in which these can interconvert. It shows how the mode of agitation (shaking or stirring) can control the nature of the self-replicator that emerges in a system where two replicators compete for the same building block.

2. Exponential self-replication enabled through a fibre elongation/breakage mechanism. Nat. Commun. 2015, 6, 7427. Detailed study of the mechanism of self-replication, showing that a growth-breakage mechanism enables exponential self-replication. Exponential growth is an important enabler for Darwinian evolution. This mechanism breaks with the square-root law of autocatalysis that explains why most replicators developed until now grow parabolically.

3. Ottelé, J.; Hussain, A. S.; Mayer, C.; Otto, S. Chance emergence of catalytic activity and promiscuity in a self-replicator. Nat. Catal. 2020, 3, 547−553. Shows how peptide assembly that drives self-replication can be co-opted to catalyze different types of chemical reactions, including one in which the replicator accelerates the production of the precursors from which it grows. This work is one of the first examples of the integration of replication with metabolism.

4. Yang, S.; Schaeffer, G.; Mattia, E.; Markovitch, O.; Liu, K.; Hussain, A. S.; Ottelé, J.; Sood, A.; Otto, S. Chemical fueling enables molecular complexification of self-replicators. Angew. Chem., Int. Ed. 2021, 60, 11344−11349. Two competing self-replicators were placed in a chemically fueled out-of-equilibrium replication-destruction regime. While under equilibrium conditions the faster and more stable small replicator prevails, chemical fueling allows the slower, molecularly more complex replicator to win the competition. This work demonstrates a concept that is important in the context of evolution: that fueling can drive replicator complexification.

■ INTRODUCTION

Chemistry focuses on creating new entities (molecules, materials, systems), more so than perhaps any other science. Over the centuries, chemists have learned how to synthesize molecules of impressive complexity. With the advent of Systems Chemistry, 5−8 the remit of chemistry has expanded to also include the design and synthesis of systems of molecules that harbor systems-level properties that go well beyond the sum of their parts (i.e., emergent properties). Among the most intriguing and challenging of such properties is life. Life represents a huge synthetic challenge, 9−11 deemed intractable for many years.
Yet with recent developments in synthetic methodology, supramolecular chemistry, and analytical tools, and an improved understanding of biochemistry and evolution, opportunities are opening up to create new forms of life that are not necessarily constituted of the types of biomolecules that are found in extant life (i.e., proteins, nucleic acids, etc.). Approaches to de novo life differ from efforts to create synthetic cells 12,13 by aiming to build life from scratch, utilizing man-made building blocks rather than molecules obtained from current life. Efforts directed toward the synthesis of life will also inform on the possible origins of life on Earth. 14−20 Even though the development of de novo life is not necessarily guided or constrained by issues of prebiotic relevance and geochemical considerations, the general principles and concepts discovered in the process may well extend to current life's origin. Addressing the challenge of the de novo synthesis of life starts with defining the target. While a generally accepted definition of life does not exist, 21,22 a pragmatic approach to defining this target is to list the key (functional) characteristics that any form of life should encompass. 11 These characteristics are summarized in Figure 1 and include: (i) Self-replication: the ability of a system to autonomously catalyze the formation of copies of itself; (ii) Metabolism: the ability of a system to form its constituents from precursors and connect the internal maintenance of the system to an external energy source; (iii) Compartmentalization: the means by which a system prevents the uncontrolled spreading of its components into its environment; (iv) Out-of-equilibrium state: life requires a continuous input of energy to keep it from collapsing into a lifeless thermodynamic minimum; (v) Darwinian evolution: the process of natural selection among mutants produced when replication is accompanied by variability (i.e., mutation).
These characteristics are in part overlapping; i.e., practical implementation of Darwinian evolution requires a reproduction/destruction regime, which is inherently out of equilibrium, and the energy-harvesting part of metabolism also contributes to maintaining systems out of equilibrium. With this list of characteristics, a stepwise approach to the synthesis of a minimal form of life presents itself. As a first step, a system may be developed that implements only one of these characteristics, and additional features may then be integrated in a stepwise manner. Considerable progress has been made on the development of systems that capture one of the first four of the five characteristics listed above: Self-replicating molecules have been made based on many different molecular designs; 23−26 reaction networks have been identified that enable chemical complexity to be built up; 27 many different forms of compartmentalization have been investigated, 28,29 including bilayer vesicles, 30,31 microdroplets, 32 coacervates, 33−35 and even adsorption on surfaces. 36,37 Finally, out-of-equilibrium chemical systems are attracting renewed attention, particularly in the area of self-assembly. 38−43 Only the implementation of Darwinian evolution in chemical systems outside the realm of biology/biomolecules has not yet received much attention. 44 The current frontier in the de novo synthesis of life encompasses the binary integration of different subsystems; the first reports combining replication (mostly enzyme-mediated) with compartmentalization 45−47 or replication with metabolism 3,48,49 have appeared. Furthermore, methodology has been developed to maintain compartments 50−54 and replicators 4,55−58 out of equilibrium. While the stepwise integration of subsystems is a worthwhile and logical approach, we should not dismiss the possibility that systems that integrate several of life's characteristics can emerge in a single event.
Accounts of Chemical Research pubs.acs.org/accounts Article

Indeed, as we will show below, proto-metabolic capabilities can arise in systems that were primarily selected for their ability to self-replicate, in a single, joint emergence step. In this Account, I review the progress that my research group made toward the de novo synthesis of life, starting from the serendipitous discovery of self-replicating molecules and then showing how these systems can exhibit dynamics that are relevant to evolutionary scenarios. I will discuss how the replicating systems can acquire metabolic activity and how imposing an out-of-equilibrium replication/destruction regime can, in principle, enable their molecular complexification. The Account closes with an inventory of the steps that still need to be made on the path to de novo life and a brief discussion of the implications of reaching the end of this path.

■ SELF-REPLICATION

Our approach to the de novo synthesis of life started with the serendipitous discovery of self-replicating molecules. 1 So the first step in our approach involved acting on an opportunity that arose, as opposed to being a rational choice. At the time, we worked on dynamic combinatorial libraries 59 and had designed building blocks that we hoped would form folded molecules. The idea was that noncovalent interactions between building blocks within the same library member would shift the composition of the mixture of interconverting molecules toward those that adopt well-defined and stable conformations. We designed building blocks to contain short peptide units, alternating hydrophobic and hydrophilic amino acids. Such a motif is known to fold into β-sheet structures. 60,61 And indeed, β-sheets were found, but, surprisingly, not within but between the molecules formed from the building blocks. Thus, instead of foldamers, we obtained self-assembling stacks of macrocycles.
1 As shown in Figure 2, this stacking process drives the synthesis of the very macrocycles that stack, amounting to their self-replication. The mechanism of self-replication starts with the oxidation of the thiol moieties of the building block (such as 1a) to give rise to a mixture of macrocyclic disulfides that exchange building blocks with each other through reversible thiol−disulfide chemistry. 62 Stacking of the central aromatic rings, together with β-sheet formation between the peptide side groups, allows a specific macrocycle to assemble (in the example in Figure 2a this is the hexamer). The growth of these replicators usually exhibits a pronounced lag phase, typical for autocatalytic systems, as shown for the emergence of (1a)6 in Figure 2b. The 1-D assemblies grow from their ends (evident from experiments using isotopically labeled material), 63 and, when subjected to mechanical energy through agitating the sample, the stacks break, exposing additional stack ends. This growth-breakage mechanism enables exponential growth of the fibers and of the self-replicating macrocycles from which they are constituted. 2 Detailed investigation of the mechanism of self-replication by high-speed AFM and MD simulations on hexamer replicators made from 1a revealed that fiber growth involves the recruitment of precursors that bind as aggregates to the sides of the fibers (Figure 2c,d). 64 This material diffuses along the grooves on the fibers to the fiber ends, where fiber growth takes place. This mode of guided assembly represents an interesting new mechanism for supramolecular polymerization, simplifying a 3-D search problem (where monomers have to find the fiber ends in solution) into a 1-D search problem (where diffusion along the fiber surface allows those ends to be found faster).
While several peptide-based self-replicators 65,66 and replicator networks 67 have been reported, these typically involve α-helices that interact through helix-bundle formation and replicate through a different mechanism.

Figure 2 caption (panels c−e): (c) Assembly of precursors on the sides of the fibers promotes self-replication, as evident from (d) high-speed AFM images of a fiber growing from a bound precursor aggregate at t = 0, 2, and 5 min (data taken from ref 64). (e) Building blocks with which self-assembly-driven self-replication has been observed include peptide derivatives 1a−f, but also amino-acid nucleic-acid chimeras 2 and 3 as well as molecules 1g and 4 lacking any of life's current building blocks.

The exponential replication mediated by the growth-breakage mechanism solves a problem that has thwarted the replicator field for decades: the inhibition of replication resulting from the tendency of replicators to remain associated with each other. 68 Most other systems of self-replicators involve the ligation of two precursor molecules on a template to produce a dimer of the template, which needs to dissociate before further replication can take place. Dissociation is normally difficult, resulting in parabolic growth, as opposed to exponential growth (termed the "square-root law" of replication by von Kiedrowski). 68 Szathmáry showed that parabolic replicators tend to coexist indefinitely, while exponential replication leads to survival of the fittest and extinction of the weakest replicators. 69 Thus, parabolic replicators cannot normally undergo Darwinian evolution, while exponential replicators can. Intriguing parallels exist between the replication mechanism shown in Figure 2a and amyloid assembly 70 (implicated in prion diseases, but also suggested to have played a role in the origin of life 71 ). Both processes are autocatalytic, exhibit a growth-breakage mechanism, may give rise to different strains, and feature roles (albeit different in nature) for the fiber sides.
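The kinetic contrast between parabolic (square-root-law) and exponential replication, and its evolutionary consequence, can be made concrete with a minimal numerical sketch. All rate constants and the serial-transfer protocol below are arbitrary illustrative choices, not parameters of any of the replicators described here:

```python
def grow(order, k=1.0, x0=0.01, dt=1e-3, t_end=10.0):
    # Forward-Euler integration of dx/dt = k * x**order.
    # order = 0.5: parabolic growth (square-root law of template
    # replication); order = 1.0: exponential growth (growth-breakage).
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * k * x ** order
    return x

x_parabolic = grow(0.5)    # analytic: (x0**0.5 + k*t/2)**2, about 26
x_exponential = grow(1.0)  # analytic: x0 * exp(k*t), about 220

def compete(order, k1=1.2, k2=1.0, cycles=200, dt=1e-3, steps=1000):
    # Serial-transfer competition (Szathmary's argument): grow two
    # replicators for a fixed interval, then dilute the pair back to
    # constant total mass and repeat.
    x1 = x2 = 0.5
    for _ in range(cycles):
        for _ in range(steps):
            x1 += dt * k1 * x1 ** order
            x2 += dt * k2 * x2 ** order
        total = x1 + x2
        x1, x2 = x1 / total, x2 / total
    return x1, x2

# order = 1.0: the slower exponential replicator goes extinct
# (survival of the fittest); order = 0.5: both parabolic replicators
# persist at finite frequency, so selection is weak.
```

Parabolic replicators coexist because the per-capita growth rate k/sqrt(x) rises as a species becomes rare, automatically protecting the minority replicator; exponential growth has no such frequency dependence, so the fitter species excludes the other.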
The mechanism shown in Figure 2a appears general. Many different peptide sequences have been used, including remarkably short ones such as 1f, 72 giving rise to replicators with different ring sizes (the more strongly interacting peptides yield smaller rings, in line with a minimal interaction strength and associated degree of multivalency needed for assembly). 73 In select cases, competition between replicators of different ring sizes occurs, with environmental conditions determining the winner. For example, a hexamer of building block 1c prevailed upon shaking, while a heptamer formed under stirring, 1 and either a hexamer or an octamer of 1b prevailed, depending on the solvent environment. 74 We have shown that also chimeric building blocks featuring an amino acid and a nucleobase (2 and 3 in Figure 2e) can give rise to exponentially growing replicators. 75 Even building blocks lacking any peptide, and even lacking any similarity to the building blocks of current life (i.e., not featuring any amino acids or nucleobases), can give rise to self-replicators, as evident from the behavior of samples made from oligoethylene-oxide-substituted building block 1g 76,77 or dimercaptonaphthalene 4. 78 In contrast to the one-dimensional fibrous assemblies formed from oligomers of 1a−g, 2, and 3, building block 1g gives rise to cyclic hexamers that autocatalytically assemble into sheets, while 4 gives rise to cyclic tetramers of which one particular isomer autocatalytically forms sheetlike aggregates. Thus, the growth-breakage replication mechanism works for 1-D as well as 2-D assemblies. The mechanism of self-replication is also not limited to disulfide chemistry, as Ashkenasy and co-workers demonstrated similar replication behavior involving native chemical ligation.
79,80 Note that, some 10 years after the discovery of the self-replicators in the course of aiming for the formation of foldamers, we did succeed in obtaining folded molecules by shortening the peptide sequence, with which β-sheet formation is less feasible, 81 or by introducing nucleobase residues. 82 We recently also explored systems at the boundary of self-replication and folding, describing self-sorting between the two assembly modes 83 as well as the conversion of foldamers into replicators. 84 We conclude that these two modes of assembly are two sides of the same coin. Whether a system folds or forms self-replicators is difficult to control and depends on whether assembly processes occur intra- or intermolecularly, respectively. 84 In fact, even after millions of years of evolution, the competition between assembly and folding in biology is, in some instances, still poorly controlled, as evident from prion diseases that are caused by the autocatalytic assembly of proteins into β-sheets as opposed to folding.

■ EVOLUTIONARY DYNAMICS

The examples discussed above featured systems prepared from only a single building block, which constrains the diversity of products formed. Including a second building block was found to lead to much richer dynamics and revealed behavior that starts to resemble aspects of evolutionary dynamics that we know from living systems. A first example is the spontaneous diversification of self-replicating molecules into two sets, observed in dynamic mixtures prepared from building blocks 1a and 1b, differing in only a single amino acid residue. 85 In isolation, the two building blocks give rise to hexamer and octamer replicators, respectively. 73 However, upon mixing, only hexamers emerge. First a series of hexamer mutants rich in 1a appears, followed later by the emergence of another set of hexamers, rich in the remaining building block 1b (Figure 3a). Seeding experiments showed that the first set of hexamers cross-catalyzes the formation of the second set, indicative of an ancestral relationship. This behavior resembles the process by which species form in biology. Interestingly, upon repeating the experiment, mixed hexamer (1a)3(1b)3 sometimes emerges as part of the first set and sometimes as part of the second set. Such stochastic behavior is rare in chemistry, but more common in evolutionary biology. Another mixed-building-block system showed another feature that is important for evolutionary scenarios: history dependence. Starting from building block 1d, hexamer replicators formed when the mixture was exposed to preformed hexamer replicators (separately prepared from 1a), while preformed octamer replicators (made from 1b) funneled the building block into octamers (Figure 3b). 86 Thus, the nature of the self-replicating molecules that form is dictated by the interactions with self-replicators that were already present, overriding preferences innate to the structure of the building blocks. A similar situation is found for life, which is a state of matter that derives its organization from previous forms of life, and this organization is very different from the thermodynamically most stable arrangement of its constituents. A final example of interesting evolutionary dynamics that was found upon mixing building blocks shows similarities to parasitism and predation. This system features building block 1f, which differs from the peptide building blocks discussed so far by containing an additional methylene unit in the backbone of the first amino acid (β-alanine instead of glycine).

Figure 3 caption: (a) Oxidizing a mixture of building blocks 1a and 1b leads to two separate sets of replicators that emerge at different times. The first set of hexamers, rich in 1a, induces the formation of a second set of hexamers that specialize on 1b. (b) Sample history dictates replicator composition. Whether building block 1d gives rise to a hexamer or octamer replicator depends on whether the sample was exposed to independently prepared hexameric or octameric replicators, which cross-catalyze the formation of the replicator of 1d with the corresponding ring size. (c) Parasitic/predatory behavior in which replicator (1b)8 cross-catalyzes the formation of (1f)n(1b)6−n, which subsequently consumes the original replicator (1b)8.

Figure 4 caption: (a) Replicator (1a)6 catalyzes the retro-aldol reaction of methodol 5, involving imine formation between the nonprotonated lysine residues and 5. (b) The close proximity of many lysine side groups in the assemblies of replicator (1a)6 perturbs the pKa of the lysine groups, resulting in the presence of nonprotonated lysines at neutral pH. (c) Proto-metabolism arising from replicator (1a)6 catalyzing the cleavage of FMOC-glycine to yield dibenzofulvene, which accelerates the oxidation of building block 1a into the small-ring precursors from which the replicator grows. (d) Postulated mechanism through which (1a)6 catalyzes the cleavage of FMOC-glycine, relying on the simultaneous presence of protonated and nonprotonated lysine amine groups. (e) In an agitated sample prepared from dithiol building block 1a (200 μM) and FMOC-glycine 6 (100 μM), the emergence of (1a)6 (dark blue circles) coincides with the onset of FMOC cleavage (red circles). Upon repeating the experiment in the absence of FMOC-glycine, replicator (1a)6 emerged at the same time but grew significantly more slowly (light blue circles). Adapted with permission from ref 3. Copyright 2020, Springer Nature. (f) Proto-metabolism arising through binding of dyes to replicator (1a)6, which enhances the conversion of triplet to singlet oxygen, accelerating the production of replicator precursor. (g) Dyes used as cofactors for photomediated singlet-oxygen production.
Dynamic mixtures formed upon oxidizing this building block are sluggish at producing any self-replicators. However, in the presence of previously formed octamer replicator (1b)8, a hexamer replicator is formed rapidly that incorporates both building blocks 1f and 1b. 87 Once formed, the new replicator consumes the original replicators to which it owes its existence. These results show that parasitism is to be reckoned with already at the very early stages of the emergence of life.

■ INTEGRATING SELF-REPLICATION AND METABOLISM

The synthesis of life requires the integration of the different functional subsystems (Figure 1). We recently succeeded in integrating self-replication with a proto-metabolism by making use of the proven potential of peptide assemblies to catalyze chemical reactions. 88−90 Following a number of not very fruitful attempts at engineering catalytic sites into our self-replicators, we discovered that the existing systems already exhibited impressive catalytic activity for several different chemical reactions, without needing any structural alterations. Specifically, hexamer replicators made from building block 1a (but not 1a itself, nor the nonfibrous small-ring macrocycles it forms upon oxidation) were able to catalyze a retro-aldol and an FMOC cleavage reaction (Figure 4a−e). 3 The catalysis of the retro-aldol reaction of methodol 5 proceeds through imine formation between 5 and unprotonated lysine residues of (1a)6 (Figure 4a), which exist at neutral pH as their pKa is lowered due to the close proximity of other protonated lysines in the assemblies (Figure 4b). Interestingly, even without any optimization, the catalytic activity of the replicator is similar to that of the best designer enzymes that have been developed for this reaction. 91−94 The catalysis of the cleavage of FMOC-glycine (6; Figure 4c) also relies on the simultaneous presence of protonated and deprotonated lysines (Figure 4d).
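The consequence of a perturbed lysine pKa can be quantified with the Henderson−Hasselbalch relation. The numbers below are illustrative assumptions only: 10.5 is a typical pKa for a free lysine side chain, and 8.0 merely stands in for a downward-shifted value; the actual shift in the (1a)6 assemblies is not specified here:

```python
def fraction_deprotonated(pKa, pH=7.0):
    # Henderson-Hasselbalch: fraction of amine groups in the neutral,
    # nonprotonated (catalytically competent) state at the given pH.
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

free_lysine = fraction_deprotonated(10.5)  # roughly 0.03% available at pH 7
perturbed = fraction_deprotonated(8.0)     # roughly 9% available at pH 7
```

Even a modest downward pKa shift thus raises the population of reactive neutral amines by more than two orders of magnitude while still leaving protonated lysines in excess, consistent with a mechanism that requires both states simultaneously.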
The latter reaction produces a dibenzofulvene product that enhances the rate at which dithiol building block 1a oxidizes to give rise to the mixture of 3- and 4-membered macrocycles that are the precursors of replicator (1a)6. By increasing the rate at which its own precursors are produced, the replicator also increases the rate of its own replication (blue arrow in Figure 4e). Thus, the replicator is able to catalyze a reaction that promotes the formation of its own precursors, amounting to proto-metabolism. This behavior falls short of full-fledged metabolism in that it does not tap into an energy source. Using another strategy, we were able to design a light-driven proto-metabolism relying on replicator (1a)6 binding and activating a cofactor capable of photoredox catalysis. 49 Simply mixing this cofactor (Rose Bengal or tetraphenylporphyrin; Figure 4g) with building block 1a led to the emergence of replicator (1a)6, which then binds the cofactor, enhancing the rate at which it photochemically converts triplet into singlet oxygen (Figure 4f). The latter then accelerates the oxidation of building block 1a, yielding the small-ring precursors from which the replicator grows. So, similar to the FMOC cleavage reaction, the replicator enhances the rate at which its own precursors are produced. In all these reactions, the replicator is far superior at catalysis and cofactor activation compared to its building block or its small-ring precursors, as only the replicator assembly provides the microenvironment that results in a perturbed lysine pKa, and only the assembly provides the hydrophobic binding pockets for cofactors and substrates. Thus, catalytic and proto-metabolic activities are emergent properties of the system. The fact that structures that were selected solely for their ability to self-replicate also exhibit additional and promiscuous catalytic activity is significant.
Such chance emergence of function resembles a mechanism of evolutionary invention called co-option, where a feature that emerged because it provided a certain function turned out to be capable of fulfilling a completely unrelated one (a famous example in evolutionary biology is feathers, which are believed to have originated because they improved temperature control, but were then co-opted to facilitate flight). In the present system of replicators, β-sheets arise in the assembly process that drives self-replication and inadvertently organize amino acids in space to yield catalytically active sites or pockets for cofactor binding. The spontaneous occurrence of inventions of this type is highly encouraging, as it bodes well for achieving one of the most challenging aspects in the synthesis of life: open-ended evolution (see below).

■ OPERATING SELF-REPLICATION OUT OF EQUILIBRIUM

Life needs to be maintained out of equilibrium, and the process of Darwinian evolution also relies on an input of energy to maintain the replication-destruction cycles that enable evolution. In living systems, metabolic processes maintain the internal organization away from equilibrium. While the norm in biology, out-of-equilibrium systems have historically received little attention in chemistry (perhaps with the exception of, for example, oscillating reactions 95 ). However, this situation is now changing, and out-of-equilibrium systems are increasingly in the spotlight, particularly in the area of self-assembly. 38−43 One of the simplest ways of maintaining self-replicating systems out of equilibrium is by placing them in a flow reactor, in which precursors are continuously supplied and part of the reaction mixture is removed. In such a setup, outflow means death. For homogeneous systems, death through outflow is nonselective (i.e., each replicator has the same probability of being removed in a given time span), and any selection is therefore solely based on the efficiency of replication.
We recently implemented a replication-destruction regime in which death is mediated chemically and is therefore potentially selective (i.e., different replicators may exhibit different levels of resilience against chemical decomposition). 4 In such systems, replicator persistence depends on a combination of replication efficiency and resilience to destruction. We have shown that in this regime molecularly more complex replicators may outcompete simpler ones, which is a desirable but not trivial (see below) evolutionary outcome. We set up a competition between two replicators that differ in molecular complexity: our workhorse replicator (1a)6 and the smaller replicator (1a)3, which forms from the same building block 1a in the presence of guanidinium chloride. 4 In the presence of this salt, the trimer replicates faster than the hexamer, consistent with the notion that simpler molecules can be replicated faster than more complex ones. Furthermore, various experiments also suggest that the trimer is most likely the most stable state of the system (trimer grows at the expense of hexamer when a mixture of the two is stirred), consistent with the notion that simpler molecules are entropically more favorable than more complex ones. Nevertheless, it turned out that the hexamer replicator was able to outcompete the trimer upon exposing it to a replication-destruction regime. This regime was implemented by continuous and simultaneous addition of oxidant (sodium perborate; NaBO3) and reductant (TCEP; Figure 5a). The perborate mediates the oxidation of thiols to disulfides, while TCEP induces the reverse reaction. Since both redox reagents are present at a low stationary concentration, their direct short-circuiting reaction is insignificant relative to their reactions with thiols and disulfides, which are present at much higher concentrations.
Thus, the redox reagents cause a continuous flux of building block through the two competing replicators. The observation that, in this chemically fueled replication-destruction regime, the molecularly more complex and slower hexamer replicator was able to outcompete the simpler and faster trimer was attributed to the hexamer being more resilient to destruction than the trimer (as confirmed in competitive reduction control experiments; most likely a consequence of steric hindrance of the approach of the reductant). These results can be rationalized using the Gibbs energy diagram shown in Figure 5b, which features separate pathways for perborate-mediated replication and TCEP-mediated destruction. The fact that detailed balance is broken (i.e., disulfide bond formation and breakage take place through separate pathways, each coupled to the conversion of a different high-energy reactant) is essential to escape from thermodynamic control and outcompete the faster but simpler replicator. These results represent one of the first manifestations of selection of a synthetic self-replicator based on its dynamic kinetic stability. 16,96 Even though the difference in molecular complexity between trimer and hexamer replicator may not be very large, these results are nevertheless conceptually important, as they address a problem in early evolution that has become known as the "Spiegelman monster". In the 1960s, Spiegelman showed that replicase-mediated in vitro evolution experiments on RNA resulted in a dramatic shortening of the RNA sequence. 97 In these experiments, serial transfer was used to implement a replication-destruction regime (where not being transferred to the next experiment amounts to death). As all replicators have the same probability of being transferred, death is not selective, and selection occurs only based on the speed of replication. Short RNA sequences are replicated faster than long ones and therefore have a competitive advantage in such a setting.
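The selection logic of such a fueled regime can be caricatured with a toy kinetic model of two replicators competing for one recycled precursor pool. All rate constants below are invented for illustration (they are not measured values for the trimer and hexamer), and the model ignores ring stoichiometry and assembly: replicator i grows autocatalytically at rate k_i·p·x_i and is destroyed at rate d_i·x_i, with destroyed material returning to the pool p. The survivor is then the species that can sustain itself at the lowest precursor level p* = d_i/k_i, an R*-type exclusion rule:

```python
def simulate(k, d, x0=0.01, p0=1.0, dt=0.01, t_end=1000.0):
    # Two replicators (0 = simple/fast, 1 = complex/slow) compete for a
    # shared precursor pool p; destruction recycles material into p, so
    # total mass is conserved and the flux is sustained by the fueling.
    x = [x0, x0]
    p = p0
    for _ in range(int(t_end / dt)):
        flux = [k[i] * p * x[i] - d[i] * x[i] for i in (0, 1)]
        x[0] += dt * flux[0]
        x[1] += dt * flux[1]
        p -= dt * (flux[0] + flux[1])
    return x

# Nonselective destruction (equal death rates): the faster, simpler
# replicator 0 excludes the slower one -- the Spiegelman outcome.
nonselective = simulate(k=(1.0, 0.5), d=(0.3, 0.3))

# Selective destruction (the complex replicator dies 5x more slowly):
# despite replicating half as fast, replicator 1 now takes over.
selective = simulate(k=(1.0, 0.5), d=(0.5, 0.1))
```

In the first scenario p* is 0.3 for the fast replicator versus 0.6 for the slow one; in the second the ordering inverts (0.5 versus 0.2), reproducing qualitatively how selective destruction lets the slower, more resilient species win.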
Thus, there is an inherent tendency to evolve toward reduced complexity. Yet the emergence and early evolution of life is likely to require an increase in complexity. Our experiments demonstrate an obvious solution to this problem: make sure that complex replicators die more slowly. In our system, this feature is achieved chemically. In previous work by Braun et al., a similar result was achieved by selective retention of more complex molecules in a flow reactor featuring a temperature gradient enabling selective thermophoretic trapping. 98

■ THE NEXT STEPS

In little more than a decade of effort by our lab, substantial progress toward the long-term goal of de novo life has been made, and the path ahead is becoming increasingly clear. Self-replication and its integration with a proto-metabolism have been achieved, 3,49 and the resulting binary system can be operated in out-of-equilibrium replication-destruction regimes where replication is accompanied by mutation and selection. 85 The last main ingredient of life that still needs to be integrated is compartmentalization, including a mechanism for compartment division. 99 Integrating this feature would clear an important evolutionary hurdle. It would allow for further development of metabolically active replicators through Darwinian evolution. Without compartments, evolutionary selection for metabolically relevant catalytic activity is challenging, as the products of the catalytic activity will benefit not only the replicator that generated them but also all other replicators in the sample, irrespective of whether they contributed to catalysis. When a replicator and the products it generates through catalysis are confined within a compartment, only the replicator that is responsible for catalysis benefits and can thereby be selected based on its catalytic proficiency. Thus, compartmentalization provides protection against kleptoparasites.
In fact, various studies on RNA-based systems have shown that compartments may also protect against other forms of parasitism. 46,47 In addition, compartments can enhance the rate of evolutionary adaptation in directed evolution experiments. 100 We already witnessed in our own experiments that parasites may emerge at early stages in the development of life (see above). 87 Once compartmentalization has been achieved, the next and possibly final challenge will be to achieve Darwinian evolution of the resulting system in a meaningful way. This step may well represent the biggest challenge of all, requiring the system to be sufficiently robust to withstand death by entropy (i.e., mutating so fast that any information it may have acquired is lost again). Yet at the same time, the system should have a large enough chemical/structural space available to explore, to ensure that evolutionary inventions can be made without limits, making evolution open-ended. 25,101,102 It is essential that the exploration of this space occurs in a restrained manner, where evolution dictates which very small part of the very large space the system occupies at any one time (Figure 6). Such behavior places high demands on the fidelity of replication, particularly when, in the course of evolution, the information content of the system increases. Related to this issue is the notion of evolvability, an aspect that has received hardly any attention in experimental work on synthetic self-replicating systems. The challenge here is to be receptive to aspects of heredity and evolvability that may well differ from the way we are used to thinking about these phenomena, i.e., different from the way nucleic acid sequences evolve. For example, in our systems of replicators, the copying of information appears to occur at the fiber ends (where the fiber end acts as the template for the next ring that is attached).
Yet the fiber sides also play a role in catalyzing the formation of precursors for the replicator (Figure 4c,f) and in channeling this material to the fiber ends (Figure 2c,d). The exact amino-acid composition at the sides of the fiber will therefore impact the rate of growth of this fiber, and if this happens in a way that biases amino-acid incorporation to favor production of the most active amino-acid compositions, then such compositions will be heritable. This represents a mechanism of heredity that is conceptually different from the template-based replication of nucleic acids but bears some similarity to the GARD model developed by Lancet et al. based on a lipid-world scenario. 103 Indeed, if our systems are to evolve toward open-endedness, then their information content will eventually have to exceed what is possible based on the limited number of permutations allowed by varying ring size and composition. Once fiber compositions (i.e., the sequence of rings along the fiber) start to become heritable, orders of magnitude more information may be stored and passed on to subsequent generations. Weak heredity of sequence information may arise through the mechanism discussed above. Information transfer through templating by a pre-existing fiber, for example through base-pairing, would be another mechanism capable of stronger heredity. We have shown that nucleobases can be incorporated into our replicators, albeit without any indication of base-pairing. 75 Work by Hud et al. has shown that base-pairing interactions can occur in self-assembled materials, although these systems cannot self-replicate. 104 So challenges remain.

■ IMPLICATIONS

If, one day, humans are able to synthesize life de novo, this will have several implications. First, it will help us understand what life is. Having more than a single biochemistry should assist in identifying and generalizing life's distinguishing features.
Such knowledge would also help define the target(s) in our search for extraterrestrial forms of life. The process of making life will also inform us about the path that may have been traveled in the emergence of life on Earth. Even though efforts to synthesize life are not necessarily directed by current biochemistry or prebiotic geochemistry, having one (or more) synthetic path(s) connecting chemistry to biology might assist in identifying an analogous route that is compatible with conditions on early Earth and that converges on extant biochemistry. Some potentially useful insights into the prebiotic emergence of autocatalytic systems may already be obtained from the work described in this Account. For example, we found that autocatalytic systems emerge spontaneously and readily in mixtures where monomers oligomerize reversibly, provided that these oligomers are capable of self-assembly. This observation suggests that autocatalysis may be easier to achieve than previously thought, given that reversible oligomerization and self-assembly are quite general and widespread phenomena. Our work has also shown that this mechanism may lead to the autocatalytic formation of one-dimensional arrays of nucleic acids. 75 Such arrays may be a stepping stone toward systems in which nucleic acid sequences within such arrays are replicated. Finally, it is not unreasonable to expect that the ability to synthesize life may have an impact that is at least similar to the impact made by the ability to synthesize organic molecules. It is tempting to draw a parallel between these two developments. It was long thought that organic molecules could only be produced by living organisms. The idea that a "life force" was needed was eventually refuted by demonstrating that such molecules could also be obtained synthetically (a famous example is the synthesis of urea by Wöhler in 1828).
105 These demonstrations gave rise to the field of organic chemistry, which has made a tremendous impact in areas ranging from medicine to materials. We are now getting closer to being able to synthesize life (and demonstrating that it is not only a product of existing forms of life). Just like urea was not exactly the most impressive or useful example of an organic molecule, the first form of synthetic life is equally unlikely to impress when compared to even the simplest currently living organism. Yet, just like the many human-made organic molecules that followed the synthesis of urea, the subsequent forms of human-made life (living technology) are

Figure 6. Open-ended Darwinian evolution requires a huge structure space to be available to allow for continuous evolutionary inventions to be made. At any given time, an evolving system must only occupy a tiny subset of this space, putting demands on replication fidelity. In the process of evolution, the location of the occupied subset changes gradually. Note that the occupied and available structure spaces are not drawn to scale; the former is so much smaller than the latter that it would not be visible otherwise.
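The demand that replication fidelity keep pace with a growing information content can be made quantitative with a standard result from quasispecies theory, Eigen's error threshold. The sketch below is our illustrative addition, not part of this Account: for a per-symbol copying fidelity q and a selective advantage s of the master sequence, the longest sequence whose information survives mutation pressure is roughly L_max ≈ ln(s)/(1 − q).

```python
import math

def max_information_length(fidelity: float, selective_advantage: float) -> float:
    """Eigen error threshold from quasispecies theory: the longest
    sequence whose information can be maintained against mutation,
    L_max ~ ln(s) / (1 - q), for per-symbol copying fidelity q and
    selective advantage s of the master sequence."""
    return math.log(selective_advantage) / (1.0 - fidelity)

# More information content demands higher fidelity: each extra "nine"
# of per-symbol fidelity buys roughly a tenfold longer sequence.
for q in (0.99, 0.999, 0.9999):
    print(f"q = {q}: L_max ~ {max_information_length(q, 10.0):.0f} symbols")
```

This is why the restrained exploration of structure space sketched in Figure 6 is so demanding: as the occupied subset grows richer in information, the copying process must become correspondingly more accurate.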
|
// This code contains NVIDIA Confidential Information and is disclosed to you
// under a form of NVIDIA software license agreement provided separately to you.
//
// Notice
// NVIDIA Corporation and its licensors retain all intellectual property and
// proprietary rights in and to this software and related documentation and
// any modifications thereto. Any use, reproduction, disclosure, or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA Corporation is strictly prohibited.
//
// ALL NVIDIA DESIGN SPECIFICATIONS, CODE ARE PROVIDED "AS IS.". NVIDIA MAKES
// NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO
// THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT,
// MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE.
//
// Information and code furnished is believed to be accurate and reliable.
// However, NVIDIA Corporation assumes no responsibility for the consequences of use of such
// information or for any infringement of patents or other rights of third parties that may
// result from its use. No license is granted by implication or otherwise under any patent
// or patent rights of NVIDIA Corporation. Details are subject to change without notice.
// This code supersedes and replaces all information previously supplied.
// NVIDIA Corporation products are not authorized for use as critical
// components in life support devices or systems without express written approval of
// NVIDIA Corporation.
//
// Copyright (c) 2008-2014 NVIDIA Corporation. All rights reserved.
// Copyright (c) 2004-2008 AGEIA Technologies, Inc. All rights reserved.
// Copyright (c) 2001-2004 NovodeX AG. All rights reserved.
#ifndef PX_FOUNDATION_PSALLOCATOR_H
#define PX_FOUNDATION_PSALLOCATOR_H
#include "foundation/PxAllocatorCallback.h"
#include "foundation/PxFoundation.h"
#include "Ps.h"
#if (defined(PX_WINDOWS) || defined (PX_WINMODERN) || defined(PX_X360) || defined(PX_XBOXONE))
#include <exception>
#include <typeinfo.h>
#endif
#if (defined(PX_APPLE))
#include <typeinfo>
#endif
#ifdef PX_WIIU
#pragma ghs nowarning 193 //warning #193-D: zero used for undefined preprocessing identifier
#endif
#include <new>
#ifdef PX_WIIU
#pragma ghs endnowarning
#endif
// Allocation macros going through user allocator
#ifdef PX_DEBUG
#define PX_ALLOC(n, name) physx::shdfnd::NamedAllocator(name).allocate(n, __FILE__, __LINE__)
#else
#define PX_ALLOC(n, name) physx::shdfnd::Allocator().allocate(n, __FILE__, __LINE__)
#endif
#define PX_ALLOC_TEMP(n, name) PX_ALLOC(n, name)
#define PX_FREE(x) physx::shdfnd::Allocator().deallocate(x)
#define PX_FREE_AND_RESET(x) { PX_FREE(x); x=0; }
// The following macros support plain-old-types and classes derived from UserAllocated.
#define PX_NEW(T) new(physx::shdfnd::ReflectionAllocator<T>(), __FILE__, __LINE__) T
#define PX_NEW_TEMP(T) PX_NEW(T)
#define PX_DELETE(x) delete x
#define PX_DELETE_AND_RESET(x) { PX_DELETE(x); x=0; }
#define PX_DELETE_POD(x) { PX_FREE(x); x=0; }
#define PX_DELETE_ARRAY(x) { PX_DELETE([]x); x=0; }
// aligned allocation
#define PX_ALIGNED16_ALLOC(n) physx::shdfnd::AlignedAllocator<16>().allocate(n, __FILE__, __LINE__)
#define PX_ALIGNED16_FREE(x) physx::shdfnd::AlignedAllocator<16>().deallocate(x)
//! placement new macro to make it easy to spot bad use of 'new'
#define PX_PLACEMENT_NEW(p, T) new(p) T
// Don't use inline for alloca !!!
#if defined (PX_WINDOWS) || defined(PX_WINMODERN)
#include <malloc.h>
#define PxAlloca(x) _alloca(x)
#elif defined(PX_LINUX) || defined(PX_ANDROID)
#include <malloc.h>
#define PxAlloca(x) alloca(x)
#elif defined(PX_PSP2)
#include <alloca.h>
#define PxAlloca(x) alloca(x)
#elif defined(PX_APPLE)
#include <alloca.h>
#define PxAlloca(x) alloca(x)
#elif defined(PX_PS3)
#include <alloca.h>
#define PxAlloca(x) alloca(x)
#elif defined(PX_X360)
#include <malloc.h>
#define PxAlloca(x) _alloca(x)
#elif defined(PX_WIIU)
#include <alloca.h>
#define PxAlloca(x) alloca(x)
#elif defined(PX_PS4)
#include <memory.h>
#define PxAlloca(x) alloca(x)
#elif defined(PX_XBOXONE)
#include <malloc.h>
#define PxAlloca(x) alloca(x)
#endif
#define PxAllocaAligned(x, alignment) ((size_t(PxAlloca(x + alignment)) + (alignment - 1)) & ~size_t(alignment - 1))
namespace physx
{
namespace shdfnd
{
PX_FOUNDATION_API PxAllocatorCallback& getAllocator();
/*
* Bootstrap allocator using malloc/free.
* Don't use unless your objects get allocated before foundation is initialized.
*/
class RawAllocator
{
public:
RawAllocator(const char* = 0) {}
void* allocate(size_t size, const char*, int)
{
// malloc returns valid pointer for size==0, no need to check
return ::malloc(size);
}
void deallocate(void* ptr)
{
// free(0) is guaranteed to have no side effect, no need to check
::free(ptr);
}
};
/*
* Allocator that simply calls straight back to the application without tracking.
* This is used by the heap (Foundation::mNamedAllocMap) that tracks allocations
* because it needs to be able to grow as a result of an allocation.
* Making the hash table re-entrant to deal with this may not make sense.
*/
class NonTrackingAllocator
{
public:
NonTrackingAllocator(const char* = 0) {}
void* allocate(size_t size, const char* file, int line)
{
return PxGetFoundation().getAllocatorCallback().allocate(size, "NonTrackedAlloc", file, line);
}
void deallocate(void* ptr)
{
PxGetFoundation().getAllocatorCallback().deallocate(ptr);
}
};
/**
Allocator used to access the global PxAllocatorCallback instance without providing additional information.
*/
class PX_FOUNDATION_API Allocator
{
public:
Allocator(const char* = 0) {}
void* allocate(size_t size, const char* file, int line);
void deallocate(void* ptr);
};
/**
Allocator used to access the global PxAllocatorCallback instance using a dynamic name.
*/
#if defined(PX_DEBUG) || defined(PX_CHECKED) // see comment in cpp
class PX_FOUNDATION_API NamedAllocator
{
public:
NamedAllocator(const PxEMPTY&);
NamedAllocator(const char* name = 0); // todo: should not have default argument!
NamedAllocator(const NamedAllocator&);
~NamedAllocator();
NamedAllocator& operator=(const NamedAllocator&);
void* allocate(size_t size, const char* filename, int line);
void deallocate(void* ptr);
};
#else
class NamedAllocator;
#endif // PX_DEBUG
/**
Allocator used to access the global PxAllocatorCallback instance using a static name derived from T.
*/
template <typename T>
class ReflectionAllocator
{
static const char* getName()
{
if(!PxGetFoundation().getReportAllocationNames())
return "<allocation names disabled>";
#if defined(PX_GNUC) || defined(PX_GHS)
return __PRETTY_FUNCTION__;
#else
// name() calls malloc(), raw_name() wouldn't
return typeid(T).name();
#endif
}
public:
ReflectionAllocator(const PxEMPTY&) {}
ReflectionAllocator(const char* =0) {}
inline ReflectionAllocator(const ReflectionAllocator& ) { }
void* allocate(size_t size, const char* filename, int line)
{
return size ? getAllocator().allocate(size, getName(), filename, line) : 0;
}
void deallocate(void* ptr)
{
if(ptr)
getAllocator().deallocate(ptr);
}
};
template <typename T>
struct AllocatorTraits
{
#if defined(PX_CHECKED) // checked and debug builds
typedef NamedAllocator Type;
#else
typedef ReflectionAllocator<T> Type;
#endif
};
// if you get a build error here, you are trying to PX_NEW a class
// that is neither plain-old-type nor derived from UserAllocated
template <typename T, typename X>
union EnableIfPod
{
int i; T t;
typedef X Type;
};
} // namespace shdfnd
} // namespace physx
// Global placement new for ReflectionAllocator templated by
// plain-old-type. Allows using PX_NEW for pointers and built-in-types.
//
// ATTENTION: You need to use PX_DELETE_POD or PX_FREE to deallocate
// memory, not PX_DELETE. PX_DELETE_POD redirects to PX_FREE.
//
// Rationale: PX_DELETE uses global operator delete(void*), which we don't want to overload.
// Any other definition of PX_DELETE couldn't support array syntax 'PX_DELETE([]a);'.
// PX_DELETE_POD was preferred over PX_DELETE_ARRAY because it is used
// less often and applies to both single instances and arrays.
template <typename T>
PX_INLINE void* operator new(size_t size, physx::shdfnd::ReflectionAllocator<T> alloc, const char* fileName, typename physx::shdfnd::EnableIfPod<T, int>::Type line)
{
return alloc.allocate(size, fileName, line);
}
template <typename T>
PX_INLINE void* operator new[](size_t size, physx::shdfnd::ReflectionAllocator<T> alloc, const char* fileName, typename physx::shdfnd::EnableIfPod<T, int>::Type line)
{
return alloc.allocate(size, fileName, line);
}
// If construction after placement new throws, this placement delete is being called.
template <typename T>
PX_INLINE void operator delete(void* ptr, physx::shdfnd::ReflectionAllocator<T> alloc, const char* fileName, typename physx::shdfnd::EnableIfPod<T, int>::Type line)
{
PX_UNUSED(fileName);
PX_UNUSED(line);
alloc.deallocate(ptr);
}
// If construction after placement new throws, this placement delete is being called.
template <typename T>
PX_INLINE void operator delete[](void* ptr, physx::shdfnd::ReflectionAllocator<T> alloc, const char* fileName, typename physx::shdfnd::EnableIfPod<T, int>::Type line)
{
PX_UNUSED(fileName);
PX_UNUSED(line);
alloc.deallocate(ptr);
}
#endif
|
package allen.service;
import java.util.List;
import java.util.Optional;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;
import allen.model.Customer;
import allen.repository.CustomerRepository;
@Repository
@Transactional(readOnly = true)
public class CustomerService implements ICustomerService {
@PersistenceContext
private EntityManager em;
@Autowired
private CustomerRepository repo;
@Override
@Transactional
public Customer save(Customer cust) {
return repo.save(cust);
}
@Override
public Optional<Customer> findById(Long id) {
return repo.findById(id);
}
@Override
public List<Customer> findAll() {
return repo.findAll();
}
@Override
public Page<Customer> findAll(Pageable pageable) {
return repo.findAll(pageable);
}
@Override
public Page<Customer> findByLastname(String lastname, Pageable pageable) {
return repo.findByLastname(lastname, pageable);
}
}
|
MONTGOMERY | Shortly after dispatching nemesis Leroy out of the state semifinals, Steadman Shealy stood in the middle of the field with his cell phone pressed against his dirt-smudged face and said the thing that was on everyone's mind.
"We're going to state. Can you believe it?" Shealy asked.
That question would have seemed incredible at the beginning of the American Christian Academy baseball season. Now it's an afterthought.
ACA (30-9) rolled to the Class 2A state championship with a two-game sweep of Lexington on Wednesday and Thursday in Montgomery, 4-3, 11-0.
The question is how did the Patriots replace seven senior starters from the 2008 team and come back to win it all?
ACA returned just two seniors in John Moore and Shealy, two juniors in Jarrett Hall and Davis Sparks and had the added hurdle of breaking in new coach Matt Moore.
The beginning was rocky as the Patriots limped out to a 4-7 record, including two blowout losses to Tuscaloosa County. Not only did Matt Moore have to replace seven starters, he had to do it with mostly freshmen and sophomores.
After a team meeting to clear the air, ACA went on a run through the end of the regular season.
Through it all, the two seniors felt a responsibility to lead their younger teammates.
But as John Moore and Shealy got the group together, they noticed a bond growing that wasn't there with last year's team.
One thing was obvious for the seniors as they watched the younger players grow into their roles: they were good.
"They are some good ball players," John Moore said. "They play travel ball together and they eat, sleep and breathe baseball. They made the transition to varsity well."
So where do the Patriots go from here?
ACA will lose Matt Moore, a right-handed pitcher, and Shealy, a right fielder and the two-hole hitter. But ACA returns everyone else, including series Most Valuable Player, Dillon Williams.
However, the Patriots realize repeating will be no easy chore given they will have targets on their backs. It's a role supporters of the baseball powerhouse relish.
"It's going to be tough, and playing as favorites is the first thing I'm going to teach them next year," Matt Moore said. "We've got a great bunch of guys coming back. We've got the core of our team coming back.
"I think we're just going to need somebody to step up and be a leader because we are going to have a bulls-eye on our back. To defend our state championship we're going to have to show up and play every single day."
The departing seniors are in unison that the next two years of Patriots' baseball will be fun to watch.
"They have the chance to win the state the next two years easily," Shealy said.
John Moore went further in his summation, saying the group could etch a spot in state annals.
|
import sc2
from sc2 import Race, Difficulty
from sc2.player import Bot, Computer
from protoss.threebase_voidray import ThreebaseVoidrayBot
from terran.onebase_battlecruiser import BCRushBot
sc2.run_game(
sc2.maps.get("AcropolisLE"),
[
Bot(Race.Terran, BCRushBot(), name="BC Rush"),
Bot(Race.Protoss, ThreebaseVoidrayBot(), name="Threebase Voidray"),
],
realtime=False,
save_replay_as="Level_06.SC2Replay",
)
|
/**
* <odoc>
* <key>io.randomAccessFile(aFilename, aMode) : RandomAccessFile</key>
* Creates a java RandomAccessFile to enable random access to files
* and returns the same.
* </odoc>
*
* @param filename
* @param mode
* @return
* @throws FileNotFoundException
*/
@JSFunction
public static Object randomAccessFile(String filename, String mode) throws FileNotFoundException {
return new RandomAccessFile(filename, mode);
}
|
Salvianolic Acid A Protects H9c2 Cells from Arsenic Trioxide-Induced Injury via Inhibition of the MAPK Signaling Pathway Background/Aims: This study aimed to investigate whether Salvianolic acid A (Sal A) conferred cardiac protection against arsenic trioxide (ATO)-induced cardiotoxicity in H9c2 cells by inhibiting MAPK pathway activation. Methods: H9c2 cardiac cells were exposed to 10 μM ATO for 24 h to induce cytotoxicity. The cells were pretreated with Sal A for 4 h before exposure to ATO. Cell viability was determined using the MTT assay. The percentage of apoptosis was measured with a FITC-Annexin V/PI apoptosis kit by flow cytometry. Mitochondrial membrane potential (ΔΨm) was detected with JC-1. Intracellular ROS levels were measured using an Image-iT™ LIVE Green Reactive Oxygen Species Detection Kit. The expression of apoptosis-related proteins and MAPK signaling pathway proteins was quantified by Western blotting. Results: Sal A pretreatment increased cell viability, suppressed ATO-induced mitochondrial membrane depolarization, and significantly reduced the apoptotic rate by enhancing endogenous antioxidative enzyme activity and reducing ROS generation. Signal transduction studies indicated that Sal A suppressed the ATO-induced activation of the MAPK pathway. More importantly, JNK, ERK, and p38 inhibitors mimicked the cytoprotective activity of Sal A against ATO-induced injury in H9c2 cells by increasing cell viability, up-regulating Bcl-2 protein expression, and down-regulating both Bax and caspase-3 protein expression. Conclusion: Sal A decreases the ATO-induced apoptosis and necrosis of H9c2 cells, and the underlying mechanisms of this protective effect of Sal A may be connected with the MAPK pathways.
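The MTT viability readout mentioned in the Methods reduces to a simple normalization. The sketch below is a generic illustration with made-up absorbance values, not data from the study: viability is expressed as a percentage of untreated controls after subtracting the cell-free blank.

```python
def percent_viability(a_treated: float, a_control: float, a_blank: float) -> float:
    """Standard MTT normalization: viability of a treated well relative
    to untreated controls, after subtracting the blank absorbance."""
    return 100.0 * (a_treated - a_blank) / (a_control - a_blank)

# Hypothetical 570 nm absorbance readings (not from the study):
a_blank, a_control = 0.05, 0.85
print(percent_viability(0.45, a_control, a_blank))  # e.g. ATO alone
print(percent_viability(0.65, a_control, a_blank))  # e.g. Sal A + ATO
```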
|
package events
import (
cmdCreate "github.com/reubenmiller/go-c8y-cli/pkg/cmd/events/create"
cmdCreateBinary "github.com/reubenmiller/go-c8y-cli/pkg/cmd/events/createbinary"
cmdDelete "github.com/reubenmiller/go-c8y-cli/pkg/cmd/events/delete"
cmdDeleteBinary "github.com/reubenmiller/go-c8y-cli/pkg/cmd/events/deletebinary"
cmdDeleteCollection "github.com/reubenmiller/go-c8y-cli/pkg/cmd/events/deletecollection"
cmdDownloadBinary "github.com/reubenmiller/go-c8y-cli/pkg/cmd/events/downloadbinary"
cmdGet "github.com/reubenmiller/go-c8y-cli/pkg/cmd/events/get"
cmdList "github.com/reubenmiller/go-c8y-cli/pkg/cmd/events/list"
cmdUpdate "github.com/reubenmiller/go-c8y-cli/pkg/cmd/events/update"
cmdUpdateBinary "github.com/reubenmiller/go-c8y-cli/pkg/cmd/events/updatebinary"
"github.com/reubenmiller/go-c8y-cli/pkg/cmd/subcommand"
"github.com/reubenmiller/go-c8y-cli/pkg/cmdutil"
"github.com/spf13/cobra"
)
type SubCmdEvents struct {
*subcommand.SubCommand
}
func NewSubCommand(f *cmdutil.Factory) *SubCmdEvents {
ccmd := &SubCmdEvents{}
cmd := &cobra.Command{
Use: "events",
Short: "Cumulocity events",
Long: `REST endpoint to interact with Cumulocity events`,
}
// Subcommands
cmd.AddCommand(cmdList.NewListCmd(f).GetCommand())
cmd.AddCommand(cmdDeleteCollection.NewDeleteCollectionCmd(f).GetCommand())
cmd.AddCommand(cmdGet.NewGetCmd(f).GetCommand())
cmd.AddCommand(cmdCreate.NewCreateCmd(f).GetCommand())
cmd.AddCommand(cmdUpdate.NewUpdateCmd(f).GetCommand())
cmd.AddCommand(cmdDelete.NewDeleteCmd(f).GetCommand())
cmd.AddCommand(cmdDownloadBinary.NewDownloadBinaryCmd(f).GetCommand())
cmd.AddCommand(cmdCreateBinary.NewCreateBinaryCmd(f).GetCommand())
cmd.AddCommand(cmdUpdateBinary.NewUpdateBinaryCmd(f).GetCommand())
cmd.AddCommand(cmdDeleteBinary.NewDeleteBinaryCmd(f).GetCommand())
ccmd.SubCommand = subcommand.NewSubCommand(cmd)
return ccmd
}
|
Design methodology for the optimization of transformer-loaded RF circuits In this paper a design methodology for the optimization of transformer-loaded RF circuits is discussed and a simplified equation for the maximum available output power is presented. The optimization procedure is based on a novel figure of merit for the integrated transformer, which was introduced to quantify its performance when operated as a tuned load. By means of the proposed approach, a highly linear up-converter for 5-GHz wireless LAN applications was implemented in a 40-GHz-f_T SiGe HBT technology. The circuit achieves an output 1-dB compression point of 4.5 dBm and a power gain of 18 dB, while drawing only 34 mA from a 3-V power supply.
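As a quick sanity check on the reported figures (our own back-of-envelope arithmetic, not taken from the paper): 4.5 dBm delivered at the 1-dB compression point against a 3 V, 34 mA supply implies a drain efficiency of roughly 2.8%, a plausible value for a highly linear up-converter.

```python
def dbm_to_mw(p_dbm: float) -> float:
    """Convert power in dBm to milliwatts: P[mW] = 10**(P[dBm] / 10)."""
    return 10 ** (p_dbm / 10.0)

p_out_mw = dbm_to_mw(4.5)   # output power at the 1-dB compression point
p_dc_mw = 3.0 * 34.0        # V * mA -> mW drawn from the supply
efficiency = p_out_mw / p_dc_mw
print(f"P_out = {p_out_mw:.2f} mW, P_DC = {p_dc_mw:.0f} mW, "
      f"efficiency = {100 * efficiency:.1f}%")
```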
|
/*
* Copyright (c) 2008-2023, Hazelcast, Inc. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.hazelcast.cp.internal.raft.impl.task;
import com.hazelcast.cp.internal.raft.impl.RaftEndpoint;
import com.hazelcast.cp.internal.raft.impl.RaftNodeImpl;
import com.hazelcast.cp.internal.raft.impl.dto.VoteRequest;
import com.hazelcast.cp.internal.raft.impl.handler.PreVoteResponseHandlerTask;
import com.hazelcast.cp.internal.raft.impl.state.RaftState;
/**
* LeaderElectionTask is scheduled when current leader is null, unreachable
* or unknown by {@link PreVoteResponseHandlerTask} after a follower receives
* votes from at least majority. Local member becomes a candidate using
* {@link RaftState#toCandidate(boolean)} and sends {@link VoteRequest}s to
* other members.
* <p>
* Also a {@link LeaderElectionTimeoutTask} is scheduled with a
* {@link RaftNodeImpl#getLeaderElectionTimeoutInMillis()} delay to trigger
* leader election if one is not elected yet.
*/
public class LeaderElectionTask extends RaftNodeStatusAwareTask implements Runnable {
private boolean disruptive;
public LeaderElectionTask(RaftNodeImpl raftNode, boolean disruptive) {
super(raftNode);
this.disruptive = disruptive;
}
@Override
protected void innerRun() {
RaftState state = raftNode.state();
if (state.leader() != null) {
logger.warning("No new election round, we already have a LEADER: " + state.leader());
return;
}
VoteRequest request = state.toCandidate(disruptive);
logger.info("Leader election started for term: " + request.term() + ", last log index: " + request.lastLogIndex()
+ ", last log term: " + request.lastLogTerm());
raftNode.printMemberState();
for (RaftEndpoint endpoint : state.remoteMembers()) {
raftNode.send(request, endpoint);
}
scheduleLeaderElectionTimeout();
}
private void scheduleLeaderElectionTimeout() {
raftNode.schedule(new LeaderElectionTimeoutTask(raftNode), raftNode.getLeaderElectionTimeoutInMillis());
}
}
|
Azrael (comic book)
Batman: Sword of Azrael
Azrael made his debut in a limited series called Batman: Sword of Azrael, in which he was introduced as a mild-mannered college student named Jean-Paul Valley, who becomes an assassin and acts on the will of a religious cult known as the Order of St. Dumas. His father, Ludovic Valley, had carried out the duties of the role until suffering severe gunshot wounds at the hands of a weapons-dealer named Carlton LeHah, at which point he was able to contact Jean-Paul and on his deathbed reveal that he had been brainwashing his son in preparation for the role since birth, unbeknownst to Jean-Paul himself. Jean-Paul is then escorted by a dwarf named Nomoz to a property in Switzerland which belongs to the Order, where Jean-Paul is to be more fully prepared to assume the role his father had failed. Jean-Paul's entire personality disappears as soon as he places Azrael's helmet on his head, and he begins to believe he is an actual avenging angel, as opposed to a man dressed like one. He is sent by Nomoz on an assassination mission, but over the course of the mission Jean-Paul gradually gains more and more control over the Azrael mindset, which Nomoz calls "The System". At one point, he finds Batman in a life-threatening situation, and actually risks his own life to save Batman's. Nomoz is angered and claims "Azrael does not protect", to which Batman replies "Maybe this one does." Although Azrael is an assassin, Batman recognizes his potential as a crimefighter and attempts to dissuade him from taking lives. At the end of the series, Batman offers Jean-Paul the opportunity to join the Bat-Family and return to Gotham to be trained in the Batcave.
Knightfall, Knightquest, and KnightsEnd
Prior to Knightfall, Azrael has accepted Batman's offer and has been accompanying Robin on missions around Gotham City. When the villain Bane frees all the inmates from Arkham Asylum, Azrael assists with the clean-up effort. Batman eventually becomes so exhausted by fighting villains throughout the city that he is finally defeated by Bane in combat, and suffers a broken back in the process. After suffering such a severe injury, Batman realizes that he will not be able to carry on his duties as the protector of Gotham, but that Gotham needs a Batman, especially in the midst of a crime wave. Reasoning that the current Robin (Tim Drake) is too young to protect Gotham on his own, and that the original Robin (Dick Grayson) is now acting as Nightwing in the city of Bludhaven and would be unwilling to return to Gotham to play the role of Batman, he turns to Jean-Paul and asks him to fill in as Batman, under one condition: He must avoid taking on Bane. Azrael agrees, and begins wearing the Batman costume while patrolling the city with Robin. Over time, Azrael begins to display a violent recklessness and a willingness for independence from Robin's guidance. Under the influence of "The System", Azrael designs vicious new gloves for the Batsuit, which feature sharp metal claws and a mechanism for launching Batarangs at high speed. Robin does not approve of the brutal fighting methods which Azrael is employing, and Azrael responds to the criticism by choking Robin and telling him he is no longer welcome in the Batcave. Now acting on his own, Azrael becomes increasingly violent and seems to be losing his sanity, experiencing hallucinations of his father and of St. Dumas himself. Azrael becomes quite conceited and believes himself superior to the original Batman, even directly disobeying Bruce's only condition by fighting Bane one-on-one, and actually manages to defeat him by utilizing an all-new Batsuit which features heavy armor and a Bat-signal spotlight on the chestplate. 
His defeat of Bane gives him a false confidence in his abilities, and erodes his conscience as he believes himself above any questions of morality as long as the job is getting done, leading him to actually begin killing villains. Bruce hears news of this from Tim Drake while assisting the Justice League with rescuing Drake's father, who had been abducted. Bruce is shocked and disgusted that the name of the Batman is being used this way, and determines he must return to Gotham and take the title back from Azrael. When he has finally recovered and trained his body back into physical fitness, he returns to Gotham and asks Azrael to give up the Batsuit. Azrael refuses, and they resort to combat. Bruce is finally able to defeat Azrael by forcing him to remove his mask, at which point the Jean-Paul Valley personality regains control and expresses contrition, acknowledging Bruce as the one true Batman. Bruce forgives him, but claims he can no longer trust him, so he cannot stay in the Batcave. Jean-Paul then leaves the Batcave in shame.
Azrael
The first Azrael ongoing series shows Azrael's (Jean-Paul Valley) battles against the Order of St. Dumas, and ran for 100 issues between February 1995 and May 2003. Following the events of Batman: Sword of Azrael and Batman: Knightfall, neither Jean-Paul nor Azrael is seen again in a comic book for several months until Jean-Paul finally re-appears as a homeless man wandering the streets of Gotham in a psychotic state. In Azrael #1, he successfully defends a fellow homeless man from an attack from several men who are attempting to take his shoes. When the man expresses gratitude, Jean-Paul reveals a state of confusion. The homeless man introduces himself as Dr. Brian Bryan, an unemployed alcoholic, and the self-proclaimed "worst psychiatrist in the world." Despite this admission, Bryan does what he can to try to help Jean-Paul return to a healthy mind, forming the basis of a friendship that will last for the duration of the series. Later, Bruce Wayne begins to feel guilt for abandoning Jean-Paul, and tracks him down in order to give him financial funding so that he can take care of himself. Acting on Bruce's suggestion, Azrael begins to more fully investigate his origins, returning to Europe to meet the Order of St. Dumas. After befriending a beautiful girl called Sister Lilhy who had been raised by the Order of St. Dumas, she helps him learn some of the secret background of the Order. He discovers that the Order of St. Dumas was a faction of Catholic knights that had splintered from the Knights Templar during the Crusades, and was named for their leader, a violent knight named Dumas, whom "no one else ever accused of being a saint." The Order had developed over the centuries into an extremist cult-like organization and had built a number of fortresses all over the world, where they conducted secretive genetic experiments and trained elite assassins. 
With the assistance of Lilhy and Bryan, Azrael tracks down one of the Order's main strongholds, and destroys it after reuniting with his former mentor Nomoz and inciting a riot by the Order's dwarves, who had been serving as slaves to the Order. While exploring the Order's fortress, he finds that he is one of a long line of genetically-engineered Azraels, and that the Order has already replaced him with another Azrael since he has gone rogue. After returning to American soil, Azrael gradually rebuilds his relationship with Batman. Starting with issue #47, the series was retitled Azrael: Agent of the Bat and played a major role in the No Man's Land storyline.
Starting with issue #50, he changes to a considerably different costume, and experiments with a few other costumes, even briefly returning to a version of his Batman-style costume from Knightquest and KnightsEnd, before finally settling on his original costume in the final issues of the series. Azrael battles hallucinations that seem to represent both his father and the creator of the order that spawned him, St. Dumas. Toward the end of the series, Azrael is plagued by apparently supernatural occurrences in the form of possible miracles.
Azrael is seemingly killed in the series' final issue, shot with two specially-coated bullets while battling archenemies Nicholas Scratch and Carlton Lehah. However, his body is never recovered. He makes an appearance during "Whatever Happened to the Caped Crusader?"; however, the story is out of continuity, or takes place on multiple different earths. The Blackest Night crossover features a resurrected Jean-Paul Valley, suggesting he did indeed die.
Three annual editions of Azrael (Azrael Annual) were also published, depicting other stories, such as the one of his father in Azrael Annual #1 (Year One).
Azrael: Death's Dark Knight
Azrael: Death's Dark Knight was a miniseries (written by Fabian Nicieza, with art by Frazer Irving; May–July 2009), set during the Battle for the Cowl storyline, in which an African-American ex-cop named Michael Washington Lane is approached by a religious sect called the Order of Purity to claim the mantle of Azrael after the Order's most recent Azrael went mad and killed an undercover police officer. When he takes this opportunity to serve his ideal of justice, he is offered three items by the Order of Purity: the Suit of Sorrows, the Sword of Sin, and the Sword of Salvation. Lane is initially told very little by the members of the Order of Purity, beyond the fact that it is a secret religious order which splintered from the Order of St. Dumas, which itself had splintered from the Catholic Church during the Crusades.
The Suit of Sorrows had been created by the Order of Purity to be used by their first Azrael, who had been intended as a rival to the Order of St. Dumas' enforcer of the same name. This suit of armor had at some point fallen into the hands of Ra's al Ghul, and remained in the possession of his League of Assassins for several centuries until Batman added it to his collection during the Resurrection of Ra's al Ghul storyline. The suit mysteriously vanished from the Batcave during the subsequent Batman: R.I.P. storyline, and shortly thereafter was returned to its original owners, the Order of Purity.
The Sword of Sin, Lane's preferred sword, was a sword of yellow flames which had the ability to bring to the minds of its victims memories of their sins and the guilt accompanying these sins.
The Sword of Salvation was a sword of blue ice which had the ability to coerce its victims into telling the truth, although it was unable to harm anyone who was innocent.
When the Order of Purity is attacked by seven assassins led by Talia al Ghul, Azrael dons the suit and takes up the swords in an effort to impede their mission to retake these items for Ra's al Ghul. The background and narrative that is outlined in the Azrael: Death's Dark Knight is expanded further by Nicieza in a two-part story written that crossed between Batman Annual #27 and Detective Comics Annual #11. In this story, the character teams up with Batman and Robin in an attempt to thwart an ancient demonic cult from sacrificing seven children in an effort to resurrect the embodiment of the eighth deadly sin.
|
from boa3.builtin.interop.policy import get_storage_price
def main() -> int:
return get_storage_price(123)
|
// DefaultRouter creates chi mux with default middlewares.
// Supply nil for allowedOrigins to turn off CORS middleware.
// Example string for allowedOrigins: "https://example.com".
func DefaultRouter(serviceName string, allowedOrigins []string, logger *zap.Logger) *chi.Mux {
r := chi.NewRouter()
r.Use(middleware.RequestID)
r.Use(middleware.Heartbeat("/health"))
r.Use(arcmw.ZapLogger(logger))
r.Use(arcmw.PromMetrics(serviceName, nil))
if len(allowedOrigins) > 0 {
r.Use(getCORS(allowedOrigins).Handler)
}
r.Use(arcmw.RenderJSON)
r.Use(middleware.Recoverer)
r.NotFound(func(w http.ResponseWriter, r *http.Request) {
res := arcmw.ResponseFromContext(r.Context())
res.SetStatusNotFound(http.StatusText(http.StatusNotFound))
})
r.MethodNotAllowed(func(w http.ResponseWriter, r *http.Request) {
res := arcmw.ResponseFromContext(r.Context())
res.SetStatus(http.StatusMethodNotAllowed).AddError(http.StatusText(http.StatusMethodNotAllowed))
})
return r
}
|
// CompareIPAddresses reports whether the internal or external IP addresses
// of the node differ between the old and new objects.
func CompareIPAddresses(oldObj *corev1.Node, newObj *corev1.Node) bool {
updated := true
oldInternalIP, oldExternalIP := GetNodeIPAddresses(oldObj)
newInternalIP, newExternalIP := GetNodeIPAddresses(newObj)
if oldInternalIP != "" && newInternalIP != "" {
if oldExternalIP != "" && newExternalIP != "" {
if oldInternalIP == newInternalIP && oldExternalIP == newExternalIP {
updated = false
}
} else {
if oldInternalIP == newInternalIP {
updated = false
}
}
} else {
if oldExternalIP == newExternalIP {
updated = false
}
}
return updated
}
|
s = list(input())
# Flip the leading digit only if doing so keeps it nonzero and smaller.
if '4' < s[0] < '9':
    s[0] = str(9 - int(s[0]))
for i in range(1, len(s)):
    if s[i] >= '5':
        s[i] = str(9 - int(s[i]))
print(''.join(s))
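Wrapped as a reusable function, the rule the snippet above implements appears to be: replace each digit d ≥ 5 with its complement 9 − d to minimize the number, except that the leading digit is left alone when flipping would make it larger or produce a leading zero (the original problem statement is not part of this document, so that reading is inferred):

```python
def mirror(number: str) -> str:
    """Minimize a number by replacing digits d >= 5 with 9 - d,
    without ever creating a leading zero."""
    digits = list(number)
    # Leading digit: flip only for 5..8 (a 9 would become a leading 0,
    # and digits <= 4 are already minimal).
    if '4' < digits[0] < '9':
        digits[0] = str(9 - int(digits[0]))
    for i in range(1, len(digits)):
        if digits[i] >= '5':
            digits[i] = str(9 - int(digits[i]))
    return ''.join(digits)

print(mirror("4545"))  # -> 4444
```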
|
Monitoring Activities of Daily Living with a Mobile App and Bluetooth Beacons In this paper, we present a preliminary study on the monitoring of activities of daily living (ADL) with a mobile app. We rely on a set of Bluetooth beacons deployed around the household to perform indoor localization. The mobile app also tracks the instrumental ADL (IADL) in terms of app usage, such as apps for social networking, financial management, and personal entertainment. Furthermore, the mobile app collects information regarding the level of physical activity and the environment, such as light, temperature, and air pressure, using the built-in sensors of the smartphone. The latter could help infer the living conditions of the individual, even though the information is not directly about ADL. Tracking ADL by any method will inevitably intrude on the user's privacy. Our mobile app informs the user exactly what information we collect, and all the data are stored locally on the smartphone. The user can view a report of his or her ADL, and has the choice of deleting some or all data. Finally, we propose a feature extraction model for temporal ADL data and demonstrate how the features can be used to classify different behavioral patterns.
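The nearest-beacon localization step described above can be sketched in a few lines, assuming each beacon is tied to a room and the strongest RSSI reading marks the nearest beacon (the beacon IDs and room names below are hypothetical; the study's actual inference method may be more elaborate):

```python
def nearest_room(rssi_readings, beacon_rooms):
    """Map a set of Bluetooth RSSI readings to a room label.

    rssi_readings: dict of beacon_id -> RSSI in dBm (less negative = closer)
    beacon_rooms:  dict of beacon_id -> room label
    """
    if not rssi_readings:
        return None  # no beacons in range
    # Under a nearest-beacon rule, the strongest (least negative) signal wins.
    best_beacon = max(rssi_readings, key=rssi_readings.get)
    return beacon_rooms.get(best_beacon)

readings = {"beacon-kitchen": -62, "beacon-bedroom": -78, "beacon-bath": -90}
rooms = {"beacon-kitchen": "kitchen", "beacon-bedroom": "bedroom", "beacon-bath": "bathroom"}
print(nearest_room(readings, rooms))  # -> kitchen
```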
|
import csv
from django.core.management.base import BaseCommand
from radical_translations.core.models import Resource
class Command(BaseCommand):
help = "Exports `Resource` data into a CSV file."
def handle(self, *args, **options):
self.stdout.write(
"Exporting Resources into CSV file resources.csv ...", ending=" "
)
resources = [resource.to_dict() for resource in Resource.objects.all()]
if not resources:
self.stderr.write(self.style.NOTICE("No resources found!"))
            return
fieldnames = resources[0].keys()
        with open("resources.csv", "w", newline="") as f:
c = csv.DictWriter(f, fieldnames)
c.writeheader()
c.writerows(resources)
self.stdout.write(self.style.SUCCESS("done"))
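The command above takes its CSV header from the first resource's keys. If `to_dict()` can return differing keys per resource, building the header from the union of all keys is safer; a sketch with the plain `csv` module, independent of Django (the field names in the sample rows are hypothetical):

```python
import csv
import io

def rows_to_csv(rows):
    """Serialize a list of dicts to CSV, using the union of all keys as header."""
    fieldnames = []
    for row in rows:
        for key in row:
            if key not in fieldnames:  # preserve first-seen order
                fieldnames.append(key)
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames, restval="")  # blank cell for missing keys
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(rows_to_csv([{"id": 1, "title": "A"}, {"id": 2, "year": 1794}]))
```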
|
A game that wasn't played has become one of the more interesting of the 2018 DODEA-Europe Division II football season.
The canceled game between Aviano and International School of Brussels will stay canceled, DODEA-Europe athletic director Kathlene Clemmons announced this week, with postseason ramifications still to be determined.
The scheduled Sept. 8 season-opening game “has been declared a cancelled game,” Clemmons said, as opposed to a forfeit or postponement.
The original cancellation stemmed from a late-summer flurry of realignment and rescheduling. Baumholder’s inability to produce enough players to field a Division II squad knocked the program down to the Division III level, forcing a final-hour restructuring of both divisions’ schedules. ISB’s long trip to Aviano was among the changes; ISB wasn’t able to secure transportation for the journey and didn’t make the trip to Italy.
But DODEA-Europe opted not to hand ISB a forfeit loss.
“The circumstances were beyond the control of both teams for the scheduled game, as well as a possible make-up date,” Clemmons said.
The lost game could have a major impact on the Division II postseason race.
Aviano and ISB will now play just five games apiece, while all other teams will play six. The top two finishers in the seven-team league are set to play for the European championship game Nov. 2, the only Division II playoff game on the slate this year.
While 0-2 ISB seems unlikely to make a run at one of those two title-game berths, Aviano’s prospects are much brighter. The Saints are 2-0 entering this weekend’s home game against Rota, a battle of the last two undefeated teams in Division II.
DODEA-Europe plans to soon sort out how to account for the missing game.
“After the games this weekend, we will decide how this affects the playoff picture,” Clemmons said.
ISB is scheduled to make a similar trip to the one it missed on Saturday, visiting Aviano's northern Italian neighbor Vicenza.
|
/**
* Created by kliu on 3/6/15.
*/
@Entity
@Table(name = "survey_score_interval")
public class SurveyScoreInterval implements Serializable {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Integer id;
@JoinColumn(name = "survey_id", referencedColumnName = "survey_id")
    @ManyToOne(cascade = CascadeType.ALL, optional = false)
private Survey survey;
@Column(length = 20)
private String min;
@Column(length = 20)
private String max;
@Column(length = 100)
private String meaning;
@Column(name = "exception")
private boolean exception;
public Integer getId() {
return id;
}
public void setId(Integer id) {
this.id = id;
}
public Survey getSurvey() {
return survey;
}
public void setSurvey(Survey survey) {
this.survey = survey;
}
public String getMin() {
return min;
}
public void setMin(String min) {
this.min = min;
}
public String getMax() {
return max;
}
public void setMax(String max) {
this.max = max;
}
public String getMeaning() {
return meaning;
}
public void setMeaning(String meaning) {
this.meaning = meaning;
}
public boolean isException() {
return exception;
}
public void setException(boolean exception) {
this.exception = exception;
}
}
|
A MAGICIAN who is allergic to rabbits had to hold back the tears yesterday as he faced his worst nightmare – a room full of fluffy bunnies.
Entertainer Ian Wragg normally stays as far away as he can from his furry sidekicks – when he is not pulling one from a top hat.
But after his companion of 11 years recently died, the 41-year-old, from Darlington, had to face up to his allergy by visiting the Bunny Burrows rehoming centre, in Richmond, North Yorkshire, to pick out a new star for his magic show.
“As soon as I walked in my eyes started to water and my skin started itching,” he said.
“Then my eyes started to go puffy and close up.”
His new bunny, named Dante, after a 1930s illusionist, will replace his previous faithful companion, Asrah.
The New Zealand dwarf rabbit was a key part of his act for nearly half his career and the pair had faced tough times together.
Back in 2007, they had a lucky escape when Mr Wragg’s Citroen Picasso aquaplaned in torrential rain as they returned home from performing at a children’s party in Yarm.
Asrah’s wooden travel box split in half as the car flipped over an embankment and hit a tree. Fortunately, neither were injured.
Mr Wragg said: “Dante has a lot to live up to, but so far he seems perfect.”
The owner of Bunny Burrows, Gwen Butler, admitted she had been a little worried at the prospect of one of her rabbits going into showbusiness.
“We have dozens of rabbits who need homes, but I only let them go somewhere I know they will be well looked after, so I was a bit worried he would be pulled out of a hat by his ears,” she added.
“But after meeting Ian I am more than happy to let him have one of them.”
|
The mechanistic target of rapamycin pathway in sarcomas: from biology to therapy Introduction: The mechanistic target of rapamycin (mTOR) is a critical node at the junction of multiple intracellular signaling pathways. Amongst its many functions, mTOR instrumentally coordinates protein synthesis and cell growth in response to varying nutrient levels. The mTOR pathway has been implicated and targeted in several malignancies with modest success. Sarcomas are uncommon but diverse mesenchymal tumors for which systemic therapy is often suboptimal. Unsurprisingly, evidence is accumulating for the relevance of mTOR pathway activation across multiple sarcomas. Several clinical trials testing mTOR targeted therapy in sarcomas recently have yielded mixed results. Areas covered: The biology of the mTOR pathway and its relevance to the maintenance of the cancer phenotype shall be reviewed. The preclinical evidence for mTOR pathway activation in a range of sarcoma subtypes, the pharmacology of current and developing agents targeting mTOR, and results of completed clinical trials in sarcomas will also be discussed. In addition, potential reasons for suboptimal activity and avenues for maximizing therapeutic efficacy of targeting mTOR in sarcomas shall be evaluated. Expert opinion: While little doubt exists that mTOR activation has a role in sarcomagenesis, targeting this pathway with rapalogs has thus far yielded suboptimal results. Beyond the use of next-generation mTOR inhibitors to improve target inhibition, future efforts must focus on rational combinations and establishment of robust biomarkers to enhance therapeutic efficacy.
|
/**
* Copyright 2021 Expedia, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import { getLogger } from '@iex/shared/logger';
import { Authorized, FieldResolver, Query, Resolver } from 'type-graphql';
import { Service } from 'typedi';
import { ElasticIndex, uniqueTerms } from '../lib/elasticsearch';
import { UniqueValue, AutocompleteResults } from '../models/autocomplete';
import { Permission } from '../models/permission';
import { UserService } from '../services/user.service';
const logger = getLogger('autocomplete.resolver');
@Service()
@Resolver(() => AutocompleteResults)
export class AutocompleteResolver {
constructor(private readonly userService: UserService) {}
@Authorized<Permission>({ user: true })
@Query(() => AutocompleteResults)
async autocomplete(): Promise<AutocompleteResults> {
return {
tags: [],
authors: [],
teams: [],
skills: []
};
}
@FieldResolver(() => [UniqueValue])
async activityInsights(): Promise<UniqueValue[]> {
try {
return await uniqueTerms('details.insightName.keyword', undefined, ElasticIndex.ACTIVITIES);
} catch (error: any) {
logger.error(error);
return error;
}
}
@FieldResolver(() => [UniqueValue])
async activityUsers(): Promise<UniqueValue[]> {
try {
return await uniqueTerms('user.userName.keyword', undefined, ElasticIndex.ACTIVITIES);
} catch (error: any) {
logger.error(error);
return error;
}
}
@FieldResolver(() => [UniqueValue])
async tags(): Promise<UniqueValue[]> {
try {
return await uniqueTerms('tags.keyword');
} catch (error: any) {
logger.error(error);
return error;
}
}
@FieldResolver(() => [UniqueValue])
async authors(): Promise<UniqueValue[]> {
try {
return await uniqueTerms('contributors.userName.keyword');
} catch (error: any) {
logger.error(error);
return error;
}
}
@FieldResolver(() => [UniqueValue])
async skills(): Promise<UniqueValue[]> {
try {
return await this.userService.getUniqueSkills();
} catch (error: any) {
logger.error(error);
return error;
}
}
@FieldResolver(() => [UniqueValue])
async teams(): Promise<UniqueValue[]> {
try {
const [publishedTeams, userTeams] = await Promise.all([
uniqueTerms('metadata.team.keyword'),
this.userService.getUniqueTeams()
]);
// Combine both teams on Insights and Users
const combined = [...publishedTeams, ...userTeams].reduce<Record<string, number>>((acc, v) => {
if (acc[v.value]) {
acc[v.value] = acc[v.value] + v.occurrences;
} else {
acc[v.value] = v.occurrences;
}
return acc;
}, {});
return Object.keys(combined).map<UniqueValue>((value) => ({ value, occurrences: combined[value] }));
} catch (error: any) {
logger.error(error);
return error;
}
}
}
|
# Definition for a binary tree node.
class TreeNode(object):
def __init__(self, x):
self.val = x
self.left = None
self.right = None
class Solution(object):
    def levelOrder(self, root):
        from collections import deque  # O(1) pops from the front of the queue
        if not root:
            return []
        result = []
        queue = deque([root])
        while queue:
            level_vals = []
            for _ in range(len(queue)):  # process exactly one level per pass
                node = queue.popleft()
                level_vals.append(node.val)
                if node.left:
                    queue.append(node.left)
                if node.right:
                    queue.append(node.right)
            result.append(level_vals)
        return result
if __name__ == '__main__':
tree = TreeNode(3)
tree.left = TreeNode(9)
tree.right = TreeNode(20)
tree.right.left = TreeNode(15)
tree.right.right = TreeNode(7)
print(Solution().levelOrder(tree))
|
Perinatal nursing education for single-room maternity care: an evaluation of a competency-based model. AIMS AND OBJECTIVES To evaluate the success of a competency-based nursing orientation programme for a single-room maternity care unit by measuring improvement in self-reported competency after six months. BACKGROUND Single-room maternity care has challenged obstetrical nurses to provide comprehensive nursing care during all phases of the in-hospital birth experience. In this model, nurses provide intrapartum, postpartum and newborn care in one room. To date, an evaluation of nursing education for single-room maternity care has not been published. DESIGN A prospective cohort design comparing self-reported competencies prior to starting work in the single-room maternity care and six months after. METHODS Nurses completed a competency-based education programme in which they could select from a menu of learning methods and content areas according to their individual needs. Learning methods included classroom lectures, self-paced learning packages, and preceptorships in the clinical area. Competencies were measured by a standardized perinatal self-efficacy tool and a tool developed by the authors for this study, the Single-Room Maternity Care Competency Tool. A paired analysis was undertaken to take into account the paired (before and after) nature of the design. RESULTS Scores on the perinatal self-efficacy scale and the single-room maternity care competency tool were improved. These differences were statistically significant. CONCLUSIONS Improvements in perinatal and single-room maternity care-specific competencies suggest that our education programme was successful in preparing nurses for their new role in the single-room maternity care setting. This conclusion is supported by reported increases in nursing and patient satisfaction in the single-room maternity care compared with the traditional labour/delivery and postpartum settings. 
RELEVANCE TO CLINICAL PRACTICE An education programme tailored to the learning needs of experienced clinical nurses contributes to improvements in nursing competencies and patient care.
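The "paired analysis" the authors describe (before/after competency scores from the same nurses) can be illustrated with a paired t-statistic computed from the per-nurse differences; this is an illustrative sketch with made-up numbers, not the study's actual analysis code:

```python
import math
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t-statistic for before/after measurements on the same subjects."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    sd = stdev(diffs)  # sample standard deviation of the differences
    return mean(diffs) / (sd / math.sqrt(n))  # compare against t with n - 1 df

# Hypothetical self-efficacy scores for five nurses, pre- and post-programme.
before = [3.1, 2.8, 3.5, 2.9, 3.0]
after = [3.9, 3.6, 3.8, 3.7, 3.5]
print(round(paired_t(before, after), 3))
```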
|
#!/usr/bin/env python
# blockly_code_generator.py
# Copyright (C) 2017 Niryo
# All rights reserved.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import rospy
import subprocess
import socket
import os
import signal
import ast
HOST = '127.0.0.1'
def cleanup_node_tcp_port(port_number):
try:
output = subprocess.check_output(['lsof', '-i', 'tcp:' + str(port_number)])
for line in output.split(os.linesep):
if 'node' in line:
lines = line.split()
if len(lines) > 2:
subprocess.call(['kill', '-9', str(lines[1])])
except subprocess.CalledProcessError, e:
pass # No process found
def create_directory(directory_path):
if not os.path.exists(directory_path):
os.makedirs(directory_path)
def save_to_file(filename, text):
with open(filename, 'w') as target:
target.write(text)
def read_file(filename):
with open(filename, 'r') as target:
return target.read()
class BlocklyCodeGenerator:
def __init__(self):
self.blockly_dir = rospy.get_param("~niryo_one_blockly_path")
self.tcp_port = rospy.get_param("~niryo_one_blockly_tcp_port")
self.blockly_xml_file = str(self.blockly_dir) + '/blockly_xml'
self.blockly_python_file = str(self.blockly_dir) + '/blockly_python'
create_directory(self.blockly_dir)
# Cleanup tcp port if another Nodejs server didn't shutdown normally
cleanup_node_tcp_port(self.tcp_port)
# Start Nodejs server
self.blockly_generator_server = subprocess.Popen("blockly_code_generator_server", shell=True)
rospy.loginfo("Blockly code generator started")
self.socket = None
def shutdown(self):
if self.socket:
self.socket.close()
rospy.loginfo("Shutdown blockly code generator : Kill PID : " + str(self.blockly_generator_server.pid))
os.kill(self.blockly_generator_server.pid, signal.SIGINT)
#
# input : correctly formatted XML on one line
# output : generated Python code from XML
#
def get_generated_python_code(self, xml_code):
# 1. Save xml to file
save_to_file(self.blockly_xml_file, xml_code)
# 2. Create socket
try:
self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.socket.settimeout(6)
except socket.error, msg:
return {'status': 400, 'message': msg}
# 3. Connect to nodejs server
try:
self.socket.connect((HOST, self.tcp_port))
except socket.error, msg:
self.socket.close()
return {'status': 400, 'message': msg}
# 4. Send filename to nodejs server
try:
self.socket.sendall(self.blockly_dir)
except socket.error, msg:
self.socket.close()
return {'status': 400, 'message': msg}
# 5. Wait for response
try:
reply = self.socket.recv(1024)
except socket.timeout, msg:
self.socket.close()
return {'status': 400, 'message': 'Could not generate Python code in time'}
# 6. Close socket
self.socket.close()
# 7. Analyze response
try:
reply = ast.literal_eval(reply)
except Exception, e:
return {'status': 400, 'message': e}
if reply['status'] != 200:
return reply
# 8. Return generated code
code = read_file(self.blockly_python_file)
return {'status': 200, 'code': code}
|
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#include <AzToolsFramework/UI/EditorEntityUi/EditorEntityUiHandlerBase.h>
#include <AzCore/Interface/Interface.h>
namespace AzToolsFramework
{
EditorEntityUiHandlerBase::EditorEntityUiHandlerBase()
{
auto editorEntityUiInterface = AZ::Interface<AzToolsFramework::EditorEntityUiInterface>::Get();
AZ_Assert((editorEntityUiInterface != nullptr),
"Editor Entity Ui Handlers require EditorEntityUiInterface on construction.");
m_handlerId = editorEntityUiInterface->RegisterHandler(this);
}
EditorEntityUiHandlerBase::~EditorEntityUiHandlerBase()
{
auto editorEntityUiInterface = AZ::Interface<AzToolsFramework::EditorEntityUiInterface>::Get();
if (editorEntityUiInterface != nullptr)
{
editorEntityUiInterface->UnregisterHandler(this);
}
}
EditorEntityUiHandlerId EditorEntityUiHandlerBase::GetHandlerId()
{
return m_handlerId;
}
QString EditorEntityUiHandlerBase::GenerateItemInfoString([[maybe_unused]] AZ::EntityId entityId) const
{
return QString();
}
QString EditorEntityUiHandlerBase::GenerateItemTooltip([[maybe_unused]] AZ::EntityId entityId) const
{
return QString();
}
QIcon EditorEntityUiHandlerBase::GenerateItemIcon([[maybe_unused]] AZ::EntityId entityId) const
{
return QIcon();
}
bool EditorEntityUiHandlerBase::CanToggleLockVisibility([[maybe_unused]] AZ::EntityId entityId) const
{
return true;
}
bool EditorEntityUiHandlerBase::CanRename([[maybe_unused]] AZ::EntityId entityId) const
{
return true;
}
void EditorEntityUiHandlerBase::PaintItemBackground(
[[maybe_unused]] QPainter* painter,
[[maybe_unused]] const QStyleOptionViewItem& option,
[[maybe_unused]] const QModelIndex& index) const
{
}
void EditorEntityUiHandlerBase::PaintDescendantBackground(
[[maybe_unused]] QPainter* painter,
[[maybe_unused]] const QStyleOptionViewItem& option,
[[maybe_unused]] const QModelIndex& index,
[[maybe_unused]] const QModelIndex& descendantIndex) const
{
}
void EditorEntityUiHandlerBase::PaintDescendantBranchBackground(
[[maybe_unused]] QPainter* painter,
[[maybe_unused]] const QTreeView* view,
[[maybe_unused]] const QRect& rect,
[[maybe_unused]] const QModelIndex& index,
[[maybe_unused]] const QModelIndex& descendantIndex) const
{
}
void EditorEntityUiHandlerBase::PaintItemForeground(
[[maybe_unused]] QPainter* painter,
[[maybe_unused]] const QStyleOptionViewItem& option,
[[maybe_unused]] const QModelIndex& index) const
{
}
void EditorEntityUiHandlerBase::PaintDescendantForeground(
[[maybe_unused]] QPainter* painter,
[[maybe_unused]] const QStyleOptionViewItem& option,
[[maybe_unused]] const QModelIndex& index,
[[maybe_unused]] const QModelIndex& descendantIndex) const
{
}
void EditorEntityUiHandlerBase::OnDoubleClick([[maybe_unused]] AZ::EntityId entityId) const
{
}
} // namespace AzToolsFramework
|
//
// linecount.c
//
//
// Created by <NAME> on 08.09.2017.
//
//
#include "linecount.h"
#include <stdio.h>
int main(void)
{
    int c, nl;

    nl = 0;
    while ((c = getchar()) != EOF) {
        if (c == '\n')
            ++nl;
    }
    printf("%d\n", nl);  /* print the count once, after all input is read */
    return 0;
}
|
ANNAPOLIS, Md. – Pitt's two-game stretch against Navy and Georgia Tech matches the Panthers against two of the nation's top running offenses. The Midshipmen and the Yellow Jackets both run triple-option attacks, and both are on the north side of 300 rushing yards per game.
If Saturday's 24-21 loss at Navy is any indication, the Panthers' defense could be in trouble when they travel to Atlanta next weekend.
Upon further review, however, Pitt's performance in Annapolis may give the Yellow Jackets pause. The Panthers' defense controlled the Navy rushing attack well enough to earn a victory.
Pitt held Navy to just 62 rushing yards in the first half and didn't top the 100-yard mark until late in the third quarter. The Midshipmen ended up with 220 yards on the ground, 85 yards below their season average.
Navy entered the game ranked fourth in the country in fewest three-and-outs, with just 15.8 percent of their drives ending in a three-and-out. Pitt forced four three-and-outs on Saturday – two to open the game and two more to open the second half – before Navy put together two long fourth-quarter drives that ultimately won them the game.
A 10-play, 71-yard drive tied the game with 4:15 left. That score came on the heels of a 16-play, 91-yard drive on their previous possession.
Had the Pitt offense done its part, however, the late Navy drives would likely have been too time-consuming and too late in the game to make much of an impact. Pitt turned the ball over once, settled for field goals on two other first-half drives and managed just 86 yards of offense in the second half.
|
// generated from `xb buildhlsl`
// source: adaptive_triangle.hs.hlsl
const uint8_t adaptive_triangle_hs[] = {
0x44, 0x58, 0x42, 0x43, 0x0A, 0xA5, 0x75, 0xB0, 0x13, 0x0C, 0x82, 0x6C,
0xAB, 0x68, 0xC3, 0xA1, 0x34, 0xFB, 0x63, 0xC1, 0x01, 0x00, 0x00, 0x00,
0xE0, 0x03, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00, 0x38, 0x00, 0x00, 0x00,
0xA4, 0x00, 0x00, 0x00, 0xDC, 0x00, 0x00, 0x00, 0x10, 0x01, 0x00, 0x00,
0xA4, 0x01, 0x00, 0x00, 0x44, 0x03, 0x00, 0x00, 0x52, 0x44, 0x45, 0x46,
0x64, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x3C, 0x00, 0x00, 0x00, 0x01, 0x05, 0x53, 0x48,
0x00, 0x05, 0x00, 0x00, 0x3C, 0x00, 0x00, 0x00, 0x13, 0x13, 0x44, 0x25,
0x3C, 0x00, 0x00, 0x00, 0x18, 0x00, 0x00, 0x00, 0x28, 0x00, 0x00, 0x00,
0x28, 0x00, 0x00, 0x00, 0x24, 0x00, 0x00, 0x00, 0x0C, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x4D, 0x69, 0x63, 0x72, 0x6F, 0x73, 0x6F, 0x66,
0x74, 0x20, 0x28, 0x52, 0x29, 0x20, 0x48, 0x4C, 0x53, 0x4C, 0x20, 0x53,
0x68, 0x61, 0x64, 0x65, 0x72, 0x20, 0x43, 0x6F, 0x6D, 0x70, 0x69, 0x6C,
0x65, 0x72, 0x20, 0x31, 0x30, 0x2E, 0x31, 0x00, 0x49, 0x53, 0x47, 0x4E,
0x30, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00,
0x20, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0x00, 0x00,
0x58, 0x45, 0x54, 0x45, 0x53, 0x53, 0x46, 0x41, 0x43, 0x54, 0x4F, 0x52,
0x00, 0xAB, 0xAB, 0xAB, 0x4F, 0x53, 0x47, 0x4E, 0x2C, 0x00, 0x00, 0x00,
0x01, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x01, 0x0E, 0x00, 0x00, 0x58, 0x45, 0x56, 0x45,
0x52, 0x54, 0x45, 0x58, 0x49, 0x44, 0x00, 0xAB, 0x50, 0x43, 0x53, 0x47,
0x8C, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x08, 0x00, 0x00, 0x00,
0x68, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0D, 0x00, 0x00, 0x00,
0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x0E, 0x00, 0x00,
0x68, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x0D, 0x00, 0x00, 0x00,
0x03, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x01, 0x0E, 0x00, 0x00,
0x68, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x0D, 0x00, 0x00, 0x00,
0x03, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x01, 0x0E, 0x00, 0x00,
0x76, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0E, 0x00, 0x00, 0x00,
0x03, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x01, 0x0E, 0x00, 0x00,
0x53, 0x56, 0x5F, 0x54, 0x65, 0x73, 0x73, 0x46, 0x61, 0x63, 0x74, 0x6F,
0x72, 0x00, 0x53, 0x56, 0x5F, 0x49, 0x6E, 0x73, 0x69, 0x64, 0x65, 0x54,
0x65, 0x73, 0x73, 0x46, 0x61, 0x63, 0x74, 0x6F, 0x72, 0x00, 0xAB, 0xAB,
0x53, 0x48, 0x45, 0x58, 0x98, 0x01, 0x00, 0x00, 0x51, 0x00, 0x03, 0x00,
0x66, 0x00, 0x00, 0x00, 0x71, 0x00, 0x00, 0x01, 0x93, 0x18, 0x00, 0x01,
0x94, 0x18, 0x00, 0x01, 0x95, 0x10, 0x00, 0x01, 0x96, 0x20, 0x00, 0x01,
0x97, 0x18, 0x00, 0x01, 0x6A, 0x08, 0x00, 0x01, 0x72, 0x00, 0x00, 0x01,
0x65, 0x00, 0x00, 0x03, 0x12, 0x20, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00,
0x36, 0x00, 0x00, 0x05, 0x12, 0x20, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00,
0x01, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x3E, 0x00, 0x00, 0x01,
0x73, 0x00, 0x00, 0x01, 0x99, 0x00, 0x00, 0x02, 0x03, 0x00, 0x00, 0x00,
0x5F, 0x00, 0x00, 0x02, 0x00, 0x70, 0x01, 0x00, 0x5F, 0x00, 0x00, 0x04,
0x12, 0x90, 0x21, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x67, 0x00, 0x00, 0x04, 0x12, 0x20, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00,
0x11, 0x00, 0x00, 0x00, 0x67, 0x00, 0x00, 0x04, 0x12, 0x20, 0x10, 0x00,
0x01, 0x00, 0x00, 0x00, 0x12, 0x00, 0x00, 0x00, 0x67, 0x00, 0x00, 0x04,
0x12, 0x20, 0x10, 0x00, 0x02, 0x00, 0x00, 0x00, 0x13, 0x00, 0x00, 0x00,
0x68, 0x00, 0x00, 0x02, 0x01, 0x00, 0x00, 0x00, 0x5B, 0x00, 0x00, 0x04,
0x12, 0x20, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00,
0x1E, 0x00, 0x00, 0x06, 0x12, 0x00, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00,
0x0A, 0x70, 0x01, 0x00, 0x01, 0x40, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x4E, 0x00, 0x00, 0x08, 0x00, 0xD0, 0x00, 0x00, 0x12, 0x00, 0x10, 0x00,
0x00, 0x00, 0x00, 0x00, 0x0A, 0x00, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00,
0x01, 0x40, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x36, 0x00, 0x00, 0x04,
0x22, 0x00, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0A, 0x70, 0x01, 0x00,
0x36, 0x00, 0x00, 0x08, 0x12, 0x20, 0x90, 0x00, 0x1A, 0x00, 0x10, 0x00,
0x00, 0x00, 0x00, 0x00, 0x0A, 0x90, 0xA1, 0x00, 0x0A, 0x00, 0x10, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x3E, 0x00, 0x00, 0x01,
0x73, 0x00, 0x00, 0x01, 0x5F, 0x00, 0x00, 0x04, 0x12, 0x90, 0x21, 0x00,
0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x67, 0x00, 0x00, 0x04,
0x12, 0x20, 0x10, 0x00, 0x03, 0x00, 0x00, 0x00, 0x14, 0x00, 0x00, 0x00,
0x68, 0x00, 0x00, 0x02, 0x01, 0x00, 0x00, 0x00, 0x33, 0x00, 0x00, 0x09,
0x12, 0x00, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0A, 0x90, 0x21, 0x00,
0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x0A, 0x90, 0x21, 0x00,
0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x33, 0x00, 0x00, 0x08,
0x12, 0x20, 0x10, 0x00, 0x03, 0x00, 0x00, 0x00, 0x0A, 0x00, 0x10, 0x00,
0x00, 0x00, 0x00, 0x00, 0x0A, 0x90, 0x21, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x3E, 0x00, 0x00, 0x01, 0x53, 0x54, 0x41, 0x54,
0x94, 0x00, 0x00, 0x00, 0x0A, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x05, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00,
0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x0A, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00,
0x04, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
};
/*************************************************************************
* test-transfo-spherical.c -
*
* $Id: test-transfo-spherical.c,v 1.2 2000/10/05 16:01:43 greg Exp $
*
* Copyright INRIA
*
* AUTHOR:
* <NAME> (<EMAIL>)
*
* CREATION DATE:
* Mon Sep 11 18:52:32 MET DST 2000
*
* ADDITIONS, CHANGES
*
*
*/
#include <stdio.h>
#include <stdlib.h>
#include <math.h>   /* sqrt, fabs */
#include <time.h>
#include <transfo.h>

int main( int argc, char *argv[] )
{
  int i, n = 4;
  double v[3];
  double u1[3], u2[3], u3[3];
  double w;
  double t, p;
  double max = 2147483647; /* (2^31)-1 */
  double r2d = 180.0 / 3.1415926536;

  (void)srandom( time(0) );

  if ( argc >= 2 ) {
    if ( sscanf( argv[1], "%d", &n ) != 1 ) {
      n = 4;
    }
  }

  for ( i = 0; i < n; i++ ) {
    printf( "trial %3d/%d\n", i+1, n );

    /* draw a random vector in [-1,1]^3 and normalize it */
    v[0] = random() / max * 2.0 - 1.0;
    v[1] = random() / max * 2.0 - 1.0;
    v[2] = random() / max * 2.0 - 1.0;
    w = sqrt( v[0]*v[0] + v[1]*v[1] + v[2]*v[2] );
    if ( w < 0.00000001 ) {
      i--;
      continue;
    }
    v[0] /= w;
    v[1] /= w;
    v[2] /= w;

    printf( " ... (%f,%f,%f) -> ", v[0], v[1], v[2] );
    UnitVectorToSphericalAngles( v, &t, &p );
    printf( "(theta=%f, phi=%f) -> ", t*r2d, p*r2d );
    /*
    SphericalAnglesToUnitVector( t, p, u );
    printf( "(%g,%g,%g)\n", fabs(v[0]-u[0]), fabs(v[1]-u[1]), fabs(v[2]-u[2]) );
    */
    SphericalAnglesToUnitsVectors( t, p, u1, u2, u3 );
    printf( "(%g,%g,%g)\n", fabs(v[0]-u1[0]), fabs(v[1]-u1[1]), fabs(v[2]-u1[2]) );

    /* the three vectors should be unit-length and pairwise orthogonal */
    printf( " |u1| = %f, |u2| = %f, |u3| = %f\n",
            sqrt( u1[0]*u1[0] + u1[1]*u1[1] + u1[2]*u1[2] ),
            sqrt( u2[0]*u2[0] + u2[1]*u2[1] + u2[2]*u2[2] ),
            sqrt( u3[0]*u3[0] + u3[1]*u3[1] + u3[2]*u3[2] ) );
    printf( " |u1.u2| = %f, |u1.u3| = %f, |u2.u3| = %f\n",
            fabs( u1[0]*u2[0] + u1[1]*u2[1] + u1[2]*u2[2] ),
            fabs( u1[0]*u3[0] + u1[1]*u3[1] + u1[2]*u3[2] ),
            fabs( u2[0]*u3[0] + u2[1]*u3[1] + u2[2]*u3[2] ) );
  }
  return( 0 );
}
Family Values in the Gospel Tradition

This article discusses the idea of family values, developed in modern Western Protestantism, within the horizon of family-related sayings in the Gospel tradition. In accord with general tendencies of the Hellenistic period, early Christianity opened up the possibility of a religious affiliation different from that suggested by ethnic, tribal, or familial tradition. A first challenge to traditional family obligations can be seen in the lifestyle of Jesus and his earliest disciples: in Mark, Jesus is depicted at a marked distance from his physical family. Common family obligations are questioned, and those who follow him and God's word are called his real family. In Matthew and Luke, the distance is softened by the idea that Jesus' family is already aware of his mission and destiny. In John, his earthly mother is present at the foot of his cross, and Joseph is openly mentioned as his father, but his true origins are in the realm of God, and for his Galilean contemporaries, knowledge of his earthly origins is rather an argument not to believe in him. Finally, in John, family aspects are transferred to the community, which is the new family of God, shaped by the mutual love and support of the disciples. In a global context of theology, the various biblical views on family matters, as well as the different patterns of community structures, have to be negotiated, and the challenge posed by the radical questioning of traditional values in Jesus' ministry should not be ignored.
# Generated by Django 2.0 on 2018-02-10 12:39

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0002_auto_20180209_1654'),
    ]

    operations = [
        migrations.AddField(
            model_name='staff',
            name='profilepic',
            field=models.ImageField(default='static/images/staff.png', upload_to='static/staff_img/'),
        ),
    ]