code stringlengths 75–104k | docstring stringlengths 1–46.9k | text stringlengths 164–112k |
---|---|---|
def process_typedef(self, line):
"""Process a type declaration."""
content = line.split(' ')
type_ = self.type_dict[content[1]]
if type_ == 'c_char' and '[' in line:
# type_ = 'string'
type_ = '%s*%s' % (type_, line[line.index('[') + 1:line.index(']')])
keyword = content[2]
if '[' in keyword:
i = keyword.index('[')
keyword = keyword[:i]
else:
keyword = keyword.replace(';\n', '')  # strip the trailing semicolon
py_line = 'typedefDict["%s"] = "%s"\n' % (keyword, type_)
return py_line | Process a type declaration. | Below is the instruction that describes the task:
### Input:
Process a type declaration.
### Response:
def process_typedef(self, line):
"""Process a type declaration."""
content = line.split(' ')
type_ = self.type_dict[content[1]]
if type_ == 'c_char' and '[' in line:
# type_ = 'string'
type_ = '%s*%s' % (type_, line[line.index('[') + 1:line.index(']')])
keyword = content[2]
if '[' in keyword:
i = keyword.index('[')
keyword = keyword[:i]
else:
keyword = keyword.replace(';\n', '')  # strip the trailing semicolon
py_line = 'typedefDict["%s"] = "%s"\n' % (keyword, type_)
return py_line |
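For a quick check of what `process_typedef` emits, it can be wrapped in a minimal harness. The `TypedefParser` class and its `type_dict` contents below are assumptions for illustration; only the method body mirrors the row above.

```python
class TypedefParser:
    def __init__(self):
        # Hypothetical C-to-ctypes name mapping; the real type_dict may differ.
        self.type_dict = {'char': 'c_char', 'int': 'c_int32'}

    def process_typedef(self, line):
        content = line.split(' ')
        type_ = self.type_dict[content[1]]
        if type_ == 'c_char' and '[' in line:
            # char arrays carry their length along, e.g. c_char*31
            type_ = '%s*%s' % (type_, line[line.index('[') + 1:line.index(']')])
        keyword = content[2]
        if '[' in keyword:
            keyword = keyword[:keyword.index('[')]
        else:
            keyword = keyword.replace(';\n', '')  # strip the trailing semicolon
        return 'typedefDict["%s"] = "%s"\n' % (keyword, type_)


parser = TypedefParser()
print(parser.process_typedef('typedef char InstrumentIDType[31];\n'))
# typedefDict["InstrumentIDType"] = "c_char*31"
```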
def getOutputDevice(self, textureType):
"""
* Returns platform- and texture-type specific adapter identification so that applications and the
compositor are creating textures and swap chains on the same GPU. If an error occurs the device
will be set to 0.
pInstance is an optional parameter that is required only when textureType is TextureType_Vulkan.
[D3D10/11/12 Only (D3D9 Not Supported)]
Returns the adapter LUID that identifies the GPU attached to the HMD. The user should
enumerate all adapters using IDXGIFactory::EnumAdapters and IDXGIAdapter::GetDesc to find
the adapter with the matching LUID, or use IDXGIFactory4::EnumAdapterByLuid.
The discovered IDXGIAdapter should be used to create the device and swap chain.
[Vulkan Only]
Returns the VkPhysicalDevice that should be used by the application.
pInstance must be the instance the application will use to query for the VkPhysicalDevice. The application
must create the VkInstance with extensions returned by IVRCompositor::GetVulkanInstanceExtensionsRequired enabled.
[macOS Only]
For TextureType_IOSurface returns the id<MTLDevice> that should be used by the application.
On 10.13+ for TextureType_OpenGL returns the 'registryId' of the renderer which should be used
by the application. See Apple Technical Q&A QA1168 for information on enumerating GL Renderers, and the
new kCGLRPRegistryIDLow and kCGLRPRegistryIDHigh CGLRendererProperty values in the 10.13 SDK.
Pre 10.13 for TextureType_OpenGL returns 0, as there is no dependable way to correlate the HMDs MTLDevice
with a GL Renderer.
"""
fn = self.function_table.getOutputDevice
pnDevice = c_uint64()
pInstance = VkInstance_T()
fn(byref(pnDevice), textureType, byref(pInstance))
return pnDevice.value, pInstance | * Returns platform- and texture-type specific adapter identification so that applications and the
compositor are creating textures and swap chains on the same GPU. If an error occurs the device
will be set to 0.
pInstance is an optional parameter that is required only when textureType is TextureType_Vulkan.
[D3D10/11/12 Only (D3D9 Not Supported)]
Returns the adapter LUID that identifies the GPU attached to the HMD. The user should
enumerate all adapters using IDXGIFactory::EnumAdapters and IDXGIAdapter::GetDesc to find
the adapter with the matching LUID, or use IDXGIFactory4::EnumAdapterByLuid.
The discovered IDXGIAdapter should be used to create the device and swap chain.
[Vulkan Only]
Returns the VkPhysicalDevice that should be used by the application.
pInstance must be the instance the application will use to query for the VkPhysicalDevice. The application
must create the VkInstance with extensions returned by IVRCompositor::GetVulkanInstanceExtensionsRequired enabled.
[macOS Only]
For TextureType_IOSurface returns the id<MTLDevice> that should be used by the application.
On 10.13+ for TextureType_OpenGL returns the 'registryId' of the renderer which should be used
by the application. See Apple Technical Q&A QA1168 for information on enumerating GL Renderers, and the
new kCGLRPRegistryIDLow and kCGLRPRegistryIDHigh CGLRendererProperty values in the 10.13 SDK.
Pre 10.13 for TextureType_OpenGL returns 0, as there is no dependable way to correlate the HMDs MTLDevice
with a GL Renderer. | Below is the instruction that describes the task:
### Input:
* Returns platform- and texture-type specific adapter identification so that applications and the
compositor are creating textures and swap chains on the same GPU. If an error occurs the device
will be set to 0.
pInstance is an optional parameter that is required only when textureType is TextureType_Vulkan.
[D3D10/11/12 Only (D3D9 Not Supported)]
Returns the adapter LUID that identifies the GPU attached to the HMD. The user should
enumerate all adapters using IDXGIFactory::EnumAdapters and IDXGIAdapter::GetDesc to find
the adapter with the matching LUID, or use IDXGIFactory4::EnumAdapterByLuid.
The discovered IDXGIAdapter should be used to create the device and swap chain.
[Vulkan Only]
Returns the VkPhysicalDevice that should be used by the application.
pInstance must be the instance the application will use to query for the VkPhysicalDevice. The application
must create the VkInstance with extensions returned by IVRCompositor::GetVulkanInstanceExtensionsRequired enabled.
[macOS Only]
For TextureType_IOSurface returns the id<MTLDevice> that should be used by the application.
On 10.13+ for TextureType_OpenGL returns the 'registryId' of the renderer which should be used
by the application. See Apple Technical Q&A QA1168 for information on enumerating GL Renderers, and the
new kCGLRPRegistryIDLow and kCGLRPRegistryIDHigh CGLRendererProperty values in the 10.13 SDK.
Pre 10.13 for TextureType_OpenGL returns 0, as there is no dependable way to correlate the HMDs MTLDevice
with a GL Renderer.
### Response:
def getOutputDevice(self, textureType):
"""
* Returns platform- and texture-type specific adapter identification so that applications and the
compositor are creating textures and swap chains on the same GPU. If an error occurs the device
will be set to 0.
pInstance is an optional parameter that is required only when textureType is TextureType_Vulkan.
[D3D10/11/12 Only (D3D9 Not Supported)]
Returns the adapter LUID that identifies the GPU attached to the HMD. The user should
enumerate all adapters using IDXGIFactory::EnumAdapters and IDXGIAdapter::GetDesc to find
the adapter with the matching LUID, or use IDXGIFactory4::EnumAdapterByLuid.
The discovered IDXGIAdapter should be used to create the device and swap chain.
[Vulkan Only]
Returns the VkPhysicalDevice that should be used by the application.
pInstance must be the instance the application will use to query for the VkPhysicalDevice. The application
must create the VkInstance with extensions returned by IVRCompositor::GetVulkanInstanceExtensionsRequired enabled.
[macOS Only]
For TextureType_IOSurface returns the id<MTLDevice> that should be used by the application.
On 10.13+ for TextureType_OpenGL returns the 'registryId' of the renderer which should be used
by the application. See Apple Technical Q&A QA1168 for information on enumerating GL Renderers, and the
new kCGLRPRegistryIDLow and kCGLRPRegistryIDHigh CGLRendererProperty values in the 10.13 SDK.
Pre 10.13 for TextureType_OpenGL returns 0, as there is no dependable way to correlate the HMDs MTLDevice
with a GL Renderer.
"""
fn = self.function_table.getOutputDevice
pnDevice = c_uint64()
pInstance = VkInstance_T()
fn(byref(pnDevice), textureType, byref(pInstance))
return pnDevice.value, pInstance |
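Setting the OpenVR specifics aside, the wrapper follows the standard ctypes out-parameter pattern: allocate a C-compatible value, pass it by reference, then read `.value` after the call. A self-contained sketch of that pattern, using a plain Python stand-in for the native function (the adapter id it writes is made up):

```python
from ctypes import c_uint64, pointer

def fake_get_output_device(pn_device_ptr):
    # Stand-in for the call made through self.function_table.getOutputDevice;
    # it writes a made-up adapter id through the pointer it receives.
    pn_device_ptr.contents.value = 0x1234ABCD

pn_device = c_uint64()
fake_get_output_device(pointer(pn_device))
print(hex(pn_device.value))  # 0x1234abcd
```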
def initial_data(cls, product_quantities):
''' Prepares initial data for an instance of this form.
product_quantities is a sequence of (product,quantity) tuples '''
f = [
{
_ItemQuantityProductsForm.CHOICE_FIELD: product.id,
_ItemQuantityProductsForm.QUANTITY_FIELD: quantity,
}
for product, quantity in product_quantities
if quantity > 0
]
return f | Prepares initial data for an instance of this form.
product_quantities is a sequence of (product,quantity) tuples | Below is the instruction that describes the task:
### Input:
Prepares initial data for an instance of this form.
product_quantities is a sequence of (product,quantity) tuples
### Response:
def initial_data(cls, product_quantities):
''' Prepares initial data for an instance of this form.
product_quantities is a sequence of (product,quantity) tuples '''
f = [
{
_ItemQuantityProductsForm.CHOICE_FIELD: product.id,
_ItemQuantityProductsForm.QUANTITY_FIELD: quantity,
}
for product, quantity in product_quantities
if quantity > 0
]
return f |
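The list-comprehension transform is easy to verify in isolation. In the sketch below the `Product` stand-in and the field-name constants are assumptions; the real form class defines `CHOICE_FIELD` and `QUANTITY_FIELD` itself.

```python
from collections import namedtuple

Product = namedtuple('Product', 'id name')  # stand-in for the real product model

# Hypothetical field names; the real values come from _ItemQuantityProductsForm.
CHOICE_FIELD = 'product'
QUANTITY_FIELD = 'quantity'

def initial_data(product_quantities):
    return [
        {CHOICE_FIELD: product.id, QUANTITY_FIELD: quantity}
        for product, quantity in product_quantities
        if quantity > 0
    ]

rows = [(Product(1, 'T-shirt'), 2), (Product(2, 'Sticker'), 0)]
print(initial_data(rows))
# [{'product': 1, 'quantity': 2}]  -- the zero-quantity row is dropped
```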
def set_lacp_timeout(self, name, value=None):
"""Configures the Port-Channel LACP fallback timeout
The fallback timeout configures the period an interface in
fallback mode remains in LACP mode without receiving a PDU.
Args:
name(str): The Port-Channel interface name
value(int): port-channel lacp fallback timeout in seconds
Returns:
True if the operation succeeds otherwise False is returned
"""
commands = ['interface %s' % name]
string = 'port-channel lacp fallback timeout'
commands.append(self.command_builder(string, value=value))
return self.configure(commands) | Configures the Port-Channel LACP fallback timeout
The fallback timeout configures the period an interface in
fallback mode remains in LACP mode without receiving a PDU.
Args:
name(str): The Port-Channel interface name
value(int): port-channel lacp fallback timeout in seconds
Returns:
True if the operation succeeds otherwise False is returned | Below is the instruction that describes the task:
### Input:
Configures the Port-Channel LACP fallback timeout
The fallback timeout configures the period an interface in
fallback mode remains in LACP mode without receiving a PDU.
Args:
name(str): The Port-Channel interface name
value(int): port-channel lacp fallback timeout in seconds
Returns:
True if the operation succeeds otherwise False is returned
### Response:
def set_lacp_timeout(self, name, value=None):
"""Configures the Port-Channel LACP fallback timeout
The fallback timeout configures the period an interface in
fallback mode remains in LACP mode without receiving a PDU.
Args:
name(str): The Port-Channel interface name
value(int): port-channel lacp fallback timeout in seconds
Returns:
True if the operation succeeds otherwise False is returned
"""
commands = ['interface %s' % name]
string = 'port-channel lacp fallback timeout'
commands.append(self.command_builder(string, value=value))
return self.configure(commands) |
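A rough sketch of the command list this method would send for `set_lacp_timeout('Port-Channel10', 100)`. The `command_builder` below is a simplified stand-in for pyeapi's real helper, not its exact implementation.

```python
def command_builder(string, value=None):
    # Simplified stand-in: omit the value to negate the setting, otherwise
    # append it to the keyword string.
    if value is None:
        return 'no %s' % string
    return '%s %s' % (string, value)

commands = ['interface Port-Channel10',
            command_builder('port-channel lacp fallback timeout', value=100)]
print(commands)
# ['interface Port-Channel10', 'port-channel lacp fallback timeout 100']
```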
def update_user_pool_client(UserPoolId=None, ClientId=None, ClientName=None, RefreshTokenValidity=None, ReadAttributes=None, WriteAttributes=None, ExplicitAuthFlows=None, SupportedIdentityProviders=None, CallbackURLs=None, LogoutURLs=None, DefaultRedirectURI=None, AllowedOAuthFlows=None, AllowedOAuthScopes=None, AllowedOAuthFlowsUserPoolClient=None):
"""
Allows the developer to update the specified user pool client and password policy.
See also: AWS API Documentation
:example: response = client.update_user_pool_client(
UserPoolId='string',
ClientId='string',
ClientName='string',
RefreshTokenValidity=123,
ReadAttributes=[
'string',
],
WriteAttributes=[
'string',
],
ExplicitAuthFlows=[
'ADMIN_NO_SRP_AUTH'|'CUSTOM_AUTH_FLOW_ONLY',
],
SupportedIdentityProviders=[
'string',
],
CallbackURLs=[
'string',
],
LogoutURLs=[
'string',
],
DefaultRedirectURI='string',
AllowedOAuthFlows=[
'code'|'implicit'|'client_credentials',
],
AllowedOAuthScopes=[
'string',
],
AllowedOAuthFlowsUserPoolClient=True|False
)
:type UserPoolId: string
:param UserPoolId: [REQUIRED]
The user pool ID for the user pool where you want to update the user pool client.
:type ClientId: string
:param ClientId: [REQUIRED]
The ID of the client associated with the user pool.
:type ClientName: string
:param ClientName: The client name from the update user pool client request.
:type RefreshTokenValidity: integer
:param RefreshTokenValidity: The time limit, in days, after which the refresh token is no longer valid and cannot be used.
:type ReadAttributes: list
:param ReadAttributes: The read-only attributes of the user pool.
(string) --
:type WriteAttributes: list
:param WriteAttributes: The writeable attributes of the user pool.
(string) --
:type ExplicitAuthFlows: list
:param ExplicitAuthFlows: Explicit authentication flows.
(string) --
:type SupportedIdentityProviders: list
:param SupportedIdentityProviders: A list of provider names for the identity providers that are supported on this client.
(string) --
:type CallbackURLs: list
:param CallbackURLs: A list of allowed callback URLs for the identity providers.
(string) --
:type LogoutURLs: list
:param LogoutURLs: A list of allowed logout URLs for the identity providers.
(string) --
:type DefaultRedirectURI: string
:param DefaultRedirectURI: The default redirect URI. Must be in the CallbackURLs list.
:type AllowedOAuthFlows: list
:param AllowedOAuthFlows: Set to code to initiate a code grant flow, which provides an authorization code as the response. This code can be exchanged for access tokens with the token endpoint.
Set to token to specify that the client should get the access token (and, optionally, ID token, based on scopes) directly.
(string) --
:type AllowedOAuthScopes: list
:param AllowedOAuthScopes: A list of allowed OAuth scopes. Currently supported values are 'phone' , 'email' , 'openid' , and 'Cognito' .
(string) --
:type AllowedOAuthFlowsUserPoolClient: boolean
:param AllowedOAuthFlowsUserPoolClient: Set to TRUE if the client is allowed to follow the OAuth protocol when interacting with Cognito user pools.
:rtype: dict
:return: {
'UserPoolClient': {
'UserPoolId': 'string',
'ClientName': 'string',
'ClientId': 'string',
'ClientSecret': 'string',
'LastModifiedDate': datetime(2015, 1, 1),
'CreationDate': datetime(2015, 1, 1),
'RefreshTokenValidity': 123,
'ReadAttributes': [
'string',
],
'WriteAttributes': [
'string',
],
'ExplicitAuthFlows': [
'ADMIN_NO_SRP_AUTH'|'CUSTOM_AUTH_FLOW_ONLY',
],
'SupportedIdentityProviders': [
'string',
],
'CallbackURLs': [
'string',
],
'LogoutURLs': [
'string',
],
'DefaultRedirectURI': 'string',
'AllowedOAuthFlows': [
'code'|'implicit'|'client_credentials',
],
'AllowedOAuthScopes': [
'string',
],
'AllowedOAuthFlowsUserPoolClient': True|False
}
}
:returns:
(string) --
"""
pass | Allows the developer to update the specified user pool client and password policy.
See also: AWS API Documentation
:example: response = client.update_user_pool_client(
UserPoolId='string',
ClientId='string',
ClientName='string',
RefreshTokenValidity=123,
ReadAttributes=[
'string',
],
WriteAttributes=[
'string',
],
ExplicitAuthFlows=[
'ADMIN_NO_SRP_AUTH'|'CUSTOM_AUTH_FLOW_ONLY',
],
SupportedIdentityProviders=[
'string',
],
CallbackURLs=[
'string',
],
LogoutURLs=[
'string',
],
DefaultRedirectURI='string',
AllowedOAuthFlows=[
'code'|'implicit'|'client_credentials',
],
AllowedOAuthScopes=[
'string',
],
AllowedOAuthFlowsUserPoolClient=True|False
)
:type UserPoolId: string
:param UserPoolId: [REQUIRED]
The user pool ID for the user pool where you want to update the user pool client.
:type ClientId: string
:param ClientId: [REQUIRED]
The ID of the client associated with the user pool.
:type ClientName: string
:param ClientName: The client name from the update user pool client request.
:type RefreshTokenValidity: integer
:param RefreshTokenValidity: The time limit, in days, after which the refresh token is no longer valid and cannot be used.
:type ReadAttributes: list
:param ReadAttributes: The read-only attributes of the user pool.
(string) --
:type WriteAttributes: list
:param WriteAttributes: The writeable attributes of the user pool.
(string) --
:type ExplicitAuthFlows: list
:param ExplicitAuthFlows: Explicit authentication flows.
(string) --
:type SupportedIdentityProviders: list
:param SupportedIdentityProviders: A list of provider names for the identity providers that are supported on this client.
(string) --
:type CallbackURLs: list
:param CallbackURLs: A list of allowed callback URLs for the identity providers.
(string) --
:type LogoutURLs: list
:param LogoutURLs: A list of allowed logout URLs for the identity providers.
(string) --
:type DefaultRedirectURI: string
:param DefaultRedirectURI: The default redirect URI. Must be in the CallbackURLs list.
:type AllowedOAuthFlows: list
:param AllowedOAuthFlows: Set to code to initiate a code grant flow, which provides an authorization code as the response. This code can be exchanged for access tokens with the token endpoint.
Set to token to specify that the client should get the access token (and, optionally, ID token, based on scopes) directly.
(string) --
:type AllowedOAuthScopes: list
:param AllowedOAuthScopes: A list of allowed OAuth scopes. Currently supported values are 'phone' , 'email' , 'openid' , and 'Cognito' .
(string) --
:type AllowedOAuthFlowsUserPoolClient: boolean
:param AllowedOAuthFlowsUserPoolClient: Set to TRUE if the client is allowed to follow the OAuth protocol when interacting with Cognito user pools.
:rtype: dict
:return: {
'UserPoolClient': {
'UserPoolId': 'string',
'ClientName': 'string',
'ClientId': 'string',
'ClientSecret': 'string',
'LastModifiedDate': datetime(2015, 1, 1),
'CreationDate': datetime(2015, 1, 1),
'RefreshTokenValidity': 123,
'ReadAttributes': [
'string',
],
'WriteAttributes': [
'string',
],
'ExplicitAuthFlows': [
'ADMIN_NO_SRP_AUTH'|'CUSTOM_AUTH_FLOW_ONLY',
],
'SupportedIdentityProviders': [
'string',
],
'CallbackURLs': [
'string',
],
'LogoutURLs': [
'string',
],
'DefaultRedirectURI': 'string',
'AllowedOAuthFlows': [
'code'|'implicit'|'client_credentials',
],
'AllowedOAuthScopes': [
'string',
],
'AllowedOAuthFlowsUserPoolClient': True|False
}
}
:returns:
(string) -- | Below is the instruction that describes the task:
### Input:
Allows the developer to update the specified user pool client and password policy.
See also: AWS API Documentation
:example: response = client.update_user_pool_client(
UserPoolId='string',
ClientId='string',
ClientName='string',
RefreshTokenValidity=123,
ReadAttributes=[
'string',
],
WriteAttributes=[
'string',
],
ExplicitAuthFlows=[
'ADMIN_NO_SRP_AUTH'|'CUSTOM_AUTH_FLOW_ONLY',
],
SupportedIdentityProviders=[
'string',
],
CallbackURLs=[
'string',
],
LogoutURLs=[
'string',
],
DefaultRedirectURI='string',
AllowedOAuthFlows=[
'code'|'implicit'|'client_credentials',
],
AllowedOAuthScopes=[
'string',
],
AllowedOAuthFlowsUserPoolClient=True|False
)
:type UserPoolId: string
:param UserPoolId: [REQUIRED]
The user pool ID for the user pool where you want to update the user pool client.
:type ClientId: string
:param ClientId: [REQUIRED]
The ID of the client associated with the user pool.
:type ClientName: string
:param ClientName: The client name from the update user pool client request.
:type RefreshTokenValidity: integer
:param RefreshTokenValidity: The time limit, in days, after which the refresh token is no longer valid and cannot be used.
:type ReadAttributes: list
:param ReadAttributes: The read-only attributes of the user pool.
(string) --
:type WriteAttributes: list
:param WriteAttributes: The writeable attributes of the user pool.
(string) --
:type ExplicitAuthFlows: list
:param ExplicitAuthFlows: Explicit authentication flows.
(string) --
:type SupportedIdentityProviders: list
:param SupportedIdentityProviders: A list of provider names for the identity providers that are supported on this client.
(string) --
:type CallbackURLs: list
:param CallbackURLs: A list of allowed callback URLs for the identity providers.
(string) --
:type LogoutURLs: list
:param LogoutURLs: A list of allowed logout URLs for the identity providers.
(string) --
:type DefaultRedirectURI: string
:param DefaultRedirectURI: The default redirect URI. Must be in the CallbackURLs list.
:type AllowedOAuthFlows: list
:param AllowedOAuthFlows: Set to code to initiate a code grant flow, which provides an authorization code as the response. This code can be exchanged for access tokens with the token endpoint.
Set to token to specify that the client should get the access token (and, optionally, ID token, based on scopes) directly.
(string) --
:type AllowedOAuthScopes: list
:param AllowedOAuthScopes: A list of allowed OAuth scopes. Currently supported values are 'phone' , 'email' , 'openid' , and 'Cognito' .
(string) --
:type AllowedOAuthFlowsUserPoolClient: boolean
:param AllowedOAuthFlowsUserPoolClient: Set to TRUE if the client is allowed to follow the OAuth protocol when interacting with Cognito user pools.
:rtype: dict
:return: {
'UserPoolClient': {
'UserPoolId': 'string',
'ClientName': 'string',
'ClientId': 'string',
'ClientSecret': 'string',
'LastModifiedDate': datetime(2015, 1, 1),
'CreationDate': datetime(2015, 1, 1),
'RefreshTokenValidity': 123,
'ReadAttributes': [
'string',
],
'WriteAttributes': [
'string',
],
'ExplicitAuthFlows': [
'ADMIN_NO_SRP_AUTH'|'CUSTOM_AUTH_FLOW_ONLY',
],
'SupportedIdentityProviders': [
'string',
],
'CallbackURLs': [
'string',
],
'LogoutURLs': [
'string',
],
'DefaultRedirectURI': 'string',
'AllowedOAuthFlows': [
'code'|'implicit'|'client_credentials',
],
'AllowedOAuthScopes': [
'string',
],
'AllowedOAuthFlowsUserPoolClient': True|False
}
}
:returns:
(string) --
### Response:
def update_user_pool_client(UserPoolId=None, ClientId=None, ClientName=None, RefreshTokenValidity=None, ReadAttributes=None, WriteAttributes=None, ExplicitAuthFlows=None, SupportedIdentityProviders=None, CallbackURLs=None, LogoutURLs=None, DefaultRedirectURI=None, AllowedOAuthFlows=None, AllowedOAuthScopes=None, AllowedOAuthFlowsUserPoolClient=None):
"""
Allows the developer to update the specified user pool client and password policy.
See also: AWS API Documentation
:example: response = client.update_user_pool_client(
UserPoolId='string',
ClientId='string',
ClientName='string',
RefreshTokenValidity=123,
ReadAttributes=[
'string',
],
WriteAttributes=[
'string',
],
ExplicitAuthFlows=[
'ADMIN_NO_SRP_AUTH'|'CUSTOM_AUTH_FLOW_ONLY',
],
SupportedIdentityProviders=[
'string',
],
CallbackURLs=[
'string',
],
LogoutURLs=[
'string',
],
DefaultRedirectURI='string',
AllowedOAuthFlows=[
'code'|'implicit'|'client_credentials',
],
AllowedOAuthScopes=[
'string',
],
AllowedOAuthFlowsUserPoolClient=True|False
)
:type UserPoolId: string
:param UserPoolId: [REQUIRED]
The user pool ID for the user pool where you want to update the user pool client.
:type ClientId: string
:param ClientId: [REQUIRED]
The ID of the client associated with the user pool.
:type ClientName: string
:param ClientName: The client name from the update user pool client request.
:type RefreshTokenValidity: integer
:param RefreshTokenValidity: The time limit, in days, after which the refresh token is no longer valid and cannot be used.
:type ReadAttributes: list
:param ReadAttributes: The read-only attributes of the user pool.
(string) --
:type WriteAttributes: list
:param WriteAttributes: The writeable attributes of the user pool.
(string) --
:type ExplicitAuthFlows: list
:param ExplicitAuthFlows: Explicit authentication flows.
(string) --
:type SupportedIdentityProviders: list
:param SupportedIdentityProviders: A list of provider names for the identity providers that are supported on this client.
(string) --
:type CallbackURLs: list
:param CallbackURLs: A list of allowed callback URLs for the identity providers.
(string) --
:type LogoutURLs: list
:param LogoutURLs: A list of allowed logout URLs for the identity providers.
(string) --
:type DefaultRedirectURI: string
:param DefaultRedirectURI: The default redirect URI. Must be in the CallbackURLs list.
:type AllowedOAuthFlows: list
:param AllowedOAuthFlows: Set to code to initiate a code grant flow, which provides an authorization code as the response. This code can be exchanged for access tokens with the token endpoint.
Set to token to specify that the client should get the access token (and, optionally, ID token, based on scopes) directly.
(string) --
:type AllowedOAuthScopes: list
:param AllowedOAuthScopes: A list of allowed OAuth scopes. Currently supported values are 'phone' , 'email' , 'openid' , and 'Cognito' .
(string) --
:type AllowedOAuthFlowsUserPoolClient: boolean
:param AllowedOAuthFlowsUserPoolClient: Set to TRUE if the client is allowed to follow the OAuth protocol when interacting with Cognito user pools.
:rtype: dict
:return: {
'UserPoolClient': {
'UserPoolId': 'string',
'ClientName': 'string',
'ClientId': 'string',
'ClientSecret': 'string',
'LastModifiedDate': datetime(2015, 1, 1),
'CreationDate': datetime(2015, 1, 1),
'RefreshTokenValidity': 123,
'ReadAttributes': [
'string',
],
'WriteAttributes': [
'string',
],
'ExplicitAuthFlows': [
'ADMIN_NO_SRP_AUTH'|'CUSTOM_AUTH_FLOW_ONLY',
],
'SupportedIdentityProviders': [
'string',
],
'CallbackURLs': [
'string',
],
'LogoutURLs': [
'string',
],
'DefaultRedirectURI': 'string',
'AllowedOAuthFlows': [
'code'|'implicit'|'client_credentials',
],
'AllowedOAuthScopes': [
'string',
],
'AllowedOAuthFlowsUserPoolClient': True|False
}
}
:returns:
(string) --
"""
pass |
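A hypothetical boto3 call matching the documented shape; the pool id, client id, and settings below are placeholders rather than real resources.

```python
import boto3

client = boto3.client('cognito-idp')

# Placeholder identifiers -- substitute a real pool and app client.
response = client.update_user_pool_client(
    UserPoolId='us-east-1_EXAMPLE',
    ClientId='1example23456789',
    ClientName='my-app-client',
    RefreshTokenValidity=30,
    ExplicitAuthFlows=['ADMIN_NO_SRP_AUTH'],
)
print(response['UserPoolClient']['ClientName'])
```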
def flight_id(self) -> Optional[str]:
"""Returns the unique flight_id value(s) of the DataFrame.
If you know how to split flights, you may want to append such a column
in the DataFrame.
"""
if "flight_id" not in self.data.columns:
return None
tmp = set(self.data.flight_id)
if len(tmp) != 1:
logging.warning("Several ids for one flight, consider splitting")
return tmp.pop() | Returns the unique flight_id value(s) of the DataFrame.
If you know how to split flights, you may want to append such a column
in the DataFrame. | Below is the instruction that describes the task:
### Input:
Returns the unique flight_id value(s) of the DataFrame.
If you know how to split flights, you may want to append such a column
in the DataFrame.
### Response:
def flight_id(self) -> Optional[str]:
"""Returns the unique flight_id value(s) of the DataFrame.
If you know how to split flights, you may want to append such a column
in the DataFrame.
"""
if "flight_id" not in self.data.columns:
return None
tmp = set(self.data.flight_id)
if len(tmp) != 1:
logging.warning("Several ids for one flight, consider splitting")
return tmp.pop() |
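The same logic can be exercised against a bare DataFrame. The sketch below is a standalone rewrite for illustration; the real method lives on a flight wrapper whose `.data` attribute holds the DataFrame.

```python
import logging
from typing import Optional

import pandas as pd

def flight_id(data: pd.DataFrame) -> Optional[str]:
    # Return the single flight_id, or None if the column is absent.
    if "flight_id" not in data.columns:
        return None
    tmp = set(data.flight_id)
    if len(tmp) != 1:
        logging.warning("Several ids for one flight, consider splitting")
    return tmp.pop()

df = pd.DataFrame({"flight_id": ["AF1234"] * 3, "altitude": [1000, 2000, 3000]})
print(flight_id(df))  # AF1234
```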
def _get_app_config(self, app_name):
"""
Returns an app config for the given name, not by label.
"""
matches = [app_config for app_config in apps.get_app_configs()
if app_config.name == app_name]
if not matches:
return
return matches[0] | Returns an app config for the given name, not by label. | Below is the instruction that describes the task:
### Input:
Returns an app config for the given name, not by label.
### Response:
def _get_app_config(self, app_name):
"""
Returns an app config for the given name, not by label.
"""
matches = [app_config for app_config in apps.get_app_configs()
if app_config.name == app_name]
if not matches:
return
return matches[0] |
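The same lookup can also be written without building an intermediate list. The sketch below uses stand-in objects instead of Django's `apps.get_app_configs()` so that it runs on its own; only the `.name` attribute matters here.

```python
from types import SimpleNamespace

# Stand-ins for Django AppConfig instances.
app_configs = [SimpleNamespace(name='django.contrib.admin'),
               SimpleNamespace(name='myproject.blog')]

def get_app_config(app_name):
    # next() with a default avoids materialising the full matches list.
    return next((cfg for cfg in app_configs if cfg.name == app_name), None)

print(get_app_config('myproject.blog').name)  # myproject.blog
print(get_app_config('missing.app'))          # None
```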
def calc_requiredremotesupply_v1(self):
"""Calculate the required maximum supply from another location
that can be discharged into the dam.
Required control parameters:
|HighestRemoteSupply|
|WaterLevelSupplyThreshold|
Required derived parameter:
|WaterLevelSupplySmoothPar|
Required aide sequence:
|WaterLevel|
Calculated flux sequence:
|RequiredRemoteSupply|
Basic equation:
:math:`RequiredRemoteSupply = HighestRemoteSupply \\cdot
smooth_{logistic1}(WaterLevelSupplyThreshold-WaterLevel,
WaterLevelSupplySmoothPar)`
Used auxiliary method:
|smooth_logistic1|
Examples:
Method |calc_requiredremotesupply_v1| is functionally identical
with method |calc_allowedremoterelieve_v2|. Hence the following
examples serve for testing purposes only (see the documentation
on function |calc_allowedremoterelieve_v2| for more detailed
information):
>>> from hydpy import pub
>>> pub.timegrids = '2001.03.30', '2001.04.03', '1d'
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> highestremotesupply(_11_1_12=1.0, _03_31_12=1.0,
... _04_1_12=2.0, _10_31_12=2.0)
>>> waterlevelsupplythreshold(_11_1_12=3.0, _03_31_12=2.0,
... _04_1_12=4.0, _10_31_12=4.0)
>>> waterlevelsupplytolerance(_11_1_12=0.0, _03_31_12=0.0,
... _04_1_12=1.0, _10_31_12=1.0)
>>> derived.waterlevelsupplysmoothpar.update()
>>> derived.toy.update()
>>> from hydpy import UnitTest
>>> test = UnitTest(model,
... model.calc_requiredremotesupply_v1,
... last_example=9,
... parseqs=(aides.waterlevel,
... fluxes.requiredremotesupply))
>>> test.nexts.waterlevel = range(9)
>>> model.idx_sim = pub.timegrids.init['2001.03.30']
>>> test(first_example=2, last_example=6)
| ex. | waterlevel | requiredremotesupply |
-------------------------------------------
| 3 | 1.0 | 1.0 |
| 4 | 2.0 | 1.0 |
| 5 | 3.0 | 0.0 |
| 6 | 4.0 | 0.0 |
>>> model.idx_sim = pub.timegrids.init['2001.04.01']
>>> test()
| ex. | waterlevel | requiredremotesupply |
-------------------------------------------
| 1 | 0.0 | 2.0 |
| 2 | 1.0 | 1.999998 |
| 3 | 2.0 | 1.999796 |
| 4 | 3.0 | 1.98 |
| 5 | 4.0 | 1.0 |
| 6 | 5.0 | 0.02 |
| 7 | 6.0 | 0.000204 |
| 8 | 7.0 | 0.000002 |
| 9 | 8.0 | 0.0 |
"""
con = self.parameters.control.fastaccess
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
aid = self.sequences.aides.fastaccess
toy = der.toy[self.idx_sim]
flu.requiredremotesupply = (
con.highestremotesupply[toy] *
smoothutils.smooth_logistic1(
con.waterlevelsupplythreshold[toy]-aid.waterlevel,
der.waterlevelsupplysmoothpar[toy])) | Calculate the required maximum supply from another location
that can be discharged into the dam.
Required control parameters:
|HighestRemoteSupply|
|WaterLevelSupplyThreshold|
Required derived parameter:
|WaterLevelSupplySmoothPar|
Required aide sequence:
|WaterLevel|
Calculated flux sequence:
|RequiredRemoteSupply|
Basic equation:
:math:`RequiredRemoteSupply = HighestRemoteSupply \\cdot
smooth_{logistic1}(WaterLevelSupplyThreshold-WaterLevel,
WaterLevelSupplySmoothPar)`
Used auxiliary method:
|smooth_logistic1|
Examples:
Method |calc_requiredremotesupply_v1| is functionally identical
with method |calc_allowedremoterelieve_v2|. Hence the following
examples serve for testing purposes only (see the documentation
on function |calc_allowedremoterelieve_v2| for more detailed
information):
>>> from hydpy import pub
>>> pub.timegrids = '2001.03.30', '2001.04.03', '1d'
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> highestremotesupply(_11_1_12=1.0, _03_31_12=1.0,
... _04_1_12=2.0, _10_31_12=2.0)
>>> waterlevelsupplythreshold(_11_1_12=3.0, _03_31_12=2.0,
... _04_1_12=4.0, _10_31_12=4.0)
>>> waterlevelsupplytolerance(_11_1_12=0.0, _03_31_12=0.0,
... _04_1_12=1.0, _10_31_12=1.0)
>>> derived.waterlevelsupplysmoothpar.update()
>>> derived.toy.update()
>>> from hydpy import UnitTest
>>> test = UnitTest(model,
... model.calc_requiredremotesupply_v1,
... last_example=9,
... parseqs=(aides.waterlevel,
... fluxes.requiredremotesupply))
>>> test.nexts.waterlevel = range(9)
>>> model.idx_sim = pub.timegrids.init['2001.03.30']
>>> test(first_example=2, last_example=6)
| ex. | waterlevel | requiredremotesupply |
-------------------------------------------
| 3 | 1.0 | 1.0 |
| 4 | 2.0 | 1.0 |
| 5 | 3.0 | 0.0 |
| 6 | 4.0 | 0.0 |
>>> model.idx_sim = pub.timegrids.init['2001.04.01']
>>> test()
| ex. | waterlevel | requiredremotesupply |
-------------------------------------------
| 1 | 0.0 | 2.0 |
| 2 | 1.0 | 1.999998 |
| 3 | 2.0 | 1.999796 |
| 4 | 3.0 | 1.98 |
| 5 | 4.0 | 1.0 |
| 6 | 5.0 | 0.02 |
| 7 | 6.0 | 0.000204 |
| 8 | 7.0 | 0.000002 |
| 9 | 8.0 | 0.0 | | Below is the instruction that describes the task:
### Input:
Calculate the required maximum supply from another location
that can be discharged into the dam.
Required control parameters:
|HighestRemoteSupply|
|WaterLevelSupplyThreshold|
Required derived parameter:
|WaterLevelSupplySmoothPar|
Required aide sequence:
|WaterLevel|
Calculated flux sequence:
|RequiredRemoteSupply|
Basic equation:
:math:`RequiredRemoteSupply = HighestRemoteSupply \\cdot
smooth_{logistic1}(WaterLevelSupplyThreshold-WaterLevel,
WaterLevelSupplySmoothPar)`
Used auxiliary method:
|smooth_logistic1|
Examples:
Method |calc_requiredremotesupply_v1| is functionally identical
with method |calc_allowedremoterelieve_v2|. Hence the following
examples serve for testing purposes only (see the documentation
on function |calc_allowedremoterelieve_v2| for more detailed
information):
>>> from hydpy import pub
>>> pub.timegrids = '2001.03.30', '2001.04.03', '1d'
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> highestremotesupply(_11_1_12=1.0, _03_31_12=1.0,
... _04_1_12=2.0, _10_31_12=2.0)
>>> waterlevelsupplythreshold(_11_1_12=3.0, _03_31_12=2.0,
... _04_1_12=4.0, _10_31_12=4.0)
>>> waterlevelsupplytolerance(_11_1_12=0.0, _03_31_12=0.0,
... _04_1_12=1.0, _10_31_12=1.0)
>>> derived.waterlevelsupplysmoothpar.update()
>>> derived.toy.update()
>>> from hydpy import UnitTest
>>> test = UnitTest(model,
... model.calc_requiredremotesupply_v1,
... last_example=9,
... parseqs=(aides.waterlevel,
... fluxes.requiredremotesupply))
>>> test.nexts.waterlevel = range(9)
>>> model.idx_sim = pub.timegrids.init['2001.03.30']
>>> test(first_example=2, last_example=6)
| ex. | waterlevel | requiredremotesupply |
-------------------------------------------
| 3 | 1.0 | 1.0 |
| 4 | 2.0 | 1.0 |
| 5 | 3.0 | 0.0 |
| 6 | 4.0 | 0.0 |
>>> model.idx_sim = pub.timegrids.init['2001.04.01']
>>> test()
| ex. | waterlevel | requiredremotesupply |
-------------------------------------------
| 1 | 0.0 | 2.0 |
| 2 | 1.0 | 1.999998 |
| 3 | 2.0 | 1.999796 |
| 4 | 3.0 | 1.98 |
| 5 | 4.0 | 1.0 |
| 6 | 5.0 | 0.02 |
| 7 | 6.0 | 0.000204 |
| 8 | 7.0 | 0.000002 |
| 9 | 8.0 | 0.0 |
### Response:
def calc_requiredremotesupply_v1(self):
"""Calculate the required maximum supply from another location
that can be discharged into the dam.
Required control parameters:
|HighestRemoteSupply|
|WaterLevelSupplyThreshold|
Required derived parameter:
|WaterLevelSupplySmoothPar|
Required aide sequence:
|WaterLevel|
Calculated flux sequence:
|RequiredRemoteSupply|
Basic equation:
:math:`RequiredRemoteSupply = HighestRemoteSupply \\cdot
smooth_{logistic1}(WaterLevelSupplyThreshold-WaterLevel,
WaterLevelSupplySmoothPar)`
Used auxiliary method:
|smooth_logistic1|
Examples:
Method |calc_requiredremotesupply_v1| is functionally identical
with method |calc_allowedremoterelieve_v2|. Hence the following
examples serve for testing purposes only (see the documentation
on function |calc_allowedremoterelieve_v2| for more detailed
information):
>>> from hydpy import pub
>>> pub.timegrids = '2001.03.30', '2001.04.03', '1d'
>>> from hydpy.models.dam import *
>>> parameterstep()
>>> highestremotesupply(_11_1_12=1.0, _03_31_12=1.0,
... _04_1_12=2.0, _10_31_12=2.0)
>>> waterlevelsupplythreshold(_11_1_12=3.0, _03_31_12=2.0,
... _04_1_12=4.0, _10_31_12=4.0)
>>> waterlevelsupplytolerance(_11_1_12=0.0, _03_31_12=0.0,
... _04_1_12=1.0, _10_31_12=1.0)
>>> derived.waterlevelsupplysmoothpar.update()
>>> derived.toy.update()
>>> from hydpy import UnitTest
>>> test = UnitTest(model,
... model.calc_requiredremotesupply_v1,
... last_example=9,
... parseqs=(aides.waterlevel,
... fluxes.requiredremotesupply))
>>> test.nexts.waterlevel = range(9)
>>> model.idx_sim = pub.timegrids.init['2001.03.30']
>>> test(first_example=2, last_example=6)
| ex. | waterlevel | requiredremotesupply |
-------------------------------------------
| 3 | 1.0 | 1.0 |
| 4 | 2.0 | 1.0 |
| 5 | 3.0 | 0.0 |
| 6 | 4.0 | 0.0 |
>>> model.idx_sim = pub.timegrids.init['2001.04.01']
>>> test()
| ex. | waterlevel | requiredremotesupply |
-------------------------------------------
| 1 | 0.0 | 2.0 |
| 2 | 1.0 | 1.999998 |
| 3 | 2.0 | 1.999796 |
| 4 | 3.0 | 1.98 |
| 5 | 4.0 | 1.0 |
| 6 | 5.0 | 0.02 |
| 7 | 6.0 | 0.000204 |
| 8 | 7.0 | 0.000002 |
| 9 | 8.0 | 0.0 |
"""
con = self.parameters.control.fastaccess
der = self.parameters.derived.fastaccess
flu = self.sequences.fluxes.fastaccess
aid = self.sequences.aides.fastaccess
toy = der.toy[self.idx_sim]
flu.requiredremotesupply = (
con.highestremotesupply[toy] *
smoothutils.smooth_logistic1(
con.waterlevelsupplythreshold[toy]-aid.waterlevel,
der.waterlevelsupplysmoothpar[toy])) |
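To see the shape of the basic equation without a HydPy project, the sketch below uses a plain logistic as a stand-in for `smoothutils.smooth_logistic1`; the real function is parametrised differently, so only the qualitative behaviour carries over.

```python
import numpy as np

def logistic_stand_in(x, smoothpar):
    # Plain logistic -- only mimics the smooth threshold transition, not the
    # calibration of hydpy's smooth_logistic1.
    return 1.0 / (1.0 + np.exp(-x / smoothpar))

highest_remote_supply = 2.0          # HighestRemoteSupply
threshold, smoothpar = 4.0, 1.0      # WaterLevelSupplyThreshold, smoothing par
waterlevels = np.arange(0.0, 9.0)    # aide sequence WaterLevel
required = highest_remote_supply * logistic_stand_in(threshold - waterlevels,
                                                     smoothpar)
print(np.round(required, 3))
# close to 2.0 well below the threshold, dropping towards 0.0 above it
```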
def avoid_dead_links(root, machine, wrap_around=False):
"""Modify a RoutingTree to route-around dead links in a Machine.
Uses A* to reconnect disconnected branches of the tree (due to dead links
in the machine).
Parameters
----------
root : :py:class:`~rig.place_and_route.routing_tree.RoutingTree`
The root of the RoutingTree which contains nothing but RoutingTrees
(i.e. no vertices and links).
machine : :py:class:`~rig.place_and_route.Machine`
The machine in which the routes exist.
wrap_around : bool
Consider wrap-around links in pathfinding heuristics.
Returns
-------
(:py:class:`~.rig.place_and_route.routing_tree.RoutingTree`,
{(x,y): :py:class:`~.rig.place_and_route.routing_tree.RoutingTree`, ...})
A new RoutingTree is produced rooted as before. A dictionary mapping
from (x, y) to the associated RoutingTree is provided for convenience.
Raises
------
:py:class:`~rig.place_and_route.exceptions.MachineHasDisconnectedSubregion`
If a path to reconnect the tree cannot be found.
"""
# Make a copy of the RoutingTree with all broken parts disconnected
root, lookup, broken_links = copy_and_disconnect_tree(root, machine)
# For each disconnected subtree, use A* to connect the tree to *any* other
# disconnected subtree. Note that this process will eventually result in
# all disconnected subtrees being connected, the result is a fully
# connected tree.
for parent, child in broken_links:
child_chips = set(c.chip for c in lookup[child])
# Try to reconnect broken links to any other part of the tree
# (excluding this broken subtree itself since that would create a
# cycle).
path = a_star(child, parent,
set(lookup).difference(child_chips),
machine, wrap_around)
# Add new RoutingTree nodes to reconnect the child to the tree.
last_node = lookup[path[0][1]]
last_direction = path[0][0]
for direction, (x, y) in path[1:]:
if (x, y) not in child_chips:
# This path segment traverses new ground so we must create a
# new RoutingTree for the segment.
new_node = RoutingTree((x, y))
# A* will not traverse anything but chips in this tree so this
# assert is meerly a sanity check that this ocurred correctly.
assert (x, y) not in lookup, "Cycle created."
lookup[(x, y)] = new_node
else:
# This path segment overlaps part of the disconnected tree
# (A* doesn't know where the disconnected tree is and thus
# doesn't avoid it). To prevent cycles being introduced, this
# overlapped node is severed from its parent and merged as part
# of the A* path.
new_node = lookup[(x, y)]
# Find the node's current parent and disconnect it.
for node in lookup[child]: # pragma: no branch
dn = [(d, n) for d, n in node.children if n == new_node]
assert len(dn) <= 1
if dn:
node.children.remove(dn[0])
# A node can only have one parent so we can stop now.
break
last_node.children.append((Routes(last_direction), new_node))
last_node = new_node
last_direction = direction
last_node.children.append((last_direction, lookup[child]))
return (root, lookup) | Modify a RoutingTree to route-around dead links in a Machine.
Uses A* to reconnect disconnected branches of the tree (due to dead links
in the machine).
Parameters
----------
root : :py:class:`~rig.place_and_route.routing_tree.RoutingTree`
The root of the RoutingTree which contains nothing but RoutingTrees
(i.e. no vertices and links).
machine : :py:class:`~rig.place_and_route.Machine`
The machine in which the routes exist.
wrap_around : bool
Consider wrap-around links in pathfinding heuristics.
Returns
-------
(:py:class:`~.rig.place_and_route.routing_tree.RoutingTree`,
{(x,y): :py:class:`~.rig.place_and_route.routing_tree.RoutingTree`, ...})
A new RoutingTree is produced rooted as before. A dictionary mapping
from (x, y) to the associated RoutingTree is provided for convenience.
Raises
------
:py:class:`~rig.place_and_route.exceptions.MachineHasDisconnectedSubregion`
If a path to reconnect the tree cannot be found. | Below is the instruction that describes the task:
### Input:
Modify a RoutingTree to route-around dead links in a Machine.
Uses A* to reconnect disconnected branches of the tree (due to dead links
in the machine).
Parameters
----------
root : :py:class:`~rig.place_and_route.routing_tree.RoutingTree`
The root of the RoutingTree which contains nothing but RoutingTrees
(i.e. no vertices and links).
machine : :py:class:`~rig.place_and_route.Machine`
The machine in which the routes exist.
wrap_around : bool
Consider wrap-around links in pathfinding heuristics.
Returns
-------
(:py:class:`~.rig.place_and_route.routing_tree.RoutingTree`,
{(x,y): :py:class:`~.rig.place_and_route.routing_tree.RoutingTree`, ...})
A new RoutingTree is produced rooted as before. A dictionary mapping
from (x, y) to the associated RoutingTree is provided for convenience.
Raises
------
:py:class:`~rig.place_and_route.exceptions.MachineHasDisconnectedSubregion`
If a path to reconnect the tree cannot be found.
### Response:
def avoid_dead_links(root, machine, wrap_around=False):
"""Modify a RoutingTree to route-around dead links in a Machine.
Uses A* to reconnect disconnected branches of the tree (due to dead links
in the machine).
Parameters
----------
root : :py:class:`~rig.place_and_route.routing_tree.RoutingTree`
The root of the RoutingTree which contains nothing but RoutingTrees
(i.e. no vertices and links).
machine : :py:class:`~rig.place_and_route.Machine`
The machine in which the routes exist.
wrap_around : bool
Consider wrap-around links in pathfinding heuristics.
Returns
-------
(:py:class:`~.rig.place_and_route.routing_tree.RoutingTree`,
{(x,y): :py:class:`~.rig.place_and_route.routing_tree.RoutingTree`, ...})
A new RoutingTree is produced rooted as before. A dictionary mapping
from (x, y) to the associated RoutingTree is provided for convenience.
Raises
------
:py:class:`~rig.place_and_route.exceptions.MachineHasDisconnectedSubregion`
If a path to reconnect the tree cannot be found.
"""
# Make a copy of the RoutingTree with all broken parts disconnected
root, lookup, broken_links = copy_and_disconnect_tree(root, machine)
# For each disconnected subtree, use A* to connect the tree to *any* other
# disconnected subtree. Note that this process will eventually result in
# all disconnected subtrees being connected, the result is a fully
# connected tree.
for parent, child in broken_links:
child_chips = set(c.chip for c in lookup[child])
# Try to reconnect broken links to any other part of the tree
# (excluding this broken subtree itself since that would create a
# cycle).
path = a_star(child, parent,
set(lookup).difference(child_chips),
machine, wrap_around)
# Add new RoutingTree nodes to reconnect the child to the tree.
last_node = lookup[path[0][1]]
last_direction = path[0][0]
for direction, (x, y) in path[1:]:
if (x, y) not in child_chips:
# This path segment traverses new ground so we must create a
# new RoutingTree for the segment.
new_node = RoutingTree((x, y))
# A* will not traverse anything but chips in this tree so this
# assert is meerly a sanity check that this ocurred correctly.
assert (x, y) not in lookup, "Cycle created."
lookup[(x, y)] = new_node
else:
# This path segment overlaps part of the disconnected tree
# (A* doesn't know where the disconnected tree is and thus
# doesn't avoid it). To prevent cycles being introduced, this
# overlapped node is severed from its parent and merged as part
# of the A* path.
new_node = lookup[(x, y)]
# Find the node's current parent and disconnect it.
for node in lookup[child]: # pragma: no branch
dn = [(d, n) for d, n in node.children if n == new_node]
assert len(dn) <= 1
if dn:
node.children.remove(dn[0])
# A node can only have one parent so we can stop now.
break
last_node.children.append((Routes(last_direction), new_node))
last_node = new_node
last_direction = direction
last_node.children.append((last_direction, lookup[child]))
return (root, lookup) |
def new_request(sender, request=None, notify=True, **kwargs):
"""New request for inclusion."""
if current_app.config['COMMUNITIES_MAIL_ENABLED'] and notify:
send_community_request_email(request) | New request for inclusion. | Below is the instruction that describes the task:
### Input:
New request for inclusion.
### Response:
def new_request(sender, request=None, notify=True, **kwargs):
"""New request for inclusion."""
if current_app.config['COMMUNITIES_MAIL_ENABLED'] and notify:
send_community_request_email(request) |
def get_weekly_charts(self, chart_kind, from_date=None, to_date=None):
"""
Returns the weekly charts for the week starting from the
from_date value to the to_date value.
chart_kind should be one of "album", "artist" or "track"
"""
method = ".getWeekly" + chart_kind.title() + "Chart"
chart_type = eval(chart_kind.title()) # string to type
params = self._get_params()
if from_date and to_date:
params["from"] = from_date
params["to"] = to_date
doc = self._request(self.ws_prefix + method, True, params)
seq = []
for node in doc.getElementsByTagName(chart_kind.lower()):
if chart_kind == "artist":
item = chart_type(_extract(node, "name"), self.network)
else:
item = chart_type(
_extract(node, "artist"), _extract(node, "name"), self.network
)
weight = _number(_extract(node, "playcount"))
seq.append(TopItem(item, weight))
return seq | Returns the weekly charts for the week starting from the
from_date value to the to_date value.
chart_kind should be one of "album", "artist" or "track" | Below is the instruction that describes the task:
### Input:
Returns the weekly charts for the week starting from the
from_date value to the to_date value.
chart_kind should be one of "album", "artist" or "track"
### Response:
def get_weekly_charts(self, chart_kind, from_date=None, to_date=None):
"""
Returns the weekly charts for the week starting from the
from_date value to the to_date value.
chart_kind should be one of "album", "artist" or "track"
"""
method = ".getWeekly" + chart_kind.title() + "Chart"
chart_type = eval(chart_kind.title()) # string to type
params = self._get_params()
if from_date and to_date:
params["from"] = from_date
params["to"] = to_date
doc = self._request(self.ws_prefix + method, True, params)
seq = []
for node in doc.getElementsByTagName(chart_kind.lower()):
if chart_kind == "artist":
item = chart_type(_extract(node, "name"), self.network)
else:
item = chart_type(
_extract(node, "artist"), _extract(node, "name"), self.network
)
weight = _number(_extract(node, "playcount"))
seq.append(TopItem(item, weight))
return seq |
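A hypothetical caller, assuming a recent pylast release; the API key, secret, and username are placeholders. In pylast, public wrappers such as `get_weekly_artist_charts` are the usual entry point for this helper.

```python
import pylast

# Placeholder credentials -- obtain real ones from https://www.last.fm/api.
network = pylast.LastFMNetwork(api_key="YOUR_API_KEY", api_secret="YOUR_SECRET")
user = network.get_user("some_user")

# Corresponds to the helper above with chart_kind="artist".
for top_item in user.get_weekly_artist_charts():
    print(top_item.item, top_item.weight)
```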
def _discrete_colorize(categorical, hue, scheme, k, cmap, vmin, vmax):
"""
Creates a discrete colormap, either using an already-categorical data variable or by bucketing a non-categorical
ordinal one. If a scheme is provided we compute a distribution for the given data. If one is not provided we
assume that the input data is categorical.
This code makes extensive use of ``geopandas`` choropleth facilities.
Parameters
----------
categorical : boolean
Whether or not the input variable is already categorical.
hue : iterable
The data column whose entries are being discretely colorized. Note that although top-level plotter ``hue``
parameters ingest many argument signatures, not just iterables, they are all preprocessed to standardized
iterables before this method is called.
scheme : str
The mapclassify binning scheme to be used for splitting data values (or rather, the string representation
thereof).
k : int
The number of bins which will be used. This parameter will be ignored if ``categorical`` is True. The default
value should be 5---this should be set before this method is called.
cmap : ``matplotlib.cm`` instance
The `matplotlib` colormap instance which will be used to colorize the geometries. This colormap
determines the spectrum; our algorithm determines the cuts.
vmin : float
A strict floor on the value associated with the "bottom" of the colormap spectrum. Data column entries whose
value is below this level will all be colored by the same threshold value.
vmax : float
A strict ceiling on the value associated with the "top" of the colormap spectrum. Data column entries whose
value is above this level will all be colored by the same threshold value.
Returns
-------
(cmap, categories, values) : tuple
A tuple meant for assignment containing the values for various properties set by this method call.
"""
if not categorical:
binning = _mapclassify_choro(hue, scheme, k=k)
values = binning.yb
binedges = [binning.yb.min()] + binning.bins.tolist()
categories = ['{0:.2f} - {1:.2f}'.format(binedges[i], binedges[i + 1])
for i in range(len(binedges) - 1)]
else:
categories = np.unique(hue)
if len(categories) > 10:
warnings.warn("Generating a colormap using a categorical column with over 10 individual categories. "
"This is not recommended!")
value_map = {v: i for i, v in enumerate(categories)}
values = [value_map[d] for d in hue]
cmap = _norm_cmap(values, cmap, mpl.colors.Normalize, mpl.cm, vmin=vmin, vmax=vmax)
return cmap, categories, values | Creates a discrete colormap, either using an already-categorical data variable or by bucketing a non-categorical
ordinal one. If a scheme is provided we compute a distribution for the given data. If one is not provided we
assume that the input data is categorical.
This code makes extensive use of ``geopandas`` choropleth facilities.
Parameters
----------
categorical : boolean
Whether or not the input variable is already categorical.
hue : iterable
The data column whose entries are being discretely colorized. Note that although top-level plotter ``hue``
parameters ingest many argument signatures, not just iterables, they are all preprocessed to standardized
iterables before this method is called.
scheme : str
The mapclassify binning scheme to be used for splitting data values (or rather, the string representation
thereof).
k : int
The number of bins which will be used. This parameter will be ignored if ``categorical`` is True. The default
value should be 5---this should be set before this method is called.
cmap : ``matplotlib.cm`` instance
The `matplotlib` colormap instance which will be used to colorize the geometries. This colormap
determines the spectrum; our algorithm determines the cuts.
vmin : float
A strict floor on the value associated with the "bottom" of the colormap spectrum. Data column entries whose
value is below this level will all be colored by the same threshold value.
vmax : float
A strict ceiling on the value associated with the "top" of the colormap spectrum. Data column entries whose
value is above this level will all be colored by the same threshold value.
Returns
-------
(cmap, categories, values) : tuple
A tuple meant for assignment containing the values for various properties set by this method call. | Below is the instruction that describes the task:
### Input:
Creates a discrete colormap, either using an already-categorical data variable or by bucketing a non-categorical
ordinal one. If a scheme is provided we compute a distribution for the given data. If one is not provided we
assume that the input data is categorical.
This code makes extensive use of ``geopandas`` choropleth facilities.
Parameters
----------
categorical : boolean
Whether or not the input variable is already categorical.
hue : iterable
The data column whose entries are being discretely colorized. Note that although top-level plotter ``hue``
parameters ingest many argument signatures, not just iterables, they are all preprocessed to standardized
iterables before this method is called.
scheme : str
The mapclassify binning scheme to be used for splitting data values (or rather, the string representation
thereof).
k : int
The number of bins which will be used. This parameter will be ignored if ``categorical`` is True. The default
value should be 5---this should be set before this method is called.
cmap : ``matplotlib.cm`` instance
The `matplotlib` colormap instance which will be used to colorize the geometries. This colormap
determines the spectrum; our algorithm determines the cuts.
vmin : float
A strict floor on the value associated with the "bottom" of the colormap spectrum. Data column entries whose
value is below this level will all be colored by the same threshold value.
vmax : float
A strict ceiling on the value associated with the "top" of the colormap spectrum. Data column entries whose
value is above this level will all be colored by the same threshold value.
Returns
-------
(cmap, categories, values) : tuple
A tuple meant for assignment containing the values for various properties set by this method call.
### Response:
def _discrete_colorize(categorical, hue, scheme, k, cmap, vmin, vmax):
"""
Creates a discrete colormap, either using an already-categorical data variable or by bucketing a non-categorical
ordinal one. If a scheme is provided we compute a distribution for the given data. If one is not provided we
assume that the input data is categorical.
This code makes extensive use of ``geopandas`` choropleth facilities.
Parameters
----------
categorical : boolean
Whether or not the input variable is already categorical.
hue : iterable
The data column whose entries are being discretely colorized. Note that although top-level plotter ``hue``
parameters ingest many argument signatures, not just iterables, they are all preprocessed to standardized
iterables before this method is called.
scheme : str
The mapclassify binning scheme to be used for splitting data values (or rather, the string representation
thereof).
k : int
The number of bins which will be used. This parameter will be ignored if ``categorical`` is True. The default
value should be 5---this should be set before this method is called.
cmap : ``matplotlib.cm`` instance
The `matplotlib` colormap instance which will be used to colorize the geometries. This colormap
determines the spectrum; our algorithm determines the cuts.
vmin : float
A strict floor on the value associated with the "bottom" of the colormap spectrum. Data column entries whose
value is below this level will all be colored by the same threshold value.
vmax : float
A strict ceiling on the value associated with the "top" of the colormap spectrum. Data column entries whose
value is above this level will all be colored by the same threshold value.
Returns
-------
(cmap, categories, values) : tuple
A tuple meant for assignment containing the values for various properties set by this method call.
"""
if not categorical:
binning = _mapclassify_choro(hue, scheme, k=k)
values = binning.yb
binedges = [binning.yb.min()] + binning.bins.tolist()
categories = ['{0:.2f} - {1:.2f}'.format(binedges[i], binedges[i + 1])
for i in range(len(binedges) - 1)]
else:
categories = np.unique(hue)
if len(categories) > 10:
warnings.warn("Generating a colormap using a categorical column with over 10 individual categories. "
"This is not recommended!")
value_map = {v: i for i, v in enumerate(categories)}
values = [value_map[d] for d in hue]
cmap = _norm_cmap(values, cmap, mpl.colors.Normalize, mpl.cm, vmin=vmin, vmax=vmax)
return cmap, categories, values |
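The non-categorical branch is easy to reproduce on its own. The sketch below bins a random column with mapclassify and rebuilds the "low - high" labels the same way; `Quantiles` stands in for whatever scheme string a caller would pass.

```python
import mapclassify
import numpy as np

hue = np.random.default_rng(0).uniform(0.0, 100.0, 200)
binning = mapclassify.Quantiles(hue, k=5)

values = binning.yb  # bin index per observation (the function's `values`)
binedges = [binning.yb.min()] + binning.bins.tolist()
categories = ['{0:.2f} - {1:.2f}'.format(binedges[i], binedges[i + 1])
              for i in range(len(binedges) - 1)]
print(categories)  # five "low - high" labels, one per bin
```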
async def sleep(self, duration: float=0.0) -> None:
'''Simple wrapper around `asyncio.sleep()`.'''
duration = max(0, duration)
if duration > 0:
Log.debug('sleeping task %s for %.1f seconds', self.name, duration)
await asyncio.sleep(duration) | Simple wrapper around `asyncio.sleep()`. | Below is the instruction that describes the task:
### Input:
Simple wrapper around `asyncio.sleep()`.
### Response:
async def sleep(self, duration: float=0.0) -> None:
'''Simple wrapper around `asyncio.sleep()`.'''
duration = max(0, duration)
if duration > 0:
Log.debug('sleeping task %s for %.1f seconds', self.name, duration)
await asyncio.sleep(duration) |
def cli(env, **kwargs):
"""host order options for a given dedicated host.
To get a list of available backend routers see example:
slcli dh create-options --datacenter dal05 --flavor 56_CORES_X_242_RAM_X_1_4_TB
"""
mgr = SoftLayer.DedicatedHostManager(env.client)
tables = []
if not kwargs['flavor'] and not kwargs['datacenter']:
options = mgr.get_create_options()
# Datacenters
dc_table = formatting.Table(['datacenter', 'value'])
dc_table.sortby = 'value'
for location in options['locations']:
dc_table.add_row([location['name'], location['key']])
tables.append(dc_table)
dh_table = formatting.Table(['Dedicated Virtual Host Flavor(s)', 'value'])
dh_table.sortby = 'value'
for item in options['dedicated_host']:
dh_table.add_row([item['name'], item['key']])
tables.append(dh_table)
else:
if kwargs['flavor'] is None or kwargs['datacenter'] is None:
raise exceptions.ArgumentError('Both a flavor and datacenter need '
'to be passed as arguments '
'ex. slcli dh create-options -d '
'ams01 -f '
'56_CORES_X_242_RAM_X_1_4_TB')
router_opt = mgr.get_router_options(kwargs['datacenter'], kwargs['flavor'])
br_table = formatting.Table(
['Available Backend Routers'])
for router in router_opt:
br_table.add_row([router['hostname']])
tables.append(br_table)
env.fout(formatting.listing(tables, separator='\n')) | host order options for a given dedicated host.
To get a list of available backend routers see example:
slcli dh create-options --datacenter dal05 --flavor 56_CORES_X_242_RAM_X_1_4_TB | Below is the instruction that describes the task:
### Input:
host order options for a given dedicated host.
To get a list of available backend routers see example:
slcli dh create-options --datacenter dal05 --flavor 56_CORES_X_242_RAM_X_1_4_TB
### Response:
def cli(env, **kwargs):
"""host order options for a given dedicated host.
To get a list of available backend routers see example:
slcli dh create-options --datacenter dal05 --flavor 56_CORES_X_242_RAM_X_1_4_TB
"""
mgr = SoftLayer.DedicatedHostManager(env.client)
tables = []
if not kwargs['flavor'] and not kwargs['datacenter']:
options = mgr.get_create_options()
# Datacenters
dc_table = formatting.Table(['datacenter', 'value'])
dc_table.sortby = 'value'
for location in options['locations']:
dc_table.add_row([location['name'], location['key']])
tables.append(dc_table)
dh_table = formatting.Table(['Dedicated Virtual Host Flavor(s)', 'value'])
dh_table.sortby = 'value'
for item in options['dedicated_host']:
dh_table.add_row([item['name'], item['key']])
tables.append(dh_table)
else:
if kwargs['flavor'] is None or kwargs['datacenter'] is None:
raise exceptions.ArgumentError('Both a flavor and datacenter need '
'to be passed as arguments '
'ex. slcli dh create-options -d '
'ams01 -f '
'56_CORES_X_242_RAM_X_1_4_TB')
router_opt = mgr.get_router_options(kwargs['datacenter'], kwargs['flavor'])
br_table = formatting.Table(
['Available Backend Routers'])
for router in router_opt:
br_table.add_row([router['hostname']])
tables.append(br_table)
env.fout(formatting.listing(tables, separator='\n')) |
def objects_to_td64ns(data, unit="ns", errors="raise"):
"""
Convert an object-dtyped or string-dtyped array into a
timedelta64[ns]-dtyped array.
Parameters
----------
data : ndarray or Index
unit : str, default "ns"
The timedelta unit to treat integers as multiples of.
errors : {"raise", "coerce", "ignore"}, default "raise"
How to handle elements that cannot be converted to timedelta64[ns].
See ``pandas.to_timedelta`` for details.
Returns
-------
numpy.ndarray : timedelta64[ns] array converted from data
Raises
------
ValueError : Data cannot be converted to timedelta64[ns].
Notes
-----
Unlike `pandas.to_timedelta`, setting `errors=ignore` will not cause
errors to be ignored; they are caught and subsequently ignored at a
higher level.
"""
# coerce Index to np.ndarray, converting string-dtype if necessary
values = np.array(data, dtype=np.object_, copy=False)
result = array_to_timedelta64(values,
unit=unit, errors=errors)
return result.view('timedelta64[ns]') | Convert an object-dtyped or string-dtyped array into a
timedelta64[ns]-dtyped array.
Parameters
----------
data : ndarray or Index
unit : str, default "ns"
The timedelta unit to treat integers as multiples of.
errors : {"raise", "coerce", "ignore"}, default "raise"
How to handle elements that cannot be converted to timedelta64[ns].
See ``pandas.to_timedelta`` for details.
Returns
-------
numpy.ndarray : timedelta64[ns] array converted from data
Raises
------
ValueError : Data cannot be converted to timedelta64[ns].
Notes
-----
Unlike `pandas.to_timedelta`, setting `errors=ignore` will not cause
errors to be ignored; they are caught and subsequently ignored at a
higher level. | Below is the instruction that describes the task:
### Input:
Convert an object-dtyped or string-dtyped array into a
timedelta64[ns]-dtyped array.
Parameters
----------
data : ndarray or Index
unit : str, default "ns"
The timedelta unit to treat integers as multiples of.
errors : {"raise", "coerce", "ignore"}, default "raise"
How to handle elements that cannot be converted to timedelta64[ns].
See ``pandas.to_timedelta`` for details.
Returns
-------
numpy.ndarray : timedelta64[ns] array converted from data
Raises
------
ValueError : Data cannot be converted to timedelta64[ns].
Notes
-----
Unlike `pandas.to_timedelta`, setting `errors=ignore` will not cause
errors to be ignored; they are caught and subsequently ignored at a
higher level.
### Response:
def objects_to_td64ns(data, unit="ns", errors="raise"):
"""
Convert an object-dtyped or string-dtyped array into a
timedelta64[ns]-dtyped array.
Parameters
----------
data : ndarray or Index
unit : str, default "ns"
The timedelta unit to treat integers as multiples of.
errors : {"raise", "coerce", "ignore"}, default "raise"
How to handle elements that cannot be converted to timedelta64[ns].
See ``pandas.to_timedelta`` for details.
Returns
-------
numpy.ndarray : timedelta64[ns] array converted from data
Raises
------
ValueError : Data cannot be converted to timedelta64[ns].
Notes
-----
Unlike `pandas.to_timedelta`, setting `errors=ignore` will not cause
errors to be ignored; they are caught and subsequently ignored at a
higher level.
"""
# coerce Index to np.ndarray, converting string-dtype if necessary
values = np.array(data, dtype=np.object_, copy=False)
result = array_to_timedelta64(values,
unit=unit, errors=errors)
return result.view('timedelta64[ns]') |
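A hedged usage sketch for the helper above. The import path is an assumption (this is a pandas-internal function and may move between versions); the sample array is illustrative only.
import numpy as np
# Assumed internal import path; not a public pandas API.
from pandas.core.arrays.timedeltas import objects_to_td64ns
arr = np.array(['1 days', '00:00:30', 'bogus'], dtype=object)
out = objects_to_td64ns(arr, unit='ns', errors='coerce')  # 'bogus' becomes NaT
print(out.dtype)  # timedelta64[ns]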
def get_tile_images_by_rect(self, rect):
""" Speed up data access
More efficient because data is accessed and cached locally
"""
def rev(seq, start, stop):
if start < 0:
start = 0
return enumerate(seq[start:stop + 1], start)
x1, y1, x2, y2 = rect_to_bb(rect)
images = self.tmx.images
layers = self.tmx.layers
at = self._animated_tile
tracked_gids = self._tracked_gids
anim_map = self._animation_map
track = bool(self._animation_queue)
for l in self.tmx.visible_tile_layers:
for y, row in rev(layers[l].data, y1, y2):
for x, gid in [i for i in rev(row, x1, x2) if i[1]]:
# since the tile has been queried, assume it wants to be checked
# for animations sometime in the future
if track and gid in tracked_gids:
anim_map[gid].positions.add((x, y, l))
try:
# animated, so return the correct frame
yield x, y, l, at[(x, y, l)]
except KeyError:
# not animated, so return surface from data, if any
yield x, y, l, images[gid] | Speed up data access
More efficient because data is accessed and cached locally | Below is the instruction that describes the task:
### Input:
Speed up data access
More efficient because data is accessed and cached locally
### Response:
def get_tile_images_by_rect(self, rect):
""" Speed up data access
More efficient because data is accessed and cached locally
"""
def rev(seq, start, stop):
if start < 0:
start = 0
return enumerate(seq[start:stop + 1], start)
x1, y1, x2, y2 = rect_to_bb(rect)
images = self.tmx.images
layers = self.tmx.layers
at = self._animated_tile
tracked_gids = self._tracked_gids
anim_map = self._animation_map
track = bool(self._animation_queue)
for l in self.tmx.visible_tile_layers:
for y, row in rev(layers[l].data, y1, y2):
for x, gid in [i for i in rev(row, x1, x2) if i[1]]:
# since the tile has been queried, assume it wants to be checked
# for animations sometime in the future
if track and gid in tracked_gids:
anim_map[gid].positions.add((x, y, l))
try:
# animated, so return the correct frame
yield x, y, l, at[(x, y, l)]
except KeyError:
# not animated, so return surface from data, if any
yield x, y, l, images[gid] |
def as_set(self, preserve_casing=False):
"""Return the set as real python set type. When calling this, all
the items are converted to lowercase and the ordering is lost.
:param preserve_casing: if set to `True` the items in the set returned
will have the original case like in the
:class:`HeaderSet`, otherwise they will
be lowercase.
"""
if preserve_casing:
return set(self._headers)
return set(self._set) | Return the set as real python set type. When calling this, all
the items are converted to lowercase and the ordering is lost.
:param preserve_casing: if set to `True` the items in the set returned
will have the original case like in the
:class:`HeaderSet`, otherwise they will
be lowercase. | Below is the instruction that describes the task:
### Input:
Return the set as real python set type. When calling this, all
the items are converted to lowercase and the ordering is lost.
:param preserve_casing: if set to `True` the items in the set returned
will have the original case like in the
:class:`HeaderSet`, otherwise they will
be lowercase.
### Response:
def as_set(self, preserve_casing=False):
"""Return the set as real python set type. When calling this, all
the items are converted to lowercase and the ordering is lost.
:param preserve_casing: if set to `True` the items in the set returned
will have the original case like in the
:class:`HeaderSet`, otherwise they will
be lowercase.
"""
if preserve_casing:
return set(self._headers)
return set(self._set) |
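A short usage sketch, assuming the Werkzeug HeaderSet class this method belongs to.
from werkzeug.datastructures import HeaderSet
hs = HeaderSet(['gzip', 'Deflate'])
print(hs.as_set())                       # lowercased: {'gzip', 'deflate'}
print(hs.as_set(preserve_casing=True))   # original casing: {'gzip', 'Deflate'}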
def danke(client, event, channel, nick, rest):
'Danke schön!'
if rest:
rest = rest.strip()
Karma.store.change(rest, 1)
rcpt = rest
else:
rcpt = channel
return f'Danke schön, {rcpt}! Danke schön!' | Danke schön! | Below is the instruction that describes the task:
### Input:
Danke schön!
### Response:
def danke(client, event, channel, nick, rest):
'Danke schön!'
if rest:
rest = rest.strip()
Karma.store.change(rest, 1)
rcpt = rest
else:
rcpt = channel
return f'Danke schön, {rcpt}! Danke schön!' |
def get_page_content(id):
"""Return XHTML content of a page.
Parameters:
- id: id of a Confluence page.
"""
data = _json.loads(_api.rest("/" + str(id) + "?expand=body.storage"))
return data["body"]["storage"]["value"] | Return XHTML content of a page.
Parameters:
- id: id of a Confluence page. | Below is the instruction that describes the task:
### Input:
Return XHTML content of a page.
Parameters:
- id: id of a Confluence page.
### Response:
def get_page_content(id):
"""Return XHTML content of a page.
Parameters:
- id: id of a Confluence page.
"""
data = _json.loads(_api.rest("/" + str(id) + "?expand=body.storage"))
return data["body"]["storage"]["value"] |
def select(data, trial=None, invert=False, **axes_to_select):
"""Define the selection of trials, using ranges or actual values.
Parameters
----------
data : instance of Data
data to select from.
trial : list of int or ndarray (dtype='i'), optional
index of trials of interest
**axes_to_select, optional
Values need to be tuple or list. If the values in one axis are string,
then you need to specify all the strings that you want. If the values
are numeric, then you should specify the range (you cannot specify
single values, nor multiple values). To select only up to one point,
you can use (None, value_of_interest)
invert : bool
take the opposite selection
Returns
-------
instance, same class as input
data where selection has been applied.
"""
if trial is not None and not isinstance(trial, Iterable):
raise TypeError('Trial needs to be iterable.')
for axis_to_select, values_to_select in axes_to_select.items():
if (not isinstance(values_to_select, Iterable) or
isinstance(values_to_select, str)):
raise TypeError(axis_to_select + ' needs to be iterable.')
if trial is None:
trial = range(data.number_of('trial'))
else:
trial = trial
if invert:
trial = setdiff1d(range(data.number_of('trial')), trial)
# create empty axis
output = data._copy(axis=False)
for one_axis in output.axis:
output.axis[one_axis] = empty(len(trial), dtype='O')
output.data = empty(len(trial), dtype='O')
to_select = {}
for cnt, i in enumerate(trial):
lg.debug('Selection on trial {0: 6}'.format(i))
for one_axis in output.axis:
values = data.axis[one_axis][i]
if one_axis in axes_to_select.keys():
values_to_select = axes_to_select[one_axis]
if len(values_to_select) == 0:
selected_values = ()
elif isinstance(values_to_select[0], str):
selected_values = asarray(values_to_select, dtype='U')
else:
if (values_to_select[0] is None and
values_to_select[1] is None):
bool_values = ones(len(values), dtype=bool)
elif values_to_select[0] is None:
bool_values = values < values_to_select[1]
elif values_to_select[1] is None:
bool_values = values_to_select[0] <= values
else:
bool_values = ((values_to_select[0] <= values) &
(values < values_to_select[1]))
selected_values = values[bool_values]
if invert:
selected_values = setdiff1d(values, selected_values)
lg.debug('In axis {0}, selecting {1: 6} '
'values'.format(one_axis,
len(selected_values)))
to_select[one_axis] = selected_values
else:
lg.debug('In axis ' + one_axis + ', selecting all the '
'values')
selected_values = data.axis[one_axis][i]
output.axis[one_axis][cnt] = selected_values
output.data[cnt] = data(trial=i, **to_select)
return output | Define the selection of trials, using ranges or actual values.
Parameters
----------
data : instance of Data
data to select from.
trial : list of int or ndarray (dtype='i'), optional
index of trials of interest
**axes_to_select, optional
Values need to be tuple or list. If the values in one axis are string,
then you need to specify all the strings that you want. If the values
are numeric, then you should specify the range (you cannot specify
single values, nor multiple values). To select only up to one point,
you can use (None, value_of_interest)
invert : bool
take the opposite selection
Returns
-------
instance, same class as input
data where selection has been applied. | Below is the instruction that describes the task:
### Input:
Define the selection of trials, using ranges or actual values.
Parameters
----------
data : instance of Data
data to select from.
trial : list of int or ndarray (dtype='i'), optional
index of trials of interest
**axes_to_select, optional
Values need to be tuple or list. If the values in one axis are string,
then you need to specify all the strings that you want. If the values
are numeric, then you should specify the range (you cannot specify
single values, nor multiple values). To select only up to one point,
you can use (None, value_of_interest)
invert : bool
take the opposite selection
Returns
-------
instance, same class as input
data where selection has been applied.
### Response:
def select(data, trial=None, invert=False, **axes_to_select):
"""Define the selection of trials, using ranges or actual values.
Parameters
----------
data : instance of Data
data to select from.
trial : list of int or ndarray (dtype='i'), optional
index of trials of interest
**axes_to_select, optional
Values need to be tuple or list. If the values in one axis are string,
then you need to specify all the strings that you want. If the values
are numeric, then you should specify the range (you cannot specify
single values, nor multiple values). To select only up to one point,
you can use (None, value_of_interest)
invert : bool
take the opposite selection
Returns
-------
instance, same class as input
data where selection has been applied.
"""
if trial is not None and not isinstance(trial, Iterable):
raise TypeError('Trial needs to be iterable.')
for axis_to_select, values_to_select in axes_to_select.items():
if (not isinstance(values_to_select, Iterable) or
isinstance(values_to_select, str)):
raise TypeError(axis_to_select + ' needs to be iterable.')
if trial is None:
trial = range(data.number_of('trial'))
else:
trial = trial
if invert:
trial = setdiff1d(range(data.number_of('trial')), trial)
# create empty axis
output = data._copy(axis=False)
for one_axis in output.axis:
output.axis[one_axis] = empty(len(trial), dtype='O')
output.data = empty(len(trial), dtype='O')
to_select = {}
for cnt, i in enumerate(trial):
lg.debug('Selection on trial {0: 6}'.format(i))
for one_axis in output.axis:
values = data.axis[one_axis][i]
if one_axis in axes_to_select.keys():
values_to_select = axes_to_select[one_axis]
if len(values_to_select) == 0:
selected_values = ()
elif isinstance(values_to_select[0], str):
selected_values = asarray(values_to_select, dtype='U')
else:
if (values_to_select[0] is None and
values_to_select[1] is None):
bool_values = ones(len(values), dtype=bool)
elif values_to_select[0] is None:
bool_values = values < values_to_select[1]
elif values_to_select[1] is None:
bool_values = values_to_select[0] <= values
else:
bool_values = ((values_to_select[0] <= values) &
(values < values_to_select[1]))
selected_values = values[bool_values]
if invert:
selected_values = setdiff1d(values, selected_values)
lg.debug('In axis {0}, selecting {1: 6} '
'values'.format(one_axis,
len(selected_values)))
to_select[one_axis] = selected_values
else:
lg.debug('In axis ' + one_axis + ', selecting all the '
'values')
selected_values = data.axis[one_axis][i]
output.axis[one_axis][cnt] = selected_values
output.data[cnt] = data(trial=i, **to_select)
return output |
def add_permission(self, permitted_object, include_members=False,
view_only=True):
"""
Add permission for the specified permitted object(s) to this User Role,
thereby granting that permission to all users that have this User Role.
The granted permission depends on the resource class of the permitted
object(s):
* For Task resources, the granted permission is task permission for
that task.
* For Group resources, the granted permission is object access
permission for the group resource, and optionally also for the
group members.
* For any other resources, the granted permission is object access
permission for these resources.
The User Role must be user-defined.
Authorization requirements:
* Task permission to the "Manage User Roles" task.
Parameters:
permitted_object (:class:`~zhmcclient.BaseResource` or :term:`string`):
Permitted object(s), either as a Python resource object (e.g.
:class:`~zhmcclient.Partition`), or as a resource class string (e.g.
'partition').
Must not be `None`.
include_members (bool): Controls whether for Group resources, the
operation applies additionally to its group member resources.
This parameter will be ignored when the permitted object does not
specify Group resources.
view_only (bool): Controls whether for Task resources, the operation
applies to the view-only version of the task (if `True`), or to
the full version of the task (if `False`). Only certain tasks
support a view-only version.
This parameter will be ignored when the permitted object does not
specify Task resources.
Raises:
:exc:`~zhmcclient.HTTPError`
:exc:`~zhmcclient.ParseError`
:exc:`~zhmcclient.AuthError`
:exc:`~zhmcclient.ConnectionError`
""" # noqa: E501
if isinstance(permitted_object, BaseResource):
perm_obj = permitted_object.uri
perm_type = 'object'
elif isinstance(permitted_object, six.string_types):
perm_obj = permitted_object
perm_type = 'object-class'
else:
raise TypeError(
"permitted_object must be a string or BaseResource, but is: "
"{}".format(type(permitted_object)))
body = {
'permitted-object': perm_obj,
'permitted-object-type': perm_type,
'include-members': include_members,
'view-only-mode': view_only,
}
self.manager.session.post(
self.uri + '/operations/add-permission',
body=body) | Add permission for the specified permitted object(s) to this User Role,
thereby granting that permission to all users that have this User Role.
The granted permission depends on the resource class of the permitted
object(s):
* For Task resources, the granted permission is task permission for
that task.
* For Group resources, the granted permission is object access
permission for the group resource, and optionally also for the
group members.
* For any other resources, the granted permission is object access
permission for these resources.
The User Role must be user-defined.
Authorization requirements:
* Task permission to the "Manage User Roles" task.
Parameters:
permitted_object (:class:`~zhmcclient.BaseResource` or :term:`string`):
Permitted object(s), either as a Python resource object (e.g.
:class:`~zhmcclient.Partition`), or as a resource class string (e.g.
'partition').
Must not be `None`.
include_members (bool): Controls whether for Group resources, the
operation applies additionally to its group member resources.
This parameter will be ignored when the permitted object does not
specify Group resources.
view_only (bool): Controls whether for Task resources, the operation
applies to the view-only version of the task (if `True`), or to
the full version of the task (if `False`). Only certain tasks
support a view-only version.
This parameter will be ignored when the permitted object does not
specify Task resources.
Raises:
:exc:`~zhmcclient.HTTPError`
:exc:`~zhmcclient.ParseError`
:exc:`~zhmcclient.AuthError`
:exc:`~zhmcclient.ConnectionError` | Below is the instruction that describes the task:
### Input:
Add permission for the specified permitted object(s) to this User Role,
thereby granting that permission to all users that have this User Role.
The granted permission depends on the resource class of the permitted
object(s):
* For Task resources, the granted permission is task permission for
that task.
* For Group resources, the granted permission is object access
permission for the group resource, and optionally also for the
group members.
* For any other resources, the granted permission is object access
permission for these resources.
The User Role must be user-defined.
Authorization requirements:
* Task permission to the "Manage User Roles" task.
Parameters:
permitted_object (:class:`~zhmcclient.BaseResource` or :term:`string`):
Permitted object(s), either as a Python resource object (e.g.
:class:`~zhmcclient.Partition`), or as a resource class string (e.g.
'partition').
Must not be `None`.
include_members (bool): Controls whether for Group resources, the
operation applies additionally to its group member resources.
This parameter will be ignored when the permitted object does not
specify Group resources.
view_only (bool): Controls whether for Task resources, the operation
applies to the view-only version of the task (if `True`), or to
the full version of the task (if `False`). Only certain tasks
support a view-only version.
This parameter will be ignored when the permitted object does not
specify Task resources.
Raises:
:exc:`~zhmcclient.HTTPError`
:exc:`~zhmcclient.ParseError`
:exc:`~zhmcclient.AuthError`
:exc:`~zhmcclient.ConnectionError`
### Response:
def add_permission(self, permitted_object, include_members=False,
view_only=True):
"""
Add permission for the specified permitted object(s) to this User Role,
thereby granting that permission to all users that have this User Role.
The granted permission depends on the resource class of the permitted
object(s):
* For Task resources, the granted permission is task permission for
that task.
* For Group resources, the granted permission is object access
permission for the group resource, and optionally also for the
group members.
* For any other resources, the granted permission is object access
permission for these resources.
The User Role must be user-defined.
Authorization requirements:
* Task permission to the "Manage User Roles" task.
Parameters:
permitted_object (:class:`~zhmcclient.BaseResource` or :term:`string`):
Permitted object(s), either as a Python resource object (e.g.
:class:`~zhmcclient.Partition`), or as a resource class string (e.g.
'partition').
Must not be `None`.
include_members (bool): Controls whether for Group resources, the
operation applies additionally to its group member resources.
This parameter will be ignored when the permitted object does not
specify Group resources.
view_only (bool): Controls whether for Task resources, the operation
aplies to the view-only version of the task (if `True`), or to
the full version of the task (if `False`). Only certain tasks
support a view-only version.
This parameter will be ignored when the permitted object does not
specify Task resources.
Raises:
:exc:`~zhmcclient.HTTPError`
:exc:`~zhmcclient.ParseError`
:exc:`~zhmcclient.AuthError`
:exc:`~zhmcclient.ConnectionError`
""" # noqa: E501
if isinstance(permitted_object, BaseResource):
perm_obj = permitted_object.uri
perm_type = 'object'
elif isinstance(permitted_object, six.string_types):
perm_obj = permitted_object
perm_type = 'object-class'
else:
raise TypeError(
"permitted_object must be a string or BaseResource, but is: "
"{}".format(type(permitted_object)))
body = {
'permitted-object': perm_obj,
'permitted-object-type': perm_type,
'include-members': include_members,
'view-only-mode': view_only,
}
self.manager.session.post(
self.uri + '/operations/add-permission',
body=body) |
async def select(self, db):
"""Changes db index for all free connections.
All previously acquired connections will be closed when released.
"""
res = True
async with self._cond:
for i in range(self.freesize):
res = res and (await self._pool[i].select(db))
else:
self._db = db
return res | Changes db index for all free connections.
All previously acquired connections will be closed when released. | Below is the instruction that describes the task:
### Input:
Changes db index for all free connections.
All previously acquired connections will be closed when released.
### Response:
async def select(self, db):
"""Changes db index for all free connections.
All previously acquired connections will be closed when released.
"""
res = True
async with self._cond:
for i in range(self.freesize):
res = res and (await self._pool[i].select(db))
else:
self._db = db
return res |
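A hedged sketch of switching logical databases on an aioredis-style connections pool; the create_pool call and the address are assumptions and may differ between library versions.
import asyncio
import aioredis
# Sketch only: assumes an aioredis-style pool factory exposing select() as above.
async def main():
    pool = await aioredis.create_pool('redis://localhost')
    ok = await pool.select(3)   # free connections now point at db 3
    print('switched:', ok)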
def night_mode(self):
"""bool: The speaker's night mode.
True if on, False if off, None if not supported.
"""
if not self.is_soundbar:
return None
response = self.renderingControl.GetEQ([
('InstanceID', 0),
('EQType', 'NightMode')
])
return bool(int(response['CurrentValue'])) | bool: The speaker's night mode.
True if on, False if off, None if not supported. | Below is the instruction that describes the task:
### Input:
bool: The speaker's night mode.
True if on, False if off, None if not supported.
### Response:
def night_mode(self):
"""bool: The speaker's night mode.
True if on, False if off, None if not supported.
"""
if not self.is_soundbar:
return None
response = self.renderingControl.GetEQ([
('InstanceID', 0),
('EQType', 'NightMode')
])
return bool(int(response['CurrentValue'])) |
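A usage sketch assuming the SoCo library; the IP address is a placeholder.
import soco
zone = soco.SoCo('192.168.1.68')   # placeholder address
print(zone.night_mode)             # True/False on a soundbar, None otherwise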
def name(function):
"""
Retrieve a pretty name for the function
:param function: function to get name from
:return: pretty name
"""
if isinstance(function, types.FunctionType):
return function.__name__
else:
return str(function) | Retrieve a pretty name for the function
:param function: function to get name from
:return: pretty name | Below is the instruction that describes the task:
### Input:
Retrieve a pretty name for the function
:param function: function to get name from
:return: pretty name
### Response:
def name(function):
"""
Retrieve a pretty name for the function
:param function: function to get name from
:return: pretty name
"""
if isinstance(function, types.FunctionType):
return function.__name__
else:
return str(function) |
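A small illustration of the fallback behaviour described above.
def square(x):
    return x * x
print(name(square))   # 'square' (a plain function has __name__)
print(name(len))      # built-ins are not types.FunctionType, so str(len) is returned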
def add_string_label(self, str_):
""" Maps ("folds") the given string, returning an unique label ID.
This allows several constant labels to be initialized to the same address
thus saving memory space.
:param str_: the string to map
:return: the unique label ID
"""
if self.STRING_LABELS.get(str_, None) is None:
self.STRING_LABELS[str_] = backend.tmp_label()
return self.STRING_LABELS[str_] | Maps ("folds") the given string, returning a unique label ID.
This allows several constant labels to be initialized to the same address
thus saving memory space.
:param str_: the string to map
:return: the unique label ID | Below is the instruction that describes the task:
### Input:
Maps ("folds") the given string, returning an unique label ID.
This allows several constant labels to be initialized to the same address
thus saving memory space.
:param str_: the string to map
:return: the unique label ID
### Response:
def add_string_label(self, str_):
""" Maps ("folds") the given string, returning an unique label ID.
This allows several constant labels to be initialized to the same address
thus saving memory space.
:param str_: the string to map
:return: the unique label ID
"""
if self.STRING_LABELS.get(str_, None) is None:
self.STRING_LABELS[str_] = backend.tmp_label()
return self.STRING_LABELS[str_] |
def width(self):
"""
:return: The width of the data component in the buffer in number of pixels.
"""
try:
if self._part:
value = self._part.width
else:
value = self._buffer.width
except InvalidParameterException:
value = self._node_map.Width.value
return value | :return: The width of the data component in the buffer in number of pixels. | Below is the instruction that describes the task:
### Input:
:return: The width of the data component in the buffer in number of pixels.
### Response:
def width(self):
"""
:return: The width of the data component in the buffer in number of pixels.
"""
try:
if self._part:
value = self._part.width
else:
value = self._buffer.width
except InvalidParameterException:
value = self._node_map.Width.value
return value |
def jsonxs(data, expr, action=ACTION_GET, value=None, default=None):
"""
Get, set, delete values in a JSON structure. `expr` is a JSONpath-like
expression pointing to the desired value. `action` determines the action to
perform. See the module-level `ACTION_*` constants. `value` should be given
if action is `ACTION_SET`. If `default` is set and `expr` isn't found,
return `default` instead. This will override all exceptions.
"""
tokens = tokenize(expr)
# Walk through the list of tokens to reach the correct path in the data
# structure.
try:
prev_path = None
cur_path = data
for token in tokens:
prev_path = cur_path
if not token in cur_path and action in [ACTION_SET, ACTION_MKDICT, ACTION_MKLIST]:
# When setting values or creating dicts/lists, the key can be
# missing from the data structure
continue
cur_path = cur_path[token]
except Exception:
if default is not None:
return default
else:
raise
# Perform action the user requested.
if action == ACTION_GET:
return cur_path
elif action == ACTION_DEL:
del prev_path[token]
elif action == ACTION_SET:
prev_path[token] = value
elif action == ACTION_APPEND:
prev_path[token].append(value)
elif action == ACTION_INSERT:
prev_path.insert(token, value)
elif action == ACTION_MKDICT:
prev_path[token] = {}
elif action == ACTION_MKLIST:
prev_path[token] = []
else:
raise ValueError("Invalid action: {}".format(action)) | Get, set, delete values in a JSON structure. `expr` is a JSONpath-like
expression pointing to the desired value. `action` determines the action to
perform. See the module-level `ACTION_*` constants. `value` should be given
if action is `ACTION_SET`. If `default` is set and `expr` isn't found,
return `default` instead. This will override all exceptions. | Below is the instruction that describes the task:
### Input:
Get, set, delete values in a JSON structure. `expr` is a JSONpath-like
expression pointing to the desired value. `action` determines the action to
perform. See the module-level `ACTION_*` constants. `value` should be given
if action is `ACTION_SET`. If `default` is set and `expr` isn't found,
return `default` instead. This will override all exceptions.
### Response:
def jsonxs(data, expr, action=ACTION_GET, value=None, default=None):
"""
Get, set, delete values in a JSON structure. `expr` is a JSONpath-like
expression pointing to the desired value. `action` determines the action to
perform. See the module-level `ACTION_*` constants. `value` should be given
if action is `ACTION_SET`. If `default` is set and `expr` isn't found,
return `default` instead. This will override all exceptions.
"""
tokens = tokenize(expr)
# Walk through the list of tokens to reach the correct path in the data
# structure.
try:
prev_path = None
cur_path = data
for token in tokens:
prev_path = cur_path
if not token in cur_path and action in [ACTION_SET, ACTION_MKDICT, ACTION_MKLIST]:
# When setting values or creating dicts/lists, the key can be
# missing from the data structure
continue
cur_path = cur_path[token]
except Exception:
if default is not None:
return default
else:
raise
# Perform action the user requested.
if action == ACTION_GET:
return cur_path
elif action == ACTION_DEL:
del prev_path[token]
elif action == ACTION_SET:
prev_path[token] = value
elif action == ACTION_APPEND:
prev_path[token].append(value)
elif action == ACTION_INSERT:
prev_path.insert(token, value)
elif action == ACTION_MKDICT:
prev_path[token] = {}
elif action == ACTION_MKLIST:
prev_path[token] = []
else:
raise ValueError("Invalid action: {}".format(action)) |
def resize_pty(self, width=80, height=24, width_pixels=0, height_pixels=0):
"""
Resize the pseudo-terminal. This can be used to change the width and
height of the terminal emulation created in a previous `get_pty` call.
:param int width: new width (in characters) of the terminal screen
:param int height: new height (in characters) of the terminal screen
:param int width_pixels: new width (in pixels) of the terminal screen
:param int height_pixels: new height (in pixels) of the terminal screen
:raises:
`.SSHException` -- if the request was rejected or the channel was
closed
"""
m = Message()
m.add_byte(cMSG_CHANNEL_REQUEST)
m.add_int(self.remote_chanid)
m.add_string("window-change")
m.add_boolean(False)
m.add_int(width)
m.add_int(height)
m.add_int(width_pixels)
m.add_int(height_pixels)
self.transport._send_user_message(m) | Resize the pseudo-terminal. This can be used to change the width and
height of the terminal emulation created in a previous `get_pty` call.
:param int width: new width (in characters) of the terminal screen
:param int height: new height (in characters) of the terminal screen
:param int width_pixels: new width (in pixels) of the terminal screen
:param int height_pixels: new height (in pixels) of the terminal screen
:raises:
`.SSHException` -- if the request was rejected or the channel was
closed | Below is the instruction that describes the task:
### Input:
Resize the pseudo-terminal. This can be used to change the width and
height of the terminal emulation created in a previous `get_pty` call.
:param int width: new width (in characters) of the terminal screen
:param int height: new height (in characters) of the terminal screen
:param int width_pixels: new width (in pixels) of the terminal screen
:param int height_pixels: new height (in pixels) of the terminal screen
:raises:
`.SSHException` -- if the request was rejected or the channel was
closed
### Response:
def resize_pty(self, width=80, height=24, width_pixels=0, height_pixels=0):
"""
Resize the pseudo-terminal. This can be used to change the width and
height of the terminal emulation created in a previous `get_pty` call.
:param int width: new width (in characters) of the terminal screen
:param int height: new height (in characters) of the terminal screen
:param int width_pixels: new width (in pixels) of the terminal screen
:param int height_pixels: new height (in pixels) of the terminal screen
:raises:
`.SSHException` -- if the request was rejected or the channel was
closed
"""
m = Message()
m.add_byte(cMSG_CHANNEL_REQUEST)
m.add_int(self.remote_chanid)
m.add_string("window-change")
m.add_boolean(False)
m.add_int(width)
m.add_int(height)
m.add_int(width_pixels)
m.add_int(height_pixels)
self.transport._send_user_message(m) |
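A short usage sketch assuming a Paramiko interactive session; the client setup is omitted and 'client' is an assumption.
# Assumes 'client' is an already-connected paramiko.SSHClient.
chan = client.invoke_shell(width=80, height=24)
chan.resize_pty(width=120, height=40)   # sends a window-change request to the server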
def _update_capacity(self, data):
""" Update the consumed capacity metrics """
if 'ConsumedCapacity' in data:
# This is all for backwards compatibility
consumed = data['ConsumedCapacity']
if not isinstance(consumed, list):
consumed = [consumed]
for cap in consumed:
self.capacity += cap.get('CapacityUnits', 0)
self.table_capacity += cap.get('Table',
{}).get('CapacityUnits', 0)
local_indexes = cap.get('LocalSecondaryIndexes', {})
for k, v in six.iteritems(local_indexes):
self.indexes.setdefault(k, 0)
self.indexes[k] += v['CapacityUnits']
global_indexes = cap.get('GlobalSecondaryIndexes', {})
for k, v in six.iteritems(global_indexes):
self.global_indexes.setdefault(k, 0)
self.global_indexes[k] += v['CapacityUnits'] | Update the consumed capacity metrics | Below is the instruction that describes the task:
### Input:
Update the consumed capacity metrics
### Response:
def _update_capacity(self, data):
""" Update the consumed capacity metrics """
if 'ConsumedCapacity' in data:
# This is all for backwards compatibility
consumed = data['ConsumedCapacity']
if not isinstance(consumed, list):
consumed = [consumed]
for cap in consumed:
self.capacity += cap.get('CapacityUnits', 0)
self.table_capacity += cap.get('Table',
{}).get('CapacityUnits', 0)
local_indexes = cap.get('LocalSecondaryIndexes', {})
for k, v in six.iteritems(local_indexes):
self.indexes.setdefault(k, 0)
self.indexes[k] += v['CapacityUnits']
global_indexes = cap.get('GlobalSecondaryIndexes', {})
for k, v in six.iteritems(global_indexes):
self.global_indexes.setdefault(k, 0)
self.global_indexes[k] += v['CapacityUnits'] |
def enrich(self, column):
""" This enricher returns the same dataframe
with a new column named 'domain'.
That column is the result of splitting the
email address of another column. If there is
not a proper email address an 'unknown'
domain is returned.
:param column: column where the text to analyze is found
:type column: string
"""
if column not in self.data.columns:
return self.data
self.data['domain'] = self.data[column].apply(lambda x: self.__parse_email(x))
return self.data | This enricher returns the same dataframe
with a new column named 'domain'.
That column is the result of splitting the
email address of another column. If there is
not a proper email address an 'unknown'
domain is returned.
:param column: column where the text to analyze is found
:type column: string | Below is the instruction that describes the task:
### Input:
This enricher returns the same dataframe
with a new column named 'domain'.
That column is the result of splitting the
email address of another column. If there is
not a proper email address an 'unknown'
domain is returned.
:param column: column where the text to analyze is found
:type column: string
### Response:
def enrich(self, column):
""" This enricher returns the same dataframe
with a new column named 'domain'.
That column is the result of splitting the
email address of another column. If there is
not a proper email address an 'unknown'
domain is returned.
:param column: column where the text to analyze is found
:type column: string
"""
if column not in self.data.columns:
return self.data
self.data['domain'] = self.data[column].apply(lambda x: self.__parse_email(x))
return self.data |
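A hedged sketch of the enricher above; the wrapper class name and its constructor are assumptions, only the enrich() call itself comes from the source.
import pandas as pd
df = pd.DataFrame({'author': ['[email protected]', 'not-an-email']})
enricher = EmailDomainEnricher(df)    # hypothetical class that stores df in self.data
out = enricher.enrich('author')
print(out['domain'].tolist())         # expected: ['example.org', 'unknown']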
def is_pinned(self, color: Color, square: Square) -> bool:
"""
Detects if the given square is pinned to the king of the given color.
"""
return self.pin_mask(color, square) != BB_ALL | Detects if the given square is pinned to the king of the given color. | Below is the instruction that describes the task:
### Input:
Detects if the given square is pinned to the king of the given color.
### Response:
def is_pinned(self, color: Color, square: Square) -> bool:
"""
Detects if the given square is pinned to the king of the given color.
"""
return self.pin_mask(color, square) != BB_ALL |
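A usage sketch assuming the python-chess package; the opening moves simply set up a standard Nimzo-Indian pin.
import chess
board = chess.Board()
for uci in ('d2d4', 'g8f6', 'c2c4', 'e7e6', 'b1c3', 'f8b4'):
    board.push_uci(uci)
print(board.is_pinned(chess.WHITE, chess.C3))   # True: the bishop on b4 pins the knight to the king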
def append_var_uint32(self, value):
"""Appends an unsigned 32-bit integer to the internal buffer,
encoded as a varint.
"""
if not 0 <= value <= wire_format.UINT32_MAX:
raise errors.EncodeError('Value out of range: %d' % value)
self.append_var_uint64(value) | Appends an unsigned 32-bit integer to the internal buffer,
encoded as a varint. | Below is the instruction that describes the task:
### Input:
Appends an unsigned 32-bit integer to the internal buffer,
encoded as a varint.
### Response:
def append_var_uint32(self, value):
"""Appends an unsigned 32-bit integer to the internal buffer,
encoded as a varint.
"""
if not 0 <= value <= wire_format.UINT32_MAX:
raise errors.EncodeError('Value out of range: %d' % value)
self.append_var_uint64(value) |
def _fuzzdb_get_strings(max_len=0):
'Helper to get all the strings from fuzzdb'
ignored = ['integer-overflow']
for subdir in pkg_resources.resource_listdir('protofuzz', BASE_PATH):
if subdir in ignored:
continue
path = '{}/{}'.format(BASE_PATH, subdir)
listing = pkg_resources.resource_listdir('protofuzz', path)
for filename in listing:
if not filename.endswith('.txt'):
continue
path = '{}/{}/{}'.format(BASE_PATH, subdir, filename)
source = _open_fuzzdb_file(path)
for line in source:
string = line.decode('utf-8').strip()
if not string or string.startswith('#'):
continue
if max_len != 0 and len(line) > max_len:
continue
yield string | Helper to get all the strings from fuzzdb | Below is the instruction that describes the task:
### Input:
Helper to get all the strings from fuzzdb
### Response:
def _fuzzdb_get_strings(max_len=0):
'Helper to get all the strings from fuzzdb'
ignored = ['integer-overflow']
for subdir in pkg_resources.resource_listdir('protofuzz', BASE_PATH):
if subdir in ignored:
continue
path = '{}/{}'.format(BASE_PATH, subdir)
listing = pkg_resources.resource_listdir('protofuzz', path)
for filename in listing:
if not filename.endswith('.txt'):
continue
path = '{}/{}/{}'.format(BASE_PATH, subdir, filename)
source = _open_fuzzdb_file(path)
for line in source:
string = line.decode('utf-8').strip()
if not string or string.startswith('#'):
continue
if max_len != 0 and len(line) > max_len:
continue
yield string |
def runAsAdmin(cmdLine=None, target_dir='', wait=True):
"""
run [cmdLine] as admin
specify the location from where the code is executed through [target_dir]
"""
if os.name != 'nt':
raise RuntimeError("This function is only implemented on Windows.")
# import win32api,
import win32con
import win32event
import win32process
from win32com.shell.shell import ShellExecuteEx
from win32com.shell import shellcon
python_exe = sys.executable
if cmdLine is None:
cmdLine = [python_exe] + sys.argv
elif type(cmdLine) not in (tuple, list):
raise ValueError("cmdLine is not a sequence.")
cmd = '"%s"' % (cmdLine[0],)
# XXX TODO: isn't there a function or something we can call to message
# command line params?
params = " ".join(['"%s"' % (x,) for x in cmdLine[1:]])
#cmdDir = ''
showCmd = win32con.SW_SHOWNORMAL
#showCmd = win32con.SW_HIDE
lpVerb = 'runas' # causes UAC elevation prompt.
# print "Running", cmd, params
# ShellExecute() doesn't seem to allow us to fetch the PID or handle
# of the process, so we can't get anything useful from it. Therefore
# the more complex ShellExecuteEx() must be used.
# procHandle = win32api.ShellExecute(0, lpVerb, cmd, params, cmdDir, showCmd)
print(target_dir, cmd, params)
procInfo = ShellExecuteEx(nShow=showCmd,
fMask=shellcon.SEE_MASK_NOCLOSEPROCESS,
lpVerb=lpVerb,
lpFile=cmd,
lpParameters=params,
lpDirectory=target_dir)
if wait:
procHandle = procInfo['hProcess']
obj = win32event.WaitForSingleObject(procHandle, win32event.INFINITE)
rc = win32process.GetExitCodeProcess(procHandle)
# print "Process handle %s returned code %s" % (procHandle, rc)
else:
rc = None
return rc | run [cmdLine] as admin
specify the location from where the code is executed through [target_dir] | Below is the instruction that describes the task:
### Input:
run [cmdLine] as admin
specify the location from where the code is executed through [target_dir]
### Response:
def runAsAdmin(cmdLine=None, target_dir='', wait=True):
"""
run [cmdLine] as admin
specify the location from where the code is executed through [target_dir]
"""
if os.name != 'nt':
raise RuntimeError("This function is only implemented on Windows.")
# import win32api,
import win32con
import win32event
import win32process
from win32com.shell.shell import ShellExecuteEx
from win32com.shell import shellcon
python_exe = sys.executable
if cmdLine is None:
cmdLine = [python_exe] + sys.argv
elif type(cmdLine) not in (tuple, list):
raise ValueError("cmdLine is not a sequence.")
cmd = '"%s"' % (cmdLine[0],)
# XXX TODO: isn't there a function or something we can call to message
# command line params?
params = " ".join(['"%s"' % (x,) for x in cmdLine[1:]])
#cmdDir = ''
showCmd = win32con.SW_SHOWNORMAL
#showCmd = win32con.SW_HIDE
lpVerb = 'runas' # causes UAC elevation prompt.
# print "Running", cmd, params
# ShellExecute() doesn't seem to allow us to fetch the PID or handle
# of the process, so we can't get anything useful from it. Therefore
# the more complex ShellExecuteEx() must be used.
# procHandle = win32api.ShellExecute(0, lpVerb, cmd, params, cmdDir, showCmd)
print(target_dir, cmd, params)
procInfo = ShellExecuteEx(nShow=showCmd,
fMask=shellcon.SEE_MASK_NOCLOSEPROCESS,
lpVerb=lpVerb,
lpFile=cmd,
lpParameters=params,
lpDirectory=target_dir)
if wait:
procHandle = procInfo['hProcess']
obj = win32event.WaitForSingleObject(procHandle, win32event.INFINITE)
rc = win32process.GetExitCodeProcess(procHandle)
# print "Process handle %s returned code %s" % (procHandle, rc)
else:
rc = None
return rc |
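A Windows-only usage sketch; the interpreter path, script name and directory are placeholders, not part of the original source.
# Relaunch a helper script elevated (UAC prompt) and wait for its exit code.
rc = runAsAdmin(['C:\\Python39\\python.exe', 'install_service.py'],
                target_dir='C:\\myapp', wait=True)
print('elevated process returned', rc)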
def has_gap_in_elf_shndx(self):
"""Return the has gap in elf shndx attribute of the BFD file being
processed.
"""
if not self._ptr:
raise BfdException("BFD not initialized")
return _bfd.get_bfd_attribute(
self._ptr, BfdAttributes.HAS_GAP_IN_ELF_SHNDX) | Return the has gap in elf shndx attribute of the BFD file being
processed. | Below is the instruction that describes the task:
### Input:
Return the has gap in elf shndx attribute of the BFD file being
processed.
### Response:
def has_gap_in_elf_shndx(self):
"""Return the has gap in elf shndx attribute of the BFD file being
processed.
"""
if not self._ptr:
raise BfdException("BFD not initialized")
return _bfd.get_bfd_attribute(
self._ptr, BfdAttributes.HAS_GAP_IN_ELF_SHNDX) |
def objects_copy(self, source_bucket, source_key, target_bucket, target_key):
"""Updates the metadata associated with an object.
Args:
source_bucket: the name of the bucket containing the source object.
source_key: the key of the source object being copied.
target_bucket: the name of the bucket that will contain the copied object.
target_key: the key of the copied object.
Returns:
A parsed object information dictionary.
Raises:
Exception if there is an error performing the operation.
"""
url = Api._ENDPOINT + (Api._OBJECT_COPY_PATH % (source_bucket, Api._escape_key(source_key),
target_bucket, Api._escape_key(target_key)))
return datalab.utils.Http.request(url, method='POST', credentials=self._credentials) | Copies an object from a source bucket/key to a target bucket/key.
Args:
source_bucket: the name of the bucket containing the source object.
source_key: the key of the source object being copied.
target_bucket: the name of the bucket that will contain the copied object.
target_key: the key of the copied object.
Returns:
A parsed object information dictionary.
Raises:
Exception if there is an error performing the operation. | Below is the instruction that describes the task:
### Input:
Copies an object from a source bucket/key to a target bucket/key.
Args:
source_bucket: the name of the bucket containing the source object.
source_key: the key of the source object being copied.
target_bucket: the name of the bucket that will contain the copied object.
target_key: the key of the copied object.
Returns:
A parsed object information dictionary.
Raises:
Exception if there is an error performing the operation.
### Response:
def objects_copy(self, source_bucket, source_key, target_bucket, target_key):
"""Updates the metadata associated with an object.
Args:
source_bucket: the name of the bucket containing the source object.
source_key: the key of the source object being copied.
target_bucket: the name of the bucket that will contain the copied object.
target_key: the key of the copied object.
Returns:
A parsed object information dictionary.
Raises:
Exception if there is an error performing the operation.
"""
url = Api._ENDPOINT + (Api._OBJECT_COPY_PATH % (source_bucket, Api._escape_key(source_key),
target_bucket, Api._escape_key(target_key)))
return datalab.utils.Http.request(url, method='POST', credentials=self._credentials) |
def warn_quirks(message, recommend, pattern, index):
"""Warn quirks."""
import traceback
import bs4 # noqa: F401
# Acquire source code line context
paths = (MODULE, sys.modules['bs4'].__path__[0])
tb = traceback.extract_stack()
previous = None
filename = None
lineno = None
for entry in tb:
if (PY35 and entry.filename.startswith(paths)) or (not PY35 and entry[0].startswith(paths)):
break
previous = entry
if previous:
filename = previous.filename if PY35 else previous[0]
lineno = previous.lineno if PY35 else previous[1]
# Format pattern to show line and column position
context, line = get_pattern_context(pattern, index)[0:2]
# Display warning
warnings.warn_explicit(
"\nCSS selector pattern:\n" +
" {}\n".format(message) +
" This behavior is only allowed temporarily for Beautiful Soup's transition to Soup Sieve.\n" +
" In order to confrom to the CSS spec, {}\n".format(recommend) +
" It is strongly recommended the selector be altered to conform to the CSS spec " +
"as an exception will be raised for this case in the future.\n" +
"pattern line {}:\n{}".format(line, context),
QuirksWarning,
filename,
lineno
) | Warn quirks. | Below is the instruction that describes the task:
### Input:
Warn quirks.
### Response:
def warn_quirks(message, recommend, pattern, index):
"""Warn quirks."""
import traceback
import bs4 # noqa: F401
# Acquire source code line context
paths = (MODULE, sys.modules['bs4'].__path__[0])
tb = traceback.extract_stack()
previous = None
filename = None
lineno = None
for entry in tb:
if (PY35 and entry.filename.startswith(paths)) or (not PY35 and entry[0].startswith(paths)):
break
previous = entry
if previous:
filename = previous.filename if PY35 else previous[0]
lineno = previous.lineno if PY35 else previous[1]
# Format pattern to show line and column position
context, line = get_pattern_context(pattern, index)[0:2]
# Display warning
warnings.warn_explicit(
"\nCSS selector pattern:\n" +
" {}\n".format(message) +
" This behavior is only allowed temporarily for Beautiful Soup's transition to Soup Sieve.\n" +
" In order to confrom to the CSS spec, {}\n".format(recommend) +
" It is strongly recommended the selector be altered to conform to the CSS spec " +
"as an exception will be raised for this case in the future.\n" +
"pattern line {}:\n{}".format(line, context),
QuirksWarning,
filename,
lineno
) |
def setup_interval_coinc(workflow, hdfbank, trig_files, stat_files,
veto_files, veto_names, out_dir, tags=None):
"""
This function sets up exact match coincidence and background estimation
using a folded interval technique.
"""
if tags is None:
tags = []
make_analysis_dir(out_dir)
logging.info('Setting up coincidence')
if len(hdfbank) != 1:
raise ValueError('Must use exactly 1 bank file for this coincidence '
'method, I got %i !' % len(hdfbank))
hdfbank = hdfbank[0]
if len(workflow.ifos) > 2:
raise ValueError('This coincidence method only supports two-ifo searches')
findcoinc_exe = PyCBCFindCoincExecutable(workflow.cp, 'coinc',
ifos=workflow.ifos,
tags=tags, out_dir=out_dir)
# Wall time knob and memory knob
factor = int(workflow.cp.get_opt_tags('workflow-coincidence', 'parallelization-factor', tags))
statmap_files = []
for veto_file, veto_name in zip(veto_files, veto_names):
bg_files = FileList()
for i in range(factor):
group_str = '%s/%s' % (i, factor)
coinc_node = findcoinc_exe.create_node(trig_files, hdfbank,
stat_files,
veto_file, veto_name,
group_str,
tags=[veto_name, str(i)])
bg_files += coinc_node.output_files
workflow.add_node(coinc_node)
statmap_files += [setup_statmap(workflow, bg_files, hdfbank, out_dir, tags=tags + [veto_name])]
logging.info('...leaving coincidence ')
return statmap_files | This function sets up exact match coincidence and background estimation
using a folded interval technique. | Below is the instruction that describes the task:
### Input:
This function sets up exact match coincidence and background estimation
using a folded interval technique.
### Response:
def setup_interval_coinc(workflow, hdfbank, trig_files, stat_files,
veto_files, veto_names, out_dir, tags=None):
"""
This function sets up exact match coincidence and background estimation
using a folded interval technique.
"""
if tags is None:
tags = []
make_analysis_dir(out_dir)
logging.info('Setting up coincidence')
if len(hdfbank) != 1:
raise ValueError('Must use exactly 1 bank file for this coincidence '
'method, I got %i !' % len(hdfbank))
hdfbank = hdfbank[0]
if len(workflow.ifos) > 2:
raise ValueError('This coincidence method only supports two-ifo searches')
findcoinc_exe = PyCBCFindCoincExecutable(workflow.cp, 'coinc',
ifos=workflow.ifos,
tags=tags, out_dir=out_dir)
# Wall time knob and memory knob
factor = int(workflow.cp.get_opt_tags('workflow-coincidence', 'parallelization-factor', tags))
statmap_files = []
for veto_file, veto_name in zip(veto_files, veto_names):
bg_files = FileList()
for i in range(factor):
group_str = '%s/%s' % (i, factor)
coinc_node = findcoinc_exe.create_node(trig_files, hdfbank,
stat_files,
veto_file, veto_name,
group_str,
tags=[veto_name, str(i)])
bg_files += coinc_node.output_files
workflow.add_node(coinc_node)
statmap_files += [setup_statmap(workflow, bg_files, hdfbank, out_dir, tags=tags + [veto_name])]
logging.info('...leaving coincidence ')
return statmap_files |
def _handle_option_deprecations(options):
"""Issue appropriate warnings when deprecated options are present in the
options dictionary. Removes deprecated option key, value pairs if the
options dictionary is found to also have the renamed option."""
undeprecated_options = _CaseInsensitiveDictionary()
for key, value in iteritems(options):
optname = str(key).lower()
if optname in URI_OPTIONS_DEPRECATION_MAP:
renamed_key = URI_OPTIONS_DEPRECATION_MAP[optname]
if renamed_key.lower() in options:
warnings.warn("Deprecated option '%s' ignored in favor of "
"'%s'." % (str(key), renamed_key))
continue
warnings.warn("Option '%s' is deprecated, use '%s' instead." % (
str(key), renamed_key))
undeprecated_options[str(key)] = value
return undeprecated_options | Issue appropriate warnings when deprecated options are present in the
options dictionary. Removes deprecated option key, value pairs if the
options dictionary is found to also have the renamed option. | Below is the instruction that describes the task:
### Input:
Issue appropriate warnings when deprecated options are present in the
options dictionary. Removes deprecated option key, value pairs if the
options dictionary is found to also have the renamed option.
### Response:
def _handle_option_deprecations(options):
"""Issue appropriate warnings when deprecated options are present in the
options dictionary. Removes deprecated option key, value pairs if the
options dictionary is found to also have the renamed option."""
undeprecated_options = _CaseInsensitiveDictionary()
for key, value in iteritems(options):
optname = str(key).lower()
if optname in URI_OPTIONS_DEPRECATION_MAP:
renamed_key = URI_OPTIONS_DEPRECATION_MAP[optname]
if renamed_key.lower() in options:
warnings.warn("Deprecated option '%s' ignored in favor of "
"'%s'." % (str(key), renamed_key))
continue
warnings.warn("Option '%s' is deprecated, use '%s' instead." % (
str(key), renamed_key))
undeprecated_options[str(key)] = value
return undeprecated_options |
def get_image(vm_):
'''
Return the image object to use
'''
images = avail_images()
vm_image = config.get_cloud_config_value(
'image', vm_, __opts__, search_global=False
)
if not isinstance(vm_image, six.string_types):
vm_image = six.text_type(vm_image)
for image in images:
if vm_image in (images[image]['name'],
images[image]['slug'],
images[image]['id']):
if images[image]['slug'] is not None:
return images[image]['slug']
return int(images[image]['id'])
raise SaltCloudNotFound(
'The specified image, \'{0}\', could not be found.'.format(vm_image)
) | Return the image object to use | Below is the instruction that describes the task:
### Input:
Return the image object to use
### Response:
def get_image(vm_):
'''
Return the image object to use
'''
images = avail_images()
vm_image = config.get_cloud_config_value(
'image', vm_, __opts__, search_global=False
)
if not isinstance(vm_image, six.string_types):
vm_image = six.text_type(vm_image)
for image in images:
if vm_image in (images[image]['name'],
images[image]['slug'],
images[image]['id']):
if images[image]['slug'] is not None:
return images[image]['slug']
return int(images[image]['id'])
raise SaltCloudNotFound(
'The specified image, \'{0}\', could not be found.'.format(vm_image)
) |
def get_alarms_list(self, num_items=100, params=None):
"""
Get alarms as list of dictionaries
:param int num_items: Max items to retrieve
:param dict params: Additional params dictionary according to:
https://www.alienvault.com/documentation/api/usm-anywhere-api.htm#/alarms
:returns list: list of alarms
"""
if params and set(params.keys()) - VALID_ALARM_PARAMS:
self.log.error("Invalid alarm query parameters: {set(params.keys()) - VALID_ALARM_PARAMS}")
return None
return self._retrieve_items(item_type="alarms", num_items=num_items, params=params) | Get alarms as list of dictionaries
:param int num_items: Max items to retrieve
:param dict params: Additional params dictionary according to:
https://www.alienvault.com/documentation/api/usm-anywhere-api.htm#/alarms
:returns list: list of alarms | Below is the the instruction that describes the task:
### Input:
Get alarms as list of dictionaries
:param int num_items: Max items to retrieve
:param dict params: Additional params dictionary according to:
https://www.alienvault.com/documentation/api/usm-anywhere-api.htm#/alarms
:returns list: list of alarms
### Response:
def get_alarms_list(self, num_items=100, params=None):
"""
Get alarms as list of dictionaries
:param int num_items: Max items to retrieve
:param dict params: Additional params dictionary according to:
https://www.alienvault.com/documentation/api/usm-anywhere-api.htm#/alarms
:returns list: list of alarms
"""
if params and set(params.keys()) - VALID_ALARM_PARAMS:
self.log.error("Invalid alarm query parameters: {set(params.keys()) - VALID_ALARM_PARAMS}")
return None
return self._retrieve_items(item_type="alarms", num_items=num_items, params=params) |
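A hedged usage sketch, assuming `client` is an instance of the API wrapper that defines the method above and that 'status' is one of the names in VALID_ALARM_PARAMS (illustrative only):
# alarms = client.get_alarms_list(num_items=50, params={'status': 'open'})
# if alarms is None:
#     print('query used an unsupported parameter')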
def validate_holdout_selfplay():
"""Validate on held-out selfplay data."""
holdout_dirs = (os.path.join(fsdb.holdout_dir(), d)
for d in reversed(gfile.ListDirectory(fsdb.holdout_dir()))
if gfile.IsDirectory(os.path.join(fsdb.holdout_dir(), d))
for f in gfile.ListDirectory(os.path.join(fsdb.holdout_dir(), d)))
# This is a roundabout way of computing how many hourly directories we need
# to read in order to encompass 20,000 holdout games.
    holdout_dirs = set(itertools.islice(holdout_dirs, 20000))
cmd = ['python3', 'validate.py'] + list(holdout_dirs) + [
'--use_tpu',
'--tpu_name={}'.format(TPU_NAME),
'--flagfile=rl_loop/distributed_flags',
'--expand_validation_dirs']
mask_flags.run(cmd) | Validate on held-out selfplay data. | Below is the the instruction that describes the task:
### Input:
Validate on held-out selfplay data.
### Response:
def validate_holdout_selfplay():
"""Validate on held-out selfplay data."""
holdout_dirs = (os.path.join(fsdb.holdout_dir(), d)
for d in reversed(gfile.ListDirectory(fsdb.holdout_dir()))
if gfile.IsDirectory(os.path.join(fsdb.holdout_dir(), d))
for f in gfile.ListDirectory(os.path.join(fsdb.holdout_dir(), d)))
# This is a roundabout way of computing how many hourly directories we need
# to read in order to encompass 20,000 holdout games.
    holdout_dirs = set(itertools.islice(holdout_dirs, 20000))
cmd = ['python3', 'validate.py'] + list(holdout_dirs) + [
'--use_tpu',
'--tpu_name={}'.format(TPU_NAME),
'--flagfile=rl_loop/distributed_flags',
'--expand_validation_dirs']
mask_flags.run(cmd) |
def deeper_conv_block(conv_layer, kernel_size, weighted=True):
'''deeper conv layer.
'''
n_dim = get_n_dim(conv_layer)
filter_shape = (kernel_size,) * 2
n_filters = conv_layer.filters
weight = np.zeros((n_filters, n_filters) + filter_shape)
center = tuple(map(lambda x: int((x - 1) / 2), filter_shape))
for i in range(n_filters):
filter_weight = np.zeros((n_filters,) + filter_shape)
index = (i,) + center
filter_weight[index] = 1
weight[i, ...] = filter_weight
bias = np.zeros(n_filters)
new_conv_layer = get_conv_class(n_dim)(
conv_layer.filters, n_filters, kernel_size=kernel_size
)
bn = get_batch_norm_class(n_dim)(n_filters)
if weighted:
new_conv_layer.set_weights(
(add_noise(weight, np.array([0, 1])), add_noise(bias, np.array([0, 1])))
)
new_weights = [
add_noise(np.ones(n_filters, dtype=np.float32), np.array([0, 1])),
add_noise(np.zeros(n_filters, dtype=np.float32), np.array([0, 1])),
add_noise(np.zeros(n_filters, dtype=np.float32), np.array([0, 1])),
add_noise(np.ones(n_filters, dtype=np.float32), np.array([0, 1])),
]
bn.set_weights(new_weights)
return [StubReLU(), new_conv_layer, bn] | deeper conv layer. | Below is the the instruction that describes the task:
### Input:
deeper conv layer.
### Response:
def deeper_conv_block(conv_layer, kernel_size, weighted=True):
'''deeper conv layer.
'''
n_dim = get_n_dim(conv_layer)
filter_shape = (kernel_size,) * 2
n_filters = conv_layer.filters
weight = np.zeros((n_filters, n_filters) + filter_shape)
center = tuple(map(lambda x: int((x - 1) / 2), filter_shape))
for i in range(n_filters):
filter_weight = np.zeros((n_filters,) + filter_shape)
index = (i,) + center
filter_weight[index] = 1
weight[i, ...] = filter_weight
bias = np.zeros(n_filters)
new_conv_layer = get_conv_class(n_dim)(
conv_layer.filters, n_filters, kernel_size=kernel_size
)
bn = get_batch_norm_class(n_dim)(n_filters)
if weighted:
new_conv_layer.set_weights(
(add_noise(weight, np.array([0, 1])), add_noise(bias, np.array([0, 1])))
)
new_weights = [
add_noise(np.ones(n_filters, dtype=np.float32), np.array([0, 1])),
add_noise(np.zeros(n_filters, dtype=np.float32), np.array([0, 1])),
add_noise(np.zeros(n_filters, dtype=np.float32), np.array([0, 1])),
add_noise(np.ones(n_filters, dtype=np.float32), np.array([0, 1])),
]
bn.set_weights(new_weights)
return [StubReLU(), new_conv_layer, bn] |
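A standalone numpy check of the identity-kernel construction used above: each output channel copies its matching input channel, so the freshly inserted layer starts out as (roughly) an identity mapping.
import numpy as np
kernel_size, n_filters = 3, 4
center = (kernel_size - 1) // 2
weight = np.zeros((n_filters, n_filters, kernel_size, kernel_size))
for i in range(n_filters):
    weight[i, i, center, center] = 1.0   # delta kernel on the channel diagonal
# Convolving a feature map with `weight` (stride 1, same padding) reproduces the input.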
def __create_chunk(self, frequency, play_time, sample_rate):
'''
        Generate a chunk.
        Args:
            frequency: frequency
            play_time: playback time (seconds)
            sample_rate: sample rate
        Returns:
            numpy array of the chunk
'''
chunks = []
wave_form = self.wave_form.create(frequency, play_time, sample_rate)
chunks.append(wave_form)
chunk = numpy.concatenate(chunks)
    return chunk | Generate a chunk.
Args:
    frequency: frequency
    play_time: playback time (seconds)
    sample_rate: sample rate
Returns:
    numpy array of the chunk | Below is the the instruction that describes the task:
### Input:
Generate a chunk.
Args:
    frequency: frequency
    play_time: playback time (seconds)
    sample_rate: sample rate
Returns:
    numpy array of the chunk
### Response:
def __create_chunk(self, frequency, play_time, sample_rate):
'''
        Generate a chunk.
        Args:
            frequency: frequency
            play_time: playback time (seconds)
            sample_rate: sample rate
        Returns:
            numpy array of the chunk
'''
chunks = []
wave_form = self.wave_form.create(frequency, play_time, sample_rate)
chunks.append(wave_form)
chunk = numpy.concatenate(chunks)
return chunk |
def __get_max_date(self, reviews):
""""Get the max date in unixtime format from reviews."""
max_ts = 0
for review in reviews:
ts = str_to_datetime(review['timestamp'])
ts = datetime_to_utc(ts)
if ts.timestamp() > max_ts:
max_ts = ts.timestamp()
return max_ts | Get the max date in unixtime format from reviews. | Below is the the instruction that describes the task:
### Input:
Get the max date in unixtime format from reviews.
### Response:
def __get_max_date(self, reviews):
""""Get the max date in unixtime format from reviews."""
max_ts = 0
for review in reviews:
ts = str_to_datetime(review['timestamp'])
ts = datetime_to_utc(ts)
if ts.timestamp() > max_ts:
max_ts = ts.timestamp()
return max_ts |
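A standalone sketch of the same computation using only the standard library (datetime.fromisoformat stands in for the str_to_datetime/datetime_to_utc helpers assumed above):
from datetime import datetime, timezone
reviews = [{'timestamp': '2023-01-01T00:00:00+00:00'},
           {'timestamp': '2023-06-01T12:00:00+00:00'}]
max_ts = max(datetime.fromisoformat(r['timestamp']).astimezone(timezone.utc).timestamp()
             for r in reviews)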
def register(conf, conf_admin, **options):
"""
Register a new admin section.
:param conf: A subclass of ``djconfig.admin.Config``
:param conf_admin: A subclass of ``djconfig.admin.ConfigAdmin``
:param options: Extra options passed to ``django.contrib.admin.site.register``
"""
assert issubclass(conf_admin, ConfigAdmin), (
'conf_admin is not a ConfigAdmin subclass')
assert issubclass(
getattr(conf_admin, 'change_list_form', None),
ConfigForm), 'No change_list_form set'
assert issubclass(conf, Config), (
'conf is not a Config subclass')
assert conf.app_label, 'No app_label set'
assert conf.verbose_name_plural, 'No verbose_name_plural set'
assert not conf.name or re.match(r"^[a-zA-Z_]+$", conf.name), (
'Not a valid name. Valid chars are [a-zA-Z_]')
config_class = type("Config", (), {})
config_class._meta = type("Meta", (_ConfigMeta,), {
'app_label': conf.app_label,
'verbose_name_plural': conf.verbose_name_plural,
'object_name': 'Config',
'model_name': conf.name,
'module_name': conf.name})
admin.site.register([config_class], conf_admin, **options) | Register a new admin section.
:param conf: A subclass of ``djconfig.admin.Config``
:param conf_admin: A subclass of ``djconfig.admin.ConfigAdmin``
:param options: Extra options passed to ``django.contrib.admin.site.register`` | Below is the the instruction that describes the task:
### Input:
Register a new admin section.
:param conf: A subclass of ``djconfig.admin.Config``
:param conf_admin: A subclass of ``djconfig.admin.ConfigAdmin``
:param options: Extra options passed to ``django.contrib.admin.site.register``
### Response:
def register(conf, conf_admin, **options):
"""
Register a new admin section.
:param conf: A subclass of ``djconfig.admin.Config``
:param conf_admin: A subclass of ``djconfig.admin.ConfigAdmin``
:param options: Extra options passed to ``django.contrib.admin.site.register``
"""
assert issubclass(conf_admin, ConfigAdmin), (
'conf_admin is not a ConfigAdmin subclass')
assert issubclass(
getattr(conf_admin, 'change_list_form', None),
ConfigForm), 'No change_list_form set'
assert issubclass(conf, Config), (
'conf is not a Config subclass')
assert conf.app_label, 'No app_label set'
assert conf.verbose_name_plural, 'No verbose_name_plural set'
assert not conf.name or re.match(r"^[a-zA-Z_]+$", conf.name), (
'Not a valid name. Valid chars are [a-zA-Z_]')
config_class = type("Config", (), {})
config_class._meta = type("Meta", (_ConfigMeta,), {
'app_label': conf.app_label,
'verbose_name_plural': conf.verbose_name_plural,
'object_name': 'Config',
'model_name': conf.name,
'module_name': conf.name})
admin.site.register([config_class], conf_admin, **options) |
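A hedged usage sketch (only meaningful inside a configured Django project with djconfig installed; AppConfigForm is a hypothetical ConfigForm subclass):
# class AppConfig(Config):
#     app_label = 'myapp'
#     verbose_name_plural = 'my app settings'
#     name = 'myapp_config'
#
# class AppConfigAdmin(ConfigAdmin):
#     change_list_form = AppConfigForm
#
# register(AppConfig, AppConfigAdmin)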
def lock(self, atime=30, ltime=5, identifier=None):
'''Context manager to acquire the namespace global lock.
This is typically used for multi-step registry operations,
such as a read-modify-write sequence::
with registry.lock() as session:
d = session.get('dict', 'key')
del d['traceback']
session.set('dict', 'key', d)
Callers may provide their own `identifier`; if they do, they
must ensure that it is reasonably unique (e.g., a UUID).
Using a stored worker ID that is traceable back to the lock
holder is a good practice.
:param int atime: maximum time (in seconds) to acquire lock
:param int ltime: maximum time (in seconds) to own lock
:param str identifier: worker-unique identifier for the lock
'''
if identifier is None:
identifier = nice_identifier()
if self._acquire_lock(identifier, atime, ltime) != identifier:
raise LockError("could not acquire lock")
try:
self._session_lock_identifier = identifier
yield self
finally:
self._release_lock(identifier)
self._session_lock_identifier = None | Context manager to acquire the namespace global lock.
This is typically used for multi-step registry operations,
such as a read-modify-write sequence::
with registry.lock() as session:
d = session.get('dict', 'key')
del d['traceback']
session.set('dict', 'key', d)
Callers may provide their own `identifier`; if they do, they
must ensure that it is reasonably unique (e.g., a UUID).
Using a stored worker ID that is traceable back to the lock
holder is a good practice.
:param int atime: maximum time (in seconds) to acquire lock
:param int ltime: maximum time (in seconds) to own lock
:param str identifier: worker-unique identifier for the lock | Below is the the instruction that describes the task:
### Input:
Context manager to acquire the namespace global lock.
This is typically used for multi-step registry operations,
such as a read-modify-write sequence::
with registry.lock() as session:
d = session.get('dict', 'key')
del d['traceback']
session.set('dict', 'key', d)
Callers may provide their own `identifier`; if they do, they
must ensure that it is reasonably unique (e.g., a UUID).
Using a stored worker ID that is traceable back to the lock
holder is a good practice.
:param int atime: maximum time (in seconds) to acquire lock
:param int ltime: maximum time (in seconds) to own lock
:param str identifier: worker-unique identifier for the lock
### Response:
def lock(self, atime=30, ltime=5, identifier=None):
'''Context manager to acquire the namespace global lock.
This is typically used for multi-step registry operations,
such as a read-modify-write sequence::
with registry.lock() as session:
d = session.get('dict', 'key')
del d['traceback']
session.set('dict', 'key', d)
Callers may provide their own `identifier`; if they do, they
must ensure that it is reasonably unique (e.g., a UUID).
Using a stored worker ID that is traceable back to the lock
holder is a good practice.
:param int atime: maximum time (in seconds) to acquire lock
:param int ltime: maximum time (in seconds) to own lock
:param str identifier: worker-unique identifier for the lock
'''
if identifier is None:
identifier = nice_identifier()
if self._acquire_lock(identifier, atime, ltime) != identifier:
raise LockError("could not acquire lock")
try:
self._session_lock_identifier = identifier
yield self
finally:
self._release_lock(identifier)
self._session_lock_identifier = None |
def find_package(name, installed, package=False):
'''Finds a package in the installed list.
If `package` is true, match package names, otherwise, match import paths.
'''
if package:
name = name.lower()
tests = (
lambda x: x.user and name == x.name.lower(),
lambda x: x.local and name == x.name.lower(),
lambda x: name == x.name.lower(),
)
else:
tests = (
lambda x: x.user and name in x.import_names,
lambda x: x.local and name in x.import_names,
lambda x: name in x.import_names,
)
for t in tests:
try:
found = list(filter(t, installed))
if found and not found[0].is_scan:
return found[0]
except StopIteration:
pass
return None | Finds a package in the installed list.
If `package` is true, match package names, otherwise, match import paths. | Below is the the instruction that describes the task:
### Input:
Finds a package in the installed list.
If `package` is true, match package names, otherwise, match import paths.
### Response:
def find_package(name, installed, package=False):
'''Finds a package in the installed list.
If `package` is true, match package names, otherwise, match import paths.
'''
if package:
name = name.lower()
tests = (
lambda x: x.user and name == x.name.lower(),
lambda x: x.local and name == x.name.lower(),
lambda x: name == x.name.lower(),
)
else:
tests = (
lambda x: x.user and name in x.import_names,
lambda x: x.local and name in x.import_names,
lambda x: name in x.import_names,
)
for t in tests:
try:
found = list(filter(t, installed))
if found and not found[0].is_scan:
return found[0]
except StopIteration:
pass
return None |
def regxy(pattern, response, supress_regex, custom):
"""Extract a string based on regex pattern supplied by user."""
try:
matches = re.findall(r'%s' % pattern, response)
for match in matches:
verb('Custom regex', match)
custom.add(match)
except:
supress_regex = True | Extract a string based on regex pattern supplied by user. | Below is the the instruction that describes the task:
### Input:
Extract a string based on regex pattern supplied by user.
### Response:
def regxy(pattern, response, supress_regex, custom):
"""Extract a string based on regex pattern supplied by user."""
try:
matches = re.findall(r'%s' % pattern, response)
for match in matches:
verb('Custom regex', match)
custom.add(match)
except:
supress_regex = True |
def squeezenet1_1(pretrained=False, **kwargs):
r"""SqueezeNet 1.1 model from the `official SqueezeNet repo
<https://github.com/DeepScale/SqueezeNet/tree/master/SqueezeNet_v1.1>`_.
SqueezeNet 1.1 has 2.4x less computation and slightly fewer parameters
than SqueezeNet 1.0, without sacrificing accuracy.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = SqueezeNet(version=1.1, **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['squeezenet1_1']))
return model | r"""SqueezeNet 1.1 model from the `official SqueezeNet repo
<https://github.com/DeepScale/SqueezeNet/tree/master/SqueezeNet_v1.1>`_.
SqueezeNet 1.1 has 2.4x less computation and slightly fewer parameters
than SqueezeNet 1.0, without sacrificing accuracy.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet | Below is the the instruction that describes the task:
### Input:
r"""SqueezeNet 1.1 model from the `official SqueezeNet repo
<https://github.com/DeepScale/SqueezeNet/tree/master/SqueezeNet_v1.1>`_.
SqueezeNet 1.1 has 2.4x less computation and slightly fewer parameters
than SqueezeNet 1.0, without sacrificing accuracy.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
### Response:
def squeezenet1_1(pretrained=False, **kwargs):
r"""SqueezeNet 1.1 model from the `official SqueezeNet repo
<https://github.com/DeepScale/SqueezeNet/tree/master/SqueezeNet_v1.1>`_.
SqueezeNet 1.1 has 2.4x less computation and slightly fewer parameters
than SqueezeNet 1.0, without sacrificing accuracy.
Args:
pretrained (bool): If True, returns a model pre-trained on ImageNet
"""
model = SqueezeNet(version=1.1, **kwargs)
if pretrained:
model.load_state_dict(model_zoo.load_url(model_urls['squeezenet1_1']))
return model |
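A short usage sketch (assumes torch and the surrounding module with the SqueezeNet class are importable; weights are only downloaded when pretrained=True):
import torch
model = squeezenet1_1(pretrained=False)
logits = model(torch.randn(1, 3, 224, 224))   # forward pass on a dummy batch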
def main(args):
"""
Nibble's entry point.
:param args: Command-line arguments, with the program in position 0.
"""
args = _parse_args(args)
# sort out logging output and level
level = util.log_level_from_vebosity(args.verbosity)
root = logging.getLogger()
root.setLevel(level)
handler = logging.StreamHandler(sys.stdout)
handler.setLevel(level)
handler.setFormatter(logging.Formatter('%(levelname)s %(message)s'))
root.addHandler(handler)
logger.debug(args)
expression = ' '.join(args.expression)
try:
print(Parser().parse(expression))
except (LexingError, ParsingError) as e:
util.print_error(e)
return 1
return 0 | Nibble's entry point.
:param args: Command-line arguments, with the program in position 0. | Below is the the instruction that describes the task:
### Input:
Nibble's entry point.
:param args: Command-line arguments, with the program in position 0.
### Response:
def main(args):
"""
Nibble's entry point.
:param args: Command-line arguments, with the program in position 0.
"""
args = _parse_args(args)
# sort out logging output and level
level = util.log_level_from_vebosity(args.verbosity)
root = logging.getLogger()
root.setLevel(level)
handler = logging.StreamHandler(sys.stdout)
handler.setLevel(level)
handler.setFormatter(logging.Formatter('%(levelname)s %(message)s'))
root.addHandler(handler)
logger.debug(args)
expression = ' '.join(args.expression)
try:
print(Parser().parse(expression))
except (LexingError, ParsingError) as e:
util.print_error(e)
return 1
return 0 |
def handle(self, targetdir, app=None, **options):
""" command execution """
translation.activate(settings.LANGUAGE_CODE)
if app:
unpack = app.split('.')
if len(unpack) == 2:
models = [get_model(unpack[0], unpack[1])]
elif len(unpack) == 1:
models = get_models(get_app(unpack[0]))
else:
models = get_models()
messagemaker = MakeModelMessages(targetdir)
for model in models:
if hasattr(model, 'localized_fields'):
for instance in model.objects.all():
messagemaker(instance) | command execution | Below is the the instruction that describes the task:
### Input:
command execution
### Response:
def handle(self, targetdir, app=None, **options):
""" command execution """
translation.activate(settings.LANGUAGE_CODE)
if app:
unpack = app.split('.')
if len(unpack) == 2:
models = [get_model(unpack[0], unpack[1])]
elif len(unpack) == 1:
models = get_models(get_app(unpack[0]))
else:
models = get_models()
messagemaker = MakeModelMessages(targetdir)
for model in models:
if hasattr(model, 'localized_fields'):
for instance in model.objects.all():
messagemaker(instance) |
def handle_input(self, input_str, place=True, check=False):
'''Transfer user input to valid chess position'''
user = self.get_player()
pos = self.validate_input(input_str)
if pos[0] == 'u':
self.undo(pos[1])
return pos
if place:
result = self.set_pos(pos, check)
return result
else:
return pos | Transfer user input to valid chess position | Below is the the instruction that describes the task:
### Input:
Transfer user input to valid chess position
### Response:
def handle_input(self, input_str, place=True, check=False):
'''Transfer user input to valid chess position'''
user = self.get_player()
pos = self.validate_input(input_str)
if pos[0] == 'u':
self.undo(pos[1])
return pos
if place:
result = self.set_pos(pos, check)
return result
else:
return pos |
def predict(self, data):
"""
Predict new values by running data through the fit model.
Parameters
----------
data : pandas.DataFrame
Table with columns corresponding to the RHS of `model_expression`.
Returns
-------
predicted : ndarray
Array of predicted values.
"""
with log_start_finish('_FakeRegressionResults prediction', logger):
model_design = dmatrix(
self._rhs, data=data, return_type='dataframe')
return model_design.dot(self.params).values | Predict new values by running data through the fit model.
Parameters
----------
data : pandas.DataFrame
Table with columns corresponding to the RHS of `model_expression`.
Returns
-------
predicted : ndarray
Array of predicted values. | Below is the the instruction that describes the task:
### Input:
Predict new values by running data through the fit model.
Parameters
----------
data : pandas.DataFrame
Table with columns corresponding to the RHS of `model_expression`.
Returns
-------
predicted : ndarray
Array of predicted values.
### Response:
def predict(self, data):
"""
Predict new values by running data through the fit model.
Parameters
----------
data : pandas.DataFrame
Table with columns corresponding to the RHS of `model_expression`.
Returns
-------
predicted : ndarray
Array of predicted values.
"""
with log_start_finish('_FakeRegressionResults prediction', logger):
model_design = dmatrix(
self._rhs, data=data, return_type='dataframe')
return model_design.dot(self.params).values |
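A standalone illustration of the dmatrix-dot-params prediction performed above (requires pandas and patsy):
import pandas as pd
from patsy import dmatrix
data = pd.DataFrame({'x': [1.0, 2.0, 3.0]})
design = dmatrix('x', data=data, return_type='dataframe')   # columns: Intercept, x
params = pd.Series({'Intercept': 0.5, 'x': 2.0})
predicted = design.dot(params).values                       # 0.5 + 2*x for each row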
def iflag_unique_items(list_):
"""
Returns a list of flags corresponding to the first time an item is seen
Args:
list_ (list): list of items
Returns:
flag_iter
"""
seen = set()
def unseen(item):
if item in seen:
return False
seen.add(item)
return True
flag_iter = (unseen(item) for item in list_)
return flag_iter | Returns a list of flags corresponding to the first time an item is seen
Args:
list_ (list): list of items
Returns:
flag_iter | Below is the the instruction that describes the task:
### Input:
Returns a list of flags corresponding to the first time an item is seen
Args:
list_ (list): list of items
Returns:
flag_iter
### Response:
def iflag_unique_items(list_):
"""
Returns a list of flags corresponding to the first time an item is seen
Args:
list_ (list): list of items
Returns:
flag_iter
"""
seen = set()
def unseen(item):
if item in seen:
return False
seen.add(item)
return True
flag_iter = (unseen(item) for item in list_)
return flag_iter |
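A quick check of the flag generator above:
flags = list(iflag_unique_items(['a', 'b', 'a', 'c', 'b']))
# flags == [True, True, False, True, False]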
def check_file_version(notebook, source_path, outputs_path):
"""Raise if file version in source file would override outputs"""
if not insert_or_test_version_number():
return
_, ext = os.path.splitext(source_path)
if ext.endswith('.ipynb'):
return
version = notebook.metadata.get('jupytext', {}).get('text_representation', {}).get('format_version')
format_name = format_name_for_ext(notebook.metadata, ext)
fmt = get_format_implementation(ext, format_name)
current = fmt.current_version_number
# Missing version, still generated by jupytext?
if notebook.metadata and not version:
version = current
# Same version? OK
if version == fmt.current_version_number:
return
# Version larger than minimum readable version
if (fmt.min_readable_version_number or current) <= version <= current:
return
raise JupytextFormatError("File {} is in format/version={}/{} (current version is {}). "
"It would not be safe to override the source of {} with that file. "
"Please remove one or the other file."
.format(os.path.basename(source_path),
format_name, version, current,
os.path.basename(outputs_path))) | Raise if file version in source file would override outputs | Below is the the instruction that describes the task:
### Input:
Raise if file version in source file would override outputs
### Response:
def check_file_version(notebook, source_path, outputs_path):
"""Raise if file version in source file would override outputs"""
if not insert_or_test_version_number():
return
_, ext = os.path.splitext(source_path)
if ext.endswith('.ipynb'):
return
version = notebook.metadata.get('jupytext', {}).get('text_representation', {}).get('format_version')
format_name = format_name_for_ext(notebook.metadata, ext)
fmt = get_format_implementation(ext, format_name)
current = fmt.current_version_number
# Missing version, still generated by jupytext?
if notebook.metadata and not version:
version = current
# Same version? OK
if version == fmt.current_version_number:
return
# Version larger than minimum readable version
if (fmt.min_readable_version_number or current) <= version <= current:
return
raise JupytextFormatError("File {} is in format/version={}/{} (current version is {}). "
"It would not be safe to override the source of {} with that file. "
"Please remove one or the other file."
.format(os.path.basename(source_path),
format_name, version, current,
os.path.basename(outputs_path))) |
def update(self):
"""Wrapper method to update the stats."""
# For standalone and server modes
# For each plugins, call the update method
for p in self._plugins:
if self._plugins[p].is_disable():
# If current plugin is disable
# then continue to next plugin
continue
# Update the stats...
self._plugins[p].update()
# ... the history
self._plugins[p].update_stats_history()
# ... and the views
self._plugins[p].update_views() | Wrapper method to update the stats. | Below is the the instruction that describes the task:
### Input:
Wrapper method to update the stats.
### Response:
def update(self):
"""Wrapper method to update the stats."""
# For standalone and server modes
# For each plugins, call the update method
for p in self._plugins:
if self._plugins[p].is_disable():
# If current plugin is disable
# then continue to next plugin
continue
# Update the stats...
self._plugins[p].update()
# ... the history
self._plugins[p].update_stats_history()
# ... and the views
self._plugins[p].update_views() |
def get_themes(templates_path):
"""Returns available themes list."""
themes = os.listdir(templates_path)
if '__common__' in themes:
themes.remove('__common__')
return themes | Returns available themes list. | Below is the the instruction that describes the task:
### Input:
Returns available themes list.
### Response:
def get_themes(templates_path):
"""Returns available themes list."""
themes = os.listdir(templates_path)
if '__common__' in themes:
themes.remove('__common__')
return themes |
def makeNodeTuple(citation, idVal, nodeInfo, fullInfo, nodeType, count, coreCitesDict, coreValues, detailedValues, addCR):
"""Makes a tuple of idVal and a dict of the selected attributes"""
d = {}
if nodeInfo:
if nodeType == 'full':
if coreValues:
if citation in coreCitesDict:
R = coreCitesDict[citation]
d['MK-ID'] = R.id
if not detailedValues:
infoVals = []
for tag in coreValues:
tagVal = R.get(tag)
if isinstance(tagVal, str):
infoVals.append(tagVal.replace(',',''))
elif isinstance(tagVal, list):
infoVals.append(tagVal[0].replace(',',''))
else:
pass
d['info'] = ', '.join(infoVals)
else:
for tag in coreValues:
v = R.get(tag, None)
if isinstance(v, list):
d[tag] = '|'.join(sorted(v))
else:
d[tag] = v
d['inCore'] = True
if addCR:
d['citations'] = '|'.join((str(c) for c in R.get('citations', [])))
else:
d['MK-ID'] = 'None'
d['info'] = citation.allButDOI()
d['inCore'] = False
if addCR:
d['citations'] = ''
else:
d['info'] = citation.allButDOI()
elif nodeType == 'journal':
if citation.isJournal():
d['info'] = str(citation.FullJournalName())
else:
d['info'] = "None"
elif nodeType == 'original':
d['info'] = str(citation)
else:
d['info'] = idVal
if fullInfo:
d['fullCite'] = str(citation)
if count:
d['count'] = 1
return (idVal, d) | Makes a tuple of idVal and a dict of the selected attributes | Below is the the instruction that describes the task:
### Input:
Makes a tuple of idVal and a dict of the selected attributes
### Response:
def makeNodeTuple(citation, idVal, nodeInfo, fullInfo, nodeType, count, coreCitesDict, coreValues, detailedValues, addCR):
"""Makes a tuple of idVal and a dict of the selected attributes"""
d = {}
if nodeInfo:
if nodeType == 'full':
if coreValues:
if citation in coreCitesDict:
R = coreCitesDict[citation]
d['MK-ID'] = R.id
if not detailedValues:
infoVals = []
for tag in coreValues:
tagVal = R.get(tag)
if isinstance(tagVal, str):
infoVals.append(tagVal.replace(',',''))
elif isinstance(tagVal, list):
infoVals.append(tagVal[0].replace(',',''))
else:
pass
d['info'] = ', '.join(infoVals)
else:
for tag in coreValues:
v = R.get(tag, None)
if isinstance(v, list):
d[tag] = '|'.join(sorted(v))
else:
d[tag] = v
d['inCore'] = True
if addCR:
d['citations'] = '|'.join((str(c) for c in R.get('citations', [])))
else:
d['MK-ID'] = 'None'
d['info'] = citation.allButDOI()
d['inCore'] = False
if addCR:
d['citations'] = ''
else:
d['info'] = citation.allButDOI()
elif nodeType == 'journal':
if citation.isJournal():
d['info'] = str(citation.FullJournalName())
else:
d['info'] = "None"
elif nodeType == 'original':
d['info'] = str(citation)
else:
d['info'] = idVal
if fullInfo:
d['fullCite'] = str(citation)
if count:
d['count'] = 1
return (idVal, d) |
def get_best_import_key(self, import_keys):
"""
Picks best RSA key for import from the import keys arrays.
:param import_keys:
:return:
"""
rsa2048 = None
rsa1024 = None
for c_key in import_keys:
if c_key is None \
or 'type' not in c_key \
or c_key['type'] is None:
logger.info("Invalid key: %s", c_key)
continue
if rsa1024 is None and c_key['type'] == 'rsa1024':
rsa1024 = c_key
if rsa2048 is None and c_key['type'] == 'rsa2048':
rsa2048 = c_key
return rsa2048 if rsa2048 is not None else rsa1024 | Picks best RSA key for import from the import keys arrays.
:param import_keys:
:return: | Below is the the instruction that describes the task:
### Input:
Picks best RSA key for import from the import keys arrays.
:param import_keys:
:return:
### Response:
def get_best_import_key(self, import_keys):
"""
Picks best RSA key for import from the import keys arrays.
:param import_keys:
:return:
"""
rsa2048 = None
rsa1024 = None
for c_key in import_keys:
if c_key is None \
or 'type' not in c_key \
or c_key['type'] is None:
logger.info("Invalid key: %s", c_key)
continue
if rsa1024 is None and c_key['type'] == 'rsa1024':
rsa1024 = c_key
if rsa2048 is None and c_key['type'] == 'rsa2048':
rsa2048 = c_key
return rsa2048 if rsa2048 is not None else rsa1024 |
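A small illustration with plain dicts (assumes `importer` is an instance of the enclosing class; key payloads are placeholders):
# import_keys = [{'type': 'rsa1024', 'key': '...'},
#                {'type': 'rsa2048', 'key': '...'}]
# best = importer.get_best_import_key(import_keys)   # picks the rsa2048 entry when one exists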
def section(self, dept, course_number, sect_number):
"""Return a single section object for the given section. All arguments
should be strings. Throws a `ValueError` if the section is not found.
>>> lgst101_bfs = r.course('lgst', '101', '301')
"""
section_id = dept + course_number + sect_number
sections = self.search({'course_id': section_id})
try:
return next(sections)
except StopIteration:
raise ValueError('Section %s not found' % section_id) | Return a single section object for the given section. All arguments
should be strings. Throws a `ValueError` if the section is not found.
>>> lgst101_bfs = r.course('lgst', '101', '301') | Below is the the instruction that describes the task:
### Input:
Return a single section object for the given section. All arguments
should be strings. Throws a `ValueError` if the section is not found.
>>> lgst101_bfs = r.course('lgst', '101', '301')
### Response:
def section(self, dept, course_number, sect_number):
"""Return a single section object for the given section. All arguments
should be strings. Throws a `ValueError` if the section is not found.
>>> lgst101_bfs = r.course('lgst', '101', '301')
"""
section_id = dept + course_number + sect_number
sections = self.search({'course_id': section_id})
try:
return next(sections)
except StopIteration:
raise ValueError('Section %s not found' % section_id) |
def get_next_page(self):
"""Returns the next page of results as a sequence of Album objects."""
master_node = self._retrieve_next_page()
seq = []
for node in master_node.getElementsByTagName("album"):
seq.append(
Album(
_extract(node, "artist"),
_extract(node, "name"),
self.network,
info={"image": _extract_all(node, "image")},
)
)
return seq | Returns the next page of results as a sequence of Album objects. | Below is the the instruction that describes the task:
### Input:
Returns the next page of results as a sequence of Album objects.
### Response:
def get_next_page(self):
"""Returns the next page of results as a sequence of Album objects."""
master_node = self._retrieve_next_page()
seq = []
for node in master_node.getElementsByTagName("album"):
seq.append(
Album(
_extract(node, "artist"),
_extract(node, "name"),
self.network,
info={"image": _extract_all(node, "image")},
)
)
return seq |
def stack(self, value):
"""The stack property.
Args:
value (string). the property value.
"""
if value == self._defaults['stack'] and 'stack' in self._values:
del self._values['stack']
else:
self._values['stack'] = value | The stack property.
Args:
value (string). the property value. | Below is the the instruction that describes the task:
### Input:
The stack property.
Args:
value (string). the property value.
### Response:
def stack(self, value):
"""The stack property.
Args:
value (string). the property value.
"""
if value == self._defaults['stack'] and 'stack' in self._values:
del self._values['stack']
else:
self._values['stack'] = value |
def Serialize(self, writer):
"""
Serialize full object.
Args:
writer (neo.IO.BinaryWriter):
"""
super(StorageItem, self).Serialize(writer)
writer.WriteVarBytes(self.Value) | Serialize full object.
Args:
writer (neo.IO.BinaryWriter): | Below is the the instruction that describes the task:
### Input:
Serialize full object.
Args:
writer (neo.IO.BinaryWriter):
### Response:
def Serialize(self, writer):
"""
Serialize full object.
Args:
writer (neo.IO.BinaryWriter):
"""
super(StorageItem, self).Serialize(writer)
writer.WriteVarBytes(self.Value) |
def handler(event, context): # pylint: disable=W0613
"""
Historical security group event differ.
Listens to the Historical current table and determines if there are differences that need to be persisted in the
historical record.
"""
# De-serialize the records:
records = deserialize_records(event['Records'])
for record in records:
process_dynamodb_differ_record(record, CurrentSecurityGroupModel, DurableSecurityGroupModel) | Historical security group event differ.
Listens to the Historical current table and determines if there are differences that need to be persisted in the
historical record. | Below is the the instruction that describes the task:
### Input:
Historical security group event differ.
Listens to the Historical current table and determines if there are differences that need to be persisted in the
historical record.
### Response:
def handler(event, context): # pylint: disable=W0613
"""
Historical security group event differ.
Listens to the Historical current table and determines if there are differences that need to be persisted in the
historical record.
"""
# De-serialize the records:
records = deserialize_records(event['Records'])
for record in records:
process_dynamodb_differ_record(record, CurrentSecurityGroupModel, DurableSecurityGroupModel) |
def MD_restrained(dirname='MD_POSRES', **kwargs):
"""Set up MD with position restraints.
Additional itp files should be in the same directory as the top file.
Many of the keyword arguments below already have sensible values. Note that
setting *mainselection* = ``None`` will disable many of the automated
choices and is often recommended when using your own mdp file.
:Keywords:
*dirname*
set up under directory dirname [MD_POSRES]
*struct*
input structure (gro, pdb, ...) [em/em.pdb]
*top*
topology file [top/system.top]
*mdp*
mdp file (or use the template) [templates/md.mdp]
*ndx*
index file (supply when using a custom mdp)
*includes*
additional directories to search for itp files
*mainselection*
:program:`make_ndx` selection to select main group ["Protein"]
(If ``None`` then no canonical index file is generated and
it is the user's responsibility to set *tc_grps*,
*tau_t*, and *ref_t* as keyword arguments, or provide the mdp template
with all parameter pre-set in *mdp* and probably also your own *ndx*
index file.)
*deffnm*
default filename for Gromacs run [md]
*runtime*
total length of the simulation in ps [1000]
*dt*
integration time step in ps [0.002]
*qscript*
script to submit to the queuing system; by default
uses the template :data:`gromacs.config.qscript_template`, which can
be manually set to another template from :data:`gromacs.config.templates`;
can also be a list of template names.
*qname*
name to be used for the job in the queuing system [PR_GMX]
*mdrun_opts*
option flags for the :program:`mdrun` command in the queuing system
scripts such as "-stepout 100". [""]
*kwargs*
remaining key/value pairs that should be changed in the template mdp
file, eg ``nstxtcout=250, nstfout=250`` or command line options for
        ``grompp`` such as ``maxwarn=1``.
In particular one can also set **define** and activate
whichever position restraints have been coded into the itp
and top file. For instance one could have
*define* = "-DPOSRES_MainChain -DPOSRES_LIGAND"
if these preprocessor constructs exist. Note that there
**must not be any space between "-D" and the value.**
By default *define* is set to "-DPOSRES".
:Returns: a dict that can be fed into :func:`gromacs.setup.MD`
(but check, just in case, especially if you want to
change the ``define`` parameter in the mdp file)
.. Note:: The output frequency is drastically reduced for position
restraint runs by default. Set the corresponding ``nst*``
variables if you require more output. The `pressure coupling`_
option *refcoord_scaling* is set to "com" by default (but can
be changed via *kwargs*) and the pressure coupling
algorithm itself is set to *Pcoupl* = "Berendsen" to
run a stable simulation.
.. _`pressure coupling`: http://manual.gromacs.org/online/mdp_opt.html#pc
"""
logger.info("[{dirname!s}] Setting up MD with position restraints...".format(**vars()))
kwargs.setdefault('struct', 'em/em.pdb')
kwargs.setdefault('qname', 'PR_GMX')
kwargs.setdefault('define', '-DPOSRES')
# reduce size of output files
kwargs.setdefault('nstxout', '50000') # trr pos
kwargs.setdefault('nstvout', '50000') # trr veloc
kwargs.setdefault('nstfout', '0') # trr forces
kwargs.setdefault('nstlog', '500') # log file
kwargs.setdefault('nstenergy', '2500') # edr energy
kwargs.setdefault('nstxtcout', '5000') # xtc pos
# try to get good pressure equilibration
kwargs.setdefault('refcoord_scaling', 'com')
kwargs.setdefault('Pcoupl', "Berendsen")
new_kwargs = _setup_MD(dirname, **kwargs)
# clean up output kwargs
new_kwargs.pop('define', None) # but make sure that -DPOSRES does not stay...
new_kwargs.pop('refcoord_scaling', None)
new_kwargs.pop('Pcoupl', None)
return new_kwargs | Set up MD with position restraints.
Additional itp files should be in the same directory as the top file.
Many of the keyword arguments below already have sensible values. Note that
setting *mainselection* = ``None`` will disable many of the automated
choices and is often recommended when using your own mdp file.
:Keywords:
*dirname*
set up under directory dirname [MD_POSRES]
*struct*
input structure (gro, pdb, ...) [em/em.pdb]
*top*
topology file [top/system.top]
*mdp*
mdp file (or use the template) [templates/md.mdp]
*ndx*
index file (supply when using a custom mdp)
*includes*
additional directories to search for itp files
*mainselection*
:program:`make_ndx` selection to select main group ["Protein"]
(If ``None`` then no canonical index file is generated and
it is the user's responsibility to set *tc_grps*,
*tau_t*, and *ref_t* as keyword arguments, or provide the mdp template
with all parameter pre-set in *mdp* and probably also your own *ndx*
index file.)
*deffnm*
default filename for Gromacs run [md]
*runtime*
total length of the simulation in ps [1000]
*dt*
integration time step in ps [0.002]
*qscript*
script to submit to the queuing system; by default
uses the template :data:`gromacs.config.qscript_template`, which can
be manually set to another template from :data:`gromacs.config.templates`;
can also be a list of template names.
*qname*
name to be used for the job in the queuing system [PR_GMX]
*mdrun_opts*
option flags for the :program:`mdrun` command in the queuing system
scripts such as "-stepout 100". [""]
*kwargs*
remaining key/value pairs that should be changed in the template mdp
file, eg ``nstxtcout=250, nstfout=250`` or command line options for
        ``grompp`` such as ``maxwarn=1``.
In particular one can also set **define** and activate
whichever position restraints have been coded into the itp
and top file. For instance one could have
*define* = "-DPOSRES_MainChain -DPOSRES_LIGAND"
if these preprocessor constructs exist. Note that there
**must not be any space between "-D" and the value.**
By default *define* is set to "-DPOSRES".
:Returns: a dict that can be fed into :func:`gromacs.setup.MD`
(but check, just in case, especially if you want to
change the ``define`` parameter in the mdp file)
.. Note:: The output frequency is drastically reduced for position
restraint runs by default. Set the corresponding ``nst*``
variables if you require more output. The `pressure coupling`_
option *refcoord_scaling* is set to "com" by default (but can
be changed via *kwargs*) and the pressure coupling
algorithm itself is set to *Pcoupl* = "Berendsen" to
run a stable simulation.
.. _`pressure coupling`: http://manual.gromacs.org/online/mdp_opt.html#pc | Below is the the instruction that describes the task:
### Input:
Set up MD with position restraints.
Additional itp files should be in the same directory as the top file.
Many of the keyword arguments below already have sensible values. Note that
setting *mainselection* = ``None`` will disable many of the automated
choices and is often recommended when using your own mdp file.
:Keywords:
*dirname*
set up under directory dirname [MD_POSRES]
*struct*
input structure (gro, pdb, ...) [em/em.pdb]
*top*
topology file [top/system.top]
*mdp*
mdp file (or use the template) [templates/md.mdp]
*ndx*
index file (supply when using a custom mdp)
*includes*
additional directories to search for itp files
*mainselection*
:program:`make_ndx` selection to select main group ["Protein"]
(If ``None`` then no canonical index file is generated and
it is the user's responsibility to set *tc_grps*,
*tau_t*, and *ref_t* as keyword arguments, or provide the mdp template
with all parameter pre-set in *mdp* and probably also your own *ndx*
index file.)
*deffnm*
default filename for Gromacs run [md]
*runtime*
total length of the simulation in ps [1000]
*dt*
integration time step in ps [0.002]
*qscript*
script to submit to the queuing system; by default
uses the template :data:`gromacs.config.qscript_template`, which can
be manually set to another template from :data:`gromacs.config.templates`;
can also be a list of template names.
*qname*
name to be used for the job in the queuing system [PR_GMX]
*mdrun_opts*
option flags for the :program:`mdrun` command in the queuing system
scripts such as "-stepout 100". [""]
*kwargs*
remaining key/value pairs that should be changed in the template mdp
file, eg ``nstxtcout=250, nstfout=250`` or command line options for
        ``grompp`` such as ``maxwarn=1``.
In particular one can also set **define** and activate
whichever position restraints have been coded into the itp
and top file. For instance one could have
*define* = "-DPOSRES_MainChain -DPOSRES_LIGAND"
if these preprocessor constructs exist. Note that there
**must not be any space between "-D" and the value.**
By default *define* is set to "-DPOSRES".
:Returns: a dict that can be fed into :func:`gromacs.setup.MD`
(but check, just in case, especially if you want to
change the ``define`` parameter in the mdp file)
.. Note:: The output frequency is drastically reduced for position
restraint runs by default. Set the corresponding ``nst*``
variables if you require more output. The `pressure coupling`_
option *refcoord_scaling* is set to "com" by default (but can
be changed via *kwargs*) and the pressure coupling
algorithm itself is set to *Pcoupl* = "Berendsen" to
run a stable simulation.
.. _`pressure coupling`: http://manual.gromacs.org/online/mdp_opt.html#pc
### Response:
def MD_restrained(dirname='MD_POSRES', **kwargs):
"""Set up MD with position restraints.
Additional itp files should be in the same directory as the top file.
Many of the keyword arguments below already have sensible values. Note that
setting *mainselection* = ``None`` will disable many of the automated
choices and is often recommended when using your own mdp file.
:Keywords:
*dirname*
set up under directory dirname [MD_POSRES]
*struct*
input structure (gro, pdb, ...) [em/em.pdb]
*top*
topology file [top/system.top]
*mdp*
mdp file (or use the template) [templates/md.mdp]
*ndx*
index file (supply when using a custom mdp)
*includes*
additional directories to search for itp files
*mainselection*
:program:`make_ndx` selection to select main group ["Protein"]
(If ``None`` then no canonical index file is generated and
it is the user's responsibility to set *tc_grps*,
*tau_t*, and *ref_t* as keyword arguments, or provide the mdp template
with all parameter pre-set in *mdp* and probably also your own *ndx*
index file.)
*deffnm*
default filename for Gromacs run [md]
*runtime*
total length of the simulation in ps [1000]
*dt*
integration time step in ps [0.002]
*qscript*
script to submit to the queuing system; by default
uses the template :data:`gromacs.config.qscript_template`, which can
be manually set to another template from :data:`gromacs.config.templates`;
can also be a list of template names.
*qname*
name to be used for the job in the queuing system [PR_GMX]
*mdrun_opts*
option flags for the :program:`mdrun` command in the queuing system
scripts such as "-stepout 100". [""]
*kwargs*
remaining key/value pairs that should be changed in the template mdp
file, eg ``nstxtcout=250, nstfout=250`` or command line options for
        ``grompp`` such as ``maxwarn=1``.
In particular one can also set **define** and activate
whichever position restraints have been coded into the itp
and top file. For instance one could have
*define* = "-DPOSRES_MainChain -DPOSRES_LIGAND"
if these preprocessor constructs exist. Note that there
**must not be any space between "-D" and the value.**
By default *define* is set to "-DPOSRES".
:Returns: a dict that can be fed into :func:`gromacs.setup.MD`
(but check, just in case, especially if you want to
change the ``define`` parameter in the mdp file)
.. Note:: The output frequency is drastically reduced for position
restraint runs by default. Set the corresponding ``nst*``
variables if you require more output. The `pressure coupling`_
option *refcoord_scaling* is set to "com" by default (but can
be changed via *kwargs*) and the pressure coupling
algorithm itself is set to *Pcoupl* = "Berendsen" to
run a stable simulation.
.. _`pressure coupling`: http://manual.gromacs.org/online/mdp_opt.html#pc
"""
logger.info("[{dirname!s}] Setting up MD with position restraints...".format(**vars()))
kwargs.setdefault('struct', 'em/em.pdb')
kwargs.setdefault('qname', 'PR_GMX')
kwargs.setdefault('define', '-DPOSRES')
# reduce size of output files
kwargs.setdefault('nstxout', '50000') # trr pos
kwargs.setdefault('nstvout', '50000') # trr veloc
kwargs.setdefault('nstfout', '0') # trr forces
kwargs.setdefault('nstlog', '500') # log file
kwargs.setdefault('nstenergy', '2500') # edr energy
kwargs.setdefault('nstxtcout', '5000') # xtc pos
# try to get good pressure equilibration
kwargs.setdefault('refcoord_scaling', 'com')
kwargs.setdefault('Pcoupl', "Berendsen")
new_kwargs = _setup_MD(dirname, **kwargs)
# clean up output kwargs
new_kwargs.pop('define', None) # but make sure that -DPOSRES does not stay...
new_kwargs.pop('refcoord_scaling', None)
new_kwargs.pop('Pcoupl', None)
return new_kwargs |
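A hedged workflow sketch (GromacsWrapper-style; paths and the short runtime are illustrative, and a working GROMACS installation is assumed):
# import gromacs.setup
# md_args = gromacs.setup.MD_restrained(dirname='MD_POSRES', struct='em/em.pdb',
#                                       top='top/system.top', runtime=100)
# gromacs.setup.MD(dirname='MD', **md_args)   # feed the returned dict into the production MD setup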
def create_simple_web_feature(o):
"""
Create an instance of SimpleWebFeature from a dict, o. If o does not
match a Python feature object, simply return o. This function serves as a
json decoder hook. See coding.load().
:param o: A dict to create the SimpleWebFeature from.
:type o: dict
:return: A SimpleWebFeature from the dict provided.
:rtype: SimpleWebFeature
"""
try:
id = o['id']
g = o['geometry']
p = o['properties']
return SimpleWebFeature(str(id), {
'type': str(g.get('type')),
'coordinates': g.get('coordinates', [])},
title=p.get('title'),
summary=p.get('summary'),
link=str(p.get('link')))
except (KeyError, TypeError):
pass
return o | Create an instance of SimpleWebFeature from a dict, o. If o does not
match a Python feature object, simply return o. This function serves as a
json decoder hook. See coding.load().
:param o: A dict to create the SimpleWebFeature from.
:type o: dict
:return: A SimpleWebFeature from the dict provided.
:rtype: SimpleWebFeature | Below is the the instruction that describes the task:
### Input:
Create an instance of SimpleWebFeature from a dict, o. If o does not
match a Python feature object, simply return o. This function serves as a
json decoder hook. See coding.load().
:param o: A dict to create the SimpleWebFeature from.
:type o: dict
:return: A SimpleWebFeature from the dict provided.
:rtype: SimpleWebFeature
### Response:
def create_simple_web_feature(o):
"""
Create an instance of SimpleWebFeature from a dict, o. If o does not
match a Python feature object, simply return o. This function serves as a
json decoder hook. See coding.load().
:param o: A dict to create the SimpleWebFeature from.
:type o: dict
:return: A SimpleWebFeature from the dict provided.
:rtype: SimpleWebFeature
"""
try:
id = o['id']
g = o['geometry']
p = o['properties']
return SimpleWebFeature(str(id), {
'type': str(g.get('type')),
'coordinates': g.get('coordinates', [])},
title=p.get('title'),
summary=p.get('summary'),
link=str(p.get('link')))
except (KeyError, TypeError):
pass
return o |
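A standalone illustration with a GeoJSON-style dict (assumes SimpleWebFeature is importable from the same module as the decoder hook above):
feature_dict = {
    'id': 1,
    'geometry': {'type': 'Point', 'coordinates': [0.0, 0.0]},
    'properties': {'title': 'Null Island', 'summary': 'test point', 'link': 'http://example.com'},
}
feature = create_simple_web_feature(feature_dict)   # non-matching dicts are returned unchanged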
def _query_systemstate(self):
"""Query the maximum number of connections supported by this adapter
"""
def status_filter_func(event):
if event.command_class == 3 and event.command == 0:
return True
return False
try:
response = self._send_command(0, 6, [])
maxconn, = unpack("<B", response.payload)
except InternalTimeoutError:
return False, {'reason': 'Timeout waiting for command response'}
events = self._wait_process_events(0.5, status_filter_func, lambda x: False)
conns = []
for event in events:
handle, flags, addr, addr_type, interval, timeout, lat, bond = unpack("<BB6sBHHHB", event.payload)
if flags != 0:
conns.append(handle)
return True, {'max_connections': maxconn, 'active_connections': conns} | Query the maximum number of connections supported by this adapter | Below is the the instruction that describes the task:
### Input:
Query the maximum number of connections supported by this adapter
### Response:
def _query_systemstate(self):
"""Query the maximum number of connections supported by this adapter
"""
def status_filter_func(event):
if event.command_class == 3 and event.command == 0:
return True
return False
try:
response = self._send_command(0, 6, [])
maxconn, = unpack("<B", response.payload)
except InternalTimeoutError:
return False, {'reason': 'Timeout waiting for command response'}
events = self._wait_process_events(0.5, status_filter_func, lambda x: False)
conns = []
for event in events:
handle, flags, addr, addr_type, interval, timeout, lat, bond = unpack("<BB6sBHHHB", event.payload)
if flags != 0:
conns.append(handle)
return True, {'max_connections': maxconn, 'active_connections': conns} |
def ensure_sphinx_astropy_installed():
"""
Make sure that sphinx-astropy is available, installing it temporarily if not.
This returns the available version of sphinx-astropy as well as any
paths that should be added to sys.path for sphinx-astropy to be available.
"""
# We've split out the Sphinx part of astropy-helpers into sphinx-astropy
# but we want it to be auto-installed seamlessly for anyone using
# build_docs. We check if it's already installed, and if not, we install
# it to a local .eggs directory and add the eggs to the path (these
# have to each be added to the path, we can't add them by simply adding
# .eggs to the path)
sys_path_inserts = []
sphinx_astropy_version = None
try:
from sphinx_astropy import __version__ as sphinx_astropy_version # noqa
except ImportError:
from setuptools import Distribution
dist = Distribution()
# Numpydoc 0.9.0 requires sphinx 1.6+, we can limit the version here
# until we also makes our minimum required version Sphinx 1.6
if SPHINX_LT_16:
dist.fetch_build_eggs('numpydoc<0.9')
# This egg build doesn't respect python_requires, not aware of
# pre-releases. We know that mpl 3.1+ requires Python 3.6+, so this
# ugly workaround takes care of it until there is a solution for
# https://github.com/astropy/astropy-helpers/issues/462
if LooseVersion(sys.version) < LooseVersion('3.6'):
dist.fetch_build_eggs('matplotlib<3.1')
eggs = dist.fetch_build_eggs('sphinx-astropy')
# Find out the version of sphinx-astropy if possible. For some old
# setuptools version, eggs will be None even if sphinx-astropy was
# successfully installed.
if eggs is not None:
for egg in eggs:
if egg.project_name == 'sphinx-astropy':
sphinx_astropy_version = egg.parsed_version.public
break
eggs_path = os.path.abspath('.eggs')
for egg in glob.glob(os.path.join(eggs_path, '*.egg')):
sys_path_inserts.append(egg)
return sphinx_astropy_version, sys_path_inserts | Make sure that sphinx-astropy is available, installing it temporarily if not.
This returns the available version of sphinx-astropy as well as any
paths that should be added to sys.path for sphinx-astropy to be available. | Below is the the instruction that describes the task:
### Input:
Make sure that sphinx-astropy is available, installing it temporarily if not.
This returns the available version of sphinx-astropy as well as any
paths that should be added to sys.path for sphinx-astropy to be available.
### Response:
def ensure_sphinx_astropy_installed():
"""
Make sure that sphinx-astropy is available, installing it temporarily if not.
This returns the available version of sphinx-astropy as well as any
paths that should be added to sys.path for sphinx-astropy to be available.
"""
# We've split out the Sphinx part of astropy-helpers into sphinx-astropy
# but we want it to be auto-installed seamlessly for anyone using
# build_docs. We check if it's already installed, and if not, we install
# it to a local .eggs directory and add the eggs to the path (these
# have to each be added to the path, we can't add them by simply adding
# .eggs to the path)
sys_path_inserts = []
sphinx_astropy_version = None
try:
from sphinx_astropy import __version__ as sphinx_astropy_version # noqa
except ImportError:
from setuptools import Distribution
dist = Distribution()
# Numpydoc 0.9.0 requires sphinx 1.6+, we can limit the version here
# until we also makes our minimum required version Sphinx 1.6
if SPHINX_LT_16:
dist.fetch_build_eggs('numpydoc<0.9')
# This egg build doesn't respect python_requires, not aware of
# pre-releases. We know that mpl 3.1+ requires Python 3.6+, so this
# ugly workaround takes care of it until there is a solution for
# https://github.com/astropy/astropy-helpers/issues/462
if LooseVersion(sys.version) < LooseVersion('3.6'):
dist.fetch_build_eggs('matplotlib<3.1')
eggs = dist.fetch_build_eggs('sphinx-astropy')
# Find out the version of sphinx-astropy if possible. For some old
# setuptools version, eggs will be None even if sphinx-astropy was
# successfully installed.
if eggs is not None:
for egg in eggs:
if egg.project_name == 'sphinx-astropy':
sphinx_astropy_version = egg.parsed_version.public
break
eggs_path = os.path.abspath('.eggs')
for egg in glob.glob(os.path.join(eggs_path, '*.egg')):
sys_path_inserts.append(egg)
return sphinx_astropy_version, sys_path_inserts |
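A hedged usage sketch from a docs build step, applying the returned egg paths so sphinx-astropy becomes importable:
# import sys
# sphinx_astropy_version, path_inserts = ensure_sphinx_astropy_installed()
# for egg_path in path_inserts:
#     sys.path.insert(0, egg_path)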
def clear_cache(self):
"""Clear the raw packet cache for the field and all its subfields"""
self.raw_packet_cache = None
for _, fval in six.iteritems(self.fields):
if isinstance(fval, Packet):
fval.clear_cache()
self.payload.clear_cache() | Clear the raw packet cache for the field and all its subfields | Below is the the instruction that describes the task:
### Input:
Clear the raw packet cache for the field and all its subfields
### Response:
def clear_cache(self):
"""Clear the raw packet cache for the field and all its subfields"""
self.raw_packet_cache = None
for _, fval in six.iteritems(self.fields):
if isinstance(fval, Packet):
fval.clear_cache()
self.payload.clear_cache() |
def buscar(self):
"""Faz a busca das informações do objeto no Postmon.
Retorna um ``bool`` indicando se a busca foi bem sucedida.
"""
headers = {'User-Agent': self.user_agent}
try:
self._response = requests.get(self.url, headers=headers)
except requests.RequestException:
logger.exception("%s.buscar() falhou: GET %s" %
(self.__class__.__name__, self.url))
return False
if self._response.ok:
self.atualizar(**self._response.json())
    return self._response.ok | Fetches the object's information from Postmon.
Returns a ``bool`` indicating whether the lookup was successful. | Below is the the instruction that describes the task:
### Input:
Fetches the object's information from Postmon.
Returns a ``bool`` indicating whether the lookup succeeded.
### Response:
def buscar(self):
"""Faz a busca das informações do objeto no Postmon.
Retorna um ``bool`` indicando se a busca foi bem sucedida.
"""
headers = {'User-Agent': self.user_agent}
try:
self._response = requests.get(self.url, headers=headers)
except requests.RequestException:
logger.exception("%s.buscar() falhou: GET %s" %
(self.__class__.__name__, self.url))
return False
if self._response.ok:
self.atualizar(**self._response.json())
return self._response.ok |
def update_points(self):
""" 椭圆的近似图形:72边形 """
n = max(8, min(72, int(2*sqrt(self.r_x+self.r_y))))
d = pi * 2 / n
x, y, r_x, r_y = self.x, self.y, self.r_x, self.r_y
ps = []
for i in range(n):
ps += [(x + r_x * sin(d * i)), (y + r_y * cos(d * i))]
    self.points = tuple(ps) | Approximate shape of the ellipse: a 72-sided polygon | Below is the the instruction that describes the task:
### Input:
Approximate shape of the ellipse: a 72-sided polygon
### Response:
def update_points(self):
""" 椭圆的近似图形:72边形 """
n = max(8, min(72, int(2*sqrt(self.r_x+self.r_y))))
d = pi * 2 / n
x, y, r_x, r_y = self.x, self.y, self.r_x, self.r_y
ps = []
for i in range(n):
ps += [(x + r_x * sin(d * i)), (y + r_y * cos(d * i))]
self.points = tuple(ps) |
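The polygon approximation above is plain trigonometry, so it can be previewed outside the widget class. The sketch below is a hypothetical standalone helper (ellipse_points, not part of the original code) that builds the same flat point list for an ellipse centred at (x, y) with radii r_x and r_y.

from math import cos, pi, sin, sqrt

def ellipse_points(x, y, r_x, r_y):
    # n-gon resolution grows with the radii, clamped to the 8..72 range
    n = max(8, min(72, int(2 * sqrt(r_x + r_y))))
    step = 2 * pi / n
    pts = []
    for i in range(n):
        pts += [x + r_x * sin(step * i), y + r_y * cos(step * i)]
    return tuple(pts)

print(len(ellipse_points(0, 0, 100, 50)) // 2)  # 24 vertices for these radii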
def account_representative_set(self, wallet, account, representative, work=None):
"""
Sets the representative for **account** in **wallet**
.. enable_control required
:param wallet: Wallet to use for account
:type wallet: str
:param account: Account to set representative for
:type account: str
:param representative: Representative to set to
:type representative: str
:param work: If set, is used as the work for the block
:type work: str
:raises: :py:exc:`nano.rpc.RPCException`
>>> rpc.account_representative_set(
... wallet="000D1BAEC8EC208142C99059B393051BAC8380F9B5A2E6B2489A277D81789F3F",
... account="xrb_39a73oy5ungrhxy5z5oao1xso4zo7dmgpjd4u74xcrx3r1w6rtazuouw6qfi",
... representative="xrb_16u1uufyoig8777y6r8iqjtrw8sg8maqrm36zzcm95jmbd9i9aj5i8abr8u5"
... )
"000D1BAEC8EC208142C99059B393051BAC8380F9B5A2E6B2489A277D81789F3F"
"""
wallet = self._process_value(wallet, 'wallet')
account = self._process_value(account, 'account')
representative = self._process_value(representative, 'account')
payload = {
"wallet": wallet,
"account": account,
"representative": representative,
}
if work is not None:
payload['work'] = self._process_value(work, 'work')
resp = self.call('account_representative_set', payload)
return resp['block'] | Sets the representative for **account** in **wallet**
.. enable_control required
:param wallet: Wallet to use for account
:type wallet: str
:param account: Account to set representative for
:type account: str
:param representative: Representative to set to
:type representative: str
:param work: If set, is used as the work for the block
:type work: str
:raises: :py:exc:`nano.rpc.RPCException`
>>> rpc.account_representative_set(
... wallet="000D1BAEC8EC208142C99059B393051BAC8380F9B5A2E6B2489A277D81789F3F",
... account="xrb_39a73oy5ungrhxy5z5oao1xso4zo7dmgpjd4u74xcrx3r1w6rtazuouw6qfi",
... representative="xrb_16u1uufyoig8777y6r8iqjtrw8sg8maqrm36zzcm95jmbd9i9aj5i8abr8u5"
... )
"000D1BAEC8EC208142C99059B393051BAC8380F9B5A2E6B2489A277D81789F3F" | Below is the the instruction that describes the task:
### Input:
Sets the representative for **account** in **wallet**
.. enable_control required
:param wallet: Wallet to use for account
:type wallet: str
:param account: Account to set representative for
:type account: str
:param representative: Representative to set to
:type representative: str
:param work: If set, is used as the work for the block
:type work: str
:raises: :py:exc:`nano.rpc.RPCException`
>>> rpc.account_representative_set(
... wallet="000D1BAEC8EC208142C99059B393051BAC8380F9B5A2E6B2489A277D81789F3F",
... account="xrb_39a73oy5ungrhxy5z5oao1xso4zo7dmgpjd4u74xcrx3r1w6rtazuouw6qfi",
... representative="xrb_16u1uufyoig8777y6r8iqjtrw8sg8maqrm36zzcm95jmbd9i9aj5i8abr8u5"
... )
"000D1BAEC8EC208142C99059B393051BAC8380F9B5A2E6B2489A277D81789F3F"
### Response:
def account_representative_set(self, wallet, account, representative, work=None):
"""
Sets the representative for **account** in **wallet**
.. enable_control required
:param wallet: Wallet to use for account
:type wallet: str
:param account: Account to set representative for
:type account: str
:param representative: Representative to set to
:type representative: str
:param work: If set, is used as the work for the block
:type work: str
:raises: :py:exc:`nano.rpc.RPCException`
>>> rpc.account_representative_set(
... wallet="000D1BAEC8EC208142C99059B393051BAC8380F9B5A2E6B2489A277D81789F3F",
... account="xrb_39a73oy5ungrhxy5z5oao1xso4zo7dmgpjd4u74xcrx3r1w6rtazuouw6qfi",
... representative="xrb_16u1uufyoig8777y6r8iqjtrw8sg8maqrm36zzcm95jmbd9i9aj5i8abr8u5"
... )
"000D1BAEC8EC208142C99059B393051BAC8380F9B5A2E6B2489A277D81789F3F"
"""
wallet = self._process_value(wallet, 'wallet')
account = self._process_value(account, 'account')
representative = self._process_value(representative, 'account')
payload = {
"wallet": wallet,
"account": account,
"representative": representative,
}
if work is not None:
payload['work'] = self._process_value(work, 'work')
resp = self.call('account_representative_set', payload)
return resp['block'] |
def search_pattern(regex):
"""
Return a value check function which raises a ValueError if the supplied
regular expression does not match anywhere in the value, see also
`re.search`.
"""
prog = re.compile(regex)
def checker(v):
result = prog.search(v)
if result is None:
raise ValueError(v)
return checker | Return a value check function which raises a ValueError if the supplied
regular expression does not match anywhere in the value, see also
`re.search`. | Below is the the instruction that describes the task:
### Input:
Return a value check function which raises a ValueError if the supplied
regular expression does not match anywhere in the value, see also
`re.search`.
### Response:
def search_pattern(regex):
"""
Return a value check function which raises a ValueError if the supplied
regular expression does not match anywhere in the value, see also
`re.search`.
"""
prog = re.compile(regex)
def checker(v):
result = prog.search(v)
if result is None:
raise ValueError(v)
return checker |
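Because the validator is just a closure over a compiled regex, it is easy to exercise on its own. The snippet below is an illustrative sketch: it restates the factory in condensed form and shows one passing and one failing value.

import re

def search_pattern(regex):
    prog = re.compile(regex)
    def checker(v):
        if prog.search(v) is None:
            raise ValueError(v)
    return checker

check_iso_date = search_pattern(r'\d{4}-\d{2}-\d{2}')
check_iso_date('updated on 2021-06-01')      # matches somewhere in the value, so no error
try:
    check_iso_date('updated yesterday')
except ValueError as exc:
    print('invalid value:', exc)             # invalid value: updated yesterday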
def Brock_Bird(T, Tb, Tc, Pc):
r'''Calculates air-water surface tension using the [1]_
    empirical method. Old and tested.
.. math::
\sigma = P_c^{2/3}T_c^{1/3}Q(1-T_r)^{11/9}
Q = 0.1196 \left[ 1 + \frac{T_{br}\ln (P_c/1.01325)}{1-T_{br}}\right]-0.279
Parameters
----------
T : float
Temperature of fluid [K]
Tb : float
Boiling temperature of the fluid [K]
Tc : float
Critical temperature of fluid [K]
Pc : float
Critical pressure of fluid [Pa]
Returns
-------
sigma : float
Liquid surface tension, N/m
Notes
-----
Numerous arrangements of this equation are available.
This is DIPPR Procedure 7A: Method for the Surface Tension of Pure,
Nonpolar, Nonhydrocarbon Liquids
The exact equation is not in the original paper.
If the equation yields a negative result, return None.
Examples
--------
    p-dichlorobenzene at 412.15 K, from DIPPR; value differs due to a slight
difference in method.
>>> Brock_Bird(412.15, 447.3, 685, 3.952E6)
0.02208448325192495
Chlorobenzene from Poling, as compared with a % error value at 293 K.
>>> Brock_Bird(293.15, 404.75, 633.0, 4530000.0)
0.032985686413713036
References
----------
.. [1] Brock, James R., and R. Byron Bird. "Surface Tension and the
Principle of Corresponding States." AIChE Journal 1, no. 2
(June 1, 1955): 174-77. doi:10.1002/aic.690010208
'''
Tbr = Tb/Tc
Tr = T/Tc
Pc = Pc/1E5 # Convert to bar
Q = 0.1196*(1 + Tbr*log(Pc/1.01325)/(1-Tbr))-0.279
sigma = (Pc)**(2/3.)*Tc**(1/3.)*Q*(1-Tr)**(11/9.)
sigma = sigma/1000 # convert to N/m
return sigma | r'''Calculates air-water surface tension using the [1]_
empirical method. Old and tested.
.. math::
\sigma = P_c^{2/3}T_c^{1/3}Q(1-T_r)^{11/9}
Q = 0.1196 \left[ 1 + \frac{T_{br}\ln (P_c/1.01325)}{1-T_{br}}\right]-0.279
Parameters
----------
T : float
Temperature of fluid [K]
Tb : float
Boiling temperature of the fluid [K]
Tc : float
Critical temperature of fluid [K]
Pc : float
Critical pressure of fluid [Pa]
Returns
-------
sigma : float
Liquid surface tension, N/m
Notes
-----
Numerous arrangements of this equation are available.
This is DIPPR Procedure 7A: Method for the Surface Tension of Pure,
Nonpolar, Nonhydrocarbon Liquids
The exact equation is not in the original paper.
If the equation yields a negative result, return None.
Examples
--------
p-dichlorobenzene at 412.15 K, from DIPPR; value differs due to a slight
difference in method.
>>> Brock_Bird(412.15, 447.3, 685, 3.952E6)
0.02208448325192495
Chlorobenzene from Poling, as compared with a % error value at 293 K.
>>> Brock_Bird(293.15, 404.75, 633.0, 4530000.0)
0.032985686413713036
References
----------
.. [1] Brock, James R., and R. Byron Bird. "Surface Tension and the
Principle of Corresponding States." AIChE Journal 1, no. 2
(June 1, 1955): 174-77. doi:10.1002/aic.690010208 | Below is the the instruction that describes the task:
### Input:
r'''Calculates air-water surface tension using the [1]_
empirical method. Old and tested.
.. math::
\sigma = P_c^{2/3}T_c^{1/3}Q(1-T_r)^{11/9}
Q = 0.1196 \left[ 1 + \frac{T_{br}\ln (P_c/1.01325)}{1-T_{br}}\right]-0.279
Parameters
----------
T : float
Temperature of fluid [K]
Tb : float
Boiling temperature of the fluid [K]
Tc : float
Critical temperature of fluid [K]
Pc : float
Critical pressure of fluid [Pa]
Returns
-------
sigma : float
Liquid surface tension, N/m
Notes
-----
Numerous arrangements of this equation are available.
This is DIPPR Procedure 7A: Method for the Surface Tension of Pure,
Nonpolar, Nonhydrocarbon Liquids
The exact equation is not in the original paper.
If the equation yields a negative result, return None.
Examples
--------
p-dichlorobenzene at 412.15 K, from DIPPR; value differs due to a slight
difference in method.
>>> Brock_Bird(412.15, 447.3, 685, 3.952E6)
0.02208448325192495
Chlorobenzene from Poling, as compared with a % error value at 293 K.
>>> Brock_Bird(293.15, 404.75, 633.0, 4530000.0)
0.032985686413713036
References
----------
.. [1] Brock, James R., and R. Byron Bird. "Surface Tension and the
Principle of Corresponding States." AIChE Journal 1, no. 2
(June 1, 1955): 174-77. doi:10.1002/aic.690010208
### Response:
def Brock_Bird(T, Tb, Tc, Pc):
r'''Calculates air-water surface tension using the [1]_
    empirical method. Old and tested.
.. math::
\sigma = P_c^{2/3}T_c^{1/3}Q(1-T_r)^{11/9}
Q = 0.1196 \left[ 1 + \frac{T_{br}\ln (P_c/1.01325)}{1-T_{br}}\right]-0.279
Parameters
----------
T : float
Temperature of fluid [K]
Tb : float
Boiling temperature of the fluid [K]
Tc : float
Critical temperature of fluid [K]
Pc : float
Critical pressure of fluid [Pa]
Returns
-------
sigma : float
Liquid surface tension, N/m
Notes
-----
Numerous arrangements of this equation are available.
This is DIPPR Procedure 7A: Method for the Surface Tension of Pure,
Nonpolar, Nonhydrocarbon Liquids
The exact equation is not in the original paper.
If the equation yields a negative result, return None.
Examples
--------
    p-dichlorobenzene at 412.15 K, from DIPPR; value differs due to a slight
difference in method.
>>> Brock_Bird(412.15, 447.3, 685, 3.952E6)
0.02208448325192495
Chlorobenzene from Poling, as compared with a % error value at 293 K.
>>> Brock_Bird(293.15, 404.75, 633.0, 4530000.0)
0.032985686413713036
References
----------
.. [1] Brock, James R., and R. Byron Bird. "Surface Tension and the
Principle of Corresponding States." AIChE Journal 1, no. 2
(June 1, 1955): 174-77. doi:10.1002/aic.690010208
'''
Tbr = Tb/Tc
Tr = T/Tc
Pc = Pc/1E5 # Convert to bar
Q = 0.1196*(1 + Tbr*log(Pc/1.01325)/(1-Tbr))-0.279
sigma = (Pc)**(2/3.)*Tc**(1/3.)*Q*(1-Tr)**(11/9.)
sigma = sigma/1000 # convert to N/m
return sigma |
def endswith_strip(s, endswith='.txt', ignorecase=True):
""" Strip a suffix from the end of a string
>>> endswith_strip('http://TotalGood.com', '.COM')
'http://TotalGood'
>>> endswith_strip('http://TotalGood.com', endswith='.COM', ignorecase=False)
'http://TotalGood.com'
"""
if ignorecase:
if s.lower().endswith(endswith.lower()):
return s[:-len(endswith)]
else:
if s.endswith(endswith):
return s[:-len(endswith)]
return s | Strip a suffix from the end of a string
>>> endswith_strip('http://TotalGood.com', '.COM')
'http://TotalGood'
>>> endswith_strip('http://TotalGood.com', endswith='.COM', ignorecase=False)
'http://TotalGood.com' | Below is the the instruction that describes the task:
### Input:
Strip a suffix from the end of a string
>>> endswith_strip('http://TotalGood.com', '.COM')
'http://TotalGood'
>>> endswith_strip('http://TotalGood.com', endswith='.COM', ignorecase=False)
'http://TotalGood.com'
### Response:
def endswith_strip(s, endswith='.txt', ignorecase=True):
""" Strip a suffix from the end of a string
>>> endswith_strip('http://TotalGood.com', '.COM')
'http://TotalGood'
>>> endswith_strip('http://TotalGood.com', endswith='.COM', ignorecase=False)
'http://TotalGood.com'
"""
if ignorecase:
if s.lower().endswith(endswith.lower()):
return s[:-len(endswith)]
else:
if s.endswith(endswith):
return s[:-len(endswith)]
return s |
def _getEventsByWeek(self, request, year, month):
"""Return my child events for the given month grouped by week."""
return getAllEventsByWeek(request, year, month, home=self) | Return my child events for the given month grouped by week. | Below is the the instruction that describes the task:
### Input:
Return my child events for the given month grouped by week.
### Response:
def _getEventsByWeek(self, request, year, month):
"""Return my child events for the given month grouped by week."""
return getAllEventsByWeek(request, year, month, home=self) |
def pv_present(name, **kwargs):
'''
Set a Physical Device to be used as an LVM Physical Volume
name
The device name to initialize.
kwargs
Any supported options to pvcreate. See
:mod:`linux_lvm <salt.modules.linux_lvm>` for more details.
'''
ret = {'changes': {},
'comment': '',
'name': name,
'result': True}
if __salt__['lvm.pvdisplay'](name, quiet=True):
ret['comment'] = 'Physical Volume {0} already present'.format(name)
elif __opts__['test']:
ret['comment'] = 'Physical Volume {0} is set to be created'.format(name)
ret['result'] = None
return ret
else:
changes = __salt__['lvm.pvcreate'](name, **kwargs)
if __salt__['lvm.pvdisplay'](name):
ret['comment'] = 'Created Physical Volume {0}'.format(name)
ret['changes']['created'] = changes
else:
ret['comment'] = 'Failed to create Physical Volume {0}'.format(name)
ret['result'] = False
return ret | Set a Physical Device to be used as an LVM Physical Volume
name
The device name to initialize.
kwargs
Any supported options to pvcreate. See
:mod:`linux_lvm <salt.modules.linux_lvm>` for more details. | Below is the the instruction that describes the task:
### Input:
Set a Physical Device to be used as an LVM Physical Volume
name
The device name to initialize.
kwargs
Any supported options to pvcreate. See
:mod:`linux_lvm <salt.modules.linux_lvm>` for more details.
### Response:
def pv_present(name, **kwargs):
'''
Set a Physical Device to be used as an LVM Physical Volume
name
The device name to initialize.
kwargs
Any supported options to pvcreate. See
:mod:`linux_lvm <salt.modules.linux_lvm>` for more details.
'''
ret = {'changes': {},
'comment': '',
'name': name,
'result': True}
if __salt__['lvm.pvdisplay'](name, quiet=True):
ret['comment'] = 'Physical Volume {0} already present'.format(name)
elif __opts__['test']:
ret['comment'] = 'Physical Volume {0} is set to be created'.format(name)
ret['result'] = None
return ret
else:
changes = __salt__['lvm.pvcreate'](name, **kwargs)
if __salt__['lvm.pvdisplay'](name):
ret['comment'] = 'Created Physical Volume {0}'.format(name)
ret['changes']['created'] = changes
else:
ret['comment'] = 'Failed to create Physical Volume {0}'.format(name)
ret['result'] = False
return ret |
def roc(series, window=14):
"""
compute rate of change
"""
res = (series - series.shift(window)) / series.shift(window)
return pd.Series(index=series.index, data=res) | compute rate of change | Below is the the instruction that describes the task:
### Input:
compute rate of change
### Response:
def roc(series, window=14):
"""
compute rate of change
"""
res = (series - series.shift(window)) / series.shift(window)
return pd.Series(index=series.index, data=res) |
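A quick sanity check of the rate-of-change helper only needs the same shift-and-divide arithmetic on a small series. The snippet below is illustrative and uses made-up prices; it is not part of the original library.

import pandas as pd

prices = pd.Series([100.0, 102.0, 101.0, 105.0, 110.0])
window = 2

# same shift-and-divide arithmetic as the helper above
rate_of_change = (prices - prices.shift(window)) / prices.shift(window)
print(rate_of_change.round(4).tolist())  # [nan, nan, 0.01, 0.0294, 0.0891]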
def get_origin(tp):
"""Get the unsubscripted version of a type. Supports generic types, Union,
Callable, and Tuple. Returns None for unsupported types. Examples::
get_origin(int) == None
get_origin(ClassVar[int]) == None
get_origin(Generic) == Generic
get_origin(Generic[T]) == Generic
get_origin(Union[T, int]) == Union
get_origin(List[Tuple[T, T]][int]) == list # List prior to Python 3.7
"""
if NEW_TYPING:
if isinstance(tp, _GenericAlias):
return tp.__origin__ if tp.__origin__ is not ClassVar else None
if tp is Generic:
return Generic
return None
if isinstance(tp, GenericMeta):
return _gorg(tp)
if is_union_type(tp):
return Union
return None | Get the unsubscripted version of a type. Supports generic types, Union,
Callable, and Tuple. Returns None for unsupported types. Examples::
get_origin(int) == None
get_origin(ClassVar[int]) == None
get_origin(Generic) == Generic
get_origin(Generic[T]) == Generic
get_origin(Union[T, int]) == Union
get_origin(List[Tuple[T, T]][int]) == list # List prior to Python 3.7 | Below is the the instruction that describes the task:
### Input:
Get the unsubscripted version of a type. Supports generic types, Union,
Callable, and Tuple. Returns None for unsupported types. Examples::
get_origin(int) == None
get_origin(ClassVar[int]) == None
get_origin(Generic) == Generic
get_origin(Generic[T]) == Generic
get_origin(Union[T, int]) == Union
get_origin(List[Tuple[T, T]][int]) == list # List prior to Python 3.7
### Response:
def get_origin(tp):
"""Get the unsubscripted version of a type. Supports generic types, Union,
Callable, and Tuple. Returns None for unsupported types. Examples::
get_origin(int) == None
get_origin(ClassVar[int]) == None
get_origin(Generic) == Generic
get_origin(Generic[T]) == Generic
get_origin(Union[T, int]) == Union
get_origin(List[Tuple[T, T]][int]) == list # List prior to Python 3.7
"""
if NEW_TYPING:
if isinstance(tp, _GenericAlias):
return tp.__origin__ if tp.__origin__ is not ClassVar else None
if tp is Generic:
return Generic
return None
if isinstance(tp, GenericMeta):
return _gorg(tp)
if is_union_type(tp):
return Union
return None |
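On Python 3.8+ the standard library ships typing.get_origin with broadly the behaviour documented above, which makes the expected results easy to demonstrate. The snippet below uses the stdlib helper purely for comparison (it differs on a few corner cases such as ClassVar and bare Generic) and is not part of the original module.

from typing import List, Tuple, TypeVar, Union, get_origin

T = TypeVar('T')

print(get_origin(int))                # None
print(get_origin(Union[T, int]))      # typing.Union
print(get_origin(List[Tuple[T, T]]))  # <class 'list'>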
def get_table_description(self, cursor, table_name):
"Returns a description of the table, with the DB-API cursor.description interface."
# pylint:disable=too-many-locals,unused-argument
result = []
for field in self.table_description_cache(table_name)['fields']:
params = OrderedDict()
if field['label'] and field['label'] != camel_case_to_spaces(re.sub('__c$', '', field['name'])).title():
params['verbose_name'] = field['label']
if not field['updateable'] or not field['createable']:
# Fields that are result of a formula or system fields modified
# by triggers or by other apex code
sf_read_only = (0 if field['updateable'] else 1) | (0 if field['createable'] else 2)
# use symbolic names NOT_UPDATEABLE, NON_CREATABLE, READ_ONLY instead of 1, 2, 3
params['sf_read_only'] = reverse_models_names[sf_read_only]
if field['defaultValue'] is not None:
params['default'] = field['defaultValue']
if field['inlineHelpText']:
params['help_text'] = field['inlineHelpText']
if field['picklistValues']:
params['choices'] = [(x['value'], x['label']) for x in field['picklistValues'] if x['active']]
if field['defaultedOnCreate'] and field['createable']:
params['default'] = SymbolicModelsName('DEFAULTED_ON_CREATE')
if field['type'] == 'reference' and not field['referenceTo']:
params['ref_comment'] = 'No Reference table'
field['type'] = 'string'
if field['calculatedFormula']:
# calculated formula field are without length in Salesforce 45 Spring '19,
# but Django requires a length, though the field is read only and never written
field['length'] = 1300
# We prefer "length" over "byteLength" for "internal_size".
# (because strings have usually: byteLength == 3 * length)
result.append(FieldInfo(
field['name'], # name,
field['type'], # type_code,
field['length'], # display_size,
field['length'], # internal_size,
field['precision'], # precision,
field['scale'], # scale,
field['nillable'], # null_ok,
params.get('default'), # default
params,
))
return result | Returns a description of the table, with the DB-API cursor.description interface. | Below is the the instruction that describes the task:
### Input:
Returns a description of the table, with the DB-API cursor.description interface.
### Response:
def get_table_description(self, cursor, table_name):
"Returns a description of the table, with the DB-API cursor.description interface."
# pylint:disable=too-many-locals,unused-argument
result = []
for field in self.table_description_cache(table_name)['fields']:
params = OrderedDict()
if field['label'] and field['label'] != camel_case_to_spaces(re.sub('__c$', '', field['name'])).title():
params['verbose_name'] = field['label']
if not field['updateable'] or not field['createable']:
# Fields that are result of a formula or system fields modified
# by triggers or by other apex code
sf_read_only = (0 if field['updateable'] else 1) | (0 if field['createable'] else 2)
# use symbolic names NOT_UPDATEABLE, NON_CREATABLE, READ_ONLY instead of 1, 2, 3
params['sf_read_only'] = reverse_models_names[sf_read_only]
if field['defaultValue'] is not None:
params['default'] = field['defaultValue']
if field['inlineHelpText']:
params['help_text'] = field['inlineHelpText']
if field['picklistValues']:
params['choices'] = [(x['value'], x['label']) for x in field['picklistValues'] if x['active']]
if field['defaultedOnCreate'] and field['createable']:
params['default'] = SymbolicModelsName('DEFAULTED_ON_CREATE')
if field['type'] == 'reference' and not field['referenceTo']:
params['ref_comment'] = 'No Reference table'
field['type'] = 'string'
if field['calculatedFormula']:
# calculated formula field are without length in Salesforce 45 Spring '19,
# but Django requires a length, though the field is read only and never written
field['length'] = 1300
# We prefer "length" over "byteLength" for "internal_size".
# (because strings have usually: byteLength == 3 * length)
result.append(FieldInfo(
field['name'], # name,
field['type'], # type_code,
field['length'], # display_size,
field['length'], # internal_size,
field['precision'], # precision,
field['scale'], # scale,
field['nillable'], # null_ok,
params.get('default'), # default
params,
))
return result |
def get_ancestor_hash(self, block_number: int) -> Hash32:
"""
Return the hash for the ancestor block with number ``block_number``.
Return the empty bytestring ``b''`` if the block number is outside of the
range of available block numbers (typically the last 255 blocks).
"""
ancestor_depth = self.block_number - block_number - 1
is_ancestor_depth_out_of_range = (
ancestor_depth >= MAX_PREV_HEADER_DEPTH or
ancestor_depth < 0 or
block_number < 0
)
if is_ancestor_depth_out_of_range:
return Hash32(b'')
try:
return nth(ancestor_depth, self.execution_context.prev_hashes)
except StopIteration:
# Ancestor with specified depth not present
return Hash32(b'') | Return the hash for the ancestor block with number ``block_number``.
Return the empty bytestring ``b''`` if the block number is outside of the
range of available block numbers (typically the last 255 blocks). | Below is the the instruction that describes the task:
### Input:
Return the hash for the ancestor block with number ``block_number``.
Return the empty bytestring ``b''`` if the block number is outside of the
range of available block numbers (typically the last 255 blocks).
### Response:
def get_ancestor_hash(self, block_number: int) -> Hash32:
"""
Return the hash for the ancestor block with number ``block_number``.
Return the empty bytestring ``b''`` if the block number is outside of the
range of available block numbers (typically the last 255 blocks).
"""
ancestor_depth = self.block_number - block_number - 1
is_ancestor_depth_out_of_range = (
ancestor_depth >= MAX_PREV_HEADER_DEPTH or
ancestor_depth < 0 or
block_number < 0
)
if is_ancestor_depth_out_of_range:
return Hash32(b'')
try:
return nth(ancestor_depth, self.execution_context.prev_hashes)
except StopIteration:
# Ancestor with specified depth not present
return Hash32(b'') |
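The bounds check is pure integer arithmetic, so it can be illustrated without a chain or VM. The sketch below uses an assumed window size of 255 for MAX_PREV_HEADER_DEPTH and hypothetical block numbers; it only mirrors the depth calculation, not the hash lookup.

MAX_PREV_HEADER_DEPTH = 255  # assumed window size, for illustration only

def ancestor_in_range(current_block_number, requested_block_number):
    ancestor_depth = current_block_number - requested_block_number - 1
    out_of_range = (
        ancestor_depth >= MAX_PREV_HEADER_DEPTH
        or ancestor_depth < 0
        or requested_block_number < 0
    )
    return not out_of_range

print(ancestor_in_range(1000, 999))   # True: direct parent, depth 0
print(ancestor_in_range(1000, 745))   # True: depth 254, still inside the window
print(ancestor_in_range(1000, 744))   # False: depth 255 falls outside the window
print(ancestor_in_range(1000, 1000))  # False: a block is not its own ancestor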
def differences_between(self, current_files, parent_files, changes, prefixes):
"""
yield (thing, changes, is_path)
If is_path is true, changes is None and thing is the path as a tuple.
If is_path is false, thing is the current_files and parent_files for
that changed treeentry and changes is the difference between current_files
and parent_files.
The code here is written to squeeze as much performance as possible out
of this operation.
"""
parent_oid = None
if any(is_tree for _, is_tree, _ in changes):
if len(changes) == 1:
wanted_path = list(changes)[0][0]
parent_oid = frozenset([oid for path, is_tree, oid in parent_files if path == wanted_path and is_tree])
else:
parent_values = defaultdict(set)
parent_changes = parent_files - current_files
for path, is_tree, oid in parent_changes:
if is_tree:
parent_values[path].add(oid)
for path, is_tree, oid in changes:
if is_tree and path not in prefixes:
continue
if not is_tree:
yield path, None, True
else:
parent_oids = parent_oid if parent_oid is not None else parent_values.get(path, empty)
cf_and_pf, changes = self.tree_structures_for(path, oid, parent_oids, prefixes)
if changes:
yield cf_and_pf, changes, False | yield (thing, changes, is_path)
If is_path is true, changes is None and thing is the path as a tuple.
If is_path is false, thing is the current_files and parent_files for
that changed treeentry and changes is the difference between current_files
and parent_files.
The code here is written to squeeze as much performance as possible out
of this operation. | Below is the the instruction that describes the task:
### Input:
yield (thing, changes, is_path)
If is_path is true, changes is None and thing is the path as a tuple.
If is_path is false, thing is the current_files and parent_files for
that changed treeentry and changes is the difference between current_files
and parent_files.
The code here is written to squeeze as much performance as possible out
of this operation.
### Response:
def differences_between(self, current_files, parent_files, changes, prefixes):
"""
yield (thing, changes, is_path)
If is_path is true, changes is None and thing is the path as a tuple.
If is_path is false, thing is the current_files and parent_files for
that changed treeentry and changes is the difference between current_files
and parent_files.
The code here is written to squeeze as much performance as possible out
of this operation.
"""
parent_oid = None
if any(is_tree for _, is_tree, _ in changes):
if len(changes) == 1:
wanted_path = list(changes)[0][0]
parent_oid = frozenset([oid for path, is_tree, oid in parent_files if path == wanted_path and is_tree])
else:
parent_values = defaultdict(set)
parent_changes = parent_files - current_files
for path, is_tree, oid in parent_changes:
if is_tree:
parent_values[path].add(oid)
for path, is_tree, oid in changes:
if is_tree and path not in prefixes:
continue
if not is_tree:
yield path, None, True
else:
parent_oids = parent_oid if parent_oid is not None else parent_values.get(path, empty)
cf_and_pf, changes = self.tree_structures_for(path, oid, parent_oids, prefixes)
if changes:
yield cf_and_pf, changes, False |
def warn_deprecated(message, stacklevel=2): # pragma: no cover
"""Warn deprecated."""
warnings.warn(
message,
category=DeprecationWarning,
stacklevel=stacklevel
) | Warn deprecated. | Below is the the instruction that describes the task:
### Input:
Warn deprecated.
### Response:
def warn_deprecated(message, stacklevel=2): # pragma: no cover
"""Warn deprecated."""
warnings.warn(
message,
category=DeprecationWarning,
stacklevel=stacklevel
) |
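A minimal usage sketch, not part of the original module: the helper forwards to warnings.warn, so the emitted DeprecationWarning can be captured with warnings.catch_warnings.

import warnings

def warn_deprecated(message, stacklevel=2):
    warnings.warn(message, category=DeprecationWarning, stacklevel=stacklevel)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warn_deprecated("frobnicate() is deprecated; use frob() instead")

print(caught[0].category.__name__)  # DeprecationWarning
print(caught[0].message)            # frobnicate() is deprecated; use frob() instead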
def itake_column(list_, colx):
""" iterator version of get_list_column """
if isinstance(colx, list):
# multi select
return ([row[colx_] for colx_ in colx] for row in list_)
else:
return (row[colx] for row in list_) | iterator version of get_list_column | Below is the the instruction that describes the task:
### Input:
iterator version of get_list_column
### Response:
def itake_column(list_, colx):
""" iterator version of get_list_column """
if isinstance(colx, list):
# multi select
return ([row[colx_] for colx_ in colx] for row in list_)
else:
return (row[colx] for row in list_) |
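The single-column and multi-column behaviours are easiest to see on a tiny table. The snippet below restates the helper and is illustrative only; the sample rows are made up.

def itake_column(list_, colx):
    if isinstance(colx, list):
        return ([row[c] for c in colx] for row in list_)   # multi select
    return (row[colx] for row in list_)

rows = [
    ('alice', 30, 'NYC'),
    ('bob',   25, 'LA'),
]
print(list(itake_column(rows, 0)))       # ['alice', 'bob']
print(list(itake_column(rows, [0, 2])))  # [['alice', 'NYC'], ['bob', 'LA']]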
def get_boolean(self, input_string):
"""
Return boolean type user input
"""
if input_string in ('--write_roc', '--plot', '--compare'):
# was the flag set?
try:
index = self.args.index(input_string) + 1
except ValueError:
# it wasn't, args are optional, so return the appropriate default
return False
# the flag was set, so return the True
return True | Return boolean type user input | Below is the the instruction that describes the task:
### Input:
Return boolean type user input
### Response:
def get_boolean(self, input_string):
"""
Return boolean type user input
"""
if input_string in ('--write_roc', '--plot', '--compare'):
# was the flag set?
try:
index = self.args.index(input_string) + 1
except ValueError:
# it wasn't, args are optional, so return the appropriate default
return False
# the flag was set, so return the True
return True |
def get_all():
'''
Return a list of all available services
CLI Example:
.. code-block:: bash
salt '*' service.get_all
'''
if not os.path.isdir(_GRAINMAP.get(__grains__.get('os'), '/etc/init.d')):
return []
return sorted(os.listdir(_GRAINMAP.get(__grains__.get('os'), '/etc/init.d'))) | Return a list of all available services
CLI Example:
.. code-block:: bash
salt '*' service.get_all | Below is the the instruction that describes the task:
### Input:
Return a list of all available services
CLI Example:
.. code-block:: bash
salt '*' service.get_all
### Response:
def get_all():
'''
Return a list of all available services
CLI Example:
.. code-block:: bash
salt '*' service.get_all
'''
if not os.path.isdir(_GRAINMAP.get(__grains__.get('os'), '/etc/init.d')):
return []
return sorted(os.listdir(_GRAINMAP.get(__grains__.get('os'), '/etc/init.d'))) |
def verify_docker_image_sha(chain, link):
"""Verify that built docker shas match the artifact.
Args:
chain (ChainOfTrust): the chain we're operating on.
link (LinkOfTrust): the task link we're checking.
Raises:
CoTError: on failure.
"""
cot = link.cot
task = link.task
errors = []
if isinstance(task['payload'].get('image'), dict):
# Using pre-built image from docker-image task
docker_image_task_id = task['extra']['chainOfTrust']['inputs']['docker-image']
log.debug("Verifying {} {} against docker-image {}".format(
link.name, link.task_id, docker_image_task_id
))
if docker_image_task_id != task['payload']['image']['taskId']:
errors.append("{} {} docker-image taskId isn't consistent!: {} vs {}".format(
link.name, link.task_id, docker_image_task_id,
task['payload']['image']['taskId']
))
else:
path = task['payload']['image']['path']
# we need change the hash alg everywhere if we change, and recreate
# the docker images...
image_hash = cot['environment']['imageArtifactHash']
alg, sha = image_hash.split(':')
docker_image_link = chain.get_link(docker_image_task_id)
upstream_sha = docker_image_link.cot['artifacts'].get(path, {}).get(alg)
if upstream_sha is None:
errors.append("{} {} docker-image docker sha {} is missing! {}".format(
link.name, link.task_id, alg,
docker_image_link.cot['artifacts'][path]
))
elif upstream_sha != sha:
errors.append("{} {} docker-image docker sha doesn't match! {} {} vs {}".format(
link.name, link.task_id, alg, sha, upstream_sha
))
else:
log.debug("Found matching docker-image sha {}".format(upstream_sha))
else:
prebuilt_task_types = chain.context.config['prebuilt_docker_image_task_types']
if prebuilt_task_types != "any" and link.task_type not in prebuilt_task_types:
errors.append(
"Task type {} not allowed to use a prebuilt docker image!".format(
link.task_type
)
)
raise_on_errors(errors) | Verify that built docker shas match the artifact.
Args:
chain (ChainOfTrust): the chain we're operating on.
link (LinkOfTrust): the task link we're checking.
Raises:
CoTError: on failure. | Below is the the instruction that describes the task:
### Input:
Verify that built docker shas match the artifact.
Args:
chain (ChainOfTrust): the chain we're operating on.
link (LinkOfTrust): the task link we're checking.
Raises:
CoTError: on failure.
### Response:
def verify_docker_image_sha(chain, link):
"""Verify that built docker shas match the artifact.
Args:
chain (ChainOfTrust): the chain we're operating on.
link (LinkOfTrust): the task link we're checking.
Raises:
CoTError: on failure.
"""
cot = link.cot
task = link.task
errors = []
if isinstance(task['payload'].get('image'), dict):
# Using pre-built image from docker-image task
docker_image_task_id = task['extra']['chainOfTrust']['inputs']['docker-image']
log.debug("Verifying {} {} against docker-image {}".format(
link.name, link.task_id, docker_image_task_id
))
if docker_image_task_id != task['payload']['image']['taskId']:
errors.append("{} {} docker-image taskId isn't consistent!: {} vs {}".format(
link.name, link.task_id, docker_image_task_id,
task['payload']['image']['taskId']
))
else:
path = task['payload']['image']['path']
# we need change the hash alg everywhere if we change, and recreate
# the docker images...
image_hash = cot['environment']['imageArtifactHash']
alg, sha = image_hash.split(':')
docker_image_link = chain.get_link(docker_image_task_id)
upstream_sha = docker_image_link.cot['artifacts'].get(path, {}).get(alg)
if upstream_sha is None:
errors.append("{} {} docker-image docker sha {} is missing! {}".format(
link.name, link.task_id, alg,
docker_image_link.cot['artifacts'][path]
))
elif upstream_sha != sha:
errors.append("{} {} docker-image docker sha doesn't match! {} {} vs {}".format(
link.name, link.task_id, alg, sha, upstream_sha
))
else:
log.debug("Found matching docker-image sha {}".format(upstream_sha))
else:
prebuilt_task_types = chain.context.config['prebuilt_docker_image_task_types']
if prebuilt_task_types != "any" and link.task_type not in prebuilt_task_types:
errors.append(
"Task type {} not allowed to use a prebuilt docker image!".format(
link.task_type
)
)
raise_on_errors(errors) |
def get_workflows() -> dict:
"""Get dict of ALL known workflow definitions.
Returns
list[dict]
"""
keys = DB.get_keys("workflow_definitions:*")
known_workflows = dict()
for key in keys:
values = key.split(':')
if values[1] not in known_workflows:
known_workflows[values[1]] = list()
known_workflows[values[1]].append(values[2])
return known_workflows | Get dict of ALL known workflow definitions.
Returns
list[dict] | Below is the the instruction that describes the task:
### Input:
Get dict of ALL known workflow definitions.
Returns
list[dict]
### Response:
def get_workflows() -> dict:
"""Get dict of ALL known workflow definitions.
Returns
list[dict]
"""
keys = DB.get_keys("workflow_definitions:*")
known_workflows = dict()
for key in keys:
values = key.split(':')
if values[1] not in known_workflows:
known_workflows[values[1]] = list()
known_workflows[values[1]].append(values[2])
return known_workflows |
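The grouping step only splits Redis-style keys on ':' and buckets versions by workflow name, so it can be previewed with a hard-coded key list. The keys below are hypothetical and no database is involved.

keys = [
    "workflow_definitions:ingest:1.0.0",
    "workflow_definitions:ingest:1.1.0",
    "workflow_definitions:calibrate:2.0.0",
]

known_workflows = {}
for key in keys:
    _, name, version = key.split(':')
    known_workflows.setdefault(name, []).append(version)

print(known_workflows)  # {'ingest': ['1.0.0', '1.1.0'], 'calibrate': ['2.0.0']}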
def _get_ssh_public_key(self):
"""Generate SSH public key from private key."""
key = ipa_utils.generate_public_ssh_key(self.ssh_private_key_file)
return '{user}:{key} {user}'.format(
user=self.ssh_user,
key=key.decode()
) | Generate SSH public key from private key. | Below is the the instruction that describes the task:
### Input:
Generate SSH public key from private key.
### Response:
def _get_ssh_public_key(self):
"""Generate SSH public key from private key."""
key = ipa_utils.generate_public_ssh_key(self.ssh_private_key_file)
return '{user}:{key} {user}'.format(
user=self.ssh_user,
key=key.decode()
) |
def general_stats_addcols(self, data, headers=None, namespace=None):
""" Helper function to add to the General Statistics variable.
Adds to report.general_stats and does not return anything. Fills
in required config variables if not supplied.
:param data: A dict with the data. First key should be sample name,
then the data key, then the data.
:param headers: Dict / OrderedDict with information for the headers,
such as colour scales, min and max values etc.
See docs/writing_python.md for more information.
:return: None
"""
if headers is None:
headers = {}
# Use the module namespace as the name if not supplied
if namespace is None:
namespace = self.name
# Guess the column headers from the data if not supplied
if headers is None or len(headers) == 0:
hs = set()
for d in data.values():
hs.update(d.keys())
hs = list(hs)
hs.sort()
headers = OrderedDict()
for k in hs:
headers[k] = dict()
# Add the module name to the description if not already done
keys = headers.keys()
for k in keys:
if 'namespace' not in headers[k]:
headers[k]['namespace'] = namespace
if 'description' not in headers[k]:
headers[k]['description'] = headers[k].get('title', k)
# Append to report.general_stats for later assembly into table
report.general_stats_data.append(data)
report.general_stats_headers.append(headers) | Helper function to add to the General Statistics variable.
Adds to report.general_stats and does not return anything. Fills
in required config variables if not supplied.
:param data: A dict with the data. First key should be sample name,
then the data key, then the data.
:param headers: Dict / OrderedDict with information for the headers,
such as colour scales, min and max values etc.
See docs/writing_python.md for more information.
:return: None | Below is the the instruction that describes the task:
### Input:
Helper function to add to the General Statistics variable.
Adds to report.general_stats and does not return anything. Fills
in required config variables if not supplied.
:param data: A dict with the data. First key should be sample name,
then the data key, then the data.
:param headers: Dict / OrderedDict with information for the headers,
such as colour scales, min and max values etc.
See docs/writing_python.md for more information.
:return: None
### Response:
def general_stats_addcols(self, data, headers=None, namespace=None):
""" Helper function to add to the General Statistics variable.
Adds to report.general_stats and does not return anything. Fills
in required config variables if not supplied.
:param data: A dict with the data. First key should be sample name,
then the data key, then the data.
:param headers: Dict / OrderedDict with information for the headers,
such as colour scales, min and max values etc.
See docs/writing_python.md for more information.
:return: None
"""
if headers is None:
headers = {}
# Use the module namespace as the name if not supplied
if namespace is None:
namespace = self.name
# Guess the column headers from the data if not supplied
if headers is None or len(headers) == 0:
hs = set()
for d in data.values():
hs.update(d.keys())
hs = list(hs)
hs.sort()
headers = OrderedDict()
for k in hs:
headers[k] = dict()
# Add the module name to the description if not already done
keys = headers.keys()
for k in keys:
if 'namespace' not in headers[k]:
headers[k]['namespace'] = namespace
if 'description' not in headers[k]:
headers[k]['description'] = headers[k].get('title', k)
# Append to report.general_stats for later assembly into table
report.general_stats_data.append(data)
report.general_stats_headers.append(headers) |
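When no headers are supplied, the method infers them from the union of per-sample keys. That inference is plain dict and set manipulation, sketched below with made-up sample data and a hypothetical 'MyModule' namespace; it is not a substitute for the real MultiQC call.

from collections import OrderedDict

data = {
    'sample_1': {'percent_gc': 48.2, 'total_sequences': 120000},
    'sample_2': {'percent_gc': 51.7, 'median_length': 151},
}

# union of all per-sample keys, sorted, one header dict per column
keys = set()
for d in data.values():
    keys.update(d.keys())

headers = OrderedDict(
    (k, {'namespace': 'MyModule', 'description': k}) for k in sorted(keys)
)
print(list(headers))  # ['median_length', 'percent_gc', 'total_sequences']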
def calculate_time_difference(startDate, endDate):
"""
*Return the time difference between two dates as a string*
**Key Arguments:**
- ``startDate`` -- the first date in YYYY-MM-DDTHH:MM:SS format
- ``endDate`` -- the final date YYYY-MM-DDTHH:MM:SS format
**Return:**
- ``relTime`` -- the difference between the two dates in Y,M,D,h,m,s (string)
**Usage:**
.. code-block:: python
from fundamentals import times
diff = times.calculate_time_difference(startDate="2015-10-13 10:02:12", endDate="2017-11-04 16:47:05")
print diff
# OUT: 2yrs 22dys 6h 44m 53s
"""
################ > IMPORTS ################
from datetime import datetime
from dateutil import relativedelta
################ > VARIABLE SETTINGS ######
################ >ACTION(S) ################
if "T" not in startDate:
startDate = startDate.strip().replace(" ", "T")
if "T" not in endDate:
endDate = endDate.strip().replace(" ", "T")
startDate = datetime.strptime(startDate, '%Y-%m-%dT%H:%M:%S')
endDate = datetime.strptime(endDate, '%Y-%m-%dT%H:%M:%S')
d = relativedelta.relativedelta(endDate, startDate)
relTime = ""
if d.years > 0:
relTime += str(d.years) + "yrs "
if d.months > 0:
relTime += str(d.months) + "mths "
if d.days > 0:
relTime += str(d.days) + "dys "
if d.hours > 0:
relTime += str(d.hours) + "h "
if d.minutes > 0:
relTime += str(d.minutes) + "m "
if d.seconds > 0:
relTime += str(d.seconds) + "s"
###############################
if relTime == "":
relTime = "0s"
return relTime | *Return the time difference between two dates as a string*
**Key Arguments:**
- ``startDate`` -- the first date in YYYY-MM-DDTHH:MM:SS format
- ``endDate`` -- the final date YYYY-MM-DDTHH:MM:SS format
**Return:**
- ``relTime`` -- the difference between the two dates in Y,M,D,h,m,s (string)
**Usage:**
.. code-block:: python
from fundamentals import times
diff = times.calculate_time_difference(startDate="2015-10-13 10:02:12", endDate="2017-11-04 16:47:05")
print diff
# OUT: 2yrs 22dys 6h 44m 53s | Below is the the instruction that describes the task:
### Input:
*Return the time difference between two dates as a string*
**Key Arguments:**
- ``startDate`` -- the first date in YYYY-MM-DDTHH:MM:SS format
- ``endDate`` -- the final date YYYY-MM-DDTHH:MM:SS format
**Return:**
- ``relTime`` -- the difference between the two dates in Y,M,D,h,m,s (string)
**Usage:**
.. code-block:: python
from fundamentals import times
diff = times.calculate_time_difference(startDate="2015-10-13 10:02:12", endDate="2017-11-04 16:47:05")
print diff
# OUT: 2yrs 22dys 6h 44m 53s
### Response:
def calculate_time_difference(startDate, endDate):
"""
*Return the time difference between two dates as a string*
**Key Arguments:**
- ``startDate`` -- the first date in YYYY-MM-DDTHH:MM:SS format
- ``endDate`` -- the final date YYYY-MM-DDTHH:MM:SS format
**Return:**
- ``relTime`` -- the difference between the two dates in Y,M,D,h,m,s (string)
**Usage:**
.. code-block:: python
from fundamentals import times
diff = times.calculate_time_difference(startDate="2015-10-13 10:02:12", endDate="2017-11-04 16:47:05")
print diff
# OUT: 2yrs 22dys 6h 44m 53s
"""
################ > IMPORTS ################
from datetime import datetime
from dateutil import relativedelta
################ > VARIABLE SETTINGS ######
################ >ACTION(S) ################
if "T" not in startDate:
startDate = startDate.strip().replace(" ", "T")
if "T" not in endDate:
endDate = endDate.strip().replace(" ", "T")
startDate = datetime.strptime(startDate, '%Y-%m-%dT%H:%M:%S')
endDate = datetime.strptime(endDate, '%Y-%m-%dT%H:%M:%S')
d = relativedelta.relativedelta(endDate, startDate)
relTime = ""
if d.years > 0:
relTime += str(d.years) + "yrs "
if d.months > 0:
relTime += str(d.months) + "mths "
if d.days > 0:
relTime += str(d.days) + "dys "
if d.hours > 0:
relTime += str(d.hours) + "h "
if d.minutes > 0:
relTime += str(d.minutes) + "m "
if d.seconds > 0:
relTime += str(d.seconds) + "s"
###############################
if relTime == "":
relTime = "0s"
return relTime |
def selectedNodes( self ):
"""
Returns a list of the selected nodes in a scene.
:return <list> [ <XNode>, .. ]
"""
output = []
for item in self.selectedItems():
if ( isinstance(item, XNode) ):
output.append(item)
return output | Returns a list of the selected nodes in a scene.
:return <list> [ <XNode>, .. ] | Below is the the instruction that describes the task:
### Input:
Returns a list of the selected nodes in a scene.
:return <list> [ <XNode>, .. ]
### Response:
def selectedNodes( self ):
"""
Returns a list of the selected nodes in a scene.
:return <list> [ <XNode>, .. ]
"""
output = []
for item in self.selectedItems():
if ( isinstance(item, XNode) ):
output.append(item)
return output |
def make_Hs_implicit(original_mol, keep_stereo=True):
"""Return molecule that explicit hydrogens removed
TODO: this query function should be renamed to "explicitHs_removed"
"""
mol = clone(original_mol)
mol.descriptors.clear() # Reset descriptor
to_remove = set()
for i, nbrs in mol.neighbors_iter():
if mol.atom(i).symbol == "H": # do not remove H2
continue
for nbr, bond in nbrs.items():
if mol.atom(nbr).symbol == "H" and \
not (bond.order == 1 and bond.type and keep_stereo):
to_remove.add(nbr)
for r in to_remove:
mol.remove_atom(r)
assign_descriptors(mol)
return mol | Return molecule that explicit hydrogens removed
TODO: this query function should be renamed to "explicitHs_removed" | Below is the the instruction that describes the task:
### Input:
Return molecule that explicit hydrogens removed
TODO: this query function should be renamed to "explicitHs_removed"
### Response:
def make_Hs_implicit(original_mol, keep_stereo=True):
"""Return molecule that explicit hydrogens removed
TODO: this query function should be renamed to "explicitHs_removed"
"""
mol = clone(original_mol)
mol.descriptors.clear() # Reset descriptor
to_remove = set()
for i, nbrs in mol.neighbors_iter():
if mol.atom(i).symbol == "H": # do not remove H2
continue
for nbr, bond in nbrs.items():
if mol.atom(nbr).symbol == "H" and \
not (bond.order == 1 and bond.type and keep_stereo):
to_remove.add(nbr)
for r in to_remove:
mol.remove_atom(r)
assign_descriptors(mol)
return mol |
def _extractAssociation(self, assoc_response, assoc_session):
"""Attempt to extract an association from the response, given
the association response message and the established
association session.
@param assoc_response: The association response message from
the server
@type assoc_response: openid.message.Message
@param assoc_session: The association session object that was
used when making the request
@type assoc_session: depends on the session type of the request
@raises ProtocolError: when data is malformed
@raises KeyError: when a field is missing
@rtype: openid.association.Association
"""
# Extract the common fields from the response, raising an
# exception if they are not found
assoc_type = assoc_response.getArg(
OPENID_NS, 'assoc_type', no_default)
assoc_handle = assoc_response.getArg(
OPENID_NS, 'assoc_handle', no_default)
# expires_in is a base-10 string. The Python parsing will
# accept literals that have whitespace around them and will
# accept negative values. Neither of these are really in-spec,
# but we think it's OK to accept them.
expires_in_str = assoc_response.getArg(
OPENID_NS, 'expires_in', no_default)
try:
expires_in = int(expires_in_str)
except ValueError, why:
raise ProtocolError('Invalid expires_in field: %s' % (why[0],))
# OpenID 1 has funny association session behaviour.
if assoc_response.isOpenID1():
session_type = self._getOpenID1SessionType(assoc_response)
else:
session_type = assoc_response.getArg(
OPENID2_NS, 'session_type', no_default)
# Session type mismatch
if assoc_session.session_type != session_type:
if (assoc_response.isOpenID1() and
session_type == 'no-encryption'):
# In OpenID 1, any association request can result in a
# 'no-encryption' association response. Setting
# assoc_session to a new no-encryption session should
# make the rest of this function work properly for
# that case.
assoc_session = PlainTextConsumerSession()
else:
# Any other mismatch, regardless of protocol version
# results in the failure of the association session
# altogether.
fmt = 'Session type mismatch. Expected %r, got %r'
message = fmt % (assoc_session.session_type, session_type)
raise ProtocolError(message)
# Make sure assoc_type is valid for session_type
if assoc_type not in assoc_session.allowed_assoc_types:
fmt = 'Unsupported assoc_type for session %s returned: %s'
raise ProtocolError(fmt % (assoc_session.session_type, assoc_type))
# Delegate to the association session to extract the secret
# from the response, however is appropriate for that session
# type.
try:
secret = assoc_session.extractSecret(assoc_response)
except ValueError, why:
fmt = 'Malformed response for %s session: %s'
raise ProtocolError(fmt % (assoc_session.session_type, why[0]))
return Association.fromExpiresIn(
expires_in, assoc_handle, secret, assoc_type) | Attempt to extract an association from the response, given
the association response message and the established
association session.
@param assoc_response: The association response message from
the server
@type assoc_response: openid.message.Message
@param assoc_session: The association session object that was
used when making the request
@type assoc_session: depends on the session type of the request
@raises ProtocolError: when data is malformed
@raises KeyError: when a field is missing
@rtype: openid.association.Association | Below is the the instruction that describes the task:
### Input:
Attempt to extract an association from the response, given
the association response message and the established
association session.
@param assoc_response: The association response message from
the server
@type assoc_response: openid.message.Message
@param assoc_session: The association session object that was
used when making the request
@type assoc_session: depends on the session type of the request
@raises ProtocolError: when data is malformed
@raises KeyError: when a field is missing
@rtype: openid.association.Association
### Response:
def _extractAssociation(self, assoc_response, assoc_session):
"""Attempt to extract an association from the response, given
the association response message and the established
association session.
@param assoc_response: The association response message from
the server
@type assoc_response: openid.message.Message
@param assoc_session: The association session object that was
used when making the request
@type assoc_session: depends on the session type of the request
@raises ProtocolError: when data is malformed
@raises KeyError: when a field is missing
@rtype: openid.association.Association
"""
# Extract the common fields from the response, raising an
# exception if they are not found
assoc_type = assoc_response.getArg(
OPENID_NS, 'assoc_type', no_default)
assoc_handle = assoc_response.getArg(
OPENID_NS, 'assoc_handle', no_default)
# expires_in is a base-10 string. The Python parsing will
# accept literals that have whitespace around them and will
# accept negative values. Neither of these are really in-spec,
# but we think it's OK to accept them.
expires_in_str = assoc_response.getArg(
OPENID_NS, 'expires_in', no_default)
try:
expires_in = int(expires_in_str)
except ValueError, why:
raise ProtocolError('Invalid expires_in field: %s' % (why[0],))
# OpenID 1 has funny association session behaviour.
if assoc_response.isOpenID1():
session_type = self._getOpenID1SessionType(assoc_response)
else:
session_type = assoc_response.getArg(
OPENID2_NS, 'session_type', no_default)
# Session type mismatch
if assoc_session.session_type != session_type:
if (assoc_response.isOpenID1() and
session_type == 'no-encryption'):
# In OpenID 1, any association request can result in a
# 'no-encryption' association response. Setting
# assoc_session to a new no-encryption session should
# make the rest of this function work properly for
# that case.
assoc_session = PlainTextConsumerSession()
else:
# Any other mismatch, regardless of protocol version
# results in the failure of the association session
# altogether.
fmt = 'Session type mismatch. Expected %r, got %r'
message = fmt % (assoc_session.session_type, session_type)
raise ProtocolError(message)
# Make sure assoc_type is valid for session_type
if assoc_type not in assoc_session.allowed_assoc_types:
fmt = 'Unsupported assoc_type for session %s returned: %s'
raise ProtocolError(fmt % (assoc_session.session_type, assoc_type))
# Delegate to the association session to extract the secret
# from the response, however is appropriate for that session
# type.
try:
secret = assoc_session.extractSecret(assoc_response)
except ValueError, why:
fmt = 'Malformed response for %s session: %s'
raise ProtocolError(fmt % (assoc_session.session_type, why[0]))
return Association.fromExpiresIn(
expires_in, assoc_handle, secret, assoc_type) |
def op_at_code_loc(code, loc, opc):
"""Return the instruction name at code[loc] using
opc to look up instruction names. Returns 'got IndexError'
if code[loc] is invalid.
`code` is instruction bytecode, `loc` is an offset (integer) and
`opc` is an opcode module from `xdis`.
"""
try:
op = code[loc]
except IndexError:
return 'got IndexError'
return opc.opname[op] | Return the instruction name at code[loc] using
opc to look up instruction names. Returns 'got IndexError'
if code[loc] is invalid.
`code` is instruction bytecode, `loc` is an offset (integer) and
`opc` is an opcode module from `xdis`. | Below is the the instruction that describes the task:
### Input:
Return the instruction name at code[loc] using
opc to look up instruction names. Returns 'got IndexError'
if code[loc] is invalid.
`code` is instruction bytecode, `loc` is an offset (integer) and
`opc` is an opcode module from `xdis`.
### Response:
def op_at_code_loc(code, loc, opc):
"""Return the instruction name at code[loc] using
opc to look up instruction names. Returns 'got IndexError'
if code[loc] is invalid.
`code` is instruction bytecode, `loc` is an offset (integer) and
`opc` is an opcode module from `xdis`.
"""
try:
op = code[loc]
except IndexError:
return 'got IndexError'
return opc.opname[op] |
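The lookup needs only indexable bytecode and an opname table. The standard library's dis module provides an equivalent table, so the technique can be sketched without xdis; the exact opcode name printed depends on the Python version, and op_name_at below is a renamed illustrative copy.

import dis

def op_name_at(code, loc, opnames):
    try:
        op = code[loc]
    except IndexError:
        return 'got IndexError'
    return opnames[op]

bytecode = compile("x = 1", "<example>", "exec").co_code
print(op_name_at(bytecode, 0, dis.opname))       # opcode name at offset 0 (version-dependent)
print(op_name_at(bytecode, 10_000, dis.opname))  # got IndexError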
def clean_intersections(G, tolerance=15, dead_ends=False):
"""
Clean-up intersections comprising clusters of nodes by merging them and
returning their centroids.
Divided roads are represented by separate centerline edges. The intersection
of two divided roads thus creates 4 nodes, representing where each edge
intersects a perpendicular edge. These 4 nodes represent a single
intersection in the real world. This function cleans them up by buffering
their points to an arbitrary distance, merging overlapping buffers, and
taking their centroid. For best results, the tolerance argument should be
adjusted to approximately match street design standards in the specific
street network.
Parameters
----------
G : networkx multidigraph
tolerance : float
nodes within this distance (in graph's geometry's units) will be
dissolved into a single intersection
dead_ends : bool
if False, discard dead-end nodes to return only street-intersection
points
Returns
----------
intersection_centroids : geopandas.GeoSeries
a GeoSeries of shapely Points representing the centroids of street
intersections
"""
# if dead_ends is False, discard dead-end nodes to only work with edge
# intersections
if not dead_ends:
if 'streets_per_node' in G.graph:
streets_per_node = G.graph['streets_per_node']
else:
streets_per_node = count_streets_per_node(G)
dead_end_nodes = [node for node, count in streets_per_node.items() if count <= 1]
G = G.copy()
G.remove_nodes_from(dead_end_nodes)
# create a GeoDataFrame of nodes, buffer to passed-in distance, merge
# overlaps
gdf_nodes = graph_to_gdfs(G, edges=False)
buffered_nodes = gdf_nodes.buffer(tolerance).unary_union
if isinstance(buffered_nodes, Polygon):
# if only a single node results, make it iterable so we can turn it into
# a GeoSeries
buffered_nodes = [buffered_nodes]
# get the centroids of the merged intersection polygons
unified_intersections = gpd.GeoSeries(list(buffered_nodes))
intersection_centroids = unified_intersections.centroid
return intersection_centroids | Clean-up intersections comprising clusters of nodes by merging them and
returning their centroids.
Divided roads are represented by separate centerline edges. The intersection
of two divided roads thus creates 4 nodes, representing where each edge
intersects a perpendicular edge. These 4 nodes represent a single
intersection in the real world. This function cleans them up by buffering
their points to an arbitrary distance, merging overlapping buffers, and
taking their centroid. For best results, the tolerance argument should be
adjusted to approximately match street design standards in the specific
street network.
Parameters
----------
G : networkx multidigraph
tolerance : float
nodes within this distance (in graph's geometry's units) will be
dissolved into a single intersection
dead_ends : bool
if False, discard dead-end nodes to return only street-intersection
points
Returns
----------
intersection_centroids : geopandas.GeoSeries
a GeoSeries of shapely Points representing the centroids of street
intersections | Below is the the instruction that describes the task:
### Input:
Clean-up intersections comprising clusters of nodes by merging them and
returning their centroids.
Divided roads are represented by separate centerline edges. The intersection
of two divided roads thus creates 4 nodes, representing where each edge
intersects a perpendicular edge. These 4 nodes represent a single
intersection in the real world. This function cleans them up by buffering
their points to an arbitrary distance, merging overlapping buffers, and
taking their centroid. For best results, the tolerance argument should be
adjusted to approximately match street design standards in the specific
street network.
Parameters
----------
G : networkx multidigraph
tolerance : float
nodes within this distance (in graph's geometry's units) will be
dissolved into a single intersection
dead_ends : bool
if False, discard dead-end nodes to return only street-intersection
points
Returns
----------
intersection_centroids : geopandas.GeoSeries
a GeoSeries of shapely Points representing the centroids of street
intersections
### Response:
def clean_intersections(G, tolerance=15, dead_ends=False):
"""
Clean-up intersections comprising clusters of nodes by merging them and
returning their centroids.
Divided roads are represented by separate centerline edges. The intersection
of two divided roads thus creates 4 nodes, representing where each edge
intersects a perpendicular edge. These 4 nodes represent a single
intersection in the real world. This function cleans them up by buffering
their points to an arbitrary distance, merging overlapping buffers, and
taking their centroid. For best results, the tolerance argument should be
adjusted to approximately match street design standards in the specific
street network.
Parameters
----------
G : networkx multidigraph
tolerance : float
nodes within this distance (in graph's geometry's units) will be
dissolved into a single intersection
dead_ends : bool
if False, discard dead-end nodes to return only street-intersection
points
Returns
----------
intersection_centroids : geopandas.GeoSeries
a GeoSeries of shapely Points representing the centroids of street
intersections
"""
# if dead_ends is False, discard dead-end nodes to only work with edge
# intersections
if not dead_ends:
if 'streets_per_node' in G.graph:
streets_per_node = G.graph['streets_per_node']
else:
streets_per_node = count_streets_per_node(G)
dead_end_nodes = [node for node, count in streets_per_node.items() if count <= 1]
G = G.copy()
G.remove_nodes_from(dead_end_nodes)
# create a GeoDataFrame of nodes, buffer to passed-in distance, merge
# overlaps
gdf_nodes = graph_to_gdfs(G, edges=False)
buffered_nodes = gdf_nodes.buffer(tolerance).unary_union
if isinstance(buffered_nodes, Polygon):
# if only a single node results, make it iterable so we can turn it into
# a GeoSeries
buffered_nodes = [buffered_nodes]
# get the centroids of the merged intersection polygons
unified_intersections = gpd.GeoSeries(list(buffered_nodes))
intersection_centroids = unified_intersections.centroid
return intersection_centroids |
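A minimal usage sketch for the clean_intersections record above, assuming an older osmnx release that still exposes this function at the package level (later releases replace it with consolidate_intersections); the place name and tolerance are illustrative, and the graph is projected first so the tolerance is in metres rather than degrees.

import osmnx as ox  # assumes an osmnx release that still ships clean_intersections

G = ox.graph_from_place("Piedmont, California, USA", network_type="drive")
G_proj = ox.project_graph(G)  # project so the tolerance is in metres, not degrees
centroids = ox.clean_intersections(G_proj, tolerance=15, dead_ends=False)
print(len(centroids), "cleaned intersection centroids")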
def plot_graph_route(G, route, bbox=None, fig_height=6, fig_width=None,
margin=0.02, bgcolor='w', axis_off=True, show=True,
save=False, close=True, file_format='png', filename='temp',
dpi=300, annotate=False, node_color='#999999',
node_size=15, node_alpha=1, node_edgecolor='none',
node_zorder=1, edge_color='#999999', edge_linewidth=1,
edge_alpha=1, use_geom=True, origin_point=None,
destination_point=None, route_color='r', route_linewidth=4,
route_alpha=0.5, orig_dest_node_alpha=0.5,
orig_dest_node_size=100, orig_dest_node_color='r',
orig_dest_point_color='b'):
"""
Plot a route along a networkx spatial graph.
Parameters
----------
G : networkx multidigraph
route : list
the route as a list of nodes
bbox : tuple
bounding box as north,south,east,west - if None will calculate from
spatial extents of data. if passing a bbox, you probably also want to
pass margin=0 to constrain it.
fig_height : int
matplotlib figure height in inches
fig_width : int
matplotlib figure width in inches
margin : float
relative margin around the figure
axis_off : bool
if True turn off the matplotlib axis
bgcolor : string
the background color of the figure and axis
show : bool
if True, show the figure
save : bool
if True, save the figure as an image file to disk
close : bool
close the figure (only if show equals False) to prevent display
file_format : string
the format of the file to save (e.g., 'jpg', 'png', 'svg')
filename : string
the name of the file if saving
dpi : int
the resolution of the image file if saving
annotate : bool
if True, annotate the nodes in the figure
node_color : string
the color of the nodes
node_size : int
the size of the nodes
node_alpha : float
the opacity of the nodes
node_edgecolor : string
the color of the node's marker's border
node_zorder : int
zorder to plot nodes, edges are always 2, so make node_zorder 1 to plot
nodes beneath them or 3 to plot nodes atop them
edge_color : string
the color of the edges' lines
edge_linewidth : float
the width of the edges' lines
edge_alpha : float
the opacity of the edges' lines
use_geom : bool
if True, use the spatial geometry attribute of the edges to draw
geographically accurate edges, rather than just lines straight from node
to node
origin_point : tuple
optional, an origin (lat, lon) point to plot instead of the origin node
destination_point : tuple
optional, a destination (lat, lon) point to plot instead of the
destination node
route_color : string
the color of the route
route_linewidth : int
the width of the route line
route_alpha : float
the opacity of the route line
orig_dest_node_alpha : float
the opacity of the origin and destination nodes
orig_dest_node_size : int
the size of the origin and destination nodes
orig_dest_node_color : string
the color of the origin and destination nodes
orig_dest_point_color : string
the color of the origin and destination points if being plotted instead
of nodes
Returns
-------
fig, ax : tuple
"""
# plot the graph but not the route
fig, ax = plot_graph(G, bbox=bbox, fig_height=fig_height, fig_width=fig_width,
margin=margin, axis_off=axis_off, bgcolor=bgcolor,
show=False, save=False, close=False, filename=filename,
dpi=dpi, annotate=annotate, node_color=node_color,
node_size=node_size, node_alpha=node_alpha,
node_edgecolor=node_edgecolor, node_zorder=node_zorder,
edge_color=edge_color, edge_linewidth=edge_linewidth,
edge_alpha=edge_alpha, use_geom=use_geom)
# the origin and destination nodes are the first and last nodes in the route
origin_node = route[0]
destination_node = route[-1]
if origin_point is None or destination_point is None:
# if caller didn't pass points, use the first and last node in route as
# origin/destination
origin_destination_lats = (G.nodes[origin_node]['y'], G.nodes[destination_node]['y'])
origin_destination_lons = (G.nodes[origin_node]['x'], G.nodes[destination_node]['x'])
else:
# otherwise, use the passed points as origin/destination
origin_destination_lats = (origin_point[0], destination_point[0])
origin_destination_lons = (origin_point[1], destination_point[1])
orig_dest_node_color = orig_dest_point_color
# scatter the origin and destination points
ax.scatter(origin_destination_lons, origin_destination_lats, s=orig_dest_node_size,
c=orig_dest_node_color, alpha=orig_dest_node_alpha, edgecolor=node_edgecolor, zorder=4)
# plot the route lines
lines = node_list_to_coordinate_lines(G, route, use_geom)
# add the lines to the axis as a linecollection
lc = LineCollection(lines, colors=route_color, linewidths=route_linewidth, alpha=route_alpha, zorder=3)
ax.add_collection(lc)
# save and show the figure as specified
fig, ax = save_and_show(fig, ax, save, show, close, filename, file_format, dpi, axis_off)
return fig, ax | Plot a route along a networkx spatial graph.
Parameters
----------
G : networkx multidigraph
route : list
the route as a list of nodes
bbox : tuple
bounding box as north,south,east,west - if None will calculate from
spatial extents of data. if passing a bbox, you probably also want to
pass margin=0 to constrain it.
fig_height : int
matplotlib figure height in inches
fig_width : int
matplotlib figure width in inches
margin : float
relative margin around the figure
axis_off : bool
if True turn off the matplotlib axis
bgcolor : string
the background color of the figure and axis
show : bool
if True, show the figure
save : bool
if True, save the figure as an image file to disk
close : bool
close the figure (only if show equals False) to prevent display
file_format : string
the format of the file to save (e.g., 'jpg', 'png', 'svg')
filename : string
the name of the file if saving
dpi : int
the resolution of the image file if saving
annotate : bool
if True, annotate the nodes in the figure
node_color : string
the color of the nodes
node_size : int
the size of the nodes
node_alpha : float
the opacity of the nodes
node_edgecolor : string
the color of the node's marker's border
node_zorder : int
zorder to plot nodes, edges are always 2, so make node_zorder 1 to plot
nodes beneath them or 3 to plot nodes atop them
edge_color : string
the color of the edges' lines
edge_linewidth : float
the width of the edges' lines
edge_alpha : float
the opacity of the edges' lines
use_geom : bool
if True, use the spatial geometry attribute of the edges to draw
geographically accurate edges, rather than just lines straight from node
to node
origin_point : tuple
optional, an origin (lat, lon) point to plot instead of the origin node
destination_point : tuple
optional, a destination (lat, lon) point to plot instead of the
destination node
route_color : string
the color of the route
route_linewidth : int
the width of the route line
route_alpha : float
the opacity of the route line
orig_dest_node_alpha : float
the opacity of the origin and destination nodes
orig_dest_node_size : int
the size of the origin and destination nodes
orig_dest_node_color : string
the color of the origin and destination nodes
orig_dest_point_color : string
the color of the origin and destination points if being plotted instead
of nodes
Returns
-------
fig, ax : tuple | Below is the the instruction that describes the task:
### Input:
Plot a route along a networkx spatial graph.
Parameters
----------
G : networkx multidigraph
route : list
the route as a list of nodes
bbox : tuple
bounding box as north,south,east,west - if None will calculate from
spatial extents of data. if passing a bbox, you probably also want to
pass margin=0 to constrain it.
fig_height : int
matplotlib figure height in inches
fig_width : int
matplotlib figure width in inches
margin : float
relative margin around the figure
axis_off : bool
if True turn off the matplotlib axis
bgcolor : string
the background color of the figure and axis
show : bool
if True, show the figure
save : bool
if True, save the figure as an image file to disk
close : bool
close the figure (only if show equals False) to prevent display
file_format : string
the format of the file to save (e.g., 'jpg', 'png', 'svg')
filename : string
the name of the file if saving
dpi : int
the resolution of the image file if saving
annotate : bool
if True, annotate the nodes in the figure
node_color : string
the color of the nodes
node_size : int
the size of the nodes
node_alpha : float
the opacity of the nodes
node_edgecolor : string
the color of the node's marker's border
node_zorder : int
zorder to plot nodes, edges are always 2, so make node_zorder 1 to plot
nodes beneath them or 3 to plot nodes atop them
edge_color : string
the color of the edges' lines
edge_linewidth : float
the width of the edges' lines
edge_alpha : float
the opacity of the edges' lines
use_geom : bool
if True, use the spatial geometry attribute of the edges to draw
geographically accurate edges, rather than just lines straight from node
to node
origin_point : tuple
optional, an origin (lat, lon) point to plot instead of the origin node
destination_point : tuple
optional, a destination (lat, lon) point to plot instead of the
destination node
route_color : string
the color of the route
route_linewidth : int
the width of the route line
route_alpha : float
the opacity of the route line
orig_dest_node_alpha : float
the opacity of the origin and destination nodes
orig_dest_node_size : int
the size of the origin and destination nodes
orig_dest_node_color : string
the color of the origin and destination nodes
orig_dest_point_color : string
the color of the origin and destination points if being plotted instead
of nodes
Returns
-------
fig, ax : tuple
### Response:
def plot_graph_route(G, route, bbox=None, fig_height=6, fig_width=None,
margin=0.02, bgcolor='w', axis_off=True, show=True,
save=False, close=True, file_format='png', filename='temp',
dpi=300, annotate=False, node_color='#999999',
node_size=15, node_alpha=1, node_edgecolor='none',
node_zorder=1, edge_color='#999999', edge_linewidth=1,
edge_alpha=1, use_geom=True, origin_point=None,
destination_point=None, route_color='r', route_linewidth=4,
route_alpha=0.5, orig_dest_node_alpha=0.5,
orig_dest_node_size=100, orig_dest_node_color='r',
orig_dest_point_color='b'):
"""
Plot a route along a networkx spatial graph.
Parameters
----------
G : networkx multidigraph
route : list
the route as a list of nodes
bbox : tuple
bounding box as north,south,east,west - if None will calculate from
spatial extents of data. if passing a bbox, you probably also want to
pass margin=0 to constrain it.
fig_height : int
matplotlib figure height in inches
fig_width : int
matplotlib figure width in inches
margin : float
relative margin around the figure
axis_off : bool
if True turn off the matplotlib axis
bgcolor : string
the background color of the figure and axis
show : bool
if True, show the figure
save : bool
if True, save the figure as an image file to disk
close : bool
close the figure (only if show equals False) to prevent display
file_format : string
the format of the file to save (e.g., 'jpg', 'png', 'svg')
filename : string
the name of the file if saving
dpi : int
the resolution of the image file if saving
annotate : bool
if True, annotate the nodes in the figure
node_color : string
the color of the nodes
node_size : int
the size of the nodes
node_alpha : float
the opacity of the nodes
node_edgecolor : string
the color of the node's marker's border
node_zorder : int
zorder to plot nodes, edges are always 2, so make node_zorder 1 to plot
nodes beneath them or 3 to plot nodes atop them
edge_color : string
the color of the edges' lines
edge_linewidth : float
the width of the edges' lines
edge_alpha : float
the opacity of the edges' lines
use_geom : bool
if True, use the spatial geometry attribute of the edges to draw
geographically accurate edges, rather than just lines straight from node
to node
origin_point : tuple
optional, an origin (lat, lon) point to plot instead of the origin node
destination_point : tuple
optional, a destination (lat, lon) point to plot instead of the
destination node
route_color : string
the color of the route
route_linewidth : int
the width of the route line
route_alpha : float
the opacity of the route line
orig_dest_node_alpha : float
the opacity of the origin and destination nodes
orig_dest_node_size : int
the size of the origin and destination nodes
orig_dest_node_color : string
the color of the origin and destination nodes
orig_dest_point_color : string
the color of the origin and destination points if being plotted instead
of nodes
Returns
-------
fig, ax : tuple
"""
# plot the graph but not the route
fig, ax = plot_graph(G, bbox=bbox, fig_height=fig_height, fig_width=fig_width,
margin=margin, axis_off=axis_off, bgcolor=bgcolor,
show=False, save=False, close=False, filename=filename,
dpi=dpi, annotate=annotate, node_color=node_color,
node_size=node_size, node_alpha=node_alpha,
node_edgecolor=node_edgecolor, node_zorder=node_zorder,
edge_color=edge_color, edge_linewidth=edge_linewidth,
edge_alpha=edge_alpha, use_geom=use_geom)
# the origin and destination nodes are the first and last nodes in the route
origin_node = route[0]
destination_node = route[-1]
if origin_point is None or destination_point is None:
# if caller didn't pass points, use the first and last node in route as
# origin/destination
origin_destination_lats = (G.nodes[origin_node]['y'], G.nodes[destination_node]['y'])
origin_destination_lons = (G.nodes[origin_node]['x'], G.nodes[destination_node]['x'])
else:
# otherwise, use the passed points as origin/destination
origin_destination_lats = (origin_point[0], destination_point[0])
origin_destination_lons = (origin_point[1], destination_point[1])
orig_dest_node_color = orig_dest_point_color
# scatter the origin and destination points
ax.scatter(origin_destination_lons, origin_destination_lats, s=orig_dest_node_size,
c=orig_dest_node_color, alpha=orig_dest_node_alpha, edgecolor=node_edgecolor, zorder=4)
# plot the route lines
lines = node_list_to_coordinate_lines(G, route, use_geom)
# add the lines to the axis as a linecollection
lc = LineCollection(lines, colors=route_color, linewidths=route_linewidth, alpha=route_alpha, zorder=3)
ax.add_collection(lc)
# save and show the figure as specified
fig, ax = save_and_show(fig, ax, save, show, close, filename, file_format, dpi, axis_off)
return fig, ax |
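A hedged usage sketch for the plot_graph_route record above: the route is computed with networkx's shortest_path weighted by edge length, then handed to the plotting helper. The osmnx version exposing exactly this signature, and the assumption that the two endpoint nodes are connected, are not taken from the record.

import networkx as nx
import osmnx as ox  # assumes an osmnx release exposing plot_graph_route with this signature

G = ox.graph_from_place("Piedmont, California, USA", network_type="drive")
nodes = list(G.nodes)
orig, dest = nodes[0], nodes[-1]  # illustrative endpoints; assumed to be connected
route = nx.shortest_path(G, orig, dest, weight="length")  # route as a list of node ids
fig, ax = ox.plot_graph_route(G, route, route_color="r", route_linewidth=4, show=True, save=False)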
def threadpooled(
func: typing.Callable[..., typing.Union["typing.Awaitable[typing.Any]", typing.Any]],
*,
loop_getter: None = None,
loop_getter_need_context: bool = False,
) -> typing.Callable[..., "concurrent.futures.Future[typing.Any]"]:
"""Overload: function callable, no loop getter.""" | Overload: function callable, no loop getter. | Below is the the instruction that describes the task:
### Input:
Overload: function callable, no loop getter.
### Response:
def threadpooled(
func: typing.Callable[..., typing.Union["typing.Awaitable[typing.Any]", typing.Any]],
*,
loop_getter: None = None,
loop_getter_need_context: bool = False,
) -> typing.Callable[..., "concurrent.futures.Future[typing.Any]"]:
"""Overload: function callable, no loop getter.""" |
def histogram(self, data, bins=10, color='w', orientation='h'):
"""Calculate and show a histogram of data
Parameters
----------
data : array-like
Data to histogram. Currently only 1D data is supported.
bins : int | array-like
Number of bins, or bin edges.
color : instance of Color
Color of the histogram.
orientation : {'h', 'v'}
Orientation of the histogram.
Returns
-------
hist : instance of Polygon
The histogram polygon.
"""
self._configure_2d()
hist = scene.Histogram(data, bins, color, orientation)
self.view.add(hist)
self.view.camera.set_range()
return hist | Calculate and show a histogram of data
Parameters
----------
data : array-like
Data to histogram. Currently only 1D data is supported.
bins : int | array-like
Number of bins, or bin edges.
color : instance of Color
Color of the histogram.
orientation : {'h', 'v'}
Orientation of the histogram.
Returns
-------
hist : instance of Polygon
The histogram polygon. | Below is the the instruction that describes the task:
### Input:
Calculate and show a histogram of data
Parameters
----------
data : array-like
Data to histogram. Currently only 1D data is supported.
bins : int | array-like
Number of bins, or bin edges.
color : instance of Color
Color of the histogram.
orientation : {'h', 'v'}
Orientation of the histogram.
Returns
-------
hist : instance of Polygon
The histogram polygon.
### Response:
def histogram(self, data, bins=10, color='w', orientation='h'):
"""Calculate and show a histogram of data
Parameters
----------
data : array-like
Data to histogram. Currently only 1D data is supported.
bins : int | array-like
Number of bins, or bin edges.
color : instance of Color
Color of the histogram.
orientation : {'h', 'v'}
Orientation of the histogram.
Returns
-------
hist : instance of Polygon
The histogram polygon.
"""
self._configure_2d()
hist = scene.Histogram(data, bins, color, orientation)
self.view.add(hist)
self.view.camera.set_range()
return hist |
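A short usage sketch for the histogram record above, assuming the vispy.plot front end in which fig[0, 0] yields a PlotWidget exposing this method; backend availability and the exact keyword set may vary by vispy version.

import numpy as np
from vispy import app
from vispy import plot as vp

data = np.random.normal(size=10_000)
fig = vp.Fig(show=True)  # a plotting canvas; requires a GUI backend
hist = fig[0, 0].histogram(data, bins=50, color='w', orientation='h')
app.run()  # blocks until the window is closed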