repository_name (string, 5-67 chars) | func_path_in_repository (string, 4-234 chars) | func_name (string, 0-314 chars) | whole_func_string (string, 52-3.87M chars) | language (6 classes) | func_code_string (string, 52-3.87M chars) | func_documentation_string (string, 1-47.2k chars) | func_code_url (string, 85-339 chars)
---|---|---|---|---|---|---|---|
taskcluster/taskcluster-client.py | taskcluster/queueevents.py | QueueEvents.taskDefined | def taskDefined(self, *args, **kwargs):
"""
Task Defined Messages
When a task is created or just defined, a message is posted to this
exchange.
This message exchange is mainly useful when tasks are scheduled by a
scheduler that uses `defineTask`, as this does not make the task
`pending`. Thus, no `taskPending` message is published.
Please note that messages are also published on this exchange for tasks
defined using `createTask`.
This exchange outputs: ``v1/task-defined-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of the latest run for the task, `_` if no run exists for the task.
* workerGroup: `workerGroup` of the latest run for the task, `_` if no run exists for the task.
* workerId: `workerId` of the latest run for the task, `_` if no run exists for the task.
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified.
"""
ref = {
'exchange': 'task-defined',
'name': 'taskDefined',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskId',
},
{
'multipleWords': False,
'name': 'runId',
},
{
'multipleWords': False,
'name': 'workerGroup',
},
{
'multipleWords': False,
'name': 'workerId',
},
{
'multipleWords': False,
'name': 'provisionerId',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/task-defined-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | python | def taskDefined(self, *args, **kwargs):
"""
Task Defined Messages
When a task is created or just defined, a message is posted to this
exchange.
This message exchange is mainly useful when tasks are scheduled by a
scheduler that uses `defineTask`, as this does not make the task
`pending`. Thus, no `taskPending` message is published.
Please note that messages are also published on this exchange for tasks
defined using `createTask`.
This exchange outputs: ``v1/task-defined-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of the latest run for the task, `_` if no run exists for the task.
* workerGroup: `workerGroup` of the latest run for the task, `_` if no run exists for the task.
* workerId: `workerId` of the latest run for the task, `_` if no run exists for the task.
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified.
"""
ref = {
'exchange': 'task-defined',
'name': 'taskDefined',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskId',
},
{
'multipleWords': False,
'name': 'runId',
},
{
'multipleWords': False,
'name': 'workerGroup',
},
{
'multipleWords': False,
'name': 'workerId',
},
{
'multipleWords': False,
'name': 'provisionerId',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/task-defined-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | Task Defined Messages
When a task is created or just defined, a message is posted to this
exchange.
This message exchange is mainly useful when tasks are scheduled by a
scheduler that uses `defineTask`, as this does not make the task
`pending`. Thus, no `taskPending` message is published.
Please note that messages are also published on this exchange for tasks
defined using `createTask`.
This exchange outputs: ``v1/task-defined-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of the latest run for the task, `_` if no run exists for the task.
* workerGroup: `workerGroup` of the latest run for the task, `_` if no run exists for the task.
* workerId: `workerId` of the latest run for the task, `_` if no run exists for the task.
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified. | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/queueevents.py#L71-L155 |
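A note on how these generated bindings are typically consumed: keyword arguments pin routing-key fields, while unset single-word fields become `*` and the trailing multi-word `reserved` field becomes `#`. The sketch below is a minimal illustration, assuming the `taskcluster` package is installed, that the constructor accepts a rootUrl-style options dict (this varies by client version), and that the binding is returned as a dict with `exchange` and `routingKeyPattern` keys; the root URL and task-group ID are made up.

```python
import taskcluster

# Constructor options vary by client version; a rootUrl-style config is
# assumed here purely for illustration.
queue_events = taskcluster.QueueEvents({'rootUrl': 'https://tc.example.com'})

# Pin only taskGroupId; every other single-word field becomes '*' and the
# multi-word 'reserved' field becomes '#'.
binding = queue_events.taskDefined(taskGroupId='hypothetical-task-group-id')
print(binding['exchange'])
print(binding['routingKeyPattern'])  # primary.*.*.*.*.*.*.*.hypothetical-task-group-id.#
```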
taskcluster/taskcluster-client.py | taskcluster/queueevents.py | QueueEvents.taskPending | def taskPending(self, *args, **kwargs):
"""
Task Pending Messages
When a task becomes `pending`, a message is posted to this exchange.
This is useful for workers who don't want to constantly poll the queue
for new tasks. The queue will also be the authority for task states and
claims. But by using this exchange, workers should be able to distribute work
efficiently, and they would be able to reduce their polling interval
significantly without affecting general responsiveness.
This exchange outputs: ``v1/task-pending-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of the latest run for the task, `_` if no run exists for the task. (required)
* workerGroup: `workerGroup` of the latest run for the task, `_` if no run exists for the task.
* workerId: `workerId` of the latest run for the task, `_` if no run exists for the task.
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified.
"""
ref = {
'exchange': 'task-pending',
'name': 'taskPending',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskId',
},
{
'multipleWords': False,
'name': 'runId',
},
{
'multipleWords': False,
'name': 'workerGroup',
},
{
'multipleWords': False,
'name': 'workerId',
},
{
'multipleWords': False,
'name': 'provisionerId',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/task-pending-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | python | def taskPending(self, *args, **kwargs):
"""
Task Pending Messages
When a task becomes `pending`, a message is posted to this exchange.
This is useful for workers who don't want to constantly poll the queue
for new tasks. The queue will also be the authority for task states and
claims. But by using this exchange, workers should be able to distribute work
efficiently, and they would be able to reduce their polling interval
significantly without affecting general responsiveness.
This exchange outputs: ``v1/task-pending-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of the latest run for the task, `_` if no run exists for the task. (required)
* workerGroup: `workerGroup` of the latest run for the task, `_` if no run exists for the task.
* workerId: `workerId` of the latest run for the task, `_` if no run exists for the task.
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified.
"""
ref = {
'exchange': 'task-pending',
'name': 'taskPending',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskId',
},
{
'multipleWords': False,
'name': 'runId',
},
{
'multipleWords': False,
'name': 'workerGroup',
},
{
'multipleWords': False,
'name': 'workerId',
},
{
'multipleWords': False,
'name': 'provisionerId',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/task-pending-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | Task Pending Messages
When a task becomes `pending`, a message is posted to this exchange.
This is useful for workers who don't want to constantly poll the queue
for new tasks. The queue will also be the authority for task states and
claims. But by using this exchange, workers should be able to distribute work
efficiently, and they would be able to reduce their polling interval
significantly without affecting general responsiveness.
This exchange outputs: ``v1/task-pending-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of the latest run for the task, `_` if no run exists for the task. (required)
* workerGroup: `workerGroup` of the latest run for the task, `_` if no run exists for the task.
* workerId: `workerId` of the latest run for the task, `_` if no run exists for the task.
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified. | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/queueevents.py#L157-L240 |
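As the docstring notes, this exchange lets workers subscribe instead of polling. Below is a rough sketch of binding an AMQP queue to it with `pika`; the AMQP URL, the Pulse-style exchange name, and the provisioner/worker-type values are illustrative assumptions, not facts taken from this row.

```python
import pika

connection = pika.BlockingConnection(
    pika.URLParameters('amqps://user:password@pulse.example.com'))
channel = connection.channel()

# Exclusive, server-named queue for this consumer.
queue_name = channel.queue_declare(queue='', exclusive=True).method.queue

# Routing-key fields, in order: routingKeyKind.taskId.runId.workerGroup.
# workerId.provisionerId.workerType.schedulerId.taskGroupId.reserved
channel.queue_bind(
    exchange='exchange/taskcluster-queue/v1/task-pending',  # assumed name
    queue=queue_name,
    routing_key='primary.*.*.*.*.my-provisioner.my-worker-type.*.*.#',
)

def on_message(ch, method, properties, body):
    print('pending task message:', body)

channel.basic_consume(queue=queue_name, on_message_callback=on_message,
                      auto_ack=True)
channel.start_consuming()
```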
taskcluster/taskcluster-client.py | taskcluster/queueevents.py | QueueEvents.taskRunning | def taskRunning(self, *args, **kwargs):
"""
Task Running Messages
Whenever a task is claimed by a worker, a run is started on the worker,
and a message is posted on this exchange.
This exchange outputs: ``v1/task-running-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of the latest run for the task, `_` if no run exists for the task. (required)
* workerGroup: `workerGroup` of the latest run for the task, `_` if no run exists for the task. (required)
* workerId: `workerId` of the latest run for the task, `_` if no run exists for the task. (required)
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified.
"""
ref = {
'exchange': 'task-running',
'name': 'taskRunning',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskId',
},
{
'multipleWords': False,
'name': 'runId',
},
{
'multipleWords': False,
'name': 'workerGroup',
},
{
'multipleWords': False,
'name': 'workerId',
},
{
'multipleWords': False,
'name': 'provisionerId',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/task-running-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | python | def taskRunning(self, *args, **kwargs):
"""
Task Running Messages
Whenever a task is claimed by a worker, a run is started on the worker,
and a message is posted on this exchange.
This exchange outputs: ``v1/task-running-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of the latest run for the task, `_` if no run exists for the task. (required)
* workerGroup: `workerGroup` of the latest run for the task, `_` if no run exists for the task. (required)
* workerId: `workerId` of the latest run for the task, `_` if no run exists for the task. (required)
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified.
"""
ref = {
'exchange': 'task-running',
'name': 'taskRunning',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskId',
},
{
'multipleWords': False,
'name': 'runId',
},
{
'multipleWords': False,
'name': 'workerGroup',
},
{
'multipleWords': False,
'name': 'workerId',
},
{
'multipleWords': False,
'name': 'provisionerId',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/task-running-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | Task Running Messages
Whenever a task is claimed by a worker, a run is started on the worker,
and a message is posted on this exchange.
This exchange outputs: ``v1/task-running-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of the latest run for the task, `_` if no run exists for the task. (required)
* workerGroup: `workerGroup` of the latest run for the task, `_` if no run exists for the task. (required)
* workerId: `workerId` of the latest run for the task, `_` if no run exists for the task. (required)
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified. | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/queueevents.py#L242-L320 |
taskcluster/taskcluster-client.py | taskcluster/queueevents.py | QueueEvents.artifactCreated | def artifactCreated(self, *args, **kwargs):
"""
Artifact Creation Messages
Whenever the `createArtifact` end-point is called, the queue will create
a record of the artifact and post a message on this exchange. All of this
happens before the queue returns a signed URL for the caller to upload
the actual artifact with (depending on `storageType`).
This means that the actual artifact is rarely available when this message
is posted. But it is not unreasonable to assume that the artifact
will become available at some point later. Most signatures will expire in
30 minutes or so, forcing the uploader to call `createArtifact` with
the same payload again in order to continue uploading the artifact.
However, in most cases (especially for small artifacts) it's very
reasonable to assume the artifact will be available within a few minutes.
This property means that this exchange is mostly useful for tools
monitoring task evaluation. One could also use it to count the number of
artifacts per task, or _index_ artifacts, though in most cases it'll be
smarter to index artifacts after the task in question has completed
successfully.
This exchange outputs: ``v1/artifact-created-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of the latest run for the task, `_` if no run exists for the task. (required)
* workerGroup: `workerGroup` of the latest run for the task, `_` if no run exists for the task. (required)
* workerId: `workerId` of the latest run for the task, `_` if no run exists for the task. (required)
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified.
"""
ref = {
'exchange': 'artifact-created',
'name': 'artifactCreated',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskId',
},
{
'multipleWords': False,
'name': 'runId',
},
{
'multipleWords': False,
'name': 'workerGroup',
},
{
'multipleWords': False,
'name': 'workerId',
},
{
'multipleWords': False,
'name': 'provisionerId',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/artifact-created-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | python | def artifactCreated(self, *args, **kwargs):
"""
Artifact Creation Messages
Whenever the `createArtifact` end-point is called, the queue will create
a record of the artifact and post a message on this exchange. All of this
happens before the queue returns a signed URL for the caller to upload
the actual artifact with (depending on `storageType`).
This means that the actual artifact is rarely available when this message
is posted. But it is not unreasonable to assume that the artifact
will become available at some point later. Most signatures will expire in
30 minutes or so, forcing the uploader to call `createArtifact` with
the same payload again in order to continue uploading the artifact.
However, in most cases (especially for small artifacts) it's very
reasonable to assume the artifact will be available within a few minutes.
This property means that this exchange is mostly useful for tools
monitoring task evaluation. One could also use it to count the number of
artifacts per task, or _index_ artifacts, though in most cases it'll be
smarter to index artifacts after the task in question has completed
successfully.
This exchange outputs: ``v1/artifact-created-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of the latest run for the task, `_` if no run exists for the task. (required)
* workerGroup: `workerGroup` of the latest run for the task, `_` if no run exists for the task. (required)
* workerId: `workerId` of the latest run for the task, `_` if no run exists for the task. (required)
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified.
"""
ref = {
'exchange': 'artifact-created',
'name': 'artifactCreated',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskId',
},
{
'multipleWords': False,
'name': 'runId',
},
{
'multipleWords': False,
'name': 'workerGroup',
},
{
'multipleWords': False,
'name': 'workerId',
},
{
'multipleWords': False,
'name': 'provisionerId',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/artifact-created-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | Artifact Creation Messages
Whenever the `createArtifact` end-point is called, the queue will create
a record of the artifact and post a message on this exchange. All of this
happens before the queue returns a signed URL for the caller to upload
the actual artifact with (depending on `storageType`).
This means that the actual artifact is rarely available when this message
is posted. But it is not unreasonable to assume that the artifact
will become available at some point later. Most signatures will expire in
30 minutes or so, forcing the uploader to call `createArtifact` with
the same payload again in order to continue uploading the artifact.
However, in most cases (especially for small artifacts) it's very
reasonable to assume the artifact will be available within a few minutes.
This property means that this exchange is mostly useful for tools
monitoring task evaluation. One could also use it to count the number of
artifacts per task, or _index_ artifacts, though in most cases it'll be
smarter to index artifacts after the task in question has completed
successfully.
This exchange outputs: ``v1/artifact-created-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of the latest run for the task, `_` if no run exists for the task. (required)
* workerGroup: `workerGroup` of the latest run for the task, `_` if no run exists for the task. (required)
* workerId: `workerId` of the latest run for the task, `_` if no run exists for the task. (required)
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified. | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/queueevents.py#L322-L416 |
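To make the "count artifacts per task" use case above concrete, here is a small handler sketch. The `status.taskId` field is an assumption inferred from the `v1/artifact-created-message.json#` schema reference; the schema itself is not shown in this row.

```python
import json
from collections import Counter

artifacts_per_task = Counter()

def on_artifact_created(body):
    """Tally artifact-created messages by taskId.

    Assumes the payload carries the task status under 'status' with a
    'taskId' key (inferred, not verified against the schema).
    """
    payload = json.loads(body)
    task_id = payload['status']['taskId']
    artifacts_per_task[task_id] += 1
    return task_id, artifacts_per_task[task_id]
```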
taskcluster/taskcluster-client.py | taskcluster/queueevents.py | QueueEvents.taskCompleted | def taskCompleted(self, *args, **kwargs):
"""
Task Completed Messages
When a task is successfully completed by a worker, a message is posted to
this exchange.
This message is routed using the `runId`, `workerGroup` and `workerId`
that completed the task. But information about additional runs is also
available from the task status structure.
This exchange outputs: ``v1/task-completed-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of the latest run for the task, `_` if no run exists for the task. (required)
* workerGroup: `workerGroup` of the latest run for the task, `_` if no run exists for the task. (required)
* workerId: `workerId` of the latest run for the task, `_` if no run exists for the task. (required)
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified.
"""
ref = {
'exchange': 'task-completed',
'name': 'taskCompleted',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskId',
},
{
'multipleWords': False,
'name': 'runId',
},
{
'multipleWords': False,
'name': 'workerGroup',
},
{
'multipleWords': False,
'name': 'workerId',
},
{
'multipleWords': False,
'name': 'provisionerId',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/task-completed-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | python | def taskCompleted(self, *args, **kwargs):
"""
Task Completed Messages
When a task is successfully completed by a worker, a message is posted to
this exchange.
This message is routed using the `runId`, `workerGroup` and `workerId`
that completed the task. But information about additional runs is also
available from the task status structure.
This exchange outputs: ``v1/task-completed-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of the latest run for the task, `_` if no run exists for the task. (required)
* workerGroup: `workerGroup` of the latest run for the task, `_` if no run exists for the task. (required)
* workerId: `workerId` of the latest run for the task, `_` if no run exists for the task. (required)
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified.
"""
ref = {
'exchange': 'task-completed',
'name': 'taskCompleted',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskId',
},
{
'multipleWords': False,
'name': 'runId',
},
{
'multipleWords': False,
'name': 'workerGroup',
},
{
'multipleWords': False,
'name': 'workerId',
},
{
'multipleWords': False,
'name': 'provisionerId',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/task-completed-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | Task Completed Messages
When a task is successfully completed by a worker, a message is posted to
this exchange.
This message is routed using the `runId`, `workerGroup` and `workerId`
that completed the task. But information about additional runs is also
available from the task status structure.
This exchange outputs: ``v1/task-completed-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of the latest run for the task, `_` if no run exists for the task. (required)
* workerGroup: `workerGroup` of the latest run for the task, `_` if no run exists for the task. (required)
* workerId: `workerId` of the latest run for the task, `_` if no run exists for the task. (required)
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified. | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/queueevents.py#L418-L499 |
taskcluster/taskcluster-client.py | taskcluster/queueevents.py | QueueEvents.taskFailed | def taskFailed(self, *args, **kwargs):
"""
Task Failed Messages
When a task ran but failed to complete successfully, a message is posted
to this exchange. This is the case where the worker ran the task-specific
code, but the task-specific code exited non-zero.
This exchange outputs: ``v1/task-failed-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of the latest run for the task, `_` if no run exists for the task.
* workerGroup: `workerGroup` of the latest run for the task, `_` if no run exists for the task.
* workerId: `workerId` of the latest run for the task, `_` if no run exists for the task.
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified.
"""
ref = {
'exchange': 'task-failed',
'name': 'taskFailed',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskId',
},
{
'multipleWords': False,
'name': 'runId',
},
{
'multipleWords': False,
'name': 'workerGroup',
},
{
'multipleWords': False,
'name': 'workerId',
},
{
'multipleWords': False,
'name': 'provisionerId',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/task-failed-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | python | def taskFailed(self, *args, **kwargs):
"""
Task Failed Messages
When a task ran but failed to complete successfully, a message is posted
to this exchange. This is the case where the worker ran the task-specific
code, but the task-specific code exited non-zero.
This exchange outputs: ``v1/task-failed-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of the latest run for the task, `_` if no run exists for the task.
* workerGroup: `workerGroup` of the latest run for the task, `_` if no run exists for the task.
* workerId: `workerId` of the latest run for the task, `_` if no run exists for the task.
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified.
"""
ref = {
'exchange': 'task-failed',
'name': 'taskFailed',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskId',
},
{
'multipleWords': False,
'name': 'runId',
},
{
'multipleWords': False,
'name': 'workerGroup',
},
{
'multipleWords': False,
'name': 'workerId',
},
{
'multipleWords': False,
'name': 'provisionerId',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/task-failed-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | Task Failed Messages
When a task ran but failed to complete successfully, a message is posted
to this exchange. This is the case where the worker ran the task-specific
code, but the task-specific code exited non-zero.
This exchange outputs: ``v1/task-failed-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of the latest run for the task, `_` if no run exists for the task.
* workerGroup: `workerGroup` of the latest run for the task, `_` if no run exists for the task.
* workerId: `workerId` of the latest run for the task, `_` if no run exists for the task.
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified. | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/queueevents.py#L501-L580 |
taskcluster/taskcluster-client.py | taskcluster/queueevents.py | QueueEvents.taskException | def taskException(self, *args, **kwargs):
"""
Task Exception Messages
Whenever Taskcluster fails to run a task, a message is posted to this exchange.
This happens if the task isn't completed before its `deadline`,
all retries failed (i.e. workers stopped responding), the task was
canceled by another entity, or the task carried a malformed payload.
The specific _reason_ is evident from the task status structure; refer
to the `reasonResolved` property for the last run.
This exchange outputs: ``v1/task-exception-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of the latest run for the task, `_` if no run exists for the task.
* workerGroup: `workerGroup` of the latest run for the task, `_` if no run exists for the task.
* workerId: `workerId` of the latest run for the task, `_` if no run exists for the task.
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified.
"""
ref = {
'exchange': 'task-exception',
'name': 'taskException',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskId',
},
{
'multipleWords': False,
'name': 'runId',
},
{
'multipleWords': False,
'name': 'workerGroup',
},
{
'multipleWords': False,
'name': 'workerId',
},
{
'multipleWords': False,
'name': 'provisionerId',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/task-exception-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | python | def taskException(self, *args, **kwargs):
"""
Task Exception Messages
Whenever Taskcluster fails to run a task, a message is posted to this exchange.
This happens if the task isn't completed before its `deadline`,
all retries failed (i.e. workers stopped responding), the task was
canceled by another entity, or the task carried a malformed payload.
The specific _reason_ is evident from the task status structure; refer
to the `reasonResolved` property for the last run.
This exchange outputs: ``v1/task-exception-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of the latest run for the task, `_` if no run exists for the task.
* workerGroup: `workerGroup` of the latest run for the task, `_` if no run exists for the task.
* workerId: `workerId` of the latest run for the task, `_` if no run exists for the task.
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified.
"""
ref = {
'exchange': 'task-exception',
'name': 'taskException',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskId',
},
{
'multipleWords': False,
'name': 'runId',
},
{
'multipleWords': False,
'name': 'workerGroup',
},
{
'multipleWords': False,
'name': 'workerId',
},
{
'multipleWords': False,
'name': 'provisionerId',
},
{
'multipleWords': False,
'name': 'workerType',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/task-exception-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | Task Exception Messages
Whenever Taskcluster fails to run a task, a message is posted to this exchange.
This happens if the task isn't completed before its `deadline`,
all retries failed (i.e. workers stopped responding), the task was
canceled by another entity, or the task carried a malformed payload.
The specific _reason_ is evident from the task status structure; refer
to the `reasonResolved` property for the last run.
This exchange outputs: ``v1/task-exception-message.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskId: `taskId` for the task this message concerns (required)
* runId: `runId` of the latest run for the task, `_` if no run exists for the task.
* workerGroup: `workerGroup` of the latest run for the task, `_` if no run exists for the task.
* workerId: `workerId` of the latest run for the task, `_` if no run exists for the task.
* provisionerId: `provisionerId` this task is targeted at. (required)
* workerType: `workerType` this task must run on. (required)
* schedulerId: `schedulerId` this task was created by. (required)
* taskGroupId: `taskGroupId` this task was created in. (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified. | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/queueevents.py#L582-L665 |
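The `reasonResolved` lookup the docstring points to can be done straight off the message payload. A sketch, assuming the `v1/task-exception-message.json#` payload exposes the runs list as `status.runs` (inferred from the task status structure, which is not shown in this row):

```python
import json

def exception_reason(body):
    """Return the last run's reasonResolved from a task-exception message.

    Typical values include 'deadline-exceeded', 'canceled', and
    'malformed-payload'; the 'status.runs' shape is an assumption.
    """
    status = json.loads(body)['status']
    runs = status.get('runs') or []
    return runs[-1].get('reasonResolved') if runs else None
```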
taskcluster/taskcluster-client.py | taskcluster/queueevents.py | QueueEvents.taskGroupResolved | def taskGroupResolved(self, *args, **kwargs):
"""
Task Group Resolved Messages
A message is published on task-group-resolved whenever all submitted
tasks (whether scheduled or unscheduled) for a given task group have
been resolved, regardless of whether they resolved as successful or
not. A task group may be resolved multiple times, since new tasks may
be submitted against an already resolved task group.
This exchange outputs: ``v1/task-group-resolved.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskGroupId: `taskGroupId` for the task-group this message concerns (required)
* schedulerId: `schedulerId` for the task-group this message concerns (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified.
"""
ref = {
'exchange': 'task-group-resolved',
'name': 'taskGroupResolved',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/task-group-resolved.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | python | def taskGroupResolved(self, *args, **kwargs):
"""
Task Group Resolved Messages
A message is published on task-group-resolved whenever all submitted
tasks (whether scheduled or unscheduled) for a given task group have
been resolved, regardless of whether they resolved as successful or
not. A task group may be resolved multiple times, since new tasks may
be submitted against an already resolved task group.
This exchange outputs: ``v1/task-group-resolved.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskGroupId: `taskGroupId` for the task-group this message concerns (required)
* schedulerId: `schedulerId` for the task-group this message concerns (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified.
"""
ref = {
'exchange': 'task-group-resolved',
'name': 'taskGroupResolved',
'routingKey': [
{
'constant': 'primary',
'multipleWords': False,
'name': 'routingKeyKind',
},
{
'multipleWords': False,
'name': 'taskGroupId',
},
{
'multipleWords': False,
'name': 'schedulerId',
},
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/task-group-resolved.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | Task Group Resolved Messages
A message is published on task-group-resolved whenever all submitted
tasks (whether scheduled or unscheduled) for a given task group have
been resolved, regardless of whether they resolved as successful or
not. A task group may be resolved multiple times, since new tasks may
be submitted against an already resolved task group.
This exchange outputs: ``v1/task-group-resolved.json#``. This exchange takes the following keys:
* routingKeyKind: Identifier for the routing-key kind. This is always `'primary'` for the formalized routing key. (required)
* taskGroupId: `taskGroupId` for the task-group this message concerns (required)
* schedulerId: `schedulerId` for the task-group this message concerns (required)
* reserved: Space reserved for future routing-key entries; you should always match this entry with `#`. This is done automatically by our tooling if not specified. | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/queueevents.py#L667-L712 |
jart/fabulous | fabulous/rlcomplete.py | Completer.complete | def complete(self, text, state):
"""The actual completion method
This method is not meant to be overridden. Override the
completelist method instead. It will make your life much easier.
For more detail, see the documentation for readline.set_completer.
"""
if text != self.text:
self.matches = self.completelist(text)
self.text = text
try:
return self.matches[state]
except IndexError:
return None | python | def complete(self, text, state):
"""The actual completion method
This method is not meant to be overridden. Override the
completelist method instead. It will make your life much easier.
For more detail, see the documentation for readline.set_completer.
"""
if text != self.text:
self.matches = self.completelist(text)
self.text = text
try:
return self.matches[state]
except IndexError:
return None | The actual completion method
This method is not meant to be overridden. Override the
completelist method instead. It will make your life much easier.
For more detail, see the documentation for readline.set_completer. | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/rlcomplete.py#L36-L50 |
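`Completer.complete` implements readline's completer protocol: readline calls the completer repeatedly with `state = 0, 1, 2, ...` and stops at the first `None`, while the `text != self.text` check caches the match list across those calls. Below is a self-contained sketch of the same protocol; the `WordCompleter` class is hypothetical and not part of fabulous.

```python
import readline

class WordCompleter:
    """Hypothetical completer showing the state protocol used above."""

    def __init__(self, words):
        self.words = words
        self.text = None    # last prefix seen; matches are computed once per prefix
        self.matches = []

    def complete(self, text, state):
        if text != self.text:
            self.matches = [w for w in self.words if w.startswith(text)]
            self.text = text
        try:
            return self.matches[state]  # readline asks for match 0, 1, 2, ...
        except IndexError:
            return None                 # None tells readline to stop asking

readline.set_completer(WordCompleter(['spam', 'span', 'eggs']).complete)
readline.parse_and_bind('tab: complete')
```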
jart/fabulous | fabulous/rlcomplete.py | PathCompleter.matchuserhome | def matchuserhome(prefix):
"""To find matches that start with prefix.
For example, if prefix = '~user' this
returns list of possible matches in form of ['~userspam','~usereggs'] etc.
matchuserdir('~') returns all users
"""
if not prefix.startswith('~'):
raise ValueError("prefix must start with ~")
try: import pwd
except ImportError:
try: import winpwd as pwd
except ImportError: return []
return ['~' + u[0] for u in pwd.getpwall() if u[0].startswith(prefix[1:])] | python | def matchuserhome(prefix):
"""To find matches that start with prefix.
For example, if prefix = '~user' this
returns list of possible matches in form of ['~userspam','~usereggs'] etc.
matchuserdir('~') returns all users
"""
if not prefix.startswith('~'):
raise ValueError("prefix must start with ~")
try: import pwd
except ImportError:
try: import winpwd as pwd
except ImportError: return []
return ['~' + u[0] for u in pwd.getpwall() if u[0].startswith(prefix[1:])] | Find matches that start with prefix.
For example, if prefix = '~user' this
returns a list of possible matches in the form ['~userspam', '~usereggs'], etc.
matchuserhome('~') returns all users | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/rlcomplete.py#L86-L100 |
jart/fabulous | fabulous/rlcomplete.py | PathCompleter.completelist | def completelist(self, text):
"""Return a list of potential matches for completion
n.b. if you want to complete to a file in the current working directory
that starts with a ~, use ./~ when typing it in. Paths that start with
~ are magical and specify users' home paths
"""
path = os.path.expanduser(text)
if len(path) == 0 or path[0] != os.path.sep:
path = os.path.join(os.getcwd(), path)
if text == '~':
dpath = dtext = ''
bpath = '~'
files = ['~/']
elif text.startswith('~') and text.find('/', 1) < 0:
return self.matchuserhome(text)
else:
dtext = os.path.dirname(text)
dpath = os.path.dirname(path)
bpath = os.path.basename(path)
files = os.listdir(dpath)
if bpath == '':
matches = [self.buildpath(text, f) for f in files if not f.startswith('.')]
else:
matches = [self.buildpath(dtext, f) for f in files if f.startswith(bpath)]
if len(matches) == 0 and os.path.basename(path)=='..':
files = os.listdir(path)
matches = [os.path.join(text, f) for f in files]
return matches | python | def completelist(self, text):
"""Return a list of potential matches for completion
n.b. if you want to complete to a file in the current working directory
that starts with a ~, use ./~ when typing it in. Paths that start with
~ are magical and specify users' home paths
"""
path = os.path.expanduser(text)
if len(path) == 0 or path[0] != os.path.sep:
path = os.path.join(os.getcwd(), path)
if text == '~':
dpath = dtext = ''
bpath = '~'
files = ['~/']
elif text.startswith('~') and text.find('/', 1) < 0:
return self.matchuserhome(text)
else:
dtext = os.path.dirname(text)
dpath = os.path.dirname(path)
bpath = os.path.basename(path)
files = os.listdir(dpath)
if bpath == '':
matches = [self.buildpath(text, f) for f in files if not f.startswith('.')]
else:
matches = [self.buildpath(dtext, f) for f in files if f.startswith(bpath)]
if len(matches) == 0 and os.path.basename(path)=='..':
files = os.listdir(path)
matches = [os.path.join(text, f) for f in files]
return matches | Return a list of potential matches for completion
n.b. if you want to complete to a file in the current working directory
that starts with a ~, use ./~ when typing it in. Paths that start with
~ are magical and specify users' home paths | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/rlcomplete.py#L103-L131 |
taskcluster/taskcluster-client.py | taskcluster/aio/queue.py | Queue.status | async def status(self, *args, **kwargs):
"""
Get task status
Get task status structure from `taskId`
This method gives output: ``v1/task-status-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["status"], *args, **kwargs) | python | async def status(self, *args, **kwargs):
"""
Get task status
Get task status structure from `taskId`
This method gives output: ``v1/task-status-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["status"], *args, **kwargs) | Get task status
Get task status structure from `taskId`
This method gives output: ``v1/task-status-response.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/queue.py#L59-L70 |
taskcluster/taskcluster-client.py | taskcluster/aio/queue.py | Queue.listTaskGroup | async def listTaskGroup(self, *args, **kwargs):
"""
List Task Group
List tasks sharing the same `taskGroupId`.
As a task-group may contain an unbounded number of tasks, this end-point
may return a `continuationToken`. To continue listing tasks you must call
the `listTaskGroup` again with the `continuationToken` as the
query-string option `continuationToken`.
By default this end-point will try to return up to 1000 members in one
request. But it **may return fewer**, even if more tasks are available.
It may also return a `continuationToken` even though there are no more
results. However, you can only be sure to have seen all results if you
keep calling `listTaskGroup` with the last `continuationToken` until you
get a result without a `continuationToken`.
If you are not interested in listing all the members at once, you may
use the query-string option `limit` to return fewer.
This method gives output: ``v1/list-task-group-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["listTaskGroup"], *args, **kwargs) | python | async def listTaskGroup(self, *args, **kwargs):
"""
List Task Group
List tasks sharing the same `taskGroupId`.
As a task-group may contain an unbounded number of tasks, this end-point
may return a `continuationToken`. To continue listing tasks you must call
the `listTaskGroup` again with the `continuationToken` as the
query-string option `continuationToken`.
By default this end-point will try to return up to 1000 members in one
request. But it **may return fewer**, even if more tasks are available.
It may also return a `continuationToken` even though there are no more
results. However, you can only be sure to have seen all results if you
keep calling `listTaskGroup` with the last `continuationToken` until you
get a result without a `continuationToken`.
If you are not interested in listing all the members at once, you may
use the query-string option `limit` to return fewer.
This method gives output: ``v1/list-task-group-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["listTaskGroup"], *args, **kwargs) | List Task Group
List tasks sharing the same `taskGroupId`.
As a task-group may contain an unbounded number of tasks, this end-point
may return a `continuationToken`. To continue listing tasks you must call
the `listTaskGroup` again with the `continuationToken` as the
query-string option `continuationToken`.
By default this end-point will try to return up to 1000 members in one
request. But it **may return fewer**, even if more tasks are available.
It may also return a `continuationToken` even though there are no more
results. However, you can only be sure to have seen all results if you
keep calling `listTaskGroup` with the last `continuationToken` until you
get a result without a `continuationToken`.
If you are not interested in listing all the members at once, you may
use the query-string option `limit` to return fewer.
This method gives output: ``v1/list-task-group-response.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/queue.py#L72-L98 |
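The continuation-token protocol described above reduces to a simple drain loop. A sketch, assuming an already-configured async `queue` client and that query-string options are passed through a `query` keyword (the exact calling convention is an assumption about this client):

```python
async def list_all_task_group_members(queue, task_group_id):
    """Drain listTaskGroup by following continuationToken until absent.

    `queue` is assumed to be a configured taskcluster.aio Queue instance.
    """
    members = []
    token = None
    while True:
        query = {'continuationToken': token} if token else {}
        result = await queue.listTaskGroup(task_group_id, query=query)
        members.extend(result.get('tasks', []))
        token = result.get('continuationToken')
        if not token:  # per the docs: stop only when no token is returned
            return members
```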
taskcluster/taskcluster-client.py | taskcluster/aio/queue.py | Queue.listDependentTasks | async def listDependentTasks(self, *args, **kwargs):
"""
List Dependent Tasks
List tasks that depend on the given `taskId`.
As many tasks from different task-groups may depend on a single task,
this end-point may return a `continuationToken`. To continue listing
tasks you must call `listDependentTasks` again with the
`continuationToken` as the query-string option `continuationToken`.
By default this end-point will try to return up to 1000 tasks in one
request. But it **may return fewer**, even if more tasks are available.
It may also return a `continuationToken` even though there are no more
results. However, you can only be sure to have seen all results if you
keep calling `listDependentTasks` with the last `continuationToken` until
you get a result without a `continuationToken`.
If you are not interested in listing all the tasks at once, you may
use the query-string option `limit` to return fewer.
This method gives output: ``v1/list-dependent-tasks-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["listDependentTasks"], *args, **kwargs) | python | async def listDependentTasks(self, *args, **kwargs):
"""
List Dependent Tasks
List tasks that depend on the given `taskId`.
As many tasks from different task-groups may dependent on a single tasks,
this end-point may return a `continuationToken`. To continue listing
tasks you must call `listDependentTasks` again with the
`continuationToken` as the query-string option `continuationToken`.
By default this end-point will try to return up to 1000 tasks in one
request. But it **may return fewer**, even if more tasks are available.
It may also return a `continuationToken` even though there are no more
results. However, you can only be sure to have seen all results if you
keep calling `listDependentTasks` with the last `continuationToken` until
you get a result without a `continuationToken`.
If you are not interested in listing all the tasks at once, you may
use the query-string option `limit` to return fewer.
This method gives output: ``v1/list-dependent-tasks-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["listDependentTasks"], *args, **kwargs) | List Dependent Tasks
List tasks that depend on the given `taskId`.
As many tasks from different task-groups may depend on a single task,
this end-point may return a `continuationToken`. To continue listing
tasks you must call `listDependentTasks` again with the
`continuationToken` as the query-string option `continuationToken`.
By default this end-point will try to return up to 1000 tasks in one
request. But it **may return fewer**, even if more tasks are available.
It may also return a `continuationToken` even though there are no more
results. However, you can only be sure to have seen all results if you
keep calling `listDependentTasks` with the last `continuationToken` until
you get a result without a `continuationToken`.
If you are not interested in listing all the tasks at once, you may
use the query-string option `limit` to return fewer.
This method gives output: ``v1/list-dependent-tasks-response.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/queue.py#L100-L126 |
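A minimal pagination sketch for this endpoint, assuming an async `Queue` client from `taskcluster.aio` and assuming query-string options are passed via the client's `query` keyword; `task_id` is an illustrative placeholder. The same loop shape applies to `listTaskGroup` and the other paginated endpoints here.

async def fetch_all_dependents(queue, task_id):
    # Follow continuationToken until the queue stops returning one.
    tasks = []
    query = {}
    while True:
        result = await queue.listDependentTasks(task_id, query=query)
        tasks.extend(result['tasks'])
        if 'continuationToken' not in result:
            return tasks
        query = {'continuationToken': result['continuationToken']}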
taskcluster/taskcluster-client.py | taskcluster/aio/queue.py | Queue.createTask | async def createTask(self, *args, **kwargs):
"""
Create New Task
Create a new task. This is an **idempotent** operation, so repeat it if
you get an internal server error or the network connection is dropped.
**Task `deadline`**: the deadline property can be no more than 5 days
into the future. This is to limit the amount of pending tasks not being
taken care of. Ideally, you should use a much shorter deadline.
**Task expiration**: the `expires` property must be greater than the
task `deadline`. If not provided it will default to `deadline` + one
year. Note that artifacts created by the task must expire before the task.
**Task specific routing-keys**: using the `task.routes` property you may
define task specific routing-keys. If a task has a task specific
routing-key: `<route>`, then when the AMQP message about the task is
published, the message will be CC'ed with the routing-key:
`route.<route>`. This is useful if you want another component to listen
for completed tasks you have posted. The caller must have scope
`queue:route:<route>` for each route.
**Dependencies**: any tasks referenced in `task.dependencies` must have
already been created at the time of this call.
**Scopes**: Note that the scopes required to complete this API call depend
on the content of the `scopes`, `routes`, `schedulerId`, `priority`,
`provisionerId`, and `workerType` properties of the task definition.
**Legacy Scopes**: The `queue:create-task:..` scope without a priority and
the `queue:define-task:..` and `queue:task-group-id:..` scopes are considered
legacy and should not be used. Note that the new, non-legacy scopes require
a `queue:scheduler-id:..` scope as well as scopes for the proper priority.
This method takes input: ``v1/create-task-request.json#``
This method gives output: ``v1/task-status-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["createTask"], *args, **kwargs) | python | async def createTask(self, *args, **kwargs):
"""
Create New Task
Create a new task. This is an **idempotent** operation, so repeat it if
you get an internal server error or the network connection is dropped.
**Task `deadline`**: the deadline property can be no more than 5 days
into the future. This is to limit the amount of pending tasks not being
taken care of. Ideally, you should use a much shorter deadline.
**Task expiration**: the `expires` property must be greater than the
task `deadline`. If not provided it will default to `deadline` + one
year. Note that artifacts created by the task must expire before the task.
**Task specific routing-keys**: using the `task.routes` property you may
define task specific routing-keys. If a task has a task specific
routing-key: `<route>`, then when the AMQP message about the task is
published, the message will be CC'ed with the routing-key:
`route.<route>`. This is useful if you want another component to listen
for completed tasks you have posted. The caller must have scope
`queue:route:<route>` for each route.
**Dependencies**: any tasks referenced in `task.dependencies` must have
already been created at the time of this call.
**Scopes**: Note that the scopes required to complete this API call depend
on the content of the `scopes`, `routes`, `schedulerId`, `priority`,
`provisionerId`, and `workerType` properties of the task definition.
**Legacy Scopes**: The `queue:create-task:..` scope without a priority and
the `queue:define-task:..` and `queue:task-group-id:..` scopes are considered
legacy and should not be used. Note that the new, non-legacy scopes require
a `queue:scheduler-id:..` scope as well as scopes for the proper priority.
This method takes input: ``v1/create-task-request.json#``
This method gives output: ``v1/task-status-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["createTask"], *args, **kwargs) | Create New Task
Create a new task. This is an **idempotent** operation, so repeat it if
you get an internal server error or the network connection is dropped.
**Task `deadline`**: the deadline property can be no more than 5 days
into the future. This is to limit the amount of pending tasks not being
taken care of. Ideally, you should use a much shorter deadline.
**Task expiration**: the `expires` property must be greater than the
task `deadline`. If not provided it will default to `deadline` + one
year. Note that artifacts created by the task must expire before the task.
**Task specific routing-keys**: using the `task.routes` property you may
define task specific routing-keys. If a task has a task specific
routing-key: `<route>`, then when the AMQP message about the task is
published, the message will be CC'ed with the routing-key:
`route.<route>`. This is useful if you want another component to listen
for completed tasks you have posted. The caller must have scope
`queue:route:<route>` for each route.
**Dependencies**: any tasks referenced in `task.dependencies` must have
already been created at the time of this call.
**Scopes**: Note that the scopes required to complete this API call depend
on the content of the `scopes`, `routes`, `schedulerId`, `priority`,
`provisionerId`, and `workerType` properties of the task definition.
**Legacy Scopes**: The `queue:create-task:..` scope without a priority and
the `queue:define-task:..` and `queue:task-group-id:..` scopes are considered
legacy and should not be used. Note that the new, non-legacy scopes require
a `queue:scheduler-id:..` scope as well as scopes for the proper priority.
This method takes input: ``v1/create-task-request.json#``
This method gives output: ``v1/task-status-response.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/queue.py#L128-L170 |
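A hedged sketch of calling `createTask`; the payload keys follow `v1/create-task-request.json#`, while the provisioner, worker type, and metadata values are illustrative placeholders:

import datetime
from taskcluster.utils import slugId, stringDate, fromNow

async def submit_task(queue):
    task_id = slugId()
    payload = {
        'provisionerId': 'example-provisioner',   # placeholder
        'workerType': 'example-worker-type',      # placeholder
        'created': stringDate(datetime.datetime.utcnow()),
        'deadline': stringDate(fromNow('2 hours')),  # well under the 5 day limit
        'payload': {},
        'metadata': {
            'name': 'example task',
            'description': 'demonstrates createTask',
            'owner': 'user@example.com',
            'source': 'https://example.com/',
        },
    }
    return await queue.createTask(task_id, payload)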
taskcluster/taskcluster-client.py | taskcluster/aio/queue.py | Queue.claimWork | async def claimWork(self, *args, **kwargs):
"""
Claim Work
Claim pending task(s) for the given `provisionerId`/`workerType` queue.
If any work is available (even if fewer than the requested number of
tasks), this will return immediately. Otherwise, it will block for tens of
seconds waiting for work. If no work appears, it will return an empty
list of tasks. Callers should sleep a short while (to avoid denial of
service in an error condition) and call the endpoint again. This is a
simple implementation of "long polling".
This method takes input: ``v1/claim-work-request.json#``
This method gives output: ``v1/claim-work-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["claimWork"], *args, **kwargs) | python | async def claimWork(self, *args, **kwargs):
"""
Claim Work
Claim pending task(s) for the given `provisionerId`/`workerType` queue.
If any work is available (even if fewer than the requested number of
tasks), this will return immediately. Otherwise, it will block for tens of
seconds waiting for work. If no work appears, it will return an empty
list of tasks. Callers should sleep a short while (to avoid denial of
service in an error condition) and call the endpoint again. This is a
simple implementation of "long polling".
This method takes input: ``v1/claim-work-request.json#``
This method gives output: ``v1/claim-work-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["claimWork"], *args, **kwargs) | Claim Work
Claim pending task(s) for the given `provisionerId`/`workerType` queue.
If any work is available (even if fewer than the requested number of
tasks), this will return immediately. Otherwise, it will block for tens of
seconds waiting for work. If no work appears, it will return an empty
list of tasks. Callers should sleep a short while (to avoid denial of
service in an error condition) and call the endpoint again. This is a
simple implementation of "long polling".
This method takes input: ``v1/claim-work-request.json#``
This method gives output: ``v1/claim-work-response.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/queue.py#L265-L285 |
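A long-polling worker loop sketched from the description above; the provisioner, worker type, and worker identity are placeholders:

import asyncio

async def poll_for_work(queue):
    while True:
        result = await queue.claimWork('example-provisioner', 'example-worker-type', {
            'workerGroup': 'example-group',
            'workerId': 'example-worker',
            'tasks': 4,  # claim up to four tasks per call
        })
        if not result['tasks']:
            await asyncio.sleep(5)  # short sleep before polling again, as advised
            continue
        for claim in result['tasks']:
            pass  # run each task, then report completed/failed/exception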
taskcluster/taskcluster-client.py | taskcluster/aio/queue.py | Queue.claimTask | async def claimTask(self, *args, **kwargs):
"""
Claim Task
claim a task - never documented
This method takes input: ``v1/task-claim-request.json#``
This method gives output: ``v1/task-claim-response.json#``
This method is ``deprecated``
"""
return await self._makeApiCall(self.funcinfo["claimTask"], *args, **kwargs) | python | async def claimTask(self, *args, **kwargs):
"""
Claim Task
claim a task - never documented
This method takes input: ``v1/task-claim-request.json#``
This method gives output: ``v1/task-claim-response.json#``
This method is ``deprecated``
"""
return await self._makeApiCall(self.funcinfo["claimTask"], *args, **kwargs) | Claim Task
claim a task - never documented
This method takes input: ``v1/task-claim-request.json#``
This method gives output: ``v1/task-claim-response.json#``
This method is ``deprecated`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/queue.py#L287-L300 |
taskcluster/taskcluster-client.py | taskcluster/aio/queue.py | Queue.reclaimTask | async def reclaimTask(self, *args, **kwargs):
"""
Reclaim task
Refresh the claim for a specific `runId` for given `taskId`. This updates
the `takenUntil` property and returns a new set of temporary credentials
for performing requests on behalf of the task. These credentials should
be used in place of the credentials returned by `claimWork`.
The `reclaimTask` request serves to:
* Postpone `takenUntil` preventing the queue from resolving
`claim-expired`,
* Refresh temporary credentials used for processing the task, and
* Abort execution if the task/run have been resolved.
If the `takenUntil` timestamp is exceeded the queue will resolve the run
as _exception_ with reason `claim-expired`, and proceed to retry the
task. This ensures that tasks are retried, even if workers disappear
without warning.
If the task is resolved, this end-point will return `409` reporting
`RequestConflict`. This typically happens if the task has been canceled
or the `task.deadline` has been exceeded. If reclaiming fails, workers
should abort the task and forget about the given `runId`. There is no
need to resolve the run or upload artifacts.
This method gives output: ``v1/task-reclaim-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["reclaimTask"], *args, **kwargs) | python | async def reclaimTask(self, *args, **kwargs):
"""
Reclaim task
Refresh the claim for a specific `runId` for given `taskId`. This updates
the `takenUntil` property and returns a new set of temporary credentials
for performing requests on behalf of the task. These credentials should
be used in place of the credentials returned by `claimWork`.
The `reclaimTask` request serves to:
* Postpone `takenUntil` preventing the queue from resolving
`claim-expired`,
* Refresh temporary credentials used for processing the task, and
* Abort execution if the task/run have been resolved.
If the `takenUntil` timestamp is exceeded the queue will resolve the run
as _exception_ with reason `claim-expired`, and proceed to retry the
task. This ensures that tasks are retried, even if workers disappear
without warning.
If the task is resolved, this end-point will return `409` reporting
`RequestConflict`. This typically happens if the task has been canceled
or the `task.deadline` has been exceeded. If reclaiming fails, workers
should abort the task and forget about the given `runId`. There is no
need to resolve the run or upload artifacts.
This method gives output: ``v1/task-reclaim-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["reclaimTask"], *args, **kwargs) | Reclaim task
Refresh the claim for a specific `runId` for given `taskId`. This updates
the `takenUntil` property and returns a new set of temporary credentials
for performing requests on behalf of the task. These credentials should
be used in place of the credentials returned by `claimWork`.
The `reclaimTask` request serves to:
* Postpone `takenUntil` preventing the queue from resolving
`claim-expired`,
* Refresh temporary credentials used for processing the task, and
* Abort execution if the task/run have been resolved.
If the `takenUntil` timestamp is exceeded the queue will resolve the run
as _exception_ with reason `claim-expired`, and proceed to retry the
task. This ensures that tasks are retried, even if workers disappear
without warning.
If the task is resolved, this end-point will return `409` reporting
`RequestConflict`. This typically happens if the task has been canceled
or the `task.deadline` has been exceeded. If reclaiming fails, workers
should abort the task and forget about the given `runId`. There is no
need to resolve the run or upload artifacts.
This method gives output: ``v1/task-reclaim-response.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/queue.py#L302-L333 |
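A periodic reclaim sketch; the interval is illustrative, and it assumes `TaskclusterRestFailure` exposes a `status_code` attribute as in this client's exceptions module:

import asyncio
from taskcluster.exceptions import TaskclusterRestFailure

async def keep_claim_alive(queue, task_id, run_id):
    while True:
        await asyncio.sleep(300)  # reclaim well before takenUntil expires
        try:
            result = await queue.reclaimTask(task_id, run_id)
        except TaskclusterRestFailure as e:
            if getattr(e, 'status_code', None) == 409:
                return  # run resolved: abort work, skip artifact upload
            raise
        # result['credentials'] replaces the credentials from claimWork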
taskcluster/taskcluster-client.py | taskcluster/aio/queue.py | Queue.reportCompleted | async def reportCompleted(self, *args, **kwargs):
"""
Report Run Completed
Report a task completed, resolving the run as `completed`.
This method gives output: ``v1/task-status-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["reportCompleted"], *args, **kwargs) | python | async def reportCompleted(self, *args, **kwargs):
"""
Report Run Completed
Report a task completed, resolving the run as `completed`.
This method gives output: ``v1/task-status-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["reportCompleted"], *args, **kwargs) | Report Run Completed
Report a task completed, resolving the run as `completed`.
This method gives output: ``v1/task-status-response.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/queue.py#L335-L346 |
taskcluster/taskcluster-client.py | taskcluster/aio/queue.py | Queue.getArtifact | async def getArtifact(self, *args, **kwargs):
"""
Get Artifact from Run
Get artifact by `<name>` from a specific run.
**Public Artifacts**, in order to get an artifact you need the scope
`queue:get-artifact:<name>`, where `<name>` is the name of the artifact.
But if the artifact `name` starts with `public/`, authentication and
authorization is not necessary to fetch the artifact.
**API Clients**, this method will redirect you to the artifact, if it is
stored externally. Either way, the response may not be JSON. So API
client users might want to generate a signed URL for this end-point and
use that URL with an HTTP client that can handle responses correctly.
**Downloading artifacts**
There are some special considerations for those http clients which download
artifacts. This api endpoint is designed to be compatible with an HTTP 1.1
compliant client, but has extra features to ensure the download is valid.
It is strongly recommended that consumers use either taskcluster-lib-artifact (JS),
taskcluster-lib-artifact-go (Go) or the CLI written in Go to interact with
artifacts.
In order to download an artifact the following must be done:
1. Obtain the queue URL. Building a signed URL with a taskcluster client is
recommended
1. Make a GET request which does not follow redirects
1. In all cases, if specified, the
x-taskcluster-location-{content,transfer}-{sha256,length} values must be
validated to be equal to the Content-Length and Sha256 checksum of the
final artifact downloaded, as well as any intermediate redirects.
1. If this response is a 500-series error, retry using an exponential
backoff. No more than 5 retries should be attempted
1. If this response is a 400-series error, treat it appropriately for
your context. This might be an error in responding to this request or
an Error storage type body. This request should not be retried.
1. If this response is a 200-series response, the response body is the artifact.
If the x-taskcluster-location-{content,transfer}-{sha256,length} and
x-taskcluster-location-content-encoding are specified, they should match
this response body
1. If the response type is a 300-series redirect, the artifact will be at the
location specified by the `Location` header. There are multiple artifact storage
types which use a 300-series redirect.
1. For all redirects followed, the user must verify that the content-sha256, content-length,
transfer-sha256, transfer-length and content-encoding match every further request. The final
artifact must also be validated against the values specified in the original queue response
1. Caching of requests with an x-taskcluster-artifact-storage-type value of `reference`
must not occur
1. A request which has x-taskcluster-artifact-storage-type value of `blob` and does not
have x-taskcluster-location-content-sha256 or x-taskcluster-location-content-length
must be treated as an error
**Headers**
The following important headers are set on the response to this method:
* location: the url of the artifact if a redirect is to be performed
* x-taskcluster-artifact-storage-type: the storage type. Example: blob, s3, error
The following important headers are set on responses to this method for Blob artifacts
* x-taskcluster-location-content-sha256: the SHA256 of the artifact
*after* any content-encoding is undone. Sha256 is hex encoded (e.g. [0-9A-Fa-f]{64})
* x-taskcluster-location-content-length: the number of bytes *after* any content-encoding
is undone
* x-taskcluster-location-transfer-sha256: the SHA256 of the artifact
*before* any content-encoding is undone. This is the SHA256 of what is sent over
the wire. Sha256 is hex encoded (e.g. [0-9A-Fa-f]{64})
* x-taskcluster-location-transfer-length: the number of bytes *before* any content-encoding
is undone
* x-taskcluster-location-content-encoding: the content-encoding used. It will either
be `gzip` or `identity` right now. This is hardcoded to a value set when the artifact
was created and no content-negotiation occurs
* x-taskcluster-location-content-type: the content-type of the artifact
**Caching**, artifacts may be cached in data centers closer to the
workers in order to reduce bandwidth costs. This can lead to longer
response times. Caching can be skipped by setting the header
`x-taskcluster-skip-cache: true`, this should only be used for resources
where request volume is known to be low, and caching not useful.
(This feature may be disabled in the future, use it sparingly!)
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["getArtifact"], *args, **kwargs) | python | async def getArtifact(self, *args, **kwargs):
"""
Get Artifact from Run
Get artifact by `<name>` from a specific run.
**Public Artifacts**, in order to get an artifact you need the scope
`queue:get-artifact:<name>`, where `<name>` is the name of the artifact.
But if the artifact `name` starts with `public/`, authentication and
authorization is not necessary to fetch the artifact.
**API Clients**, this method will redirect you to the artifact, if it is
stored externally. Either way, the response may not be JSON. So API
client users might want to generate a signed URL for this end-point and
use that URL with an HTTP client that can handle responses correctly.
**Downloading artifacts**
There are some special considerations for those http clients which download
artifacts. This api endpoint is designed to be compatible with an HTTP 1.1
compliant client, but has extra features to ensure the download is valid.
It is strongly recommended that consumers use either taskcluster-lib-artifact (JS),
taskcluster-lib-artifact-go (Go) or the CLI written in Go to interact with
artifacts.
In order to download an artifact the following must be done:
1. Obtain the queue URL. Building a signed URL with a taskcluster client is
recommended
1. Make a GET request which does not follow redirects
1. In all cases, if specified, the
x-taskcluster-location-{content,transfer}-{sha256,length} values must be
validated to be equal to the Content-Length and Sha256 checksum of the
final artifact downloaded, as well as any intermediate redirects.
1. If this response is a 500-series error, retry using an exponential
backoff. No more than 5 retries should be attempted
1. If this response is a 400-series error, treat it appropriately for
your context. This might be an error in responding to this request or
an Error storage type body. This request should not be retried.
1. If this response is a 200-series response, the response body is the artifact.
If the x-taskcluster-location-{content,transfer}-{sha256,length} and
x-taskcluster-location-content-encoding are specified, they should match
this response body
1. If the response type is a 300-series redirect, the artifact will be at the
location specified by the `Location` header. There are multiple artifact storage
types which use a 300-series redirect.
1. For all redirects followed, the user must verify that the content-sha256, content-length,
transfer-sha256, transfer-length and content-encoding match every further request. The final
artifact must also be validated against the values specified in the original queue response
1. Caching of requests with an x-taskcluster-artifact-storage-type value of `reference`
must not occur
1. A request which has x-taskcluster-artifact-storage-type value of `blob` and does not
have x-taskcluster-location-content-sha256 or x-taskcluster-location-content-length
must be treated as an error
**Headers**
The following important headers are set on the response to this method:
* location: the url of the artifact if a redirect is to be performed
* x-taskcluster-artifact-storage-type: the storage type. Example: blob, s3, error
The following important headers are set on responses to this method for Blob artifacts
* x-taskcluster-location-content-sha256: the SHA256 of the artifact
*after* any content-encoding is undone. Sha256 is hex encoded (e.g. [0-9A-Fa-f]{64})
* x-taskcluster-location-content-length: the number of bytes *after* any content-encoding
is undone
* x-taskcluster-location-transfer-sha256: the SHA256 of the artifact
*before* any content-encoding is undone. This is the SHA256 of what is sent over
the wire. Sha256 is hex encoded (e.g. [0-9A-Fa-f]{64})
* x-taskcluster-location-transfer-length: the number of bytes *before* any content-encoding
is undone
* x-taskcluster-location-content-encoding: the content-encoding used. It will either
be `gzip` or `identity` right now. This is hardcoded to a value set when the artifact
was created and no content-negotiation occurs
* x-taskcluster-location-content-type: the content-type of the artifact
**Caching**, artifacts may be cached in data centers closer to the
workers in order to reduce bandwidth costs. This can lead to longer
response times. Caching can be skipped by setting the header
`x-taskcluster-skip-cache: true`, this should only be used for resources
where request volume is known to be low, and caching not useful.
(This feature may be disabled in the future, use it sparingly!)
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["getArtifact"], *args, **kwargs) | Get Artifact from Run
Get artifact by `<name>` from a specific run.
**Public Artifacts**, in order to get an artifact you need the scope
`queue:get-artifact:<name>`, where `<name>` is the name of the artifact.
But if the artifact `name` starts with `public/`, authentication and
authorization is not necessary to fetch the artifact.
**API Clients**, this method will redirect you to the artifact, if it is
stored externally. Either way, the response may not be JSON. So API
client users might want to generate a signed URL for this end-point and
use that URL with an HTTP client that can handle responses correctly.
**Downloading artifacts**
There are some special considerations for those http clients which download
artifacts. This api endpoint is designed to be compatible with an HTTP 1.1
compliant client, but has extra features to ensure the download is valid.
It is strongly recommended that consumers use either taskcluster-lib-artifact (JS),
taskcluster-lib-artifact-go (Go) or the CLI written in Go to interact with
artifacts.
In order to download an artifact the following must be done:
1. Obtain the queue URL. Building a signed URL with a taskcluster client is
recommended
1. Make a GET request which does not follow redirects
1. In all cases, if specified, the
x-taskcluster-location-{content,transfer}-{sha256,length} values must be
validated to be equal to the Content-Length and Sha256 checksum of the
final artifact downloaded, as well as any intermediate redirects.
1. If this response is a 500-series error, retry using an exponential
backoff. No more than 5 retries should be attempted
1. If this response is a 400-series error, treat it appropriately for
your context. This might be an error in responding to this request or
an Error storage type body. This request should not be retried.
1. If this response is a 200-series response, the response body is the artifact.
If the x-taskcluster-location-{content,transfer}-{sha256,length} and
x-taskcluster-location-content-encoding are specified, they should match
this response body
1. If the response type is a 300-series redirect, the artifact will be at the
location specified by the `Location` header. There are multiple artifact storage
types which use a 300-series redirect.
1. For all redirects followed, the user must verify that the content-sha256, content-length,
transfer-sha256, transfer-length and content-encoding match every further request. The final
artifact must also be validated against the values specified in the original queue response
1. Caching of requests with an x-taskcluster-artifact-storage-type value of `reference`
must not occur
1. A request which has x-taskcluster-artifact-storage-type value of `blob` and does not
have x-taskcluster-location-content-sha256 or x-taskcluster-location-content-length
must be treated as an error
**Headers**
The following important headers are set on the response to this method:
* location: the url of the artifact if a redirect is to be performed
* x-taskcluster-artifact-storage-type: the storage type. Example: blob, s3, error
The following important headers are set on responses to this method for Blob artifacts
* x-taskcluster-location-content-sha256: the SHA256 of the artifact
*after* any content-encoding is undone. Sha256 is hex encoded (e.g. [0-9A-Fa-f]{64})
* x-taskcluster-location-content-length: the number of bytes *after* any content-encoding
is undone
* x-taskcluster-location-transfer-sha256: the SHA256 of the artifact
*before* any content-encoding is undone. This is the SHA256 of what is sent over
the wire. Sha256 is hex encoded (e.g. [0-9A-Fa-f]{64})
* x-taskcluster-location-transfer-length: the number of bytes *before* any content-encoding
is undone
* x-taskcluster-location-content-encoding: the content-encoding used. It will either
be `gzip` or `identity` right now. This is hardcoded to a value set when the artifact
was created and no content-negotiation occurs
* x-taskcluster-location-content-type: the content-type of the artifact
**Caching**, artifacts may be cached in data centers closer to the
workers in order to reduce bandwidth costs. This can lead to longer
response times. Caching can be skipped by setting the header
`x-taskcluster-skip-cache: true`, this should only be used for resources
where request volume is known to be low, and caching not useful.
(This feature may be disabled in the future, use it sparingly!)
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/queue.py#L498-L584 |
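A simplified download sketch following the steps above; it assumes the client's `buildSignedUrl` helper and omits the x-taskcluster-location-* checksum validation that a production consumer must perform:

import requests

def fetch_artifact(queue, task_id, run_id, name='public/logs/live.log'):
    url = queue.buildSignedUrl('getArtifact', task_id, run_id, name)
    resp = requests.get(url, allow_redirects=False)  # do not auto-follow redirects
    if 300 <= resp.status_code < 400:
        # Validate the location headers here before trusting the redirect.
        resp = requests.get(resp.headers['Location'])
    resp.raise_for_status()
    return resp.content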
taskcluster/taskcluster-client.py | taskcluster/aio/queue.py | Queue.listArtifacts | async def listArtifacts(self, *args, **kwargs):
"""
Get Artifacts from Run
Returns a list of artifacts and associated meta-data for a given run.
As a task may have many artifacts paging may be necessary. If this
end-point returns a `continuationToken`, you should call the end-point
again with the `continuationToken` as the query-string option:
`continuationToken`.
By default this end-point will list up to 1000 artifacts in a single page;
you may limit this with the query-string parameter `limit`.
This method gives output: ``v1/list-artifacts-response.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["listArtifacts"], *args, **kwargs) | python | async def listArtifacts(self, *args, **kwargs):
"""
Get Artifacts from Run
Returns a list of artifacts and associated meta-data for a given run.
As a task may have many artifacts paging may be necessary. If this
end-point returns a `continuationToken`, you should call the end-point
again with the `continuationToken` as the query-string option:
`continuationToken`.
By default this end-point will list up to 1000 artifacts in a single page;
you may limit this with the query-string parameter `limit`.
This method gives output: ``v1/list-artifacts-response.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["listArtifacts"], *args, **kwargs) | Get Artifacts from Run
Returns a list of artifacts and associated meta-data for a given run.
As a task may have many artifacts paging may be necessary. If this
end-point returns a `continuationToken`, you should call the end-point
again with the `continuationToken` as the query-string option:
`continuationToken`.
By default this end-point will list up-to 1000 artifacts in a single page
you may limit this with the query-string parameter `limit`.
This method gives output: ``v1/list-artifacts-response.json#``
This method is ``experimental`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/queue.py#L611-L630 |
taskcluster/taskcluster-client.py | taskcluster/aio/queue.py | Queue.getWorkerType | async def getWorkerType(self, *args, **kwargs):
"""
Get a worker-type
Get a worker-type from a provisioner.
This method gives output: ``v1/workertype-response.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["getWorkerType"], *args, **kwargs) | python | async def getWorkerType(self, *args, **kwargs):
"""
Get a worker-type
Get a worker-type from a provisioner.
This method gives output: ``v1/workertype-response.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["getWorkerType"], *args, **kwargs) | Get a worker-type
Get a worker-type from a provisioner.
This method gives output: ``v1/workertype-response.json#``
This method is ``experimental`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/queue.py#L754-L765 |
taskcluster/taskcluster-client.py | taskcluster/aio/queue.py | Queue.declareWorkerType | async def declareWorkerType(self, *args, **kwargs):
"""
Update a worker-type
Declare a workerType, supplying some details about it.
`declareWorkerType` allows updating one or more properties of a worker-type as long as the required scopes are
possessed. For example, a request to update the `gecko-b-1-w2008` worker-type within the `aws-provisioner-v1`
provisioner with a body `{description: 'This worker type is great'}` would require you to have the scope
`queue:declare-worker-type:aws-provisioner-v1/gecko-b-1-w2008#description`.
This method takes input: ``v1/update-workertype-request.json#``
This method gives output: ``v1/workertype-response.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["declareWorkerType"], *args, **kwargs) | python | async def declareWorkerType(self, *args, **kwargs):
"""
Update a worker-type
Declare a workerType, supplying some details about it.
`declareWorkerType` allows updating one or more properties of a worker-type as long as the required scopes are
possessed. For example, a request to update the `gecko-b-1-w2008` worker-type within the `aws-provisioner-v1`
provisioner with a body `{description: 'This worker type is great'}` would require you to have the scope
`queue:declare-worker-type:aws-provisioner-v1/gecko-b-1-w2008#description`.
This method takes input: ``v1/update-workertype-request.json#``
This method gives output: ``v1/workertype-response.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["declareWorkerType"], *args, **kwargs) | Update a worker-type
Declare a workerType, supplying some details about it.
`declareWorkerType` allows updating one or more properties of a worker-type as long as the required scopes are
possessed. For example, a request to update the `gecko-b-1-w2008` worker-type within the `aws-provisioner-v1`
provisioner with a body `{description: 'This worker type is great'}` would require you to have the scope
`queue:declare-worker-type:aws-provisioner-v1/gecko-b-1-w2008#description`.
This method takes input: ``v1/update-workertype-request.json#``
This method gives output: ``v1/workertype-response.json#``
This method is ``experimental`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/queue.py#L767-L785 |
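The docstring's own example, expressed as a call sketch; it assumes the noted scope is held:

async def update_description(queue):
    # Requires queue:declare-worker-type:aws-provisioner-v1/gecko-b-1-w2008#description
    return await queue.declareWorkerType('aws-provisioner-v1', 'gecko-b-1-w2008', {
        'description': 'This worker type is great',
    })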
taskcluster/taskcluster-client.py | taskcluster/aio/queue.py | Queue.listWorkers | async def listWorkers(self, *args, **kwargs):
"""
Get a list of all active workers of a workerType
Get a list of all active workers of a workerType.
`listWorkers` allows a response to be filtered by quarantined and non-quarantined workers.
To filter the query, you should call the end-point with `quarantined` as a query-string option with a
true or false value.
The response is paged. If this end-point returns a `continuationToken`, you
should call the end-point again with the `continuationToken` as a query-string
option. By default this end-point will list up to 1000 workers in a single
page. You may limit this with the query-string parameter `limit`.
This method gives output: ``v1/list-workers-response.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["listWorkers"], *args, **kwargs) | python | async def listWorkers(self, *args, **kwargs):
"""
Get a list of all active workers of a workerType
Get a list of all active workers of a workerType.
`listWorkers` allows a response to be filtered by quarantined and non-quarantined workers.
To filter the query, you should call the end-point with `quarantined` as a query-string option with a
true or false value.
The response is paged. If this end-point returns a `continuationToken`, you
should call the end-point again with the `continuationToken` as a query-string
option. By default this end-point will list up to 1000 workers in a single
page. You may limit this with the query-string parameter `limit`.
This method gives output: ``v1/list-workers-response.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["listWorkers"], *args, **kwargs) | Get a list of all active workers of a workerType
Get a list of all active workers of a workerType.
`listWorkers` allows a response to be filtered by quarantined and non-quarantined workers.
To filter the query, you should call the end-point with `quarantined` as a query-string option with a
true or false value.
The response is paged. If this end-point returns a `continuationToken`, you
should call the end-point again with the `continuationToken` as a query-string
option. By default this end-point will list up to 1000 workers in a single
page. You may limit this with the query-string parameter `limit`.
This method gives output: ``v1/list-workers-response.json#``
This method is ``experimental`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/queue.py#L787-L807 |
taskcluster/taskcluster-client.py | taskcluster/aio/queue.py | Queue.getWorker | async def getWorker(self, *args, **kwargs):
"""
Get a worker
Get a worker from a worker-type.
This method gives output: ``v1/worker-response.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["getWorker"], *args, **kwargs) | python | async def getWorker(self, *args, **kwargs):
"""
Get a worker
Get a worker from a worker-type.
This method gives output: ``v1/worker-response.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["getWorker"], *args, **kwargs) | Get a worker
Get a worker from a worker-type.
This method gives output: ``v1/worker-response.json#``
This method is ``experimental`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/queue.py#L809-L820 |
jart/fabulous | fabulous/casts.py | file | def file(value, **kwarg):
"""value should be a path to file in the filesystem.
returns a file object
"""
#a bit weird, but I don't want to hard code default values
try:
f = open(value, **kwarg)
except IOError as e:
raise ValueError("unable to open %s : %s" % (path.abspath(value), e))
return f | python | def file(value, **kwarg):
"""value should be a path to file in the filesystem.
returns a file object
"""
#a bit weird, but I don't want to hard code default values
try:
f = open(value, **kwarg)
except IOError as e:
raise ValueError("unable to open %s : %s" % (path.abspath(value), e))
return f | value should be a path to a file in the filesystem.
returns a file object | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/casts.py#L32-L42 |
taskcluster/taskcluster-client.py | taskcluster/utils.py | calculateSleepTime | def calculateSleepTime(attempt):
""" From the go client
https://github.com/taskcluster/go-got/blob/031f55c/backoff.go#L24-L29
"""
if attempt <= 0:
return 0
# We subtract one to get exponents: 1, 2, 3, 4, 5, ..
delay = float(2 ** (attempt - 1)) * float(DELAY_FACTOR)
# Apply randomization factor
delay = delay * (RANDOMIZATION_FACTOR * (random.random() * 2 - 1) + 1)
# Always limit with a maximum delay
return min(delay, MAX_DELAY) | python | def calculateSleepTime(attempt):
""" From the go client
https://github.com/taskcluster/go-got/blob/031f55c/backoff.go#L24-L29
"""
if attempt <= 0:
return 0
# We subtract one to get exponents: 1, 2, 3, 4, 5, ..
delay = float(2 ** (attempt - 1)) * float(DELAY_FACTOR)
# Apply randomization factor
delay = delay * (RANDOMIZATION_FACTOR * (random.random() * 2 - 1) + 1)
# Always limit with a maximum delay
return min(delay, MAX_DELAY) | From the go client
https://github.com/taskcluster/go-got/blob/031f55c/backoff.go#L24-L29 | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/utils.py#L40-L52 |
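To see the shape of the backoff, here is a sketch with assumed constants; the module defines DELAY_FACTOR, RANDOMIZATION_FACTOR, and MAX_DELAY elsewhere, and the values below mirror go-got's defaults but are assumptions here:

DELAY_FACTOR = 0.1           # assumed: 100 ms base delay
RANDOMIZATION_FACTOR = 0.25  # assumed: +/-25% jitter
MAX_DELAY = 30               # assumed: 30 s cap

# Midpoint of the randomized range for the first few attempts:
for attempt in range(1, 6):
    print(attempt, min(2 ** (attempt - 1) * DELAY_FACTOR, MAX_DELAY))
# 1 0.1
# 2 0.2
# 3 0.4
# 4 0.8
# 5 1.6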
taskcluster/taskcluster-client.py | taskcluster/utils.py | fromNow | def fromNow(offset, dateObj=None):
"""
Generate a `datetime.datetime` instance which is offset using a string.
See the README.md for a full example, but offset could be '1 day' for
a datetime object one day in the future
"""
# We want to handle past dates as well as future
future = True
offset = offset.lstrip()
if offset.startswith('-'):
future = False
offset = offset[1:].lstrip()
if offset.startswith('+'):
offset = offset[1:].lstrip()
# Parse offset
m = r.match(offset)
if m is None:
raise ValueError("offset string: '%s' does not parse" % offset)
# In order to calculate years and months we need to calculate how many days
# to offset the offset by, since timedelta only goes as high as weeks
days = 0
hours = 0
minutes = 0
seconds = 0
if m.group('years'):
years = int(m.group('years'))
days += 365 * years
if m.group('months'):
months = int(m.group('months'))
days += 30 * months
days += int(m.group('days') or 0)
hours += int(m.group('hours') or 0)
minutes += int(m.group('minutes') or 0)
seconds += int(m.group('seconds') or 0)
# Offset datetime from utc
delta = datetime.timedelta(
weeks=int(m.group('weeks') or 0),
days=days,
hours=hours,
minutes=minutes,
seconds=seconds,
)
if not dateObj:
dateObj = datetime.datetime.utcnow()
return dateObj + delta if future else dateObj - delta | python | def fromNow(offset, dateObj=None):
"""
Generate a `datetime.datetime` instance which is offset using a string.
See the README.md for a full example, but offset could be '1 day' for
a datetime object one day in the future
"""
# We want to handle past dates as well as future
future = True
offset = offset.lstrip()
if offset.startswith('-'):
future = False
offset = offset[1:].lstrip()
if offset.startswith('+'):
offset = offset[1:].lstrip()
# Parse offset
m = r.match(offset)
if m is None:
raise ValueError("offset string: '%s' does not parse" % offset)
# In order to calculate years and months we need to calculate how many days
# to offset the offset by, since timedelta only goes as high as weeks
days = 0
hours = 0
minutes = 0
seconds = 0
if m.group('years'):
years = int(m.group('years'))
days += 365 * years
if m.group('months'):
months = int(m.group('months'))
days += 30 * months
days += int(m.group('days') or 0)
hours += int(m.group('hours') or 0)
minutes += int(m.group('minutes') or 0)
seconds += int(m.group('seconds') or 0)
# Offset datetime from utc
delta = datetime.timedelta(
weeks=int(m.group('weeks') or 0),
days=days,
hours=hours,
minutes=minutes,
seconds=seconds,
)
if not dateObj:
dateObj = datetime.datetime.utcnow()
return dateObj + delta if future else dateObj - delta | Generate a `datetime.datetime` instance which is offset using a string.
See the README.md for a full example, but offset could be '1 day' for
a datetime object one day in the future | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/utils.py#L63-L113 |
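Worked examples against a fixed base date, following the parsing rules above:

import datetime
from taskcluster.utils import fromNow

base = datetime.datetime(2019, 1, 1)
fromNow('2 days 3 hours', base)  # datetime.datetime(2019, 1, 3, 3, 0)
fromNow('-1 week', base)         # datetime.datetime(2018, 12, 25, 0, 0)
fromNow('1 year', base)          # 365 days later; years and months are approximated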
taskcluster/taskcluster-client.py | taskcluster/utils.py | dumpJson | def dumpJson(obj, **kwargs):
""" Match JS's JSON.stringify. When using the default seperators,
base64 encoding JSON results in \n sequences in the output. Hawk
barfs in your face if you have that in the text"""
def handleDateAndBinaryForJs(x):
if six.PY3 and isinstance(x, six.binary_type):
x = x.decode()
if isinstance(x, datetime.datetime) or isinstance(x, datetime.date):
return stringDate(x)
else:
return x
d = json.dumps(obj, separators=(',', ':'), default=handleDateAndBinaryForJs, **kwargs)
assert '\n' not in d
return d | python | def dumpJson(obj, **kwargs):
""" Match JS's JSON.stringify. When using the default seperators,
base64 encoding JSON results in \n sequences in the output. Hawk
barfs in your face if you have that in the text"""
def handleDateAndBinaryForJs(x):
if six.PY3 and isinstance(x, six.binary_type):
x = x.decode()
if isinstance(x, datetime.datetime) or isinstance(x, datetime.date):
return stringDate(x)
else:
return x
d = json.dumps(obj, separators=(',', ':'), default=handleDateAndBinaryForJs, **kwargs)
assert '\n' not in d
return d | Match JS's JSON.stringify. When using the default separators,
base64 encoding JSON results in \n sequences in the output. Hawk
barfs in your face if you have that in the text | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/utils.py#L123-L136 |
taskcluster/taskcluster-client.py | taskcluster/utils.py | makeB64UrlSafe | def makeB64UrlSafe(b64str):
""" Make a base64 string URL Safe """
if isinstance(b64str, six.text_type):
b64str = b64str.encode()
# see RFC 4648, sec. 5
return b64str.replace(b'+', b'-').replace(b'/', b'_') | python | def makeB64UrlSafe(b64str):
""" Make a base64 string URL Safe """
if isinstance(b64str, six.text_type):
b64str = b64str.encode()
# see RFC 4648, sec. 5
return b64str.replace(b'+', b'-').replace(b'/', b'_') | Make a base64 string URL Safe | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/utils.py#L153-L158 |
taskcluster/taskcluster-client.py | taskcluster/utils.py | encodeStringForB64Header | def encodeStringForB64Header(s):
""" HTTP Headers can't have new lines in them, let's """
if isinstance(s, six.text_type):
s = s.encode()
return base64.encodestring(s).strip().replace(b'\n', b'') | python | def encodeStringForB64Header(s):
""" HTTP Headers can't have new lines in them, let's """
if isinstance(s, six.text_type):
s = s.encode()
return base64.encodestring(s).strip().replace(b'\n', b'') | HTTP Headers can't have new lines in them, so strip the newlines that base64 adds | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/utils.py#L169-L173 |
taskcluster/taskcluster-client.py | taskcluster/utils.py | stableSlugId | def stableSlugId():
"""Returns a closure which can be used to generate stable slugIds.
Stable slugIds can be used in a graph to specify task IDs in multiple
places without regenerating them, e.g. taskId, requires, etc.
"""
_cache = {}
def closure(name):
if name not in _cache:
_cache[name] = slugId()
return _cache[name]
return closure | python | def stableSlugId():
"""Returns a closure which can be used to generate stable slugIds.
Stable slugIds can be used in a graph to specify task IDs in multiple
places without regenerating them, e.g. taskId, requires, etc.
"""
_cache = {}
def closure(name):
if name not in _cache:
_cache[name] = slugId()
return _cache[name]
return closure | Returns a closure which can be used to generate stable slugIds.
Stable slugIds can be used in a graph to specify task IDs in multiple
places without regenerating them, e.g. taskId, requires, etc. | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/utils.py#L182-L194 |
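A usage sketch showing the memoization:

from taskcluster.utils import stableSlugId

make_id = stableSlugId()
build_id = make_id('build')          # a fresh slugId on first use
assert make_id('build') == build_id  # same name, same id
assert make_id('test') != build_id   # a different name gets its own id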
taskcluster/taskcluster-client.py | taskcluster/utils.py | scopeMatch | def scopeMatch(assumedScopes, requiredScopeSets):
"""
Take a list of assumed scopes, and a list of required scope sets in
disjunctive normal form, and check if any of the required scope sets are
satisfied.
Example:
requiredScopeSets = [
["scopeA", "scopeB"],
["scopeC"]
]
In this case assumed_scopes must contain either:
"scopeA" AND "scopeB", OR just "scopeC".
"""
for scopeSet in requiredScopeSets:
for requiredScope in scopeSet:
for scope in assumedScopes:
if scope == requiredScope:
# requiredScope satisfied, no need to check more scopes
break
if scope.endswith("*") and requiredScope.startswith(scope[:-1]):
# requiredScope satisfied, no need to check more scopes
break
else:
# requiredScope not satisfied, stop checking scopeSet
break
else:
# scopeSet satisfied, so we're happy
return True
# none of the requiredScopeSets were satisfied
return False | python | def scopeMatch(assumedScopes, requiredScopeSets):
"""
Take a list of assumed scopes, and a list of required scope sets in
disjunctive normal form, and check if any of the required scope sets are
satisfied.
Example:
requiredScopeSets = [
["scopeA", "scopeB"],
["scopeC"]
]
In this case assumed_scopes must contain either:
"scopeA" AND "scopeB", OR just "scopeC".
"""
for scopeSet in requiredScopeSets:
for requiredScope in scopeSet:
for scope in assumedScopes:
if scope == requiredScope:
# requiredScope satisfied, no need to check more scopes
break
if scope.endswith("*") and requiredScope.startswith(scope[:-1]):
# requiredScope satisfied, no need to check more scopes
break
else:
# requiredScope not satisfied, stop checking scopeSet
break
else:
# scopeSet satisfied, so we're happy
return True
# none of the requiredScopeSets were satisfied
return False | Take a list of a assumed scopes, and a list of required scope sets on
disjunctive normal form, and check if any of the required scope sets are
satisfied.
Example:
requiredScopeSets = [
["scopeA", "scopeB"],
["scopeC"]
]
In this case assumed_scopes must contain either:
"scopeA" AND "scopeB", OR just "scopeC". | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/utils.py#L197-L229 |
taskcluster/taskcluster-client.py | taskcluster/utils.py | makeHttpRequest | def makeHttpRequest(method, url, payload, headers, retries=MAX_RETRIES, session=None):
""" Make an HTTP request and retry it until success, return request """
retry = -1
response = None
while retry < retries:
retry += 1
# if this isn't the first retry then we sleep
if retry > 0:
snooze = float(retry * retry) / 10.0
log.info('Sleeping %0.2f seconds for exponential backoff', snooze)
time.sleep(snooze)
# Seek payload to start, if it is a file
if hasattr(payload, 'seek'):
payload.seek(0)
log.debug('Making attempt %d', retry)
try:
response = makeSingleHttpRequest(method, url, payload, headers, session)
except requests.exceptions.RequestException as rerr:
if retry < retries:
log.warn('Retrying because of: %s' % rerr)
continue
# raise a connection exception
raise rerr
# Handle non 2xx status code and retry if possible
try:
response.raise_for_status()
except requests.exceptions.RequestException:
pass
status = response.status_code
if 500 <= status < 600:
if retry < retries:
log.warn('Retrying because of: %d status' % status)
continue
else:
raise exceptions.TaskclusterRestFailure("Unknown Server Error", superExc=None)
return response
# This code-path should be unreachable
assert False, "Error from last retry should have been raised!" | python | def makeHttpRequest(method, url, payload, headers, retries=MAX_RETRIES, session=None):
""" Make an HTTP request and retry it until success, return request """
retry = -1
response = None
while retry < retries:
retry += 1
# if this isn't the first retry then we sleep
if retry > 0:
snooze = float(retry * retry) / 10.0
log.info('Sleeping %0.2f seconds for exponential backoff', snooze)
time.sleep(snooze)
# Seek payload to start, if it is a file
if hasattr(payload, 'seek'):
payload.seek(0)
log.debug('Making attempt %d', retry)
try:
response = makeSingleHttpRequest(method, url, payload, headers, session)
except requests.exceptions.RequestException as rerr:
if retry < retries:
log.warn('Retrying because of: %s' % rerr)
continue
# raise a connection exception
raise rerr
# Handle non 2xx status code and retry if possible
try:
response.raise_for_status()
except requests.exceptions.RequestException:
pass
status = response.status_code
if 500 <= status < 600:
if retry < retries:
log.warn('Retrying because of: %d status' % status)
continue
else:
raise exceptions.TaskclusterRestFailure("Unknown Server Error", superExc=None)
return response
# This code-path should be unreachable
assert False, "Error from last retry should have been raised!" | Make an HTTP request and retry it until success; return the response | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/utils.py#L241-L281 |
taskcluster/taskcluster-client.py | taskcluster/utils.py | isExpired | def isExpired(certificate):
""" Check if certificate is expired """
if isinstance(certificate, six.string_types):
certificate = json.loads(certificate)
expiry = certificate.get('expiry', 0)
return expiry < int(time.time() * 1000) + 20 * 60 | python | def isExpired(certificate):
""" Check if certificate is expired """
if isinstance(certificate, six.string_types):
certificate = json.loads(certificate)
expiry = certificate.get('expiry', 0)
return expiry < int(time.time() * 1000) + 20 * 60 | Check if certificate is expired | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/utils.py#L314-L319 |
taskcluster/taskcluster-client.py | taskcluster/utils.py | optionsFromEnvironment | def optionsFromEnvironment(defaults=None):
"""Fetch root URL and credentials from the standard TASKCLUSTER_…
environment variables and return them in a format suitable for passing to a
client constructor."""
options = defaults or {}
credentials = options.get('credentials', {})
rootUrl = os.environ.get('TASKCLUSTER_ROOT_URL')
if rootUrl:
options['rootUrl'] = rootUrl
clientId = os.environ.get('TASKCLUSTER_CLIENT_ID')
if clientId:
credentials['clientId'] = clientId
accessToken = os.environ.get('TASKCLUSTER_ACCESS_TOKEN')
if accessToken:
credentials['accessToken'] = accessToken
certificate = os.environ.get('TASKCLUSTER_CERTIFICATE')
if certificate:
credentials['certificate'] = certificate
if credentials:
options['credentials'] = credentials
return options | python | def optionsFromEnvironment(defaults=None):
"""Fetch root URL and credentials from the standard TASKCLUSTER_…
environment variables and return them in a format suitable for passing to a
client constructor."""
options = defaults or {}
credentials = options.get('credentials', {})
rootUrl = os.environ.get('TASKCLUSTER_ROOT_URL')
if rootUrl:
options['rootUrl'] = rootUrl
clientId = os.environ.get('TASKCLUSTER_CLIENT_ID')
if clientId:
credentials['clientId'] = clientId
accessToken = os.environ.get('TASKCLUSTER_ACCESS_TOKEN')
if accessToken:
credentials['accessToken'] = accessToken
certificate = os.environ.get('TASKCLUSTER_CERTIFICATE')
if certificate:
credentials['certificate'] = certificate
if credentials:
options['credentials'] = credentials
return options | Fetch root URL and credentials from the standard TASKCLUSTER_…
environment variables and return them in a format suitable for passing to a
client constructor. | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/utils.py#L322-L348 |
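A construction sketch; the root URL is a placeholder, and in practice the TASKCLUSTER_* variables are set by the surrounding environment:

import os
import taskcluster

os.environ['TASKCLUSTER_ROOT_URL'] = 'https://tc.example.com'  # placeholder
options = taskcluster.optionsFromEnvironment()
queue = taskcluster.Queue(options)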
jart/fabulous | fabulous/xterm256.py | xterm_to_rgb | def xterm_to_rgb(xcolor):
"""Convert xterm Color ID to an RGB value
All 256 values are precalculated and stored in :data:`COLOR_TABLE`
"""
assert 0 <= xcolor <= 255
if xcolor < 16:
# basic colors
return BASIC16[xcolor]
elif 16 <= xcolor <= 231:
# color cube
xcolor -= 16
return (CUBE_STEPS[xcolor // 36 % 6],
CUBE_STEPS[xcolor // 6 % 6],
CUBE_STEPS[xcolor % 6])
elif 232 <= xcolor <= 255:
# gray tone
c = 8 + (xcolor - 232) * 0x0A
return (c, c, c) | python | def xterm_to_rgb(xcolor):
"""Convert xterm Color ID to an RGB value
All 256 values are precalculated and stored in :data:`COLOR_TABLE`
"""
assert 0 <= xcolor <= 255
if xcolor < 16:
# basic colors
return BASIC16[xcolor]
elif 16 <= xcolor <= 231:
# color cube
xcolor -= 16
return (CUBE_STEPS[xcolor // 36 % 6],
CUBE_STEPS[xcolor // 6 % 6],
CUBE_STEPS[xcolor % 6])
elif 232 <= xcolor <= 255:
# gray tone
c = 8 + (xcolor - 232) * 0x0A
return (c, c, c) | Convert xterm Color ID to an RGB value
All 256 values are precalculated and stored in :data:`COLOR_TABLE` | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/xterm256.py#L42-L60 |
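Worked examples, assuming the standard xterm cube steps CUBE_STEPS = (0, 95, 135, 175, 215, 255):

from fabulous.xterm256 import xterm_to_rgb

xterm_to_rgb(196)  # (255, 0, 0): cube index 180 = 5*36 + 0*6 + 0
xterm_to_rgb(244)  # (128, 128, 128): gray tone 8 + (244 - 232) * 10
xterm_to_rgb(16)   # (0, 0, 0): the cube's black corner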
jart/fabulous | fabulous/xterm256.py | rgb_to_xterm | def rgb_to_xterm(r, g, b):
"""Quantize RGB values to an xterm 256-color ID
This works by envisioning the RGB values for all 256 xterm colors
as 3D euclidean space and brute-force searching for the nearest
neighbor.
This is very slow. If you're very lucky, :func:`compile_speedup`
will replace this function automatically with routines in
`_xterm256.c`.
"""
if r < 5 and g < 5 and b < 5:
return 16
best_match = 0
smallest_distance = 10000000000
for c in range(16, 256):
d = (COLOR_TABLE[c][0] - r) ** 2 + \
(COLOR_TABLE[c][1] - g) ** 2 + \
(COLOR_TABLE[c][2] - b) ** 2
if d < smallest_distance:
smallest_distance = d
best_match = c
return best_match | python | def rgb_to_xterm(r, g, b):
"""Quantize RGB values to an xterm 256-color ID
This works by envisioning the RGB values for all 256 xterm colors
as a 3D Euclidean space and brute-force searching for the nearest
neighbor.
This is very slow. If you're very lucky, :func:`compile_speedup`
will replace this function automatically with routines in
`_xterm256.c`.
"""
if r < 5 and g < 5 and b < 5:
return 16
best_match = 0
smallest_distance = 10000000000
for c in range(16, 256):
d = (COLOR_TABLE[c][0] - r) ** 2 + \
(COLOR_TABLE[c][1] - g) ** 2 + \
(COLOR_TABLE[c][2] - b) ** 2
if d < smallest_distance:
smallest_distance = d
best_match = c
return best_match | Quantize RGB values to an xterm 256-color ID
This works by envisioning the RGB values for all 256 xterm colors
as a 3D Euclidean space and brute-force searching for the nearest
neighbor.
This is very slow. If you're very lucky, :func:`compile_speedup`
will replace this function automatically with routines in
`_xterm256.c`. | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/xterm256.py#L66-L88 |
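A small sanity-check sketch of the quantizer (assumes the module-level COLOR_TABLE has been populated for the full 0-255 range)::

    assert rgb_to_xterm(255, 0, 0) == 196   # exact cube entry, distance zero
    assert rgb_to_xterm(250, 5, 5) == 196   # nearby colors snap to the same cell
    assert rgb_to_xterm(1, 2, 3) == 16      # near-black short-circuits to 16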
jart/fabulous | fabulous/xterm256.py | compile_speedup | def compile_speedup():
"""Tries to compile/link the C version of this module
Like it really makes a huge difference. With a little bit of luck
this should *just work* for you.
You need:
- Python >= 2.5 for ctypes library
- gcc (``sudo apt-get install gcc``)
"""
import os
import ctypes
from os.path import join, dirname, getmtime, exists, expanduser
# library = join(dirname(__file__), '_xterm256.so')
library = expanduser('~/.xterm256.so')
sauce = join(dirname(__file__), '_xterm256.c')
if not exists(library) or getmtime(sauce) > getmtime(library):
build = "gcc -fPIC -shared -o %s %s" % (library, sauce)
if (os.system(build + " >/dev/null 2>&1") != 0):
raise OSError("GCC error")
xterm256_c = ctypes.cdll.LoadLibrary(library)
xterm256_c.init()
def xterm_to_rgb(xcolor):
res = xterm256_c.xterm_to_rgb_i(xcolor)
return ((res >> 16) & 0xFF, (res >> 8) & 0xFF, res & 0xFF)
return (xterm256_c.rgb_to_xterm, xterm_to_rgb) | python | def compile_speedup():
"""Tries to compile/link the C version of this module
Like it really makes a huge difference. With a little bit of luck
this should *just work* for you.
You need:
- Python >= 2.5 for ctypes library
- gcc (``sudo apt-get install gcc``)
"""
import os
import ctypes
from os.path import join, dirname, getmtime, exists, expanduser
# library = join(dirname(__file__), '_xterm256.so')
library = expanduser('~/.xterm256.so')
sauce = join(dirname(__file__), '_xterm256.c')
if not exists(library) or getmtime(sauce) > getmtime(library):
build = "gcc -fPIC -shared -o %s %s" % (library, sauce)
if (os.system(build + " >/dev/null 2>&1") != 0):
raise OSError("GCC error")
xterm256_c = ctypes.cdll.LoadLibrary(library)
xterm256_c.init()
def xterm_to_rgb(xcolor):
res = xterm256_c.xterm_to_rgb_i(xcolor)
return ((res >> 16) & 0xFF, (res >> 8) & 0xFF, res & 0xFF)
return (xterm256_c.rgb_to_xterm, xterm_to_rgb) | Tries to compile/link the C version of this module
Like it really makes a huge difference. With a little bit of luck
this should *just work* for you.
You need:
- Python >= 2.5 for ctypes library
- gcc (``sudo apt-get install gcc``) | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/xterm256.py#L91-L118 |
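A sketch of how a caller might opt into the speedup while degrading gracefully when gcc is unavailable (the fallback pattern here is an assumption, not quoted from the module)::

    try:
        # Shadow the pure-Python functions with the compiled versions.
        rgb_to_xterm, xterm_to_rgb = compile_speedup()
    except OSError:
        # gcc missing or the build failed; the Python fallbacks still work.
        pass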
jart/fabulous | fabulous/image.py | main | def main():
"""Main function for :command:`fabulous-image`."""
import optparse
parser = optparse.OptionParser()
parser.add_option(
"-w", "--width", dest="width", type="int", default=None,
help=("Width of printed image in characters. Default: %default"))
(options, args) = parser.parse_args(args=sys.argv[1:])
for imgpath in args:
for line in Image(imgpath, options.width):
printy(line) | python | def main():
"""Main function for :command:`fabulous-image`."""
import optparse
parser = optparse.OptionParser()
parser.add_option(
"-w", "--width", dest="width", type="int", default=None,
help=("Width of printed image in characters. Default: %default"))
(options, args) = parser.parse_args(args=sys.argv[1:])
for imgpath in args:
for line in Image(imgpath, options.width):
printy(line) | Main function for :command:`fabulous-image`. | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/image.py#L180-L190 |
jart/fabulous | fabulous/image.py | Image.resize | def resize(self, width=None):
"""Resizes image to fit inside terminal
Called by the constructor automatically.
"""
(iw, ih) = self.size
if width is None:
width = min(iw, utils.term.width)
elif isinstance(width, basestring):
# Interpret strings like '50%' as a percentage of the terminal width.
percents = dict([('%s%%' % (pct), pct) for pct in range(101)])
width = utils.term.width * percents[width] // 100
height = int(float(ih) * (float(width) / float(iw)))
height //= 2
self.img = self.img.resize((width, height)) | python | def resize(self, width=None):
"""Resizes image to fit inside terminal
Called by the constructor automatically.
"""
(iw, ih) = self.size
if width is None:
width = min(iw, utils.term.width)
elif isinstance(width, basestring):
# Interpret strings like '50%' as a percentage of the terminal width.
percents = dict([('%s%%' % (pct), pct) for pct in range(101)])
width = utils.term.width * percents[width] // 100
height = int(float(ih) * (float(width) / float(iw)))
height //= 2
self.img = self.img.resize((width, height)) | Resizes image to fit inside terminal
Called by the constructor automatically. | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/image.py#L107-L120 |
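Worked numbers for the scaling above: terminal character cells are roughly twice as tall as they are wide, which is why the computed height is halved. For a hypothetical 640x480 image drawn 80 columns wide::

    iw, ih, width = 640, 480, 80
    height = int(float(ih) * (float(width) / float(iw)))  # 480 * 0.125 = 60
    height //= 2                                          # 30 terminal rows
    assert (width, height) == (80, 30)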
jart/fabulous | fabulous/image.py | Image.reduce | def reduce(self, colors):
"""Converts color codes into optimized text
This optimizer works by merging adjacent colors so we don't
have to repeat the same escape codes for each pixel. There is
no loss of information.
:param colors: Iterable yielding an xterm color code for each
pixel, None to indicate a transparent pixel, or
``'EOL'`` to indicate the end of a line.
:return: Yields lines of optimized text.
"""
need_reset = False
line = []
for color, items in itertools.groupby(colors):
if color is None:
if need_reset:
line.append("\x1b[49m")
need_reset = False
line.append(self.pad * len(list(items)))
elif color == "EOL":
if need_reset:
line.append("\x1b[49m")
need_reset = False
yield "".join(line)
else:
line.pop()
yield "".join(line)
line = []
else:
need_reset = True
line.append("\x1b[48;5;%dm%s" % (
color, self.pad * len(list(items)))) | python | def reduce(self, colors):
"""Converts color codes into optimized text
This optimizer works by merging adjacent colors so we don't
have to repeat the same escape codes for each pixel. There is
no loss of information.
:param colors: Iterable yielding an xterm color code for each
pixel, None to indicate a transparent pixel, or
``'EOL'`` to indicate the end of a line.
:return: Yields lines of optimized text.
"""
need_reset = False
line = []
for color, items in itertools.groupby(colors):
if color is None:
if need_reset:
line.append("\x1b[49m")
need_reset = False
line.append(self.pad * len(list(items)))
elif color == "EOL":
if need_reset:
line.append("\x1b[49m")
need_reset = False
yield "".join(line)
else:
line.pop()
yield "".join(line)
line = []
else:
need_reset = True
line.append("\x1b[48;5;%dm%s" % (
color, self.pad * len(list(items)))) | Converts color codes into optimized text
This optimizer works by merging adjacent colors so we don't
have to repeat the same escape codes for each pixel. There is
no loss of information.
:param colors: Iterable yielding an xterm color code for each
pixel, None to indicate a transparent pixel, or
``'EOL'`` to indicate the end of a line.
:return: Yields lines of optimized text. | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/image.py#L122-L156 |
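The run-length merging can be seen in isolation: `itertools.groupby` collapses each run of identical codes so one escape sequence covers many pixels. A standalone sketch with a hypothetical pixel row::

    import itertools

    colors = [196, 196, 196, None, None, 21, "EOL"]
    runs = [(color, len(list(items)))
            for color, items in itertools.groupby(colors)]
    # three red cells, two transparent cells, one blue cell, end of line
    assert runs == [(196, 3), (None, 2), (21, 1), ("EOL", 1)]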
jart/fabulous | fabulous/image.py | Image.convert | def convert(self):
"""Yields xterm color codes for each pixel in image
"""
(width, height) = self.img.size
bgcolor = utils.term.bgcolor
self.img.load()
for y in range(height):
for x in range(width):
rgba = self.img.getpixel((x, y))
if len(rgba) == 4 and rgba[3] == 0:
yield None
elif len(rgba) == 3 or rgba[3] == 255:
yield xterm256.rgb_to_xterm(*rgba[:3])
else:
color = grapefruit.Color.NewFromRgb(
*[c / 255.0 for c in rgba])
rgba = grapefruit.Color.AlphaBlend(color, bgcolor).rgb
yield xterm256.rgb_to_xterm(
*[int(c * 255.0) for c in rgba])
yield "EOL" | python | def convert(self):
"""Yields xterm color codes for each pixel in image
"""
(width, height) = self.img.size
bgcolor = utils.term.bgcolor
self.img.load()
for y in range(height):
for x in range(width):
rgba = self.img.getpixel((x, y))
if len(rgba) == 4 and rgba[3] == 0:
yield None
elif len(rgba) == 3 or rgba[3] == 255:
yield xterm256.rgb_to_xterm(*rgba[:3])
else:
color = grapefruit.Color.NewFromRgb(
*[c / 255.0 for c in rgba])
rgba = grapefruit.Color.AlphaBlend(color, bgcolor).rgb
yield xterm256.rgb_to_xterm(
*[int(c * 255.0) for c in rgba])
yield "EOL" | Yields xterm color codes for each pixel in image | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/image.py#L158-L177 |
taskcluster/taskcluster-client.py | taskcluster/login.py | Login.oidcCredentials | def oidcCredentials(self, *args, **kwargs):
"""
Get Taskcluster credentials given a suitable `access_token`
Given an OIDC `access_token` from a trusted OpenID provider, return a
set of Taskcluster credentials for use on behalf of the identified
user.
This method is typically not called with a Taskcluster client library
and does not accept Hawk credentials. The `access_token` should be
given in an `Authorization` header:
```
Authorization: Bearer abc.xyz
```
The `access_token` is first verified against the named
:provider, then passed to the provider's APIBuilder to retrieve a user
profile. That profile is then used to generate Taskcluster credentials
appropriate to the user. Note that the resulting credentials may or may
not include a `certificate` property. Callers should be prepared for either
alternative.
The given credentials will expire in a relatively short time. Callers should
monitor this expiration and, once the credentials have expired, refresh them
by calling this endpoint again.
This method gives output: ``v1/oidc-credentials-response.json#``
This method is ``experimental``
"""
return self._makeApiCall(self.funcinfo["oidcCredentials"], *args, **kwargs) | python | def oidcCredentials(self, *args, **kwargs):
"""
Get Taskcluster credentials given a suitable `access_token`
Given an OIDC `access_token` from a trusted OpenID provider, return a
set of Taskcluster credentials for use on behalf of the identified
user.
This method is typically not called with a Taskcluster client library
and does not accept Hawk credentials. The `access_token` should be
given in an `Authorization` header:
```
Authorization: Bearer abc.xyz
```
The `access_token` is first verified against the named
:provider, then passed to the provider's APIBuilder to retrieve a user
profile. That profile is then used to generate Taskcluster credentials
appropriate to the user. Note that the resulting credentials may or may
not include a `certificate` property. Callers should be prepared for either
alternative.
The given credentials will expire in a relatively short time. Callers should
monitor this expiration and, once the credentials have expired, refresh them
by calling this endpoint again.
This method gives output: ``v1/oidc-credentials-response.json#``
This method is ``experimental``
"""
return self._makeApiCall(self.funcinfo["oidcCredentials"], *args, **kwargs) | Get Taskcluster credentials given a suitable `access_token`
Given an OIDC `access_token` from a trusted OpenID provider, return a
set of Taskcluster credentials for use on behalf of the identified
user.
This method is typically not called with a Taskcluster client library
and does not accept Hawk credentials. The `access_token` should be
given in an `Authorization` header:
```
Authorization: Bearer abc.xyz
```
The `access_token` is first verified against the named
:provider, then passed to the provider's APIBuilder to retrieve a user
profile. That profile is then used to generate Taskcluster credentials
appropriate to the user. Note that the resulting credentials may or may
not include a `certificate` property. Callers should be prepared for either
alternative.
The given credentials will expire in a relatively short time. Callers should
monitor this expiration and, once the credentials have expired, refresh them
by calling this endpoint again.
This method gives output: ``v1/oidc-credentials-response.json#``
This method is ``experimental`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/login.py#L37-L68 |
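Since this endpoint skips Hawk signing, it is naturally exercised with a plain HTTP client. A sketch using `requests`; the root URL, provider name, and token are placeholders, and the URL layout is an assumption based on the `login/v1/oidc-credentials/:provider` route implied by the docstring::

    import requests

    root_url = 'https://taskcluster.example.com'   # hypothetical deployment
    provider = 'mozilla-auth0'                     # hypothetical provider name
    token = 'abc.xyz'                              # access_token from the provider

    resp = requests.get(
        '%s/api/login/v1/oidc-credentials/%s' % (root_url, provider),
        headers={'Authorization': 'Bearer %s' % token})
    resp.raise_for_status()
    credentials = resp.json()['credentials']       # may or may not have 'certificate'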
taskcluster/taskcluster-client.py | taskcluster/aio/github.py | Github.builds | async def builds(self, *args, **kwargs):
"""
List of Builds
A paginated list of builds that have been run in
Taskcluster. Can be filtered on various git-specific
fields.
This method gives output: ``v1/build-list.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["builds"], *args, **kwargs) | python | async def builds(self, *args, **kwargs):
"""
List of Builds
A paginated list of builds that have been run in
Taskcluster. Can be filtered on various git-specific
fields.
This method gives output: ``v1/build-list.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["builds"], *args, **kwargs) | List of Builds
A paginated list of builds that have been run in
Taskcluster. Can be filtered on various git-specific
fields.
This method gives output: ``v1/build-list.json#``
This method is ``experimental`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/github.py#L55-L68 |
taskcluster/taskcluster-client.py | taskcluster/aio/github.py | Github.repository | async def repository(self, *args, **kwargs):
"""
Get Repository Info
Returns any repository metadata that is
useful within Taskcluster related services.
This method gives output: ``v1/repository.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["repository"], *args, **kwargs) | python | async def repository(self, *args, **kwargs):
"""
Get Repository Info
Returns any repository metadata that is
useful within Taskcluster related services.
This method gives output: ``v1/repository.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["repository"], *args, **kwargs) | Get Repository Info
Returns any repository metadata that is
useful within Taskcluster related services.
This method gives output: ``v1/repository.json#``
This method is ``experimental`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/github.py#L82-L94 |
taskcluster/taskcluster-client.py | taskcluster/aio/github.py | Github.createStatus | async def createStatus(self, *args, **kwargs):
"""
Post a status against a given changeset
For a given changeset (SHA) of a repository, this will attach a "commit status"
on GitHub. These statuses are links displayed next to each revision.
The status is either OK (green check) or FAILURE (red cross),
made of a custom title and link.
This method takes input: ``v1/create-status.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["createStatus"], *args, **kwargs) | python | async def createStatus(self, *args, **kwargs):
"""
Post a status against a given changeset
For a given changeset (SHA) of a repository, this will attach a "commit status"
on GitHub. These statuses are links displayed next to each revision.
The status is either OK (green check) or FAILURE (red cross),
made of a custom title and link.
This method takes input: ``v1/create-status.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["createStatus"], *args, **kwargs) | Post a status against a given changeset
For a given changeset (SHA) of a repository, this will attach a "commit status"
on GitHub. These statuses are links displayed next to each revision.
The status is either OK (green check) or FAILURE (red cross),
made of a custom title and link.
This method takes input: ``v1/create-status.json#``
This method is ``experimental`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/github.py#L111-L125 |
taskcluster/taskcluster-client.py | taskcluster/aio/notify.py | Notify.email | async def email(self, *args, **kwargs):
"""
Send an Email
Send an email to `address`. The content is markdown and will be rendered
to HTML, but both the HTML and raw markdown text will be sent in the
email. If a link is included, it will be rendered to a nice button in the
HTML version of the email.
This method takes input: ``v1/email-request.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["email"], *args, **kwargs) | python | async def email(self, *args, **kwargs):
"""
Send an Email
Send an email to `address`. The content is markdown and will be rendered
to HTML, but both the HTML and raw markdown text will be sent in the
email. If a link is included, it will be rendered to a nice button in the
HTML version of the email.
This method takes input: ``v1/email-request.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["email"], *args, **kwargs) | Send an Email
Send an email to `address`. The content is markdown and will be rendered
to HTML, but both the HTML and raw markdown text will be sent in the
email. If a link is included, it will be rendered to a nice button in the
HTML version of the email.
This method takes input: ``v1/email-request.json#``
This method is ``experimental`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/notify.py#L37-L51 |
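A sketch of calling this from a coroutine (field names follow the `v1/email-request.json` shape as described; the exact payload, the options dict, and the `aio` import path are assumptions)::

    import taskcluster.aio

    async def send_report(options):
        notify = taskcluster.aio.Notify(options)
        return await notify.email({
            'address': 'dev@example.com',
            'subject': 'Nightly build finished',
            'content': 'The build **succeeded**.',   # markdown, rendered to HTML
            'link': {'text': 'Inspect task', 'href': 'https://example.com/t/1'},
        })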
taskcluster/taskcluster-client.py | taskcluster/aio/notify.py | Notify.pulse | async def pulse(self, *args, **kwargs):
"""
Publish a Pulse Message
Publish a message on pulse with the given `routingKey`.
This method takes input: ``v1/pulse-request.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["pulse"], *args, **kwargs) | python | async def pulse(self, *args, **kwargs):
"""
Publish a Pulse Message
Publish a message on pulse with the given `routingKey`.
This method takes input: ``v1/pulse-request.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["pulse"], *args, **kwargs) | Publish a Pulse Message
Publish a message on pulse with the given `routingKey`.
This method takes input: ``v1/pulse-request.json#``
This method is ``experimental`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/notify.py#L53-L64 |
jart/fabulous | fabulous/logs.py | basicConfig | def basicConfig(level=logging.WARNING, transient_level=logging.NOTSET):
"""Shortcut for setting up transient logging
I am a replica of ``logging.basicConfig`` which installs a
transient logging handler to stderr.
"""
fmt = "%(asctime)s [%(levelname)s] [%(name)s:%(lineno)d] %(message)s"
logging.root.setLevel(transient_level) # <--- IMPORTANT
hand = TransientStreamHandler(level=level)
hand.setFormatter(logging.Formatter(fmt))
logging.root.addHandler(hand) | python | def basicConfig(level=logging.WARNING, transient_level=logging.NOTSET):
"""Shortcut for setting up transient logging
I am a replica of ``logging.basicConfig`` which installs a
transient logging handler to stderr.
"""
fmt = "%(asctime)s [%(levelname)s] [%(name)s:%(lineno)d] %(message)s"
logging.root.setLevel(transient_level) # <--- IMPORTANT
hand = TransientStreamHandler(level=level)
hand.setFormatter(logging.Formatter(fmt))
logging.root.addHandler(hand) | Shortcut for setting up transient logging
I am a replica of ``logging.basicConfig`` which installs a
transient logging handler to stderr. | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/logs.py#L122-L132 |
taskcluster/taskcluster-client.py | taskcluster/aio/auth.py | Auth.resetAccessToken | async def resetAccessToken(self, *args, **kwargs):
"""
Reset `accessToken`
Reset a client's `accessToken`; this will revoke the existing
`accessToken`, generate a new `accessToken`, and return it from this
call.
There is no way to retrieve an existing `accessToken`, so if you lose it
you must reset the accessToken to acquire it again.
This method gives output: ``v1/create-client-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["resetAccessToken"], *args, **kwargs) | python | async def resetAccessToken(self, *args, **kwargs):
"""
Reset `accessToken`
Reset a client's `accessToken`; this will revoke the existing
`accessToken`, generate a new `accessToken`, and return it from this
call.
There is no way to retrieve an existing `accessToken`, so if you lose it
you must reset the accessToken to acquire it again.
This method gives output: ``v1/create-client-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["resetAccessToken"], *args, **kwargs) | Reset `accessToken`
Reset a client's `accessToken`; this will revoke the existing
`accessToken`, generate a new `accessToken`, and return it from this
call.
There is no way to retrieve an existing `accessToken`, so if you lose it
you must reset the accessToken to acquire it again.
This method gives output: ``v1/create-client-response.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/auth.py#L134-L150 |
taskcluster/taskcluster-client.py | taskcluster/aio/auth.py | Auth.role | async def role(self, *args, **kwargs):
"""
Get Role
Get information about a single role, including the set of scopes that the
role expands to.
This method gives output: ``v1/get-role-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["role"], *args, **kwargs) | python | async def role(self, *args, **kwargs):
"""
Get Role
Get information about a single role, including the set of scopes that the
role expands to.
This method gives output: ``v1/get-role-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["role"], *args, **kwargs) | Get Role
Get information about a single role, including the set of scopes that the
role expands to.
This method gives output: ``v1/get-role-response.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/auth.py#L260-L272 |
taskcluster/taskcluster-client.py | taskcluster/aio/auth.py | Auth.azureAccounts | async def azureAccounts(self, *args, **kwargs):
"""
List Accounts Managed by Auth
Retrieve a list of all Azure accounts managed by Taskcluster Auth.
This method gives output: ``v1/azure-account-list-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["azureAccounts"], *args, **kwargs) | python | async def azureAccounts(self, *args, **kwargs):
"""
List Accounts Managed by Auth
Retrieve a list of all Azure accounts managed by Taskcluster Auth.
This method gives output: ``v1/azure-account-list-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["azureAccounts"], *args, **kwargs) | List Accounts Managed by Auth
Retrieve a list of all Azure accounts managed by Taskcluster Auth.
This method gives output: ``v1/azure-account-list-response.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/auth.py#L457-L468 |
taskcluster/taskcluster-client.py | taskcluster/aio/auth.py | Auth.authenticateHawk | async def authenticateHawk(self, *args, **kwargs):
"""
Authenticate Hawk Request
Validate the request signature given on input and return list of scopes
that the authenticating client has.
This method is used by other services that wish to rely on Taskcluster
credentials for authentication. This way we can use Hawk without having
the secret credentials leave this service.
This method takes input: ``v1/authenticate-hawk-request.json#``
This method gives output: ``v1/authenticate-hawk-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["authenticateHawk"], *args, **kwargs) | python | async def authenticateHawk(self, *args, **kwargs):
"""
Authenticate Hawk Request
Validate the request signature given on input and return list of scopes
that the authenticating client has.
This method is used by other services that wish to rely on Taskcluster
credentials for authentication. This way we can use Hawk without having
the secret credentials leave this service.
This method takes input: ``v1/authenticate-hawk-request.json#``
This method gives output: ``v1/authenticate-hawk-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["authenticateHawk"], *args, **kwargs) | Authenticate Hawk Request
Validate the request signature given on input and return list of scopes
that the authenticating client has.
This method is used by other services that wish to rely on Taskcluster
credentials for authentication. This way we can use Hawk without having
the secret credentials leave this service.
This method takes input: ``v1/authenticate-hawk-request.json#``
This method gives output: ``v1/authenticate-hawk-response.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/auth.py#L588-L606 |
taskcluster/taskcluster-client.py | taskcluster/aio/hooks.py | Hooks.listHookGroups | async def listHookGroups(self, *args, **kwargs):
"""
List hook groups
This endpoint will return a list of all hook groups with at least one hook.
This method gives output: ``v1/list-hook-groups-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["listHookGroups"], *args, **kwargs) | python | async def listHookGroups(self, *args, **kwargs):
"""
List hook groups
This endpoint will return a list of all hook groups with at least one hook.
This method gives output: ``v1/list-hook-groups-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["listHookGroups"], *args, **kwargs) | List hook groups
This endpoint will return a list of all hook groups with at least one hook.
This method gives output: ``v1/list-hook-groups-response.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/hooks.py#L54-L65 |
taskcluster/taskcluster-client.py | taskcluster/aio/hooks.py | Hooks.listHooks | async def listHooks(self, *args, **kwargs):
"""
List hooks in a given group
This endpoint will return a list of all the hook definitions within a
given hook group.
This method gives output: ``v1/list-hooks-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["listHooks"], *args, **kwargs) | python | async def listHooks(self, *args, **kwargs):
"""
List hooks in a given group
This endpoint will return a list of all the hook definitions within a
given hook group.
This method gives output: ``v1/list-hooks-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["listHooks"], *args, **kwargs) | List hooks in a given group
This endpoint will return a list of all the hook definitions within a
given hook group.
This method gives output: ``v1/list-hooks-response.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/hooks.py#L67-L79 |
taskcluster/taskcluster-client.py | taskcluster/aio/hooks.py | Hooks.hook | async def hook(self, *args, **kwargs):
"""
Get hook definition
This endpoint will return the hook definition for the given `hookGroupId`
and `hookId`.
This method gives output: ``v1/hook-definition.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["hook"], *args, **kwargs) | python | async def hook(self, *args, **kwargs):
"""
Get hook definition
This endpoint will return the hook definition for the given `hookGroupId`
and `hookId`.
This method gives output: ``v1/hook-definition.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["hook"], *args, **kwargs) | Get hook definition
This endpoint will return the hook definition for the given `hookGroupId`
and `hookId`.
This method gives output: ``v1/hook-definition.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/hooks.py#L81-L93 |
taskcluster/taskcluster-client.py | taskcluster/aio/hooks.py | Hooks.updateHook | async def updateHook(self, *args, **kwargs):
"""
Update a hook
This endpoint will update an existing hook. All fields except
`hookGroupId` and `hookId` can be modified.
This method takes input: ``v1/create-hook-request.json#``
This method gives output: ``v1/hook-definition.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["updateHook"], *args, **kwargs) | python | async def updateHook(self, *args, **kwargs):
"""
Update a hook
This endpoint will update an existing hook. All fields except
`hookGroupId` and `hookId` can be modified.
This method takes input: ``v1/create-hook-request.json#``
This method gives output: ``v1/hook-definition.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["updateHook"], *args, **kwargs) | Update a hook
This endpoint will update an existing hook. All fields except
`hookGroupId` and `hookId` can be modified.
This method takes input: ``v1/create-hook-request.json#``
This method gives output: ``v1/hook-definition.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/hooks.py#L130-L144 |
taskcluster/taskcluster-client.py | taskcluster/aio/hooks.py | Hooks.removeHook | async def removeHook(self, *args, **kwargs):
"""
Delete a hook
This endpoint will remove a hook definition.
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["removeHook"], *args, **kwargs) | python | async def removeHook(self, *args, **kwargs):
"""
Delete a hook
This endpoint will remove a hook definition.
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["removeHook"], *args, **kwargs) | Delete a hook
This endpoint will remove a hook definition.
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/hooks.py#L146-L155 |
taskcluster/taskcluster-client.py | taskcluster/aio/hooks.py | Hooks.triggerHook | async def triggerHook(self, *args, **kwargs):
"""
Trigger a hook
This endpoint will trigger the creation of a task from a hook definition.
The HTTP payload must match the hook's `triggerSchema`. If it does, it is
provided as the `payload` property of the JSON-e context used to render the
task template.
This method takes input: ``v1/trigger-hook.json#``
This method gives output: ``v1/trigger-hook-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["triggerHook"], *args, **kwargs) | python | async def triggerHook(self, *args, **kwargs):
"""
Trigger a hook
This endpoint will trigger the creation of a task from a hook definition.
The HTTP payload must match the hook's `triggerSchema`. If it does, it is
provided as the `payload` property of the JSON-e context used to render the
task template.
This method takes input: ``v1/trigger-hook.json#``
This method gives output: ``v1/trigger-hook-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["triggerHook"], *args, **kwargs) | Trigger a hook
This endpoint will trigger the creation of a task from a hook definition.
The HTTP payload must match the hook's `triggerSchema`. If it does, it is
provided as the `payload` property of the JSON-e context used to render the
task template.
This method takes input: ``v1/trigger-hook.json#``
This method gives output: ``v1/trigger-hook-response.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/hooks.py#L157-L174 |
taskcluster/taskcluster-client.py | taskcluster/aio/hooks.py | Hooks.getTriggerToken | async def getTriggerToken(self, *args, **kwargs):
"""
Get a trigger token
Retrieve a unique secret token for triggering the specified hook. This
token can be deactivated with `resetTriggerToken`.
This method gives output: ``v1/trigger-token-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["getTriggerToken"], *args, **kwargs) | python | async def getTriggerToken(self, *args, **kwargs):
"""
Get a trigger token
Retrieve a unique secret token for triggering the specified hook. This
token can be deactivated with `resetTriggerToken`.
This method gives output: ``v1/trigger-token-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["getTriggerToken"], *args, **kwargs) | Get a trigger token
Retrieve a unique secret token for triggering the specified hook. This
token can be deactivated with `resetTriggerToken`.
This method gives output: ``v1/trigger-token-response.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/hooks.py#L176-L188 |
taskcluster/taskcluster-client.py | taskcluster/aio/hooks.py | Hooks.resetTriggerToken | async def resetTriggerToken(self, *args, **kwargs):
"""
Reset a trigger token
Reset the token for triggering a given hook. This invalidates any token that
may have been issued via `getTriggerToken`, replacing it with a new token.
This method gives output: ``v1/trigger-token-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["resetTriggerToken"], *args, **kwargs) | python | async def resetTriggerToken(self, *args, **kwargs):
"""
Reset a trigger token
Reset the token for triggering a given hook. This invalidates any token that
may have been issued via `getTriggerToken`, replacing it with a new token.
This method gives output: ``v1/trigger-token-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["resetTriggerToken"], *args, **kwargs) | Reset a trigger token
Reset the token for triggering a given hook. This invalidates any token that
may have been issued via `getTriggerToken`, replacing it with a new token.
This method gives output: ``v1/trigger-token-response.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/hooks.py#L190-L202 |
taskcluster/taskcluster-client.py | taskcluster/aio/hooks.py | Hooks.triggerHookWithToken | async def triggerHookWithToken(self, *args, **kwargs):
"""
Trigger a hook with a token
This endpoint triggers a defined hook with a valid token.
The HTTP payload must match the hook's `triggerSchema`. If it does, it is
provided as the `payload` property of the JSON-e context used to render the
task template.
This method takes input: ``v1/trigger-hook.json#``
This method gives output: ``v1/trigger-hook-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["triggerHookWithToken"], *args, **kwargs) | python | async def triggerHookWithToken(self, *args, **kwargs):
"""
Trigger a hook with a token
This endpoint triggers a defined hook with a valid token.
The HTTP payload must match the hook's `triggerSchema`. If it does, it is
provided as the `payload` property of the JSON-e context used to render the
task template.
This method takes input: ``v1/trigger-hook.json#``
This method gives output: ``v1/trigger-hook-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["triggerHookWithToken"], *args, **kwargs) | Trigger a hook with a token
This endpoint triggers a defined hook with a valid token.
The HTTP payload must match the hook's `triggerSchema`. If it does, it is
provided as the `payload` property of the JSON-e context used to render the
task template.
This method takes input: ``v1/trigger-hook.json#``
This method gives output: ``v1/trigger-hook-response.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/hooks.py#L204-L221 |
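The three token endpoints compose into a simple delegation flow: mint a token, hand it to an untrusted system, and rotate it later to revoke access. A sketch with a configured async `Hooks` client (the hook identifiers and payload are hypothetical)::

    async def delegate_trigger(hooks):
        # Owner side: mint the secret trigger token for the hook.
        token = (await hooks.getTriggerToken('my-group', 'my-hook'))['token']

        # Untrusted side: trigger using only the token, no Taskcluster credentials.
        result = await hooks.triggerHookWithToken(
            'my-group', 'my-hook', token, {'branch': 'main'})

        # Owner side: rotate the token, revoking the one handed out above.
        await hooks.resetTriggerToken('my-group', 'my-hook')
        return result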
taskcluster/taskcluster-client.py | taskcluster/aio/awsprovisioner.py | AwsProvisioner.createWorkerType | async def createWorkerType(self, *args, **kwargs):
"""
Create new Worker Type
Create a worker type. A worker type contains all the configuration
needed for the provisioner to manage the instances. Each worker type
knows which regions and which instance types are allowed for that
worker type. Remember that Capacity is the number of concurrent tasks
that can be run on a given EC2 resource and that Utility is the relative
performance rate between different instance types. There is no way to
configure different regions to have different sets of instance types
so ensure that all instance types are available in all regions.
This function is idempotent.
Once a worker type is in the provisioner, a background process will
begin creating instances for it based on its capacity bounds and its
pending task count from the Queue. It is the worker's responsibility
to shut itself down. The provisioner has a limit (currently 96 hours)
for all instances to prevent zombie instances from running indefinitely.
The provisioner will ensure that all instances created are tagged with
aws resource tags containing the provisioner id and the worker type.
If provided, the secrets in the global, region and instance type sections
are available using the secrets api. If specified, the scopes provided
will be used to generate a set of temporary credentials available with
the other secrets.
This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-worker-type-request.json#``
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["createWorkerType"], *args, **kwargs) | python | async def createWorkerType(self, *args, **kwargs):
"""
Create new Worker Type
Create a worker type. A worker type contains all the configuration
needed for the provisioner to manage the instances. Each worker type
knows which regions and which instance types are allowed for that
worker type. Remember that Capacity is the number of concurrent tasks
that can be run on a given EC2 resource and that Utility is the relative
performance rate between different instance types. There is no way to
configure different regions to have different sets of instance types
so ensure that all instance types are available in all regions.
This function is idempotent.
Once a worker type is in the provisioner, a background process will
begin creating instances for it based on its capacity bounds and its
pending task count from the Queue. It is the worker's responsibility
to shut itself down. The provisioner has a limit (currently 96 hours)
for all instances to prevent zombie instances from running indefinitely.
The provisioner will ensure that all instances created are tagged with
aws resource tags containing the provisioner id and the worker type.
If provided, the secrets in the global, region and instance type sections
are available using the secrets api. If specified, the scopes provided
will be used to generate a set of temporary credentials available with
the other secrets.
This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-worker-type-request.json#``
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["createWorkerType"], *args, **kwargs) | Create new Worker Type
Create a worker type. A worker type contains all the configuration
needed for the provisioner to manage the instances. Each worker type
knows which regions and which instance types are allowed for that
worker type. Remember that Capacity is the number of concurrent tasks
that can be run on a given EC2 resource and that Utility is the relative
performance rate between different instance types. There is no way to
configure different regions to have different sets of instance types
so ensure that all instance types are available in all regions.
This function is idempotent.
Once a worker type is in the provisioner, a background process will
begin creating instances for it based on its capacity bounds and its
pending task count from the Queue. It is the worker's responsibility
to shut itself down. The provisioner has a limit (currently 96 hours)
for all instances to prevent zombie instances from running indefinitely.
The provisioner will ensure that all instances created are tagged with
aws resource tags containing the provisioner id and the worker type.
If provided, the secrets in the global, region and instance type sections
are available using the secrets api. If specified, the scopes provided
will be used to generate a set of temporary credentials available with
the other secrets.
This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-worker-type-request.json#``
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/awsprovisioner.py#L67-L102 |
taskcluster/taskcluster-client.py | taskcluster/aio/awsprovisioner.py | AwsProvisioner.updateWorkerType | async def updateWorkerType(self, *args, **kwargs):
"""
Update Worker Type
Provide a new copy of a worker type to replace the existing one.
This will overwrite the existing worker type definition if there
is already a worker type of that name. This method will return a
200 response along with a copy of the worker type definition created
Note that if you are using the result of a GET on the worker-type
end point that you will need to delete the lastModified and workerType
keys from the object returned, since those fields are not allowed
in the request body for this method.
Otherwise, all input requirements and actions are the same as the
create method.
This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-worker-type-request.json#``
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["updateWorkerType"], *args, **kwargs) | python | async def updateWorkerType(self, *args, **kwargs):
"""
Update Worker Type
Provide a new copy of a worker type to replace the existing one.
This will overwrite the existing worker type definition if there
is already a worker type of that name. This method will return a
200 response along with a copy of the worker type definition created
Note that if you are using the result of a GET on the worker-type
end point that you will need to delete the lastModified and workerType
keys from the object returned, since those fields are not allowed
in the request body for this method.
Otherwise, all input requirements and actions are the same as the
create method.
This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-worker-type-request.json#``
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["updateWorkerType"], *args, **kwargs) | Update Worker Type
Provide a new copy of a worker type to replace the existing one.
This will overwrite the existing worker type definition if there
is already a worker type of that name. This method will return a
200 response along with a copy of the worker type definition created
Note that if you are using the result of a GET on the worker-type
end point that you will need to delete the lastModified and workerType
keys from the object returned, since those fields are not allowed
in the request body for this method.
Otherwise, all input requirements and actions are the same as the
create method.
This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-worker-type-request.json#``
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/awsprovisioner.py#L104-L127 |
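The read-modify-write dance described above, sketched with the async client (the worker type name and the `maxCapacity` field are illustrative)::

    async def bump_capacity(provisioner, name, new_cap):
        wtype = await provisioner.workerType(name)
        # The GET response carries fields the update schema does not accept.
        wtype.pop('lastModified', None)
        wtype.pop('workerType', None)
        wtype['maxCapacity'] = new_cap
        return await provisioner.updateWorkerType(name, wtype)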
taskcluster/taskcluster-client.py | taskcluster/aio/awsprovisioner.py | AwsProvisioner.workerType | async def workerType(self, *args, **kwargs):
"""
Get Worker Type
Retrieve a copy of the requested worker type definition.
This copy contains a lastModified field as well as the worker
type name. As such, it will require manipulation to be able to
use the results of this method to submit data to the update
method.
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["workerType"], *args, **kwargs) | python | async def workerType(self, *args, **kwargs):
"""
Get Worker Type
Retrieve a copy of the requested worker type definition.
This copy contains a lastModified field as well as the worker
type name. As such, it will require manipulation to be able to
use the results of this method to submit data to the update
method.
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["workerType"], *args, **kwargs) | Get Worker Type
Retrieve a copy of the requested worker type definition.
This copy contains a lastModified field as well as the worker
type name. As such, it will require manipulation to be able to
use the results of this method to submit data to the update
method.
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-worker-type-response.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/awsprovisioner.py#L146-L161 |
taskcluster/taskcluster-client.py | taskcluster/aio/awsprovisioner.py | AwsProvisioner.createSecret | async def createSecret(self, *args, **kwargs):
"""
Create new Secret
Insert a secret into the secret storage. The supplied secrets will
be provided verbatim via `getSecret`, while the supplied scopes will
be converted into credentials by `getSecret`.
This method is not ordinarily used in production; instead, the provisioner
creates a new secret directly for each spot bid.
This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-secret-request.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["createSecret"], *args, **kwargs) | python | async def createSecret(self, *args, **kwargs):
"""
Create new Secret
Insert a secret into the secret storage. The supplied secrets will
be provided verbatim via `getSecret`, while the supplied scopes will
be converted into credentials by `getSecret`.
This method is not ordinarily used in production; instead, the provisioner
creates a new secret directly for each spot bid.
This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-secret-request.json#``
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["createSecret"], *args, **kwargs) | Create new Secret
Insert a secret into the secret storage. The supplied secrets will
be provided verbatim via `getSecret`, while the supplied scopes will
be converted into credentials by `getSecret`.
This method is not ordinarily used in production; instead, the provisioner
creates a new secret directly for each spot bid.
This method takes input: ``http://schemas.taskcluster.net/aws-provisioner/v1/create-secret-request.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/awsprovisioner.py#L199-L215 |
taskcluster/taskcluster-client.py | taskcluster/aio/awsprovisioner.py | AwsProvisioner.removeSecret | async def removeSecret(self, *args, **kwargs):
"""
Remove a Secret
Remove a secret. After this call, a call to `getSecret` with the given
token will return no information.
It is very important that the consumer of a
secret delete the secret from storage before handing over control
to untrusted processes to prevent credential and/or secret leakage.
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["removeSecret"], *args, **kwargs) | python | async def removeSecret(self, *args, **kwargs):
"""
Remove a Secret
Remove a secret. After this call, a call to `getSecret` with the given
token will return no information.
It is very important that the consumer of a
secret delete the secret from storage before handing over control
to untrusted processes to prevent credential and/or secret leakage.
This method is ``stable``
"""
return await self._makeApiCall(self.funcinfo["removeSecret"], *args, **kwargs) | Remove a Secret
Remove a secret. After this call, a call to `getSecret` with the given
token will return no information.
It is very important that the consumer of a
secret delete the secret from storage before handing over control
to untrusted processes to prevent credential and/or secret leakage.
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/awsprovisioner.py#L251-L265 |
taskcluster/taskcluster-client.py | taskcluster/aio/awsprovisioner.py | AwsProvisioner.getLaunchSpecs | async def getLaunchSpecs(self, *args, **kwargs):
"""
Get All Launch Specifications for WorkerType
This method returns a preview of all possible launch specifications
that this worker type definition could submit to EC2. It is used to
test worker types, nothing more.
**This API end-point is experimental and may be subject to change without warning.**
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-launch-specs-response.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["getLaunchSpecs"], *args, **kwargs) | python | async def getLaunchSpecs(self, *args, **kwargs):
"""
Get All Launch Specifications for WorkerType
This method returns a preview of all possible launch specifications
that this worker type definition could submit to EC2. It is used to
test worker types, nothing more.
**This API end-point is experimental and may be subject to change without warning.**
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-launch-specs-response.json#``
This method is ``experimental``
"""
return await self._makeApiCall(self.funcinfo["getLaunchSpecs"], *args, **kwargs) | Get All Launch Specifications for WorkerType
This method returns a preview of all possible launch specifications
that this worker type definition could submit to EC2. It is used to
test worker types, nothing more.
**This API end-point is experimental and may be subject to change without warning.**
This method gives output: ``http://schemas.taskcluster.net/aws-provisioner/v1/get-launch-specs-response.json#``
This method is ``experimental`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/awsprovisioner.py#L267-L282 |
taskcluster/taskcluster-client.py | taskcluster/aio/asyncutils.py | makeHttpRequest | async def makeHttpRequest(method, url, payload, headers, retries=utils.MAX_RETRIES, session=None):
""" Make an HTTP request and retry it until success, return request """
retry = -1
response = None
implicit = False
if session is None:
implicit = True
session = aiohttp.ClientSession()
async def cleanup():
    # Close the implicitly-created session from within the running loop.
    if implicit:
        await session.close()
try:
while True:
retry += 1
# if this isn't the first retry then we sleep
if retry > 0:
snooze = float(retry * retry) / 10.0
log.info('Sleeping %0.2f seconds for exponential backoff', snooze)
await asyncio.sleep(snooze)
# Seek payload to start, if it is a file
if hasattr(payload, 'seek'):
payload.seek(0)
log.debug('Making attempt %d', retry)
try:
with async_timeout.timeout(60):
response = await makeSingleHttpRequest(method, url, payload, headers, session)
except aiohttp.ClientError as rerr:
if retry < retries:
log.warn('Retrying because of: %s' % rerr)
continue
# raise a connection exception
raise rerr
except ValueError as rerr:
log.warn('ValueError from aiohttp: redirect to non-http or https')
raise rerr
except RuntimeError as rerr:
log.warn('RuntimeError from aiohttp: session closed')
raise rerr
# Handle non 2xx status code and retry if possible
status = response.status
if 500 <= status < 600:
    if retry < retries:
        log.warn('Retrying because of: %d status' % status)
        continue
    raise exceptions.TaskclusterRestFailure("Unknown Server Error", superExc=None)
return response
finally:
await cleanup()
# This code-path should be unreachable
assert False, "Error from last retry should have been raised!" | python | async def makeHttpRequest(method, url, payload, headers, retries=utils.MAX_RETRIES, session=None):
""" Make an HTTP request and retry it until success, return request """
retry = -1
response = None
implicit = False
if session is None:
implicit = True
session = aiohttp.ClientSession()
async def cleanup():
    # Close the implicitly-created session from within the running loop.
    if implicit:
        await session.close()
try:
while True:
retry += 1
# if this isn't the first retry then we sleep
if retry > 0:
snooze = float(retry * retry) / 10.0
log.info('Sleeping %0.2f seconds for exponential backoff', snooze)
await asyncio.sleep(snooze)
# Seek payload to start, if it is a file
if hasattr(payload, 'seek'):
payload.seek(0)
log.debug('Making attempt %d', retry)
try:
with async_timeout.timeout(60):
response = await makeSingleHttpRequest(method, url, payload, headers, session)
except aiohttp.ClientError as rerr:
if retry < retries:
log.warn('Retrying because of: %s' % rerr)
continue
# raise a connection exception
raise rerr
except ValueError as rerr:
log.warn('ValueError from aiohttp: redirect to non-http or https')
raise rerr
except RuntimeError as rerr:
log.warn('RuntimeError from aiohttp: session closed')
raise rerr
# Handle non 2xx status code and retry if possible
status = response.status
if 500 <= status < 600:
    if retry < retries:
        log.warn('Retrying because of: %d status' % status)
        continue
    raise exceptions.TaskclusterRestFailure("Unknown Server Error", superExc=None)
return response
finally:
await cleanup()
# This code-path should be unreachable
assert False, "Error from last retry should have been raised!" | Make an HTTP request and retry it until success, return request | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/aio/asyncutils.py#L21-L76 |
jart/fabulous | fabulous/text.py | resolve_font | def resolve_font(name):
"""Turns font names into absolute filenames
This is case sensitive. The extension should be omitted.
For example::
>>> path = resolve_font('NotoSans-Bold')
>>> fontdir = os.path.join(os.path.dirname(__file__), 'fonts')
>>> noto_path = os.path.join(fontdir, 'NotoSans-Bold.ttf')
>>> noto_path = os.path.abspath(noto_path)
>>> assert path == noto_path
Absolute paths are allowed::
>>> resolve_font(noto_path) == noto_path
True
Raises :exc:`FontNotFound` on failure::
>>> try:
... resolve_font('blahahaha')
... assert False
... except FontNotFound:
... pass
"""
if os.path.exists(name):
return os.path.abspath(name)
fonts = get_font_files()
if name in fonts:
return fonts[name]
raise FontNotFound("Can't find %r :'( Try adding it to ~/.fonts" % name) | python | def resolve_font(name):
"""Turns font names into absolute filenames
This is case sensitive. The extension should be omitted.
For example::
>>> path = resolve_font('NotoSans-Bold')
>>> fontdir = os.path.join(os.path.dirname(__file__), 'fonts')
>>> noto_path = os.path.join(fontdir, 'NotoSans-Bold.ttf')
>>> noto_path = os.path.abspath(noto_path)
>>> assert path == noto_path
Absolute paths are allowed::
>>> resolve_font(noto_path) == noto_path
True
Raises :exc:`FontNotFound` on failure::
>>> try:
... resolve_font('blahahaha')
... assert False
... except FontNotFound:
... pass
"""
if os.path.exists(name):
return os.path.abspath(name)
fonts = get_font_files()
if name in fonts:
return fonts[name]
raise FontNotFound("Can't find %r :'( Try adding it to ~/.fonts" % name) | Turns font names into absolute filenames
This is case sensitive. The extension should be omitted.
For example::
>>> path = resolve_font('NotoSans-Bold')
>>> fontdir = os.path.join(os.path.dirname(__file__), 'fonts')
>>> noto_path = os.path.join(fontdir, 'NotoSans-Bold.ttf')
>>> noto_path = os.path.abspath(noto_path)
>>> assert path == noto_path
Absolute paths are allowed::
>>> resolve_font(noto_path) == noto_path
True
Raises :exc:`FontNotFound` on failure::
>>> try:
... resolve_font('blahahaha')
... assert False
... except FontNotFound:
... pass | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/text.py#L143-L176 |
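resolve_font pairs naturally with Pillow, which fabulous uses for rendering; a sketch, assuming Pillow is installed:
from PIL import ImageFont  # Pillow, assumed installed
from fabulous.text import resolve_font

path = resolve_font('NotoSans-Bold')   # bundled font, per the doctest above
font = ImageFont.truetype(path, 23)    # 23pt matches the CLI default further down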
jart/fabulous | fabulous/text.py | get_font_files | def get_font_files():
"""Returns a list of all font files we could find
Returned as a list of dir/files tuples::
get_font_files() -> {'FontName': '/abs/FontName.ttf', ...]
For example::
>>> fonts = get_font_files()
>>> 'NotoSans-Bold' in fonts
True
>>> fonts['NotoSans-Bold'].endswith('/NotoSans-Bold.ttf')
True
"""
roots = [
'/usr/share/fonts/truetype', # where ubuntu puts fonts
'/usr/share/fonts', # where fedora puts fonts
os.path.expanduser('~/.fonts'), # custom user fonts
os.path.abspath(os.path.join(os.path.dirname(__file__), 'fonts')),
]
result = {}
for root in roots:
for path, dirs, names in os.walk(root):
for name in names:
if name.endswith(('.ttf', '.otf')):
result[name[:-4]] = os.path.join(path, name)
return result | python | def get_font_files():
"""Returns a list of all font files we could find
Returned as a list of dir/files tuples::
get_font_files() -> {'FontName': '/abs/FontName.ttf', ...]
For example::
>>> fonts = get_font_files()
>>> 'NotoSans-Bold' in fonts
True
>>> fonts['NotoSans-Bold'].endswith('/NotoSans-Bold.ttf')
True
"""
roots = [
'/usr/share/fonts/truetype', # where ubuntu puts fonts
'/usr/share/fonts', # where fedora puts fonts
os.path.expanduser('~/.fonts'), # custom user fonts
os.path.abspath(os.path.join(os.path.dirname(__file__), 'fonts')),
]
result = {}
for root in roots:
for path, dirs, names in os.walk(root):
for name in names:
if name.endswith(('.ttf', '.otf')):
result[name[:-4]] = os.path.join(path, name)
    return result | Returns all the font files we could find
Returned as a dict mapping font names to absolute paths::
    get_font_files() -> {'FontName': '/abs/FontName.ttf', ...}
For example::
>>> fonts = get_font_files()
>>> 'NotoSans-Bold' in fonts
True
>>> fonts['NotoSans-Bold'].endswith('/NotoSans-Bold.ttf')
True | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/text.py#L180-L208 |
jart/fabulous | fabulous/text.py | main | def main():
"""Main function for :command:`fabulous-text`."""
import optparse
parser = optparse.OptionParser()
parser.add_option(
"-l", "--list", dest="list", action="store_true", default=False,
help=("List available fonts"))
parser.add_option(
"-S", "--skew", dest="skew", type="int", default=None,
help=("Apply skew effect (measured in pixels) to make it look "
"extra cool. For example, Fabulous' logo logo is skewed "
"by 5 pixels. Default: %default"))
parser.add_option(
"-C", "--color", dest="color", default="#0099ff",
help=("Color of your text. This can be specified as you would "
"using HTML/CSS. Default: %default"))
parser.add_option(
"-B", "--term-color", dest="term_color", default=None,
help=("If you terminal background isn't black, please change "
"this value to the proper background so semi-transparent "
"pixels will blend properly."))
parser.add_option(
"-F", "--font", dest="font", default='NotoSans-Bold',
help=("Name of font file, or absolute path to one. Use the --list "
"flag to see what fonts are available. Fabulous bundles the "
"NotoSans-Bold and NotoEmoji-Regular fonts, which are guaranteed "
"to work. Default: %default"))
parser.add_option(
"-Z", "--size", dest="fsize", type="int", default=23,
help=("Size of font in points. Default: %default"))
parser.add_option(
"-s", "--shadow", dest="shadow", action="store_true", default=False,
help=("Size of font in points. Default: %default"))
(options, args) = parser.parse_args(args=sys.argv[1:])
if options.list:
print("\n".join(sorted(get_font_files())))
return
if options.term_color:
utils.term.bgcolor = options.term_color
text = " ".join(args)
if not isinstance(text, unicode):
text = text.decode('utf-8')
for line in text.split("\n"):
fab_text = Text(line, skew=options.skew, color=options.color,
font=options.font, fsize=options.fsize,
shadow=options.shadow)
for chunk in fab_text:
printy(chunk) | python | def main():
"""Main function for :command:`fabulous-text`."""
import optparse
parser = optparse.OptionParser()
parser.add_option(
"-l", "--list", dest="list", action="store_true", default=False,
help=("List available fonts"))
parser.add_option(
"-S", "--skew", dest="skew", type="int", default=None,
help=("Apply skew effect (measured in pixels) to make it look "
"extra cool. For example, Fabulous' logo logo is skewed "
"by 5 pixels. Default: %default"))
parser.add_option(
"-C", "--color", dest="color", default="#0099ff",
help=("Color of your text. This can be specified as you would "
"using HTML/CSS. Default: %default"))
parser.add_option(
"-B", "--term-color", dest="term_color", default=None,
help=("If you terminal background isn't black, please change "
"this value to the proper background so semi-transparent "
"pixels will blend properly."))
parser.add_option(
"-F", "--font", dest="font", default='NotoSans-Bold',
help=("Name of font file, or absolute path to one. Use the --list "
"flag to see what fonts are available. Fabulous bundles the "
"NotoSans-Bold and NotoEmoji-Regular fonts, which are guaranteed "
"to work. Default: %default"))
parser.add_option(
"-Z", "--size", dest="fsize", type="int", default=23,
help=("Size of font in points. Default: %default"))
parser.add_option(
"-s", "--shadow", dest="shadow", action="store_true", default=False,
help=("Size of font in points. Default: %default"))
(options, args) = parser.parse_args(args=sys.argv[1:])
if options.list:
print("\n".join(sorted(get_font_files())))
return
if options.term_color:
utils.term.bgcolor = options.term_color
text = " ".join(args)
if not isinstance(text, unicode):
text = text.decode('utf-8')
for line in text.split("\n"):
fab_text = Text(line, skew=options.skew, color=options.color,
font=options.font, fsize=options.fsize,
shadow=options.shadow)
for chunk in fab_text:
printy(chunk) | Main function for :command:`fabulous-text`. | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/text.py#L211-L258 |
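The CLI above is a thin wrapper around the Text class; roughly the same output can be produced programmatically. A sketch mirroring the keyword arguments main() passes:
from fabulous.text import Text
from fabulous.compatibility import printy

banner = Text(u'fabulous', color='#0099ff', font='NotoSans-Bold',
              fsize=23, shadow=False, skew=None)
for chunk in banner:
    printy(chunk)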
taskcluster/taskcluster-client.py | taskcluster/purgecache.py | PurgeCache.purgeCache | def purgeCache(self, *args, **kwargs):
"""
Purge Worker Cache
Publish a purge-cache message to purge caches named `cacheName` with
`provisionerId` and `workerType` in the routing-key. Workers should
be listening for this message and purge caches when they see it.
This method takes input: ``v1/purge-cache-request.json#``
This method is ``stable``
"""
return self._makeApiCall(self.funcinfo["purgeCache"], *args, **kwargs) | python | def purgeCache(self, *args, **kwargs):
"""
Purge Worker Cache
Publish a purge-cache message to purge caches named `cacheName` with
`provisionerId` and `workerType` in the routing-key. Workers should
be listening for this message and purge caches when they see it.
This method takes input: ``v1/purge-cache-request.json#``
This method is ``stable``
"""
return self._makeApiCall(self.funcinfo["purgeCache"], *args, **kwargs) | Purge Worker Cache
Publish a purge-cache message to purge caches named `cacheName` with
`provisionerId` and `workerType` in the routing-key. Workers should
be listening for this message and purge caches when they see it.
This method takes input: ``v1/purge-cache-request.json#``
This method is ``stable`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/purgecache.py#L40-L53 |
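A usage sketch; the rootUrl, provisioner, and worker-type values are placeholders, and the payload layout is assumed from the v1/purge-cache-request.json reference above:
import taskcluster

purge = taskcluster.PurgeCache({'rootUrl': 'https://tc.example.com'})  # placeholder rootUrl
purge.purgeCache('aws-provisioner-v1', 'tutorial',    # placeholder provisionerId/workerType
                 {'cacheName': 'level-3-checkouts'})  # cacheName assumed from the schema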
lepture/safe | safe/__init__.py | is_asdf | def is_asdf(raw):
"""If the password is in the order on keyboard."""
reverse = raw[::-1]
asdf = ''.join(ASDF)
return raw in asdf or reverse in asdf | python | def is_asdf(raw):
"""If the password is in the order on keyboard."""
reverse = raw[::-1]
asdf = ''.join(ASDF)
    return raw in asdf or reverse in asdf | If the password follows the keyboard key order. | https://github.com/lepture/safe/blob/038a72e59557caf97c1b93f66124a8f014eb032b/safe/__init__.py#L72-L78 |
lepture/safe | safe/__init__.py | is_by_step | def is_by_step(raw):
"""If the password is alphabet step by step."""
# make sure it is unicode
delta = ord(raw[1]) - ord(raw[0])
for i in range(2, len(raw)):
if ord(raw[i]) - ord(raw[i-1]) != delta:
return False
return True | python | def is_by_step(raw):
"""If the password is alphabet step by step."""
# make sure it is unicode
delta = ord(raw[1]) - ord(raw[0])
for i in range(2, len(raw)):
if ord(raw[i]) - ord(raw[i-1]) != delta:
return False
    return True | If the password's characters step through the alphabet at a constant interval. | https://github.com/lepture/safe/blob/038a72e59557caf97c1b93f66124a8f014eb032b/safe/__init__.py#L81-L90 |
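The constant-delta check accepts any arithmetic progression over code points, not just adjacent letters; all of the following are deterministic outcomes of the code above:
from safe import is_by_step

assert is_by_step(u'abcdef')      # delta +1
assert is_by_step(u'acegik')      # delta +2
assert is_by_step(u'fedcba')      # delta -1
assert not is_by_step(u'abcx')    # 'c' -> 'x' breaks the +1 step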
lepture/safe | safe/__init__.py | is_common_password | def is_common_password(raw, freq=0):
"""If the password is common used.
10k top passwords: https://xato.net/passwords/more-top-worst-passwords/
"""
frequent = WORDS.get(raw, 0)
if freq:
return frequent > freq
return bool(frequent) | python | def is_common_password(raw, freq=0):
"""If the password is common used.
10k top passwords: https://xato.net/passwords/more-top-worst-passwords/
"""
frequent = WORDS.get(raw, 0)
if freq:
return frequent > freq
    return bool(frequent) | If the password is commonly used.
10k top passwords: https://xato.net/passwords/more-top-worst-passwords/ | https://github.com/lepture/safe/blob/038a72e59557caf97c1b93f66124a8f014eb032b/safe/__init__.py#L93-L101 |
lepture/safe | safe/__init__.py | check | def check(raw, length=8, freq=0, min_types=3, level=STRONG):
"""Check the safety level of the password.
:param raw: raw text password.
:param length: minimal length of the password.
:param freq: minimum frequency.
:param min_types: minimum character family.
:param level: minimum level to validate a password.
"""
raw = to_unicode(raw)
if level > STRONG:
level = STRONG
if len(raw) < length:
return Strength(False, 'terrible', 'password is too short')
if is_asdf(raw) or is_by_step(raw):
return Strength(False, 'simple', 'password has a pattern')
if is_common_password(raw, freq=freq):
return Strength(False, 'simple', 'password is too common')
types = 0
if LOWER.search(raw):
types += 1
if UPPER.search(raw):
types += 1
if NUMBER.search(raw):
types += 1
if MARKS.search(raw):
types += 1
if types < 2:
return Strength(level <= SIMPLE, 'simple', 'password is too simple')
if types < min_types:
return Strength(level <= MEDIUM, 'medium',
'password is good enough, but not strong')
return Strength(True, 'strong', 'password is perfect') | python | def check(raw, length=8, freq=0, min_types=3, level=STRONG):
"""Check the safety level of the password.
:param raw: raw text password.
    :param length: minimum length of the password.
    :param freq: frequency threshold above which a password counts as common.
    :param min_types: minimum number of character families required.
:param level: minimum level to validate a password.
"""
raw = to_unicode(raw)
if level > STRONG:
level = STRONG
if len(raw) < length:
return Strength(False, 'terrible', 'password is too short')
if is_asdf(raw) or is_by_step(raw):
return Strength(False, 'simple', 'password has a pattern')
if is_common_password(raw, freq=freq):
return Strength(False, 'simple', 'password is too common')
types = 0
if LOWER.search(raw):
types += 1
if UPPER.search(raw):
types += 1
if NUMBER.search(raw):
types += 1
if MARKS.search(raw):
types += 1
if types < 2:
return Strength(level <= SIMPLE, 'simple', 'password is too simple')
if types < min_types:
return Strength(level <= MEDIUM, 'medium',
'password is good enough, but not strong')
return Strength(True, 'strong', 'password is perfect') | Check the safety level of the password.
:param raw: raw text password.
:param length: minimum length of the password.
:param freq: frequency threshold above which a password counts as common.
:param min_types: minimum number of character families required.
:param level: minimum level to validate a password. | https://github.com/lepture/safe/blob/038a72e59557caf97c1b93f66124a8f014eb032b/safe/__init__.py#L142-L185 |
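Putting the pieces together; Strength is assumed to expose its three positional values as valid, strength, and message:
from safe import check

print(check(u'abcdefgh'))
# A constant-step password trips is_by_step:
#   Strength(False, 'simple', 'password has a pattern')

result = check(u'G00d.Pass-w0rd', length=10, min_types=3)
if not result.valid:                 # attribute name assumed
    raise ValueError(result.message)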
infobloxopen/infoblox-netmri | infoblox_netmri/client.py | InfobloxNetMRI._make_request | def _make_request(self, url, method="get", data=None, extra_headers=None):
"""Prepares the request, checks for authentication and retries in case of issues
Args:
url (str): URL of the request
method (str): Any of "get", "post", "delete"
data (any): Possible extra data to send with the request
extra_headers (dict): Possible extra headers to send along in the request
Returns:
dict
"""
attempts = 0
        while attempts < 2:  # one initial attempt plus one re-authentication retry
# Authenticate first if not authenticated already
if not self._is_authenticated:
self._authenticate()
# Make the request and check for authentication errors
# This allows us to catch session timeouts for long standing connections
try:
return self._send_request(url, method, data, extra_headers)
except HTTPError as e:
if e.response.status_code == 403:
logger.info("Authenticated session against NetMRI timed out. Retrying.")
self._is_authenticated = False
attempts += 1
else:
# re-raise other HTTP errors
raise | python | def _make_request(self, url, method="get", data=None, extra_headers=None):
"""Prepares the request, checks for authentication and retries in case of issues
Args:
url (str): URL of the request
method (str): Any of "get", "post", "delete"
data (any): Possible extra data to send with the request
extra_headers (dict): Possible extra headers to send along in the request
Returns:
dict
"""
attempts = 0
        while attempts < 2:  # one initial attempt plus one re-authentication retry
# Authenticate first if not authenticated already
if not self._is_authenticated:
self._authenticate()
# Make the request and check for authentication errors
# This allows us to catch session timeouts for long standing connections
try:
return self._send_request(url, method, data, extra_headers)
except HTTPError as e:
if e.response.status_code == 403:
logger.info("Authenticated session against NetMRI timed out. Retrying.")
self._is_authenticated = False
attempts += 1
else:
# re-raise other HTTP errors
raise | Prepares the request, checks for authentication and retries in case of issues
Args:
url (str): URL of the request
method (str): Any of "get", "post", "delete"
data (any): Possible extra data to send with the request
extra_headers (dict): Possible extra headers to send along in the request
Returns:
dict | https://github.com/infobloxopen/infoblox-netmri/blob/e633bef1584101228b6d403df1de0d3d8e88d5c0/infoblox_netmri/client.py#L82-L110 |
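Stripped of the NetMRI specifics, the 403-triggered re-authentication above is a reusable pattern; a generic sketch (not part of this client's public API):
import requests
from requests.exceptions import HTTPError

def call_with_reauth(session, url, authenticate, attempts=2):
    """Retry once after re-authenticating on HTTP 403."""
    for attempt in range(attempts):
        try:
            r = session.get(url)
            r.raise_for_status()
            return r.json()
        except HTTPError as e:
            if e.response.status_code == 403 and attempt < attempts - 1:
                authenticate(session)  # refresh the session cookie or token
                continue
            raise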
infobloxopen/infoblox-netmri | infoblox_netmri/client.py | InfobloxNetMRI._send_request | def _send_request(self, url, method="get", data=None, extra_headers=None):
"""Performs a given request and returns a json object
Args:
url (str): URL of the request
method (str): Any of "get", "post", "delete"
data (any): Possible extra data to send with the request
extra_headers (dict): Possible extra headers to send along in the request
Returns:
dict
"""
headers = {'Content-type': 'application/json'}
if isinstance(extra_headers, dict):
headers.update(extra_headers)
if not data or "password" not in data:
logger.debug("Sending {method} request to {url} with data {data}".format(
method=method.upper(), url=url, data=data)
)
r = self.session.request(method, url, headers=headers, data=data)
r.raise_for_status()
return r.json() | python | def _send_request(self, url, method="get", data=None, extra_headers=None):
"""Performs a given request and returns a json object
Args:
url (str): URL of the request
method (str): Any of "get", "post", "delete"
data (any): Possible extra data to send with the request
extra_headers (dict): Possible extra headers to send along in the request
Returns:
dict
"""
headers = {'Content-type': 'application/json'}
if isinstance(extra_headers, dict):
headers.update(extra_headers)
if not data or "password" not in data:
logger.debug("Sending {method} request to {url} with data {data}".format(
method=method.upper(), url=url, data=data)
)
r = self.session.request(method, url, headers=headers, data=data)
r.raise_for_status()
return r.json() | Performs a given request and returns a json object
Args:
url (str): URL of the request
method (str): Any of "get", "post", "delete"
data (any): Possible extra data to send with the request
extra_headers (dict): Possible extra headers to send along in the request
Returns:
dict | https://github.com/infobloxopen/infoblox-netmri/blob/e633bef1584101228b6d403df1de0d3d8e88d5c0/infoblox_netmri/client.py#L112-L133 |
infobloxopen/infoblox-netmri | infoblox_netmri/client.py | InfobloxNetMRI._get_api_version | def _get_api_version(self):
"""Fetches the most recent API version
Returns:
str
"""
url = "{base_url}/api/server_info".format(base_url=self._base_url())
server_info = self._make_request(url=url, method="get")
return server_info["latest_api_version"] | python | def _get_api_version(self):
"""Fetches the most recent API version
Returns:
str
"""
url = "{base_url}/api/server_info".format(base_url=self._base_url())
server_info = self._make_request(url=url, method="get")
return server_info["latest_api_version"] | Fetches the most recent API version
Returns:
str | https://github.com/infobloxopen/infoblox-netmri/blob/e633bef1584101228b6d403df1de0d3d8e88d5c0/infoblox_netmri/client.py#L135-L143 |
infobloxopen/infoblox-netmri | infoblox_netmri/client.py | InfobloxNetMRI._authenticate | def _authenticate(self):
""" Perform an authentication against NetMRI"""
url = "{base_url}/api/authenticate".format(base_url=self._base_url())
data = json.dumps({'username': self.username, "password": self.password})
# Bypass authentication check in make_request by using _send_request
logger.debug("Authenticating against NetMRI")
self._send_request(url, method="post", data=data)
self._is_authenticated = True | python | def _authenticate(self):
""" Perform an authentication against NetMRI"""
url = "{base_url}/api/authenticate".format(base_url=self._base_url())
data = json.dumps({'username': self.username, "password": self.password})
# Bypass authentication check in make_request by using _send_request
logger.debug("Authenticating against NetMRI")
self._send_request(url, method="post", data=data)
self._is_authenticated = True | Perform an authentication against NetMRI | https://github.com/infobloxopen/infoblox-netmri/blob/e633bef1584101228b6d403df1de0d3d8e88d5c0/infoblox_netmri/client.py#L145-L152 |
infobloxopen/infoblox-netmri | infoblox_netmri/client.py | InfobloxNetMRI._controller_name | def _controller_name(self, objtype):
"""Determines the controller name for the object's type
Args:
objtype (str): The object type
Returns:
A string with the controller name
"""
# would be better to use inflect.pluralize here, but would add a dependency
if objtype.endswith('y'):
return objtype[:-1] + 'ies'
if objtype[-1] in 'sx' or objtype[-2:] in ['sh', 'ch']:
return objtype + 'es'
if objtype.endswith('an'):
return objtype[:-2] + 'en'
return objtype + 's' | python | def _controller_name(self, objtype):
"""Determines the controller name for the object's type
Args:
objtype (str): The object type
Returns:
A string with the controller name
"""
# would be better to use inflect.pluralize here, but would add a dependency
if objtype.endswith('y'):
return objtype[:-1] + 'ies'
if objtype[-1] in 'sx' or objtype[-2:] in ['sh', 'ch']:
return objtype + 'es'
if objtype.endswith('an'):
return objtype[:-2] + 'en'
return objtype + 's' | Determines the controller name for the object's type
Args:
objtype (str): The object type
Returns:
A string with the controller name | https://github.com/infobloxopen/infoblox-netmri/blob/e633bef1584101228b6d403df1de0d3d8e88d5c0/infoblox_netmri/client.py#L154-L173 |
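The pluralizer never touches self, so its branches can be exercised without a live client; these outcomes follow directly from the code above:
from infoblox_netmri.client import InfobloxNetMRI  # import path assumed from the repo layout

for objtype in ('device', 'policy', 'box', 'switch', 'man'):
    print(objtype, '->', InfobloxNetMRI._controller_name(None, objtype))
# device -> devices, policy -> policies, box -> boxes, switch -> switches, man -> men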
infobloxopen/infoblox-netmri | infoblox_netmri/client.py | InfobloxNetMRI._object_url | def _object_url(self, objtype, objid):
"""Generate the URL for the specified object
Args:
objtype (str): The object's type
objid (int): The objects ID
Returns:
A string containing the URL of the object
"""
return "{base_url}/api/{api_version}/{controller}/{obj_id}".format(
base_url=self._base_url(),
api_version=self.api_version,
controller=self._controller_name(objtype),
obj_id=objid
) | python | def _object_url(self, objtype, objid):
"""Generate the URL for the specified object
Args:
objtype (str): The object's type
objid (int): The objects ID
Returns:
A string containing the URL of the object
"""
return "{base_url}/api/{api_version}/{controller}/{obj_id}".format(
base_url=self._base_url(),
api_version=self.api_version,
controller=self._controller_name(objtype),
obj_id=objid
) | Generate the URL for the specified object
Args:
objtype (str): The object's type
objid (int): The objects ID
Returns:
A string containing the URL of the object | https://github.com/infobloxopen/infoblox-netmri/blob/e633bef1584101228b6d403df1de0d3d8e88d5c0/infoblox_netmri/client.py#L186-L201 |
infobloxopen/infoblox-netmri | infoblox_netmri/client.py | InfobloxNetMRI._method_url | def _method_url(self, method_name):
"""Generate the URL for the requested method
Args:
method_name (str): Name of the method
Returns:
A string containing the URL of the method
"""
return "{base_url}/api/{api}/{method}".format(
base_url=self._base_url(),
api=self.api_version,
method=method_name
) | python | def _method_url(self, method_name):
"""Generate the URL for the requested method
Args:
method_name (str): Name of the method
Returns:
A string containing the URL of the method
"""
return "{base_url}/api/{api}/{method}".format(
base_url=self._base_url(),
api=self.api_version,
method=method_name
) | Generate the URL for the requested method
Args:
method_name (str): Name of the method
Returns:
A string containing the URL of the method | https://github.com/infobloxopen/infoblox-netmri/blob/e633bef1584101228b6d403df1de0d3d8e88d5c0/infoblox_netmri/client.py#L203-L216 |
infobloxopen/infoblox-netmri | infoblox_netmri/client.py | InfobloxNetMRI.api_request | def api_request(self, method_name, params):
"""Execute an arbitrary method.
Args:
method_name (str): include the controller name: 'devices/search'
params (dict): the method parameters
Returns:
A dict with the response
Raises:
requests.exceptions.HTTPError
"""
url = self._method_url(method_name)
data = json.dumps(params)
return self._make_request(url=url, method="post", data=data) | python | def api_request(self, method_name, params):
"""Execute an arbitrary method.
Args:
method_name (str): include the controller name: 'devices/search'
params (dict): the method parameters
Returns:
A dict with the response
Raises:
requests.exceptions.HTTPError
"""
url = self._method_url(method_name)
data = json.dumps(params)
return self._make_request(url=url, method="post", data=data) | Execute an arbitrary method.
Args:
method_name (str): include the controller name: 'devices/search'
params (dict): the method parameters
Returns:
A dict with the response
Raises:
requests.exceptions.HTTPError | https://github.com/infobloxopen/infoblox-netmri/blob/e633bef1584101228b6d403df1de0d3d8e88d5c0/infoblox_netmri/client.py#L218-L231 |
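A usage sketch for api_request and the show helper that follows; the constructor arguments, controller method, and parameters are illustrative, not copied from the NetMRI docs:
from infoblox_netmri.client import InfobloxNetMRI  # import path assumed

client = InfobloxNetMRI('netmri.example.com', 'admin', 'secret')  # placeholder credentials
devices = client.api_request('devices/search', {'limit': 10})     # method name illustrative
device = client.show('device', 42)                                # GET /api/<version>/devices/42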
infobloxopen/infoblox-netmri | infoblox_netmri/client.py | InfobloxNetMRI.show | def show(self, objtype, objid):
"""Query for a specific resource by ID
Args:
objtype (str): object type, e.g. 'device', 'interface'
objid (int): object ID (DeviceID, etc.)
Returns:
A dict with that object
Raises:
requests.exceptions.HTTPError
"""
url = self._object_url(objtype, int(objid))
return self._make_request(url, method="get") | python | def show(self, objtype, objid):
"""Query for a specific resource by ID
Args:
objtype (str): object type, e.g. 'device', 'interface'
objid (int): object ID (DeviceID, etc.)
Returns:
A dict with that object
Raises:
requests.exceptions.HTTPError
"""
url = self._object_url(objtype, int(objid))
return self._make_request(url, method="get") | Query for a specific resource by ID
Args:
objtype (str): object type, e.g. 'device', 'interface'
objid (int): object ID (DeviceID, etc.)
Returns:
A dict with that object
Raises:
requests.exceptions.HTTPError | https://github.com/infobloxopen/infoblox-netmri/blob/e633bef1584101228b6d403df1de0d3d8e88d5c0/infoblox_netmri/client.py#L233-L245 |
taskcluster/taskcluster-client.py | taskcluster/notify.py | Notify.irc | def irc(self, *args, **kwargs):
"""
Post IRC Message
Post a message on IRC to a specific channel or user, or a specific user
on a specific channel.
Success of this API method does not imply the message was successfully
posted. This API method merely inserts the IRC message into a queue
that will be processed by a background process.
This allows us to re-send the message in face of connection issues.
However, if the user isn't online the message will be dropped without
error. We maybe improve this behavior in the future. For now just keep
in mind that IRC is a best-effort service.
This method takes input: ``v1/irc-request.json#``
This method is ``experimental``
"""
return self._makeApiCall(self.funcinfo["irc"], *args, **kwargs) | python | def irc(self, *args, **kwargs):
"""
Post IRC Message
Post a message on IRC to a specific channel or user, or a specific user
on a specific channel.
Success of this API method does not imply the message was successfully
posted. This API method merely inserts the IRC message into a queue
that will be processed by a background process.
This allows us to re-send the message in face of connection issues.
However, if the user isn't online the message will be dropped without
        error. We may improve this behavior in the future. For now, just keep
in mind that IRC is a best-effort service.
This method takes input: ``v1/irc-request.json#``
This method is ``experimental``
"""
return self._makeApiCall(self.funcinfo["irc"], *args, **kwargs) | Post IRC Message
Post a message on IRC to a specific channel or user, or a specific user
on a specific channel.
Success of this API method does not imply the message was successfully
posted. This API method merely inserts the IRC message into a queue
that will be processed by a background process.
This allows us to re-send the message in face of connection issues.
However, if the user isn't online the message will be dropped without
error. We may improve this behavior in the future. For now, just keep
in mind that IRC is a best-effort service.
This method takes input: ``v1/irc-request.json#``
This method is ``experimental`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/notify.py#L66-L87 |
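A sketch; the payload keys are assumed from v1/irc-request.json (a channel or user target plus a message), and the rootUrl is a placeholder:
import taskcluster

notify = taskcluster.Notify({'rootUrl': 'https://tc.example.com'})  # placeholder rootUrl
notify.irc({'channel': '#taskcluster-bots', 'message': 'nightly build finished'})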
taskcluster/taskcluster-client.py | taskcluster/notify.py | Notify.addDenylistAddress | def addDenylistAddress(self, *args, **kwargs):
"""
Denylist Given Address
Add the given address to the notification denylist. The address
        can be any of the three supported address types, namely pulse, email,
        or IRC (user or channel). Addresses in the denylist will be ignored
by the notification service.
This method takes input: ``v1/notification-address.json#``
This method is ``experimental``
"""
return self._makeApiCall(self.funcinfo["addDenylistAddress"], *args, **kwargs) | python | def addDenylistAddress(self, *args, **kwargs):
"""
Denylist Given Address
Add the given address to the notification denylist. The address
        can be any of the three supported address types, namely pulse, email,
        or IRC (user or channel). Addresses in the denylist will be ignored
by the notification service.
This method takes input: ``v1/notification-address.json#``
This method is ``experimental``
"""
return self._makeApiCall(self.funcinfo["addDenylistAddress"], *args, **kwargs) | Denylist Given Address
Add the given address to the notification denylist. The address
can be any of the three supported address types, namely pulse, email,
or IRC (user or channel). Addresses in the denylist will be ignored
by the notification service.
This method takes input: ``v1/notification-address.json#``
This method is ``experimental`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/notify.py#L89-L103 |
taskcluster/taskcluster-client.py | taskcluster/notify.py | Notify.deleteDenylistAddress | def deleteDenylistAddress(self, *args, **kwargs):
"""
Delete Denylisted Address
Delete the specified address from the notification denylist.
This method takes input: ``v1/notification-address.json#``
This method is ``experimental``
"""
return self._makeApiCall(self.funcinfo["deleteDenylistAddress"], *args, **kwargs) | python | def deleteDenylistAddress(self, *args, **kwargs):
"""
Delete Denylisted Address
Delete the specified address from the notification denylist.
This method takes input: ``v1/notification-address.json#``
This method is ``experimental``
"""
return self._makeApiCall(self.funcinfo["deleteDenylistAddress"], *args, **kwargs) | Delete Denylisted Address
Delete the specified address from the notification denylist.
This method takes input: ``v1/notification-address.json#``
This method is ``experimental`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/notify.py#L105-L116 |
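Sketch of the add/delete pair, reusing the notify client from the IRC sketch above; the payload field names are assumed from the shared v1/notification-address.json schema:
address = {
    'notificationType': 'email',                   # field names assumed
    'notificationAddress': 'dev-null@example.com',
}
notify.addDenylistAddress(address)
notify.deleteDenylistAddress(address)  # the same payload removes the entry again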
taskcluster/taskcluster-client.py | taskcluster/notify.py | Notify.list | def list(self, *args, **kwargs):
"""
List Denylisted Notifications
Lists all the denylisted addresses.
By default this end-point will try to return up to 1000 addresses in one
        request. But it **may return fewer**, even if more addresses are available.
It may also return a `continuationToken` even though there are no more
results. However, you can only be sure to have seen all results if you
keep calling `list` with the last `continuationToken` until you
get a result without a `continuationToken`.
        If you are not interested in listing all the addresses at once, you may
use the query-string option `limit` to return fewer.
This method gives output: ``v1/notification-address-list.json#``
This method is ``experimental``
"""
return self._makeApiCall(self.funcinfo["list"], *args, **kwargs) | python | def list(self, *args, **kwargs):
"""
List Denylisted Notifications
Lists all the denylisted addresses.
By default this end-point will try to return up to 1000 addresses in one
        request. But it **may return fewer**, even if more addresses are available.
It may also return a `continuationToken` even though there are no more
results. However, you can only be sure to have seen all results if you
keep calling `list` with the last `continuationToken` until you
get a result without a `continuationToken`.
        If you are not interested in listing all the addresses at once, you may
use the query-string option `limit` to return fewer.
This method gives output: ``v1/notification-address-list.json#``
This method is ``experimental``
"""
return self._makeApiCall(self.funcinfo["list"], *args, **kwargs) | List Denylisted Notifications
Lists all the denylisted addresses.
By default this end-point will try to return up to 1000 addresses in one
request. But it **may return fewer**, even if more addresses are available.
It may also return a `continuationToken` even though there are no more
results. However, you can only be sure to have seen all results if you
keep calling `list` with the last `continuationToken` until you
get a result without a `continuationToken`.
If you are not interested in listing all the addresses at once, you may
use the query-string option `limit` to return fewer.
This method gives output: ``v1/notification-address-list.json#``
This method is ``experimental`` | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/notify.py#L118-L139 |
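The continuation-token protocol the docstring describes translates into the usual drain loop; the query= keyword follows the Python client's convention for query-string options, and the addresses field name is assumed from the list schema:
addresses, token = [], None
while True:
    query = {'limit': 100}
    if token:
        query['continuationToken'] = token
    page = notify.list(query=query)
    addresses.extend(page['addresses'])        # field name assumed
    token = page.get('continuationToken')
    if not token:
        break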
jart/fabulous | fabulous/compatibility.py | printy | def printy(s):
"""Python 2/3 compatible print-like function"""
if hasattr(s, 'as_utf8'):
if hasattr(sys.stdout, 'buffer'):
sys.stdout.buffer.write(s.as_utf8)
sys.stdout.buffer.write(b"\n")
else:
sys.stdout.write(s.as_utf8)
sys.stdout.write(b"\n")
else:
print(s) | python | def printy(s):
"""Python 2/3 compatible print-like function"""
if hasattr(s, 'as_utf8'):
if hasattr(sys.stdout, 'buffer'):
sys.stdout.buffer.write(s.as_utf8)
sys.stdout.buffer.write(b"\n")
else:
sys.stdout.write(s.as_utf8)
sys.stdout.write(b"\n")
else:
print(s) | Python 2/3 compatible print-like function | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/compatibility.py#L20-L30 |
taskcluster/taskcluster-client.py | taskcluster/authevents.py | AuthEvents.clientCreated | def clientCreated(self, *args, **kwargs):
"""
Client Created Messages
Message that a new client has been created.
This exchange outputs: ``v1/client-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'client-created',
'name': 'clientCreated',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/client-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | python | def clientCreated(self, *args, **kwargs):
"""
Client Created Messages
Message that a new client has been created.
This exchange outputs: ``v1/client-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'client-created',
'name': 'clientCreated',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/client-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | Client Created Messages
Message that a new client has been created.
This exchange outputs: ``v1/client-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified. | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/authevents.py#L32-L54 |
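Exchange methods such as this one make no API calls; they return binding information for a Pulse consumer. A sketch, with the result's key names assumed from the client's topic-exchange helper:
import taskcluster

auth_events = taskcluster.AuthEvents({'rootUrl': 'https://tc.example.com'})  # placeholder
binding = auth_events.clientCreated(reserved='#')
print(binding['exchange'], binding['routingKeyPattern'])  # key names assumed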
taskcluster/taskcluster-client.py | taskcluster/authevents.py | AuthEvents.clientUpdated | def clientUpdated(self, *args, **kwargs):
"""
Client Updated Messages
        Message that a client has been updated.
This exchange outputs: ``v1/client-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'client-updated',
'name': 'clientUpdated',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/client-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | python | def clientUpdated(self, *args, **kwargs):
"""
Client Updated Messages
        Message that a client has been updated.
This exchange outputs: ``v1/client-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'client-updated',
'name': 'clientUpdated',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/client-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | Client Updated Messages
Message that a client has been updated.
This exchange outputs: ``v1/client-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified. | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/authevents.py#L56-L78 |
taskcluster/taskcluster-client.py | taskcluster/authevents.py | AuthEvents.clientDeleted | def clientDeleted(self, *args, **kwargs):
"""
Client Deleted Messages
        Message that a client has been deleted.
This exchange outputs: ``v1/client-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'client-deleted',
'name': 'clientDeleted',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/client-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | python | def clientDeleted(self, *args, **kwargs):
"""
Client Deleted Messages
        Message that a client has been deleted.
This exchange outputs: ``v1/client-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'client-deleted',
'name': 'clientDeleted',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/client-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | Client Deleted Messages
Message that a client has been deleted.
This exchange outputs: ``v1/client-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified. | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/authevents.py#L80-L102 |
taskcluster/taskcluster-client.py | taskcluster/authevents.py | AuthEvents.roleCreated | def roleCreated(self, *args, **kwargs):
"""
Role Created Messages
Message that a new role has been created.
This exchange outputs: ``v1/role-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'role-created',
'name': 'roleCreated',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/role-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | python | def roleCreated(self, *args, **kwargs):
"""
Role Created Messages
Message that a new role has been created.
This exchange outputs: ``v1/role-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'role-created',
'name': 'roleCreated',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/role-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | Role Created Messages
Message that a new role has been created.
This exchange outputs: ``v1/role-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified. | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/authevents.py#L104-L126 |
taskcluster/taskcluster-client.py | taskcluster/authevents.py | AuthEvents.roleUpdated | def roleUpdated(self, *args, **kwargs):
"""
Role Updated Messages
        Message that a role has been updated.
This exchange outputs: ``v1/role-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'role-updated',
'name': 'roleUpdated',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/role-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | python | def roleUpdated(self, *args, **kwargs):
"""
Role Updated Messages
        Message that a role has been updated.
This exchange outputs: ``v1/role-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'role-updated',
'name': 'roleUpdated',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/role-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | Role Updated Messages
Message that a role has been updated.
This exchange outputs: ``v1/role-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified. | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/authevents.py#L128-L150 |
taskcluster/taskcluster-client.py | taskcluster/authevents.py | AuthEvents.roleDeleted | def roleDeleted(self, *args, **kwargs):
"""
Role Deleted Messages
        Message that a role has been deleted.
This exchange outputs: ``v1/role-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'role-deleted',
'name': 'roleDeleted',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/role-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | python | def roleDeleted(self, *args, **kwargs):
"""
Role Deleted Messages
        Message that a role has been deleted.
This exchange outputs: ``v1/role-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified.
"""
ref = {
'exchange': 'role-deleted',
'name': 'roleDeleted',
'routingKey': [
{
'multipleWords': True,
'name': 'reserved',
},
],
'schema': 'v1/role-message.json#',
}
return self._makeTopicExchange(ref, *args, **kwargs) | Role Deleted Messages
Message that a role has been deleted.
This exchange outputs: ``v1/role-message.json#``This exchange takes the following keys:
* reserved: Space reserved for future routing-key entries, you should always match this entry with `#`. As automatically done by our tooling, if not specified. | https://github.com/taskcluster/taskcluster-client.py/blob/bcc95217f8bf80bed2ae5885a19fa0035da7ebc9/taskcluster/authevents.py#L152-L174 |
jart/fabulous | fabulous/utils.py | memoize | def memoize(function):
"""A very simple memoize decorator to optimize pure-ish functions
Don't use this unless you've examined the code and see the
potential risks.
"""
cache = {}
@functools.wraps(function)
def _memoize(*args):
if args in cache:
return cache[args]
result = function(*args)
cache[args] = result
return result
    return _memoize | python | def memoize(function):
"""A very simple memoize decorator to optimize pure-ish functions
Don't use this unless you've examined the code and see the
potential risks.
"""
cache = {}
@functools.wraps(function)
def _memoize(*args):
if args in cache:
return cache[args]
result = function(*args)
cache[args] = result
return result
    return _memoize | A very simple memoize decorator to optimize pure-ish functions
Don't use this unless you've examined the code and see the
potential risks. | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/utils.py#L34-L48 |
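With the decorator returning the wrapper, usage follows the standard pattern; note the cache keys on positional arguments only, so keyword arguments would defeat it:
from fabulous.utils import memoize

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # each value is computed once and then served from the cache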
jart/fabulous | fabulous/utils.py | TerminalInfo.dimensions | def dimensions(self):
"""Returns terminal dimensions
Don't save this information for long periods of time because
the user might resize their terminal.
:return: Returns ``(width, height)``. If there's no terminal
to be found, we'll just return ``(79, 40)``.
"""
try:
call = fcntl.ioctl(self.termfd, termios.TIOCGWINSZ, "\000" * 8)
except IOError:
return (79, 40)
else:
height, width = struct.unpack("hhhh", call)[:2]
return (width, height) | python | def dimensions(self):
"""Returns terminal dimensions
Don't save this information for long periods of time because
the user might resize their terminal.
:return: Returns ``(width, height)``. If there's no terminal
to be found, we'll just return ``(79, 40)``.
"""
try:
call = fcntl.ioctl(self.termfd, termios.TIOCGWINSZ, "\000" * 8)
except IOError:
return (79, 40)
else:
height, width = struct.unpack("hhhh", call)[:2]
return (width, height) | Returns terminal dimensions
Don't save this information for long periods of time because
the user might resize their terminal.
:return: Returns ``(width, height)``. If there's no terminal
to be found, we'll just return ``(79, 40)``. | https://github.com/jart/fabulous/blob/19903cf0a980b82f5928c3bec1f28b6bdd3785bd/fabulous/utils.py#L100-L115 |
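A consuming sketch; term is assumed to be the module-level TerminalInfo instance that main() earlier references as utils.term:
import textwrap

from fabulous.utils import term  # assumed module-level TerminalInfo instance

width, height = term.dimensions()  # drop the parentheses if dimensions is a property upstream
print('\n'.join(textwrap.wrap('a long paragraph of terminal output ...', width=width)))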