| python_code (stringlengths 0–679k) | repo_name (stringlengths 9–41) | file_path (stringlengths 6–149) |
---|---|---|
# Copyright 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from swift.common import utils as swift_utils
from swift.common.http import is_success
from swift.common.middleware import acl as swift_acl
from swift.common.request_helpers import get_sys_meta_prefix
from swift.common.swob import HTTPNotFound, HTTPForbidden, HTTPUnauthorized
from swift.common.utils import config_read_reseller_options, list_from_csv
from swift.proxy.controllers.base import get_account_info
import functools
PROJECT_DOMAIN_ID_HEADER = 'x-account-project-domain-id'
PROJECT_DOMAIN_ID_SYSMETA_HEADER = \
get_sys_meta_prefix('account') + 'project-domain-id'
# a string that is unique w.r.t valid ids
UNKNOWN_ID = '_unknown'
class KeystoneAuth(object):
"""Swift middleware to Keystone authorization system.
In Swift's proxy-server.conf add this keystoneauth middleware and the
authtoken middleware to your pipeline. Make sure you have the authtoken
middleware before the keystoneauth middleware.
The authtoken middleware will take care of validating the user and
keystoneauth will authorize access.
The sample proxy-server.conf shows a sample pipeline that uses keystone.
:download:`proxy-server.conf-sample </../../etc/proxy-server.conf-sample>`
The authtoken middleware is shipped with keystonemiddleware - it
does not have any dependencies other than itself, so you can either
install it by copying the file directly into your python path or by
installing keystonemiddleware.
If support is required for unvalidated users (as with anonymous
access) or for formpost/staticweb/tempurl middleware, authtoken will
need to be configured with ``delay_auth_decision`` set to true. See
the Keystone documentation for more detail on how to configure the
authtoken middleware.
In proxy-server.conf you will need to have the setting account
auto creation to true::
[app:proxy-server]
account_autocreate = true
And add a swift authorization filter section, such as::
[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, swiftoperator
The user who is able to give ACL / create Containers permissions
will be the user with a role listed in the ``operator_roles``
setting which by default includes the admin and the swiftoperator
roles.
The keystoneauth middleware maps a Keystone project/tenant to an account
in Swift by adding a prefix (``AUTH_`` by default) to the tenant/project
id. For example, if the project id is ``1234``, the path is
``/v1/AUTH_1234``.
If you need to have a different reseller_prefix to be able to
mix different auth servers you can configure the option
``reseller_prefix`` in your keystoneauth entry like this::
reseller_prefix = NEWAUTH
Don't forget to also update the Keystone service endpoint configuration to
use NEWAUTH in the path.
It is possible to have several accounts associated with the same project.
This is done by listing several prefixes as shown in the following
example::
reseller_prefix = AUTH, SERVICE
This means that for project id '1234', the paths '/v1/AUTH_1234' and
'/v1/SERVICE_1234' are associated with the project and are authorized
using roles that a user has with that project. The core use of this feature
is the ability to provide different rules for each account prefix. The
following parameters may be prefixed with the appropriate
prefix::
operator_roles
service_roles
For backward compatibility, if either of these parameters is specified
without a prefix then it applies to all reseller_prefixes. Here is an
example, using two prefixes::
reseller_prefix = AUTH, SERVICE
# The next three lines have identical effects (since the first applies
# to both prefixes).
operator_roles = admin, swiftoperator
AUTH_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, swiftoperator
# The next line only applies to accounts with the SERVICE prefix
SERVICE_operator_roles = admin, some_other_role
X-Service-Token tokens are supported by the inclusion of the service_roles
configuration option. When present, this option requires that the
X-Service-Token header supply a token from a user who has a role listed
in service_roles. Here is an example configuration::
reseller_prefix = AUTH, SERVICE
AUTH_operator_roles = admin, swiftoperator
SERVICE_operator_roles = admin, swiftoperator
SERVICE_service_roles = service
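As an illustration of the effect of this configuration (the host, path and
token values below are placeholders), a write to a ``SERVICE_``-prefixed
account would need to carry both a user token and a service token::

    curl -i -X PUT https://proxy.example.com/v1/SERVICE_1234/container \
        -H "X-Auth-Token: <token scoped to project 1234 with an operator role>" \
        -H "X-Service-Token: <token for a user with the service role>"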
The keystoneauth middleware supports cross-tenant access control using the
syntax ``<tenant>:<user>`` to specify a grantee in container Access Control
Lists (ACLs). For a request to be granted by an ACL, the grantee
``<tenant>`` must match the UUID of the tenant to which the request
X-Auth-Token is scoped and the grantee ``<user>`` must match the UUID of
the user authenticated by that token.
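For example (the identifiers shown are placeholders), read access could be
granted to a user in another project by setting a container ACL such as::

    X-Container-Read: <project_uuid>:<user_uuid>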
Note that names must no longer be used in cross-tenant ACLs because, with
the introduction of domains in Keystone, names are no longer globally
unique.
For backwards compatibility, ACLs using names will be granted by
keystoneauth when it can be established that the grantee tenant,
the grantee user and the tenant being accessed are either not yet in a
domain (e.g. the X-Auth-Token has been obtained via the keystone v2
API) or are all in the default domain to which legacy accounts would
have been migrated. The default domain is identified by its UUID,
which by default has the value ``default``. This can be changed by
setting the ``default_domain_id`` option in the keystoneauth
configuration::
default_domain_id = default
The backwards compatible behavior can be disabled by setting the config
option ``allow_names_in_acls`` to false::
allow_names_in_acls = false
To enable this backwards compatibility, keystoneauth will attempt to
determine the domain id of a tenant when any new account is created,
and persist this as account metadata. If an account is created for a tenant
using a token with reselleradmin role that is not scoped on that tenant,
keystoneauth is unable to determine the domain id of the tenant;
keystoneauth will assume that the tenant may not be in the default domain
and therefore not match names in ACLs for that account.
By default, middleware higher in the WSGI pipeline may override auth
processing, useful for middleware such as tempurl and formpost. If you know
you're not going to use such middleware and you want a bit of extra
security you can disable this behaviour by setting the ``allow_overrides``
option to ``false``::
allow_overrides = false
:param app: The next WSGI app in the pipeline
:param conf: The dict of configuration values
"""
def __init__(self, app, conf):
self.app = app
self.conf = conf
self.logger = swift_utils.get_logger(conf, log_route='keystoneauth')
self.reseller_prefixes, self.account_rules = \
config_read_reseller_options(conf,
dict(operator_roles=['admin',
'swiftoperator'],
service_roles=[],
project_reader_roles=[]))
self.reseller_admin_role = conf.get('reseller_admin_role',
'ResellerAdmin').lower()
self.system_reader_roles = {role.lower() for role in list_from_csv(
conf.get('system_reader_roles', ''))}
config_is_admin = conf.get('is_admin', "false").lower()
if swift_utils.config_true_value(config_is_admin):
self.logger.warning("The 'is_admin' option for keystoneauth is no "
"longer supported. Remove the 'is_admin' "
"option from your keystoneauth config")
config_overrides = conf.get('allow_overrides', 't').lower()
self.allow_overrides = swift_utils.config_true_value(config_overrides)
self.default_domain_id = conf.get('default_domain_id', 'default')
self.allow_names_in_acls = swift_utils.config_true_value(
conf.get('allow_names_in_acls', 'true'))
def __call__(self, environ, start_response):
env_identity = self._keystone_identity(environ)
# Check if a middleware such as tempurl or formpost has set the
# swift.authorize_override environ key and wants to control the
# authorization itself
if (self.allow_overrides and
environ.get('swift.authorize_override', False)):
msg = 'Authorizing from an overriding middleware'
self.logger.debug(msg)
return self.app(environ, start_response)
if env_identity:
self.logger.debug('Using identity: %r', env_identity)
environ['REMOTE_USER'] = env_identity.get('tenant')
environ['keystone.identity'] = env_identity
environ['swift.authorize'] = functools.partial(
self.authorize, env_identity)
user_roles = (r.lower() for r in env_identity.get('roles', []))
if self.reseller_admin_role in user_roles:
environ['reseller_request'] = True
else:
self.logger.debug('Authorizing as anonymous')
environ['swift.authorize'] = self.authorize_anonymous
environ['swift.clean_acl'] = swift_acl.clean_acl
def keystone_start_response(status, response_headers, exc_info=None):
project_domain_id = None
for key, val in response_headers:
if key.lower() == PROJECT_DOMAIN_ID_SYSMETA_HEADER:
project_domain_id = val
break
if project_domain_id:
response_headers.append((PROJECT_DOMAIN_ID_HEADER,
project_domain_id))
return start_response(status, response_headers, exc_info)
return self.app(environ, keystone_start_response)
def _keystone_identity(self, environ):
"""Extract the identity from the Keystone auth component."""
if (environ.get('HTTP_X_IDENTITY_STATUS') != 'Confirmed'
or environ.get(
'HTTP_X_SERVICE_IDENTITY_STATUS') not in (None, 'Confirmed')):
return
roles = list_from_csv(environ.get('HTTP_X_ROLES', ''))
service_roles = list_from_csv(environ.get('HTTP_X_SERVICE_ROLES', ''))
identity = {'user': (environ.get('HTTP_X_USER_ID'),
environ.get('HTTP_X_USER_NAME')),
'tenant': (environ.get('HTTP_X_PROJECT_ID',
environ.get('HTTP_X_TENANT_ID')),
environ.get('HTTP_X_PROJECT_NAME',
environ.get('HTTP_X_TENANT_NAME'))),
'roles': roles,
'service_roles': service_roles}
token_info = environ.get('keystone.token_info', {})
auth_version = 0
user_domain = project_domain = (None, None)
if 'access' in token_info:
# ignore any domain id headers that authtoken may have set
auth_version = 2
elif 'token' in token_info:
auth_version = 3
user_domain = (environ.get('HTTP_X_USER_DOMAIN_ID'),
environ.get('HTTP_X_USER_DOMAIN_NAME'))
project_domain = (environ.get('HTTP_X_PROJECT_DOMAIN_ID'),
environ.get('HTTP_X_PROJECT_DOMAIN_NAME'))
identity['user_domain'] = user_domain
identity['project_domain'] = project_domain
identity['auth_version'] = auth_version
return identity
def _get_account_name(self, prefix, tenant_id):
return '%s%s' % (prefix, tenant_id)
def _account_matches_tenant(self, account, tenant_id):
"""Check if account belongs to a project/tenant"""
for prefix in self.reseller_prefixes:
if self._get_account_name(prefix, tenant_id) == account:
return True
return False
def _get_account_prefix(self, account):
"""Get the prefix of an account"""
# Empty prefix matches everything, so try to match others first
for prefix in [pre for pre in self.reseller_prefixes if pre != '']:
if account.startswith(prefix):
return prefix
if '' in self.reseller_prefixes:
return ''
return None
def _get_project_domain_id(self, environ):
info = get_account_info(environ, self.app, 'KS')
domain_id = info.get('sysmeta', {}).get('project-domain-id')
exists = (is_success(info.get('status', 0))
and info.get('account_really_exists', True))
return exists, domain_id
def _set_project_domain_id(self, req, path_parts, env_identity):
'''
Try to determine the project domain id and save it as
account metadata. Do this for a PUT or POST to the
account, and also for a container PUT in case that
causes the account to be auto-created.
'''
if PROJECT_DOMAIN_ID_SYSMETA_HEADER in req.headers:
return
version, account, container, obj = path_parts
method = req.method
if (obj or (container and method != 'PUT')
or method not in ['PUT', 'POST']):
return
tenant_id, tenant_name = env_identity['tenant']
exists, sysmeta_id = self._get_project_domain_id(req.environ)
req_has_id, req_id, new_id = False, None, None
if self._account_matches_tenant(account, tenant_id):
# domain id can be inferred from request (may be None)
req_has_id = True
req_id = env_identity['project_domain'][0]
if not exists:
# new account so set a domain id
new_id = req_id if req_has_id else UNKNOWN_ID
elif sysmeta_id is None and req_id == self.default_domain_id:
# legacy account, update if default domain id in req
new_id = req_id
elif sysmeta_id == UNKNOWN_ID and req_has_id:
# unknown domain, update if req confirms domain
new_id = req_id or ''
elif req_has_id and sysmeta_id != req_id:
self.logger.warning("Inconsistent project domain id: " +
"%s in token vs %s in account metadata."
% (req_id, sysmeta_id))
if new_id is not None:
req.headers[PROJECT_DOMAIN_ID_SYSMETA_HEADER] = new_id
def _is_name_allowed_in_acl(self, req, path_parts, identity):
if not self.allow_names_in_acls:
return False
user_domain_id = identity['user_domain'][0]
if user_domain_id and user_domain_id != self.default_domain_id:
return False
proj_domain_id = identity['project_domain'][0]
if proj_domain_id and proj_domain_id != self.default_domain_id:
return False
# request user and scoped project are both in default domain
tenant_id, tenant_name = identity['tenant']
version, account, container, obj = path_parts
if self._account_matches_tenant(account, tenant_id):
# account == scoped project, so account is also in default domain
allow = True
else:
# retrieve account project domain id from account sysmeta
exists, acc_domain_id = self._get_project_domain_id(req.environ)
allow = exists and acc_domain_id in [self.default_domain_id, None]
if allow:
self.logger.debug("Names allowed in acls.")
return allow
def _authorize_cross_tenant(self, user_id, user_name,
tenant_id, tenant_name, roles,
allow_names=True):
"""Check cross-tenant ACLs.
Match tenant:user, tenant and user could be its id, name or '*'
:param user_id: The user id from the identity token.
:param user_name: The user name from the identity token.
:param tenant_id: The tenant ID from the identity token.
:param tenant_name: The tenant name from the identity token.
:param roles: The given container ACL.
:param allow_names: If True then attempt to match tenant and user names
as well as ids.
:returns: matched string if tenant(name/id/*):user(name/id/*) matches
the given ACL.
None otherwise.
"""
tenant_match = [tenant_id, '*']
user_match = [user_id, '*']
if allow_names:
tenant_match = tenant_match + [tenant_name]
user_match = user_match + [user_name]
for tenant in tenant_match:
for user in user_match:
s = '%s:%s' % (tenant, user)
if s in roles:
return s
return None
def authorize(self, env_identity, req):
# Cleanup - make sure that a previously set swift_owner setting is
# cleared now. This might happen for example with COPY requests.
req.environ.pop('swift_owner', None)
tenant_id, tenant_name = env_identity['tenant']
user_id, user_name = env_identity['user']
referrers, roles = swift_acl.parse_acl(getattr(req, 'acl', None))
# allow OPTIONS requests to proceed as normal
if req.method == 'OPTIONS':
return
try:
part = req.split_path(1, 4, True)
version, account, container, obj = part
except ValueError:
return HTTPNotFound(request=req)
self._set_project_domain_id(req, part, env_identity)
user_roles = [r.lower() for r in env_identity.get('roles', [])]
user_service_roles = [r.lower() for r in env_identity.get(
'service_roles', [])]
# Give unconditional access to a user with the reseller_admin role.
if self.reseller_admin_role in user_roles:
msg = 'User %s has reseller admin authorizing'
self.logger.debug(msg, tenant_id)
req.environ['swift_owner'] = True
return
# Being in system_reader_roles is almost as good as reseller_admin.
if self.system_reader_roles.intersection(user_roles):
# Note that if a system reader is trying to write, we're letting
# the request fall through to the other access checks below. This way,
# a compliance auditor can write a log file as a normal member.
if req.method in ('GET', 'HEAD'):
msg = 'User %s has system reader authorizing'
self.logger.debug(msg, tenant_id)
# We aren't setting 'swift_owner' nor 'reseller_request'
# because they are only ever used for something that modifies
# the contents of the cluster (setting ACL, deleting accounts).
return
# If we are not reseller admin and user is trying to delete its own
# account then deny it.
if not container and not obj and req.method == 'DELETE':
# User is not allowed to issue a DELETE on its own account
msg = 'User %s:%s is not allowed to delete its own account'
self.logger.debug(msg, tenant_name, user_name)
return self.denied_response(req)
# cross-tenant authorization
matched_acl = None
if roles:
allow_names = self._is_name_allowed_in_acl(req, part, env_identity)
matched_acl = self._authorize_cross_tenant(user_id, user_name,
tenant_id, tenant_name,
roles, allow_names)
if matched_acl is not None:
log_msg = 'user %s allowed in ACL authorizing.'
self.logger.debug(log_msg, matched_acl)
return
acl_authorized = self._authorize_unconfirmed_identity(req, obj,
referrers,
roles)
if acl_authorized:
return
# Check if a user tries to access an account that does not match their
# token
if not self._account_matches_tenant(account, tenant_id):
log_msg = 'tenant mismatch: %s != %s'
self.logger.debug(log_msg, account, tenant_id)
return self.denied_response(req)
# Compare roles from tokens against the configuration options:
#
# X-Auth-Token role Has specified X-Service-Token role Grant
# in operator_roles? service_roles? in service_roles? swift_owner?
# ------------------ -------------- -------------------- ------------
# yes yes yes yes
# yes yes no no
# yes no don't care yes
# no don't care don't care no
# ------------------ -------------- -------------------- ------------
account_prefix = self._get_account_prefix(account)
operator_roles = self.account_rules[account_prefix]['operator_roles']
have_operator_role = set(operator_roles).intersection(
set(user_roles))
service_roles = self.account_rules[account_prefix]['service_roles']
have_service_role = set(service_roles).intersection(
set(user_service_roles))
allowed = False
if have_operator_role and (service_roles and have_service_role):
allowed = True
elif have_operator_role and not service_roles:
allowed = True
if allowed:
log_msg = 'allow user with role(s) %s as account admin'
self.logger.debug(log_msg, ','.join(have_operator_role.union(
have_service_role)))
req.environ['swift_owner'] = True
return
# The project_reader_roles is almost as good as operator_roles. But
# it does not work with service tokens and does not get 'swift_owner'.
# And, it only serves GET requests, obviously.
project_reader_roles = self.account_rules[account_prefix][
'project_reader_roles']
have_reader_role = set(project_reader_roles).intersection(
set(user_roles))
if have_reader_role:
if req.method in ('GET', 'HEAD'):
msg = 'User %s with role(s) %s has project reader authorizing'
self.logger.debug(msg, tenant_id,
','.join(project_reader_roles))
return
if acl_authorized is not None:
return self.denied_response(req)
# Check if we have the role in the user roles and allow it
for user_role in user_roles:
if user_role in (r.lower() for r in roles):
log_msg = 'user %s:%s allowed in ACL: %s authorizing'
self.logger.debug(log_msg, tenant_name, user_name,
user_role)
return
return self.denied_response(req)
def authorize_anonymous(self, req):
"""
Authorize an anonymous request.
:returns: None if authorization is granted, an error page otherwise.
"""
try:
part = req.split_path(1, 4, True)
version, account, container, obj = part
except ValueError:
return HTTPNotFound(request=req)
# allow OPTIONS requests to proceed as normal
if req.method == 'OPTIONS':
return
is_authoritative_authz = (account and
(self._get_account_prefix(account) in
self.reseller_prefixes))
if not is_authoritative_authz:
return self.denied_response(req)
referrers, roles = swift_acl.parse_acl(getattr(req, 'acl', None))
authorized = self._authorize_unconfirmed_identity(req, obj, referrers,
roles)
if not authorized:
return self.denied_response(req)
def _authorize_unconfirmed_identity(self, req, obj, referrers, roles):
""""
Perform authorization for access that does not require a
confirmed identity.
:returns: True if authorization is granted, False if denied, or None if
a determination could not be made.
"""
# Allow container sync.
if (req.environ.get('swift_sync_key')
and (req.environ['swift_sync_key'] ==
req.headers.get('x-container-sync-key', None))
and 'x-timestamp' in req.headers):
log_msg = 'allowing proxy %s for container-sync'
self.logger.debug(log_msg, req.remote_addr)
return True
# Check if referrer is allowed.
if swift_acl.referrer_allowed(req.referer, referrers):
if obj or '.rlistings' in roles:
log_msg = 'authorizing %s via referer ACL'
self.logger.debug(log_msg, req.referrer)
return True
return False
def denied_response(self, req):
"""Deny WSGI Response.
Returns a standard WSGI response callable with the status of 403 or 401
depending on whether the REMOTE_USER is set or not.
"""
if req.remote_user:
return HTTPForbidden(request=req)
else:
return HTTPUnauthorized(request=req)
def filter_factory(global_conf, **local_conf):
"""Returns a WSGI filter app for use with paste.deploy."""
conf = global_conf.copy()
conf.update(local_conf)
def auth_filter(app):
return KeystoneAuth(app, conf)
return auth_filter
| swift-master | swift/common/middleware/keystoneauth.py |
# Copyright (c) 2010-2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from swift.common.constraints import check_account_format, valid_api_version
from swift.common.swob import HTTPMethodNotAllowed, Request
from swift.common.utils import get_logger, config_true_value
from swift.common.registry import register_swift_info
from swift.proxy.controllers.base import get_info
"""
=========
Read Only
=========
The ability to make an entire cluster or individual accounts read only is
implemented as pluggable middleware. When a cluster or an account is in read
only mode, requests that would result in writes to the cluster are not allowed.
A 405 is returned on such requests. "COPY", "DELETE", "POST", and
"PUT" are the HTTP methods that are considered writes.
-------------
Configuration
-------------
All configuration is optional.
============= ======= ====================================================
Option        Default Description
------------- ------- ----------------------------------------------------
read_only     false   Set to 'true' to put the entire cluster in read only
                      mode.
allow_deletes false   Set to 'true' to allow deletes.
============= ======= ====================================================
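A minimal filter section for `proxy-server.conf`, assuming the standard
`egg:swift#read_only` entry point and showing the default values, might look
like::

    [filter:read_only]
    use = egg:swift#read_only
    read_only = false
    allow_deletes = false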
---------------------------
Marking Individual Accounts
---------------------------
If a system administrator wants to mark individual accounts as read only,
he/she can set X-Account-Sysmeta-Read-Only on an account to 'true'.
If a system administrator wants to allow writes to individual accounts,
when a cluster is in read only mode, he/she can set
X-Account-Sysmeta-Read-Only on an account to 'false'.
This header will be hidden from the user, because of the gatekeeper middleware,
and can only be set using a direct client to the account nodes.
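For example, to mark a single account read only, the following header would
be set on the account (via a direct or internal client, since the gatekeeper
strips it from ordinary client requests)::

    X-Account-Sysmeta-Read-Only: true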
"""
class ReadOnlyMiddleware(object):
"""
Middleware that makes an entire cluster or individual accounts read only.
"""
def __init__(self, app, conf, logger=None):
self.app = app
self.logger = logger or get_logger(conf, log_route='read_only')
self.read_only = config_true_value(conf.get('read_only'))
self.write_methods = {'COPY', 'POST', 'PUT'}
if not config_true_value(conf.get('allow_deletes')):
self.write_methods.add('DELETE')
def __call__(self, env, start_response):
req = Request(env)
if req.method not in self.write_methods:
return self.app(env, start_response)
try:
version, account, container, obj = req.split_path(2, 4, True)
if not valid_api_version(version):
raise ValueError
except ValueError:
return self.app(env, start_response)
if req.method == 'COPY' and 'Destination-Account' in req.headers:
dest_account = req.headers.get('Destination-Account')
account = check_account_format(req, dest_account)
if self.account_read_only(req, account):
msg = 'Writes are disabled for this account.'
return HTTPMethodNotAllowed(body=msg)(env, start_response)
return self.app(env, start_response)
def account_read_only(self, req, account):
"""
Check whether an account should be read-only.
This considers both the cluster-wide config value as well as the
per-account override in X-Account-Sysmeta-Read-Only.
"""
info = get_info(self.app, req.environ, account, swift_source='RO')
read_only = info.get('sysmeta', {}).get('read-only', '')
if not read_only:
return self.read_only
return config_true_value(read_only)
def filter_factory(global_conf, **local_conf):
"""
paste.deploy app factory for creating WSGI proxy apps.
"""
conf = global_conf.copy()
conf.update(local_conf)
if config_true_value(conf.get('read_only')):
register_swift_info('read_only')
def read_only_filter(app):
return ReadOnlyMiddleware(app, conf)
return read_only_filter
| swift-master | swift/common/middleware/read_only.py |
# Copyright (c) 2010-2017 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Symlink Middleware
Symlinks are objects stored in Swift that contain a reference to another
object (hereinafter, this is called "target object"). They are analogous to
symbolic links in Unix-like operating systems. The existence of a symlink
object does not affect the target object in any way. An important use case is
to use a path in one container to access an object in a different container,
with a different policy. This allows policy cost/performance trade-offs to be
made on individual objects.
Clients create a Swift symlink by performing a zero-length PUT request
with the header ``X-Symlink-Target: <container>/<object>``. For a cross-account
symlink, the header ``X-Symlink-Target-Account: <account>`` must be included.
If omitted, it is inserted automatically with the account of the symlink
object in the PUT request process.
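For example (the storage URL, account and token are placeholders), a dynamic
symlink can be created with a zero-length PUT such as::

    curl -i -X PUT --data-binary '' \
        -H "X-Auth-Token: <token>" \
        -H "X-Symlink-Target: target_container/target_object" \
        https://swift.example.com/v1/AUTH_test/link_container/link_object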
Symlinks must be zero-byte objects. Attempting to PUT a symlink with a
non-empty request body will result in a 400-series error. Also, POST with
``X-Symlink-Target`` header always results in a 400-series error. The target
object need not exist at symlink creation time.
Clients may optionally include a ``X-Symlink-Target-Etag: <etag>`` header
during the PUT. If present, this will create a "static symlink" instead of a
"dynamic symlink". Static symlinks point to a specific object rather than a
specific name. They do this by using the value set in their
``X-Symlink-Target-Etag`` header when created to verify it still matches the
ETag of the object they're pointing at on a GET. In contrast to a dynamic
symlink the target object referenced in the ``X-Symlink-Target`` header must
exist and its ETag must match the ``X-Symlink-Target-Etag`` or the symlink
creation will return a client error.
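Continuing the placeholder example above, a static symlink additionally
supplies the target's current ETag::

    curl -i -X PUT --data-binary '' \
        -H "X-Auth-Token: <token>" \
        -H "X-Symlink-Target: target_container/target_object" \
        -H "X-Symlink-Target-Etag: <current etag of the target object>" \
        https://swift.example.com/v1/AUTH_test/link_container/link_object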
A GET/HEAD request to a symlink will result in a request to the target
object referenced by the symlink's ``X-Symlink-Target-Account`` and
``X-Symlink-Target`` headers. The response of the GET/HEAD request will contain
a ``Content-Location`` header with the path location of the target object. A
GET/HEAD request to a symlink with the query parameter ``?symlink=get`` will
result in the request targeting the symlink itself.
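For instance (again with placeholder values), the symlink object itself can
be inspected with::

    curl -i -H "X-Auth-Token: <token>" \
        "https://swift.example.com/v1/AUTH_test/link_container/link_object?symlink=get"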
A symlink can point to another symlink. Chained symlinks will be traversed
until the target is not a symlink. If the number of chained symlinks exceeds
the limit ``symloop_max`` an error response will be produced. The value of
``symloop_max`` can be defined in the symlink config section of
`proxy-server.conf`. If not specified, the default ``symloop_max`` value is 2.
If a value less than 1 is specified, the default value will be used.
If a static symlink (i.e. a symlink created with a ``X-Symlink-Target-Etag``
header) targets another static symlink, both of the ``X-Symlink-Target-Etag``
headers must match the target object for the GET to succeed. If a static
symlink targets a dynamic symlink (i.e. a symlink created without a
``X-Symlink-Target-Etag`` header) then the ``X-Symlink-Target-Etag`` header of
the static symlink must be the Etag of the zero-byte object. If a symlink with
a ``X-Symlink-Target-Etag`` targets a large object manifest it must match the
ETag of the manifest (e.g. the ETag as returned by ``multipart-manifest=get``
or value in the ``X-Manifest-Etag`` header).
A HEAD/GET request to a symlink object behaves as a normal HEAD/GET request
to the target object. Therefore issuing a HEAD request to the symlink will
return the target metadata, and issuing a GET request to the symlink will
return the data and metadata of the target object. To return the symlink
metadata (with its empty body) a GET/HEAD request with the ``?symlink=get``
query parameter must be sent to a symlink object.
A POST request to a symlink will result in a 307 Temporary Redirect response.
The response will contain a ``Location`` header with the path of the target
object as the value. The request is never redirected to the target object by
Swift. Nevertheless, the metadata in the POST request will be applied to the
symlink because, under eventual consistency, object servers cannot know for
sure whether the current object is a symlink.
A symlink's ``Content-Type`` is completely independent from its target. As a
convenience Swift will automatically set the ``Content-Type`` on a symlink PUT
if not explicitly set by the client. If the client sends a
``X-Symlink-Target-Etag`` Swift will set the symlink's ``Content-Type`` to that
of the target, otherwise it will be set to ``application/symlink``. You can
review a symlink's ``Content-Type`` using the ``?symlink=get`` interface. You
can change a symlink's ``Content-Type`` using a POST request. The symlink's
``Content-Type`` will appear in the container listing.
A DELETE request to a symlink will delete the symlink itself. The target
object will not be deleted.
A COPY request, or a PUT request with a ``X-Copy-From`` header, to a symlink
will copy the target object. The same request to a symlink with the query
parameter ``?symlink=get`` will copy the symlink itself.
An OPTIONS request to a symlink will respond with the options for the symlink
only; the request will not be redirected to the target object. Please note that
if the symlink's target object is in another container with CORS settings, the
response will not reflect the settings.
Tempurls can be used to GET/HEAD symlink objects, but PUT is not allowed and
will result in a 400-series error. The GET/HEAD tempurls honor the scope of
the tempurl key. Container tempurl will only work on symlinks where the target
container is the same as the symlink. In case a symlink targets an object
in a different container, a GET/HEAD request will result in a 401 Unauthorized
error. The account level tempurl will allow cross-container symlinks, but not
cross-account symlinks.
If a symlink object is overwritten while it is in a versioned container, the
symlink object itself is versioned, not the referenced object.
A GET request with query parameter ``?format=json`` to a container which
contains symlinks will respond with additional information ``symlink_path``
for each symlink object in the container listing. The ``symlink_path`` value
is the target path of the symlink. Clients can differentiate symlinks and
other objects by this function. Note that responses in any other format
(e.g. ``?format=xml``) won't include ``symlink_path`` info. If a
``X-Symlink-Target-Etag`` header was included on the symlink, JSON container
listings will include that value in a ``symlink_etag`` key and the target
object's ``Content-Length`` will be included in the key ``symlink_bytes``.
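An entry for a dynamic symlink in a ``?format=json`` container listing might
then look like the following (all values are illustrative)::

    {
        "name": "link_object",
        "bytes": 0,
        "hash": "d41d8cd98f00b204e9800998ecf8427e",
        "content_type": "application/symlink",
        "last_modified": "2023-01-01T00:00:00.000000",
        "symlink_path": "/v1/AUTH_test/target_container/target_object"
    }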
If a static symlink targets a static large object manifest it will carry
forward the SLO's size and slo_etag in the container listing using the
``symlink_bytes`` and ``slo_etag`` keys. However, manifests created before
swift v2.12.0 (released Dec 2016) do not contain enough metadata to propagate
the extra SLO information to the listing. Clients may recreate the manifest
(COPY w/ ``?multipart-manifest=get``) before creating a static symlink to add
the requisite metadata.
Errors
* PUT with the header ``X-Symlink-Target`` with non-zero Content-Length
will produce a 400 BadRequest error.
* POST with the header ``X-Symlink-Target`` will produce a
400 BadRequest error.
* GET/HEAD traversing more than ``symloop_max`` chained symlinks will
produce a 409 Conflict error.
* PUT/GET/HEAD on a symlink that includes a ``X-Symlink-Target-Etag`` header
that does not match the target will produce a 409 Conflict error.
* POSTs will produce a 307 Temporary Redirect error.
----------
Deployment
----------
Symlinks are enabled by adding the `symlink` middleware to the proxy server
WSGI pipeline and including a corresponding filter configuration section in the
`proxy-server.conf` file. The `symlink` middleware should be placed after
`slo`, `dlo` and `versioned_writes` middleware, but before `encryption`
middleware in the pipeline. See the `proxy-server.conf-sample` file for further
details. :ref:`Additional steps <symlink_container_sync_client_config>` are
required if the container sync feature is being used.
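A sketch of the relevant parts of `proxy-server.conf` (abbreviated pipeline,
other middleware omitted) might look like::

    [pipeline:main]
    pipeline = ... slo dlo versioned_writes symlink ... encryption ... proxy-server

    [filter:symlink]
    use = egg:swift#symlink
    # symloop_max = 2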
.. note::
Once you have deployed `symlink` middleware in your pipeline, you should
neither remove the `symlink` middleware nor downgrade swift to a version
that does not support symlinks. Doing so may result in unexpected
container listing results, and symlink objects will behave like
normal objects.
.. _symlink_container_sync_client_config:
Container sync configuration
----------------------------
If container sync is being used then the `symlink` middleware
must be added to the container sync internal client pipeline. The following
configuration steps are required:
#. Create a custom internal client configuration file for container sync (if
one is not already in use) based on the sample file
`internal-client.conf-sample`. For example, copy
`internal-client.conf-sample` to `/etc/swift/container-sync-client.conf`.
#. Modify this file to include the `symlink` middleware in the pipeline in
the same way as described above for the proxy server.
#. Modify the container-sync section of all container server config files to
point to this internal client config file using the
``internal_client_conf_path`` option. For example::
internal_client_conf_path = /etc/swift/container-sync-client.conf
.. note::
These container sync configuration steps will be necessary for container
sync probe tests to pass if the `symlink` middleware is included in the
proxy pipeline of a test cluster.
"""
import json
import os
from cgi import parse_header
from swift.common.utils import get_logger, split_path, \
MD5_OF_EMPTY_STRING, close_if_possible, closing_if_possible, \
config_true_value, drain_and_close
from swift.common.registry import register_swift_info
from swift.common.constraints import check_account_format
from swift.common.wsgi import WSGIContext, make_subrequest, \
make_pre_authed_request
from swift.common.request_helpers import get_sys_meta_prefix, \
check_path_header, get_container_update_override_key, \
update_ignore_range_header
from swift.common.swob import Request, HTTPBadRequest, HTTPTemporaryRedirect, \
HTTPException, HTTPConflict, HTTPPreconditionFailed, wsgi_quote, \
wsgi_unquote, status_map, normalize_etag
from swift.common.http import is_success, HTTP_NOT_FOUND
from swift.common.exceptions import LinkIterError
from swift.common.header_key_dict import HeaderKeyDict
DEFAULT_SYMLOOP_MAX = 2
# Header values for symlink target path strings will be quoted values.
TGT_OBJ_SYMLINK_HDR = 'x-symlink-target'
TGT_ACCT_SYMLINK_HDR = 'x-symlink-target-account'
TGT_ETAG_SYMLINK_HDR = 'x-symlink-target-etag'
TGT_BYTES_SYMLINK_HDR = 'x-symlink-target-bytes'
TGT_OBJ_SYSMETA_SYMLINK_HDR = get_sys_meta_prefix('object') + 'symlink-target'
TGT_ACCT_SYSMETA_SYMLINK_HDR = \
get_sys_meta_prefix('object') + 'symlink-target-account'
TGT_ETAG_SYSMETA_SYMLINK_HDR = \
get_sys_meta_prefix('object') + 'symlink-target-etag'
TGT_BYTES_SYSMETA_SYMLINK_HDR = \
get_sys_meta_prefix('object') + 'symlink-target-bytes'
SYMLOOP_EXTEND = get_sys_meta_prefix('object') + 'symloop-extend'
ALLOW_RESERVED_NAMES = get_sys_meta_prefix('object') + 'allow-reserved-names'
def _validate_and_prep_request_headers(req):
"""
Validate that the value from x-symlink-target header is well formatted
and that the x-symlink-target-etag header (if present) does not contain
problematic characters. We assume the caller ensures that
x-symlink-target header is present in req.headers.
:param req: HTTP request object
:returns: a tuple, the full versioned path to the object (as a WSGI string)
and the X-Symlink-Target-Etag header value which may be None
:raise: HTTPPreconditionFailed if x-symlink-target value
is not well formatted.
:raise: HTTPBadRequest if the x-symlink-target value points to the request
path.
:raise: HTTPBadRequest if the x-symlink-target-etag value contains
a semicolon, double-quote, or backslash.
"""
# N.B. check_path_header doesn't assert the leading slash and
# copy middleware may accept the format. For symlinks, the API
# says to use the "container/object" format, so add that
# validation first, here.
error_body = 'X-Symlink-Target header must be of the form ' \
'<container name>/<object name>'
if wsgi_unquote(req.headers[TGT_OBJ_SYMLINK_HDR]).startswith('/'):
raise HTTPPreconditionFailed(
body=error_body,
request=req, content_type='text/plain')
# check container and object format
container, obj = check_path_header(
req, TGT_OBJ_SYMLINK_HDR, 2,
error_body)
req.headers[TGT_OBJ_SYMLINK_HDR] = wsgi_quote('%s/%s' % (container, obj))
# Check account format if it exists
account = check_account_format(
req, wsgi_unquote(req.headers[TGT_ACCT_SYMLINK_HDR])) \
if TGT_ACCT_SYMLINK_HDR in req.headers else None
# Extract request path
_junk, req_acc, req_cont, req_obj = req.split_path(4, 4, True)
if account:
req.headers[TGT_ACCT_SYMLINK_HDR] = wsgi_quote(account)
else:
account = req_acc
# Check if symlink targets the symlink itself or not
if (account, container, obj) == (req_acc, req_cont, req_obj):
raise HTTPBadRequest(
body='Symlink cannot target itself',
request=req, content_type='text/plain')
etag = normalize_etag(req.headers.get(TGT_ETAG_SYMLINK_HDR, None))
if etag and any(c in etag for c in ';"\\'):
# See cgi.parse_header for why the above chars are problematic
raise HTTPBadRequest(
body='Bad %s format' % TGT_ETAG_SYMLINK_HDR.title(),
request=req, content_type='text/plain')
if not (etag or req.headers.get('Content-Type')):
req.headers['Content-Type'] = 'application/symlink'
return '/v1/%s/%s/%s' % (account, container, obj), etag
def symlink_usermeta_to_sysmeta(headers):
"""
Helper function to translate from client-facing X-Symlink-* headers
to cluster-facing X-Object-Sysmeta-Symlink-* headers.
:param headers: request headers dict. Note that the headers dict
will be updated directly.
"""
# To preserve the url-encoded value in the symlink header, use the raw value
for user_hdr, sysmeta_hdr in (
(TGT_OBJ_SYMLINK_HDR, TGT_OBJ_SYSMETA_SYMLINK_HDR),
(TGT_ACCT_SYMLINK_HDR, TGT_ACCT_SYSMETA_SYMLINK_HDR)):
if user_hdr in headers:
headers[sysmeta_hdr] = headers.pop(user_hdr)
def symlink_sysmeta_to_usermeta(headers):
"""
Helper function to translate from cluster-facing
X-Object-Sysmeta-Symlink-* headers to client-facing X-Symlink-* headers.
:param headers: request headers dict. Note that the headers dict
will be updated directly.
"""
for user_hdr, sysmeta_hdr in (
(TGT_OBJ_SYMLINK_HDR, TGT_OBJ_SYSMETA_SYMLINK_HDR),
(TGT_ACCT_SYMLINK_HDR, TGT_ACCT_SYSMETA_SYMLINK_HDR),
(TGT_ETAG_SYMLINK_HDR, TGT_ETAG_SYSMETA_SYMLINK_HDR),
(TGT_BYTES_SYMLINK_HDR, TGT_BYTES_SYSMETA_SYMLINK_HDR)):
if sysmeta_hdr in headers:
headers[user_hdr] = headers.pop(sysmeta_hdr)
class SymlinkContainerContext(WSGIContext):
def __init__(self, wsgi_app, logger):
super(SymlinkContainerContext, self).__init__(wsgi_app)
self.logger = logger
def handle_container(self, req, start_response):
"""
Handle container requests.
:param req: a :class:`~swift.common.swob.Request`
:param start_response: start_response function
:return: Response Iterator after start_response called.
"""
app_resp = self._app_call(req.environ)
if req.method == 'GET' and is_success(self._get_status_int()):
app_resp = self._process_json_resp(app_resp, req)
start_response(self._response_status, self._response_headers,
self._response_exc_info)
return app_resp
def _process_json_resp(self, resp_iter, req):
"""
Iterate through json body looking for symlinks and modify its content
:return: modified json body
"""
with closing_if_possible(resp_iter):
resp_body = b''.join(resp_iter)
body_json = json.loads(resp_body)
swift_version, account, _junk = split_path(req.path, 2, 3, True)
new_body = json.dumps(
[self._extract_symlink_path_json(obj_dict, swift_version, account)
for obj_dict in body_json]).encode('ascii')
self.update_content_length(len(new_body))
return [new_body]
def _extract_symlink_path_json(self, obj_dict, swift_version, account):
"""
Extract the symlink info from the hash value
:return: object dictionary with additional key:value pairs when object
is a symlink. i.e. new symlink_path, symlink_etag and
symlink_bytes keys
"""
if 'hash' in obj_dict:
hash_value, meta = parse_header(obj_dict['hash'])
obj_dict['hash'] = hash_value
target = None
for key in meta:
if key == 'symlink_target':
target = meta[key]
elif key == 'symlink_target_account':
account = meta[key]
elif key == 'symlink_target_etag':
obj_dict['symlink_etag'] = meta[key]
elif key == 'symlink_target_bytes':
obj_dict['symlink_bytes'] = int(meta[key])
else:
# make sure to add all other (key, values) back in place
obj_dict['hash'] += '; %s=%s' % (key, meta[key])
else:
if target:
obj_dict['symlink_path'] = os.path.join(
'/', swift_version, account, target)
return obj_dict
class SymlinkObjectContext(WSGIContext):
def __init__(self, wsgi_app, logger, symloop_max):
super(SymlinkObjectContext, self).__init__(wsgi_app)
self.symloop_max = symloop_max
self.logger = logger
# N.B. _loop_count and _last_target_path are used to keep state
# across recursive calls to _recursive_get_head. Hence they should
# not be touched from anywhere else.
self._loop_count = 0
self._last_target_path = None
def handle_get_head_symlink(self, req):
"""
Handle get/head request when client sent parameter ?symlink=get
:param req: HTTP GET or HEAD object request with param ?symlink=get
:returns: Response Iterator
"""
resp = self._app_call(req.environ)
response_header_dict = HeaderKeyDict(self._response_headers)
symlink_sysmeta_to_usermeta(response_header_dict)
self._response_headers = list(response_header_dict.items())
return resp
def handle_get_head(self, req):
"""
Handle get/head request and in case the response is a symlink,
redirect request to target object.
:param req: HTTP GET or HEAD object request
:returns: Response Iterator
"""
update_ignore_range_header(req, TGT_OBJ_SYSMETA_SYMLINK_HDR)
try:
return self._recursive_get_head(req)
except LinkIterError:
errmsg = 'Too many levels of symbolic links, ' \
'maximum allowed is %d' % self.symloop_max
raise HTTPConflict(body=errmsg, request=req,
content_type='text/plain')
def _recursive_get_head(self, req, target_etag=None,
follow_softlinks=True, orig_req=None):
if not orig_req:
orig_req = req
resp = self._app_call(req.environ)
def build_traversal_req(symlink_target):
"""
:returns: new request for target path if it's symlink otherwise
None
"""
version, account, _junk = req.split_path(2, 3, True)
account = self._response_header_value(
TGT_ACCT_SYSMETA_SYMLINK_HDR) or wsgi_quote(account)
target_path = os.path.join(
'/', version, account,
symlink_target.lstrip('/'))
self._last_target_path = target_path
subreq_headers = dict(req.headers)
if self._response_header_value(ALLOW_RESERVED_NAMES):
# this symlink's sysmeta says it can point to reserved names,
# we're inferring that some piece of middleware had previously
# authorized this request because users can't access reserved
# names directly
subreq_meth = make_pre_authed_request
subreq_headers['X-Backend-Allow-Reserved-Names'] = 'true'
else:
subreq_meth = make_subrequest
new_req = subreq_meth(orig_req.environ, path=target_path,
method=req.method, headers=subreq_headers,
swift_source='SYM')
new_req.headers.pop('X-Backend-Storage-Policy-Index', None)
return new_req
symlink_target = self._response_header_value(
TGT_OBJ_SYSMETA_SYMLINK_HDR)
resp_etag = self._response_header_value(
TGT_ETAG_SYSMETA_SYMLINK_HDR)
if symlink_target and (resp_etag or follow_softlinks):
# Should be a zero-byte object
drain_and_close(resp)
found_etag = resp_etag or self._response_header_value('etag')
if target_etag and target_etag != found_etag:
raise HTTPConflict(
body='X-Symlink-Target-Etag headers do not match',
headers={
'Content-Type': 'text/plain',
'Content-Location': self._last_target_path})
if self._loop_count >= self.symloop_max:
raise LinkIterError()
# format: /<account name>/<container name>/<object name>
new_req = build_traversal_req(symlink_target)
if not config_true_value(
self._response_header_value(SYMLOOP_EXTEND)):
self._loop_count += 1
return self._recursive_get_head(new_req, target_etag=resp_etag,
orig_req=req)
else:
final_etag = self._response_header_value('etag')
if final_etag and target_etag and target_etag != final_etag:
# do *not* drain; we don't know how big this is
close_if_possible(resp)
body = ('Object Etag %r does not match '
'X-Symlink-Target-Etag header %r')
raise HTTPConflict(
body=body % (final_etag, target_etag),
headers={
'Content-Type': 'text/plain',
'Content-Location': self._last_target_path})
if self._last_target_path:
# Content-Location will be applied only when one or more
# symlink recursion occurred.
# In this case, Content-Location is applied to show which
# object path caused the error response.
# To preserve '%2F'(= quote('/')) in X-Symlink-Target
# header value as it is, Content-Location value comes from
# TGT_OBJ_SYMLINK_HDR, not req.path
self._response_headers.extend(
[('Content-Location', self._last_target_path)])
return resp
def _validate_etag_and_update_sysmeta(self, req, symlink_target_path,
etag):
if req.environ.get('swift.symlink_override'):
req.headers[TGT_ETAG_SYSMETA_SYMLINK_HDR] = etag
req.headers[TGT_BYTES_SYSMETA_SYMLINK_HDR] = \
req.headers[TGT_BYTES_SYMLINK_HDR]
return
# next we'll make sure the E-Tag matches a real object
new_req = make_subrequest(
req.environ, path=wsgi_quote(symlink_target_path), method='HEAD',
swift_source='SYM')
if req.allow_reserved_names:
new_req.headers['X-Backend-Allow-Reserved-Names'] = 'true'
self._last_target_path = symlink_target_path
resp = self._recursive_get_head(new_req, target_etag=etag,
follow_softlinks=False)
if self._get_status_int() == HTTP_NOT_FOUND:
raise HTTPConflict(
body='X-Symlink-Target does not exist',
request=req,
headers={
'Content-Type': 'text/plain',
'Content-Location': self._last_target_path})
if not is_success(self._get_status_int()):
drain_and_close(resp)
raise status_map[self._get_status_int()](request=req)
response_headers = HeaderKeyDict(self._response_headers)
# carry forward any etag update params (e.g. "slo_etag"), we'll append
# symlink_target_* params to this header after this method returns
override_header = get_container_update_override_key('etag')
if override_header in response_headers and \
override_header not in req.headers:
sep, params = response_headers[override_header].partition(';')[1:]
req.headers[override_header] = MD5_OF_EMPTY_STRING + sep + params
# It's troublesome that there's so much leakage with SLO
if 'X-Object-Sysmeta-Slo-Etag' in response_headers and \
override_header not in req.headers:
req.headers[override_header] = '%s; slo_etag=%s' % (
MD5_OF_EMPTY_STRING,
response_headers['X-Object-Sysmeta-Slo-Etag'])
req.headers[TGT_BYTES_SYSMETA_SYMLINK_HDR] = (
response_headers.get('x-object-sysmeta-slo-size') or
response_headers['Content-Length'])
req.headers[TGT_ETAG_SYSMETA_SYMLINK_HDR] = etag
if not req.headers.get('Content-Type'):
req.headers['Content-Type'] = response_headers['Content-Type']
def handle_put(self, req):
"""
Handle put request when it contains X-Symlink-Target header.
Symlink headers are validated and moved to sysmeta namespace.
:param req: HTTP PUT object request
:returns: Response Iterator
"""
if req.content_length is None:
has_body = (req.body_file.read(1) != b'')
else:
has_body = (req.content_length != 0)
if has_body:
raise HTTPBadRequest(
body='Symlink requests require a zero byte body',
request=req,
content_type='text/plain')
symlink_target_path, etag = _validate_and_prep_request_headers(req)
if etag:
self._validate_etag_and_update_sysmeta(
req, symlink_target_path, etag)
# N.B. TGT_ETAG_SYMLINK_HDR was converted as part of verifying it
symlink_usermeta_to_sysmeta(req.headers)
# Store info in container update that this object is a symlink.
# We have a design decision to use etag space to store symlink info for
# object listing because it's immutable unless the object is
overwritten. This may impact the downgrade scenario: the symlink
info can appear as a suffix in the hash value of the object
listing result for clients.
# To create override etag easily, we have a constraint that the symlink
# must be 0 byte so we can add etag of the empty string + symlink info
# here, simply (if no other override etag was provided). Note that this
# override etag may be encrypted in the container db by encryption
# middleware.
etag_override = [
req.headers.get(get_container_update_override_key('etag'),
MD5_OF_EMPTY_STRING),
'symlink_target=%s' % req.headers[TGT_OBJ_SYSMETA_SYMLINK_HDR]
]
if TGT_ACCT_SYSMETA_SYMLINK_HDR in req.headers:
etag_override.append(
'symlink_target_account=%s' %
req.headers[TGT_ACCT_SYSMETA_SYMLINK_HDR])
if TGT_ETAG_SYSMETA_SYMLINK_HDR in req.headers:
# if _validate_etag_and_update_sysmeta or a middleware sets
# TGT_ETAG_SYSMETA_SYMLINK_HDR then they need to also set
# TGT_BYTES_SYSMETA_SYMLINK_HDR. If they forget, they get a
# KeyError traceback and client gets a ServerError
etag_override.extend([
'symlink_target_etag=%s' %
req.headers[TGT_ETAG_SYSMETA_SYMLINK_HDR],
'symlink_target_bytes=%s' %
req.headers[TGT_BYTES_SYSMETA_SYMLINK_HDR],
])
req.headers[get_container_update_override_key('etag')] = \
'; '.join(etag_override)
return self._app_call(req.environ)
def handle_post(self, req):
"""
Handle post request. If POSTing to a symlink, a HTTPTemporaryRedirect
error message is returned to client.
Clients that POST to symlinks should understand that the POST is not
redirected to the target object like in a HEAD/GET request. POSTs to a
symlink will be handled just like a normal object by the object server.
It cannot reject it because it may not have symlink state when the POST
lands. The object server has no knowledge of what is a symlink object
is. On the other hand, on POST requests, the object server returns all
sysmeta of the object. This method uses that sysmeta to determine if
the stored object is a symlink or not.
:param req: HTTP POST object request
:raises: HTTPTemporaryRedirect if POSTing to a symlink.
:returns: Response Iterator
"""
if TGT_OBJ_SYMLINK_HDR in req.headers:
raise HTTPBadRequest(
body='A PUT request is required to set a symlink target',
request=req,
content_type='text/plain')
resp = self._app_call(req.environ)
if not is_success(self._get_status_int()):
return resp
tgt_co = self._response_header_value(TGT_OBJ_SYSMETA_SYMLINK_HDR)
if tgt_co:
version, account, _junk = req.split_path(2, 3, True)
target_acc = self._response_header_value(
TGT_ACCT_SYSMETA_SYMLINK_HDR) or wsgi_quote(account)
location_hdr = os.path.join(
'/', version, target_acc, tgt_co)
headers = {'location': location_hdr}
tgt_etag = self._response_header_value(
TGT_ETAG_SYSMETA_SYMLINK_HDR)
if tgt_etag:
headers[TGT_ETAG_SYMLINK_HDR] = tgt_etag
req.environ['swift.leave_relative_location'] = True
errmsg = 'The requested POST was applied to a symlink. POST ' +\
'directly to the target to apply requested metadata.'
for key, value in self._response_headers:
if key.lower().startswith('x-object-sysmeta-'):
headers[key] = value
raise HTTPTemporaryRedirect(
body=errmsg, headers=headers)
else:
return resp
def handle_object(self, req, start_response):
"""
Handle object requests.
:param req: a :class:`~swift.common.swob.Request`
:param start_response: start_response function
:returns: Response Iterator after start_response has been called
"""
if req.method in ('GET', 'HEAD'):
if req.params.get('symlink') == 'get':
resp = self.handle_get_head_symlink(req)
else:
resp = self.handle_get_head(req)
elif req.method == 'PUT' and (TGT_OBJ_SYMLINK_HDR in req.headers):
resp = self.handle_put(req)
elif req.method == 'POST':
resp = self.handle_post(req)
else:
# DELETE and OPTIONS reqs for a symlink and
# PUT reqs without X-Symlink-Target behave like any other object
resp = self._app_call(req.environ)
start_response(self._response_status, self._response_headers,
self._response_exc_info)
return resp
class SymlinkMiddleware(object):
"""
Middleware that implements symlinks.
Symlinks are objects stored in Swift that contain a reference to another
object (i.e., the target object). An important use case is to use a path in
one container to access an object in a different container, with a
different policy. This allows policy cost/performance trade-offs to be made
on individual objects.
"""
def __init__(self, app, conf, symloop_max):
self.app = app
self.conf = conf
self.logger = get_logger(self.conf, log_route='symlink')
self.symloop_max = symloop_max
def __call__(self, env, start_response):
req = Request(env)
try:
version, acc, cont, obj = req.split_path(3, 4, True)
is_cont_or_obj_req = True
except ValueError:
is_cont_or_obj_req = False
if not is_cont_or_obj_req:
return self.app(env, start_response)
try:
if obj:
# object context
context = SymlinkObjectContext(self.app, self.logger,
self.symloop_max)
return context.handle_object(req, start_response)
else:
# container context
context = SymlinkContainerContext(self.app, self.logger)
return context.handle_container(req, start_response)
except HTTPException as err_resp:
return err_resp(env, start_response)
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
symloop_max = int(conf.get('symloop_max', DEFAULT_SYMLOOP_MAX))
if symloop_max < 1:
symloop_max = int(DEFAULT_SYMLOOP_MAX)
register_swift_info('symlink', symloop_max=symloop_max, static_links=True)
def symlink_mw(app):
return SymlinkMiddleware(app, conf, symloop_max)
return symlink_mw
| swift-master | swift/common/middleware/symlink.py |
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from swift import gettext_ as _
from swift.common.swob import Request, HTTPServerError
from swift.common.utils import get_logger, generate_trans_id, close_if_possible
from swift.common.wsgi import WSGIContext
class BadResponseLength(Exception):
pass
def enforce_byte_count(inner_iter, nbytes):
"""
Enforces that inner_iter yields exactly <nbytes> bytes before
exhaustion.
If inner_iter fails to do so, BadResponseLength is raised.
:param inner_iter: iterable of bytestrings
:param nbytes: number of bytes expected
"""
try:
bytes_left = nbytes
for chunk in inner_iter:
if bytes_left >= len(chunk):
yield chunk
bytes_left -= len(chunk)
else:
yield chunk[:bytes_left]
raise BadResponseLength(
"Too many bytes; truncating after %d bytes "
"with at least %d surplus bytes remaining" % (
nbytes, len(chunk) - bytes_left))
if bytes_left:
raise BadResponseLength('Expected another %d bytes' % (
bytes_left,))
finally:
close_if_possible(inner_iter)
class CatchErrorsContext(WSGIContext):
def __init__(self, app, logger, trans_id_suffix=''):
super(CatchErrorsContext, self).__init__(app)
self.logger = logger
self.trans_id_suffix = trans_id_suffix
def handle_request(self, env, start_response):
trans_id_suffix = self.trans_id_suffix
trans_id_extra = env.get('HTTP_X_TRANS_ID_EXTRA')
if trans_id_extra:
trans_id_suffix += '-' + trans_id_extra[:32]
trans_id = generate_trans_id(trans_id_suffix)
env['swift.trans_id'] = trans_id
self.logger.txn_id = trans_id
try:
# catch any errors in the pipeline
resp = self._app_call(env)
except: # noqa
self.logger.exception(_('Error: An error occurred'))
resp = HTTPServerError(request=Request(env),
body=b'An error occurred',
content_type='text/plain')
resp.headers['X-Trans-Id'] = trans_id
resp.headers['X-Openstack-Request-Id'] = trans_id
return resp(env, start_response)
# If the app specified a Content-Length, enforce that it sends that
# many bytes.
#
# If an app gives too few bytes, then the client will wait for the
# remainder before sending another HTTP request on the same socket;
# since no more bytes are coming, this will result in either an
# infinite wait or a timeout. In this case, we want to raise an
# exception to signal to the WSGI server that it should close the
# TCP connection.
#
# If an app gives too many bytes, then we can deadlock with the
# client; if the client reads its N bytes and then sends a large-ish
# request (enough to fill TCP buffers), it'll block until we read
# some of the request. However, we won't read the request since
# we'll be trying to shove the rest of our oversized response out
# the socket. In that case, we truncate the response body at N bytes
# and raise an exception to stop any more bytes from being
# generated and also to kill the TCP connection.
if env['REQUEST_METHOD'] == 'HEAD':
resp = enforce_byte_count(resp, 0)
elif self._response_headers:
content_lengths = [val for header, val in self._response_headers
if header.lower() == "content-length"]
if len(content_lengths) == 1:
try:
content_length = int(content_lengths[0])
except ValueError:
pass
else:
resp = enforce_byte_count(resp, content_length)
# make sure the response has the trans_id
if self._response_headers is None:
self._response_headers = []
self._response_headers.append(('X-Trans-Id', trans_id))
self._response_headers.append(('X-Openstack-Request-Id', trans_id))
start_response(self._response_status, self._response_headers,
self._response_exc_info)
return resp
class CatchErrorMiddleware(object):
"""
Middleware that provides high-level error handling and ensures that a
transaction id will be set for every request.
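    A typical placement (illustrative only; adapt to your own pipeline) puts
    it at the very front of ``proxy-server.conf``::
        [pipeline:main]
        pipeline = catch_errors proxy-logging cache tempauth proxy-server
        [filter:catch_errors]
        use = egg:swift#catch_errors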
"""
def __init__(self, app, conf):
self.app = app
self.logger = get_logger(conf, log_route='catch-errors')
self.trans_id_suffix = conf.get('trans_id_suffix', '')
def __call__(self, env, start_response):
"""
If used, this should be the first middleware in pipeline.
"""
context = CatchErrorsContext(self.app,
self.logger,
self.trans_id_suffix)
return context.handle_request(env, start_response)
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
def except_filter(app):
return CatchErrorMiddleware(app, conf)
return except_filter
| swift-master | swift/common/middleware/catch_errors.py |
# Copyright (c) 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Middleware that will provide Dynamic Large Object (DLO) support.
---------------
Using ``swift``
---------------
The quickest way to try out this feature is to use the ``swift`` command-line tool
included with the `python-swiftclient`_ library. You can use the ``-S``
option to specify the segment size to use when splitting a large file. For
example::
swift upload test_container -S 1073741824 large_file
This would split the large_file into 1G segments and begin uploading those
segments in parallel. Once all the segments have been uploaded, ``swift`` will
then create the manifest file so the segments can be downloaded as one.
So now, the following ``swift`` command would download the entire large
object::
swift download test_container large_file
The ``swift`` command uses a strict convention for its segmented object
support. In the above example it will upload all the segments into a
second container named test_container_segments. These segments will
have names like large_file/1290206778.25/21474836480/00000000,
large_file/1290206778.25/21474836480/00000001, etc.
The main benefit of using a separate container is that the main container
listings will not be polluted with all the segment names. The reason for using
the segment name format of <name>/<timestamp>/<size>/<segment> is so that an
upload of a new file with the same name won't overwrite the contents of the
first until the last moment when the manifest file is updated.
``swift`` will manage these segment files for you, deleting old segments on
deletes and overwrites, etc. You can override this behavior with the
``--leave-segments`` option if desired; this is useful if you want to have
multiple versions of the same large object available.
.. _`python-swiftclient`: http://github.com/openstack/python-swiftclient
----------
Direct API
----------
You can also work with the segments and manifests directly with HTTP
requests instead of having ``swift`` do that for you. You can just
upload the segments like you would any other object and the manifest
is just a zero-byte (not enforced) file with an extra
``X-Object-Manifest`` header.
All the object segments need to be in the same container, have a common object
name prefix, and sort in the order in which they should be concatenated.
Object names are sorted lexicographically as UTF-8 byte strings.
They don't have to be in the same container as the manifest file, which
is useful for keeping container listings clean, as explained above with ``swift``.
The manifest file is simply a zero-byte (not enforced) file with the extra
``X-Object-Manifest: <container>/<prefix>`` header, where ``<container>`` is
the container the object segments are in and ``<prefix>`` is the common prefix
for all the segments.
It is best to upload all the segments first and then create or update the
manifest. In this way, the full object won't be available for downloading
until the upload is complete. Also, you can upload a new set of segments to
a second location and then update the manifest to point to this new location.
During the upload of the new segments, the original manifest will still be
available to download the first set of segments.
.. note::
When updating a manifest object using a POST request, a
``X-Object-Manifest`` header must be included for the object to
continue to behave as a manifest object.
The manifest file should have no content. However, this is not enforced.
If the manifest path itself matches the container/prefix specified in
``X-Object-Manifest`` and the manifest has some content/data in it, the
manifest is also considered a segment and its content will be part of the
concatenated GET response. The order of concatenation follows the usual
DLO logic: segments are concatenated in the order in which their names
sort.
Here's an example using ``curl`` with tiny 1-byte segments::
# First, upload the segments
curl -X PUT -H 'X-Auth-Token: <token>' \
http://<storage_url>/container/myobject/00000001 --data-binary '1'
curl -X PUT -H 'X-Auth-Token: <token>' \
http://<storage_url>/container/myobject/00000002 --data-binary '2'
curl -X PUT -H 'X-Auth-Token: <token>' \
http://<storage_url>/container/myobject/00000003 --data-binary '3'
# Next, create the manifest file
curl -X PUT -H 'X-Auth-Token: <token>' \
-H 'X-Object-Manifest: container/myobject/' \
http://<storage_url>/container/myobject --data-binary ''
# And now we can download the segments as a single object
curl -H 'X-Auth-Token: <token>' \
http://<storage_url>/container/myobject
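The same sequence can be driven from Python. Below is a minimal sketch using
only the standard library (the storage URL and token are placeholders)::
    import urllib.request
    def put(path, body=b'', headers=None):
        hdrs = {'X-Auth-Token': '<token>'}
        hdrs.update(headers or {})
        req = urllib.request.Request('http://<storage_url>' + path,
                                     data=body, method='PUT', headers=hdrs)
        return urllib.request.urlopen(req)
    # upload the segments first, then the zero-byte manifest
    for i, chunk in enumerate([b'1', b'2', b'3']):
        put('/container/myobject/%08d' % (i + 1), body=chunk)
    put('/container/myobject',
        headers={'X-Object-Manifest': 'container/myobject/'})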
"""
import json
import six
from swift.common import constraints
from swift.common.exceptions import ListingIterError, SegmentError
from swift.common.http import is_success
from swift.common.swob import Request, Response, HTTPException, \
HTTPRequestedRangeNotSatisfiable, HTTPBadRequest, HTTPConflict, \
str_to_wsgi, wsgi_to_str, wsgi_quote, wsgi_unquote, normalize_etag
from swift.common.utils import get_logger, \
RateLimitedIterator, quote, close_if_possible, closing_if_possible, \
drain_and_close, md5
from swift.common.request_helpers import SegmentedIterable, \
update_ignore_range_header
from swift.common.wsgi import WSGIContext, make_subrequest, load_app_config
class GetContext(WSGIContext):
def __init__(self, dlo, logger):
super(GetContext, self).__init__(dlo.app)
self.dlo = dlo
self.logger = logger
def _get_container_listing(self, req, version, account, container,
prefix, marker=''):
'''
        :param version: API version, as a native string
        :param account: account name, as a native string
        :param container: container name, as a native string
        :param prefix: object name prefix, as a native string
        :param marker: listing marker, as a native string
'''
con_req = make_subrequest(
req.environ,
path=wsgi_quote('/'.join([
'', str_to_wsgi(version),
str_to_wsgi(account), str_to_wsgi(container)])),
method='GET',
headers={'x-auth-token': req.headers.get('x-auth-token')},
agent=('%(orig)s ' + 'DLO MultipartGET'), swift_source='DLO')
con_req.query_string = 'prefix=%s' % quote(prefix)
if marker:
con_req.query_string += '&marker=%s' % quote(marker)
con_resp = con_req.get_response(self.dlo.app)
if not is_success(con_resp.status_int):
if req.method == 'HEAD':
con_resp.body = b''
return con_resp, None
with closing_if_possible(con_resp.app_iter):
return None, json.loads(b''.join(con_resp.app_iter))
def _segment_listing_iterator(self, req, version, account, container,
prefix, segments, first_byte=None,
last_byte=None):
'''
:param req: upstream request
        :param version: native string
        :param account: native string
        :param container: native string
        :param prefix: native string
:param segments: array of dicts, with native strings
:param first_byte: number
:param last_byte: number
'''
# It's sort of hokey that this thing takes in the first page of
# segments as an argument, but we need to compute the etag and content
# length from the first page, and it's better to have a hokey
# interface than to make redundant requests.
if first_byte is None:
first_byte = 0
if last_byte is None:
last_byte = float("inf")
while True:
for segment in segments:
seg_length = int(segment['bytes'])
if first_byte >= seg_length:
# don't need any bytes from this segment
first_byte = max(first_byte - seg_length, -1)
last_byte = max(last_byte - seg_length, -1)
continue
elif last_byte < 0:
# no bytes are needed from this or any future segment
break
seg_name = segment['name']
if six.PY2:
seg_name = seg_name.encode("utf-8")
# We deliberately omit the etag and size here;
# SegmentedIterable will check size and etag if
# specified, but we don't want it to. DLOs only care
# that the objects' names match the specified prefix.
# SegmentedIterable will instead check that the data read
# from each segment matches the response headers.
_path = "/".join(["", version, account, container, seg_name])
_first = None if first_byte <= 0 else first_byte
_last = None if last_byte >= seg_length - 1 else last_byte
yield {
'path': _path,
'first_byte': _first,
'last_byte': _last
}
first_byte = max(first_byte - seg_length, -1)
last_byte = max(last_byte - seg_length, -1)
if len(segments) < constraints.CONTAINER_LISTING_LIMIT:
# a short page means that we're done with the listing
break
elif last_byte < 0:
break
marker = segments[-1]['name']
error_response, segments = self._get_container_listing(
req, version, account, container, prefix, marker)
if error_response:
# we've already started sending the response body to the
# client, so all we can do is raise an exception to make the
# WSGI server close the connection early
close_if_possible(error_response.app_iter)
raise ListingIterError(
"Got status %d listing container /%s/%s" %
(error_response.status_int, account, container))
def get_or_head_response(self, req, x_object_manifest):
'''
:param req: user's request
:param x_object_manifest: as unquoted, native string
'''
response_headers = self._response_headers
container, obj_prefix = x_object_manifest.split('/', 1)
version, account, _junk = req.split_path(2, 3, True)
version = wsgi_to_str(version)
account = wsgi_to_str(account)
error_response, segments = self._get_container_listing(
req, version, account, container, obj_prefix)
if error_response:
return error_response
have_complete_listing = len(segments) < \
constraints.CONTAINER_LISTING_LIMIT
first_byte = last_byte = None
actual_content_length = None
content_length_for_swob_range = None
if req.range and len(req.range.ranges) == 1:
content_length_for_swob_range = sum(o['bytes'] for o in segments)
# This is a hack to handle suffix byte ranges (e.g. "bytes=-5"),
# which we can't honor unless we have a complete listing.
_junk, range_end = req.range.ranges_for_length(float("inf"))[0]
# If this is all the segments, we know whether or not this
# range request is satisfiable.
#
# Alternately, we may not have all the segments, but this range
# falls entirely within the first page's segments, so we know
# that it is satisfiable.
if (have_complete_listing
or range_end < content_length_for_swob_range):
byteranges = req.range.ranges_for_length(
content_length_for_swob_range)
if not byteranges:
headers = {'Accept-Ranges': 'bytes'}
if have_complete_listing:
headers['Content-Range'] = 'bytes */%d' % (
content_length_for_swob_range, )
return HTTPRequestedRangeNotSatisfiable(
request=req, headers=headers)
first_byte, last_byte = byteranges[0]
# For some reason, swob.Range.ranges_for_length adds 1 to the
# last byte's position.
last_byte -= 1
actual_content_length = last_byte - first_byte + 1
else:
# The range may or may not be satisfiable, but we can't tell
# based on just one page of listing, and we're not going to go
# get more pages because that would use up too many resources,
# so we ignore the Range header and return the whole object.
actual_content_length = None
content_length_for_swob_range = None
req.range = None
else:
req.range = None
response_headers = [
(h, v) for h, v in response_headers
if h.lower() not in ("content-length", "content-range")]
if content_length_for_swob_range is not None:
# Here, we have to give swob a big-enough content length so that
# it can compute the actual content length based on the Range
# header. This value will not be visible to the client; swob will
# substitute its own Content-Length.
#
# Note: if the manifest points to at least CONTAINER_LISTING_LIMIT
# segments, this may be less than the sum of all the segments'
# sizes. However, it'll still be greater than the last byte in the
# Range header, so it's good enough for swob.
response_headers.append(('Content-Length',
str(content_length_for_swob_range)))
elif have_complete_listing:
actual_content_length = sum(o['bytes'] for o in segments)
response_headers.append(('Content-Length',
str(actual_content_length)))
if have_complete_listing:
response_headers = [(h, v) for h, v in response_headers
if h.lower() != "etag"]
etag = md5(usedforsecurity=False)
for seg_dict in segments:
etag.update(normalize_etag(seg_dict['hash']).encode('utf8'))
response_headers.append(('Etag', '"%s"' % etag.hexdigest()))
app_iter = None
if req.method == 'GET':
listing_iter = RateLimitedIterator(
self._segment_listing_iterator(
req, version, account, container, obj_prefix, segments,
first_byte=first_byte, last_byte=last_byte),
self.dlo.rate_limit_segments_per_sec,
limit_after=self.dlo.rate_limit_after_segment)
app_iter = SegmentedIterable(
req, self.dlo.app, listing_iter, ua_suffix="DLO MultipartGET",
swift_source="DLO", name=req.path, logger=self.logger,
max_get_time=self.dlo.max_get_time,
response_body_length=actual_content_length)
try:
app_iter.validate_first_segment()
except HTTPException as err_resp:
return err_resp
except (SegmentError, ListingIterError):
return HTTPConflict(request=req)
resp = Response(request=req, headers=response_headers,
conditional_response=True,
app_iter=app_iter)
return resp
def handle_request(self, req, start_response):
"""
Take a GET or HEAD request, and if it is for a dynamic large object
manifest, return an appropriate response.
Otherwise, simply pass it through.
"""
update_ignore_range_header(req, 'X-Object-Manifest')
resp_iter = self._app_call(req.environ)
# make sure this response is for a dynamic large object manifest
for header, value in self._response_headers:
if (header.lower() == 'x-object-manifest'):
content_length = self._response_header_value('content-length')
if content_length is not None and int(content_length) < 1024:
# Go ahead and consume small bodies
drain_and_close(resp_iter)
close_if_possible(resp_iter)
response = self.get_or_head_response(
req, wsgi_to_str(wsgi_unquote(value)))
return response(req.environ, start_response)
# Not a dynamic large object manifest; just pass it through.
start_response(self._response_status,
self._response_headers,
self._response_exc_info)
return resp_iter
class DynamicLargeObject(object):
def __init__(self, app, conf):
self.app = app
self.logger = get_logger(conf, log_route='dlo')
# DLO functionality used to live in the proxy server, not middleware,
# so let's try to go find config values in the proxy's config section
# to ease cluster upgrades.
self._populate_config_from_old_location(conf)
self.max_get_time = int(conf.get('max_get_time', '86400'))
self.rate_limit_after_segment = int(conf.get(
'rate_limit_after_segment', '10'))
self.rate_limit_segments_per_sec = int(conf.get(
'rate_limit_segments_per_sec', '1'))
def _populate_config_from_old_location(self, conf):
if ('rate_limit_after_segment' in conf or
'rate_limit_segments_per_sec' in conf or
'max_get_time' in conf or
'__file__' not in conf):
return
proxy_conf = load_app_config(conf['__file__'])
for setting in ('rate_limit_after_segment',
'rate_limit_segments_per_sec',
'max_get_time'):
if setting in proxy_conf:
conf[setting] = proxy_conf[setting]
def __call__(self, env, start_response):
"""
WSGI entry point
"""
req = Request(env)
try:
vrs, account, container, obj = req.split_path(4, 4, True)
is_obj_req = True
except ValueError:
is_obj_req = False
if not is_obj_req:
return self.app(env, start_response)
if ((req.method == 'GET' or req.method == 'HEAD') and
req.params.get('multipart-manifest') != 'get'):
return GetContext(self, self.logger).\
handle_request(req, start_response)
elif req.method == 'PUT':
error_response = self._validate_x_object_manifest_header(req)
if error_response:
return error_response(env, start_response)
return self.app(env, start_response)
def _validate_x_object_manifest_header(self, req):
"""
Make sure that X-Object-Manifest is valid if present.
"""
if 'X-Object-Manifest' in req.headers:
value = req.headers['X-Object-Manifest']
container = prefix = None
try:
container, prefix = value.split('/', 1)
except ValueError:
pass
if not container or not prefix or '?' in value or '&' in value or \
prefix.startswith('/'):
return HTTPBadRequest(
request=req,
body=('X-Object-Manifest must be in the '
'format container/prefix'))
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
def dlo_filter(app):
return DynamicLargeObject(app, conf)
return dlo_filter
| swift-master | swift/common/middleware/dlo.py |
# Copyright (c) 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Middleware that will perform many operations on a single request.
---------------
Extract Archive
---------------
Expand tar files into a Swift account. Request must be a PUT with the
query parameter ``?extract-archive=format`` specifying the format of archive
file. Accepted formats are tar, tar.gz, and tar.bz2.
For a PUT to the following url::
/v1/AUTH_Account/$UPLOAD_PATH?extract-archive=tar.gz
UPLOAD_PATH is where the files will be expanded to. UPLOAD_PATH can be a
container, a pseudo-directory within a container, or an empty string. The
destination of a file in the archive will be built as follows::
/v1/AUTH_Account/$UPLOAD_PATH/$FILE_PATH
Where FILE_PATH is the file name from the listing in the tar file.
If the UPLOAD_PATH is an empty string, containers will be auto created
accordingly and files in the tar that would not map to any container (files
in the base directory) will be ignored.
Only regular files will be uploaded. Empty directories, symlinks, etc. will
not be uploaded.
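As an illustration, a tarball can be built and uploaded for extraction from
Python with only the standard library; the account path and token below are
placeholders, mirroring the curl example further down::
    import io
    import tarfile
    import urllib.request
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode='w') as tar:
        tar.add('local_dir')  # archived paths become objects under cont/
    req = urllib.request.Request(
        'http://127.0.0.1/v1/AUTH_acc/cont?extract-archive=tar',
        data=buf.getvalue(), method='PUT',
        headers={'X-Auth-Token': 'xxx',
                 'Content-Type': 'application/x-tar',
                 'X-Detect-Content-Type': 'true'})
    print(urllib.request.urlopen(req).read())  # parse for per-file results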
------------
Content Type
------------
If the content-type header is set in the extract-archive call, Swift will
assign that content-type to all the underlying files. The bulk middleware
will extract the archive file and send the internal files using PUT
operations using the same headers from the original request
(e.g. auth-tokens, content-Type, etc.). Notice that any middleware call
that follows the bulk middleware does not know if this was a bulk request
or if these were individual requests sent by the user.
In order to make Swift detect the content-type for the files based on the
file extension, the content-type in the extract-archive call should not be
set. Alternatively, it is possible to explicitly tell Swift to detect the
content type using this header::
X-Detect-Content-Type: true
For example::
curl -X PUT http://127.0.0.1/v1/AUTH_acc/cont/$?extract-archive=tar
-T backup.tar
-H "Content-Type: application/x-tar"
-H "X-Auth-Token: xxx"
-H "X-Detect-Content-Type: true"
------------------
Assigning Metadata
------------------
The tar file format (1) allows for UTF-8 key/value pairs to be associated
with each file in an archive. If a file has extended attributes, then tar
will store those as key/value pairs. The bulk middleware can read those
extended attributes and convert them to Swift object metadata. Attributes
starting with "user.meta" are converted to object metadata, and
"user.mime_type" is converted to Content-Type.
For example::
setfattr -n user.mime_type -v "application/python-setup" setup.py
setfattr -n user.meta.lunch -v "burger and fries" setup.py
setfattr -n user.meta.dinner -v "baked ziti" setup.py
setfattr -n user.stuff -v "whee" setup.py
Will get translated to headers::
Content-Type: application/python-setup
X-Object-Meta-Lunch: burger and fries
X-Object-Meta-Dinner: baked ziti
The bulk middleware will handle xattrs stored by both GNU and BSD tar (2).
Only xattrs ``user.mime_type`` and ``user.meta.*`` are processed. Other
attributes are ignored.
In addition to the extended attributes, the object metadata and the
x-delete-at/x-delete-after headers set in the request are also assigned to the
extracted objects.
Notes:
(1) The POSIX 1003.1-2001 (pax) format. The default format on GNU tar
1.27.1 or later.
(2) Even with pax-format tarballs, different encoders store xattrs slightly
differently; for example, GNU tar stores the xattr "user.userattribute" as
pax header "SCHILY.xattr.user.userattribute", while BSD tar (which uses
libarchive) stores it as "LIBARCHIVE.xattr.user.userattribute".
--------
Response
--------
The response from bulk operations functions differently from other Swift
responses. This is because a short request body sent from the client could
result in many operations on the proxy server and precautions need to be
taken to prevent the request from timing out due to lack of activity. To
this end, the client will always receive a 200 OK response, regardless of
the actual success of the call. The body of the response must be parsed to
determine the actual success of the operation. In addition to this the
client may receive zero or more whitespace characters prepended to the
actual response body while the proxy server is completing the request.
The format of the response body defaults to text/plain but can be either
json or xml depending on the ``Accept`` header. Acceptable formats are
``text/plain``, ``application/json``, ``application/xml``, and ``text/xml``.
An example body is as follows::
{"Response Status": "201 Created",
"Response Body": "",
"Errors": [],
"Number Files Created": 10}
If all valid files were uploaded successfully the Response Status will be
201 Created. If any files failed to be created the response code
corresponds to the subrequest's error. Possible codes are 400, 401, 502 (on
server errors), etc. In both cases the response body will specify the
number of files successfully uploaded and a list of the files that failed.
There are proxy logs created for each file (which becomes a subrequest) in
the tar. The subrequest's proxy log will have a swift.source set to "EA";
the log's content length will reflect the unzipped size of the file. If
double proxy-logging is used the leftmost logger will not have a
swift.source set and the content length will reflect the size of the
payload sent to the proxy (the unexpanded size of the tar.gz).
-----------
Bulk Delete
-----------
Will delete multiple objects or containers from their account with a
single request. Responds to POST requests with query parameter
``?bulk-delete`` set. The request url is your storage url. The Content-Type
should be set to ``text/plain``. The body of the POST request will be a
newline separated list of url encoded objects to delete. You can delete
10,000 (configurable) objects per request. The objects specified in the
POST request body must be URL encoded and in the form::
/container_name/obj_name
or for a container (which must be empty at time of delete)::
/container_name
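For example, such a request can be assembled with only the standard library,
as in this minimal sketch (the account path and token are placeholders)::
    import urllib.parse
    import urllib.request
    names = ['/cont/obj one', '/cont/obj two', '/empty_container']
    body = '\n'.join(urllib.parse.quote(n) for n in names).encode('ascii')
    req = urllib.request.Request(
        'http://127.0.0.1/v1/AUTH_acc?bulk-delete',
        data=body, method='POST',
        headers={'X-Auth-Token': 'xxx', 'Content-Type': 'text/plain'})
    print(urllib.request.urlopen(req).read())  # parse for per-item results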
The response is similar to extract archive: every response will be a
200 OK and you must parse the response body for actual results. An example
response is::
{"Number Not Found": 0,
"Response Status": "200 OK",
"Response Body": "",
"Errors": [],
"Number Deleted": 6}
If all items were successfully deleted (or did not exist), the Response
Status will be 200 OK. If any failed to delete, the response code
corresponds to the subrequest's error. Possible codes are 400, 401, 502 (on
server errors), etc. In all cases the response body will specify the number
of items successfully deleted, not found, and a list of those that failed.
The return body will be formatted in the way specified in the request's
``Accept`` header. Acceptable formats are ``text/plain``, ``application/json``,
``application/xml``, and ``text/xml``.
There are proxy logs created for each object or container (which becomes a
subrequest) that is deleted. The subrequest's proxy log will have a
swift.source set to "BD" and a content length of 0. If double
proxy-logging is used the leftmost logger will not have a
swift.source set and the content length will reflect the size of the
payload sent to the proxy (the list of objects/containers to be deleted).
"""
import json
import six
import tarfile
from xml.sax import saxutils
from time import time
from eventlet import sleep
import zlib
from swift.common.swob import Request, HTTPBadGateway, \
HTTPCreated, HTTPBadRequest, HTTPNotFound, HTTPUnauthorized, HTTPOk, \
HTTPPreconditionFailed, HTTPRequestEntityTooLarge, HTTPNotAcceptable, \
HTTPLengthRequired, HTTPException, HTTPServerError, wsgify, \
bytes_to_wsgi, str_to_wsgi, wsgi_unquote, wsgi_quote, wsgi_to_str
from swift.common.utils import get_logger, StreamingPile
from swift.common.registry import register_swift_info
from swift.common import constraints
from swift.common.http import HTTP_UNAUTHORIZED, HTTP_NOT_FOUND, HTTP_CONFLICT
from swift.common.request_helpers import is_user_meta
from swift.common.wsgi import make_subrequest
class CreateContainerError(Exception):
def __init__(self, msg, status_int, status):
self.status_int = status_int
self.status = status
super(CreateContainerError, self).__init__(msg)
ACCEPTABLE_FORMATS = ['text/plain', 'application/json', 'application/xml',
'text/xml']
def get_response_body(data_format, data_dict, error_list, root_tag):
"""
Returns a properly formatted response body according to format.
Handles json and xml, otherwise will return text/plain.
Note: xml response does not include xml declaration.
    :param data_format: resulting format
    :param data_dict: generated data about results.
    :param error_list: list of quoted filenames that failed
    :param root_tag: the tag name to use for root elements when returning XML;
e.g. 'extract' or 'delete'
"""
if data_format == 'application/json':
data_dict['Errors'] = error_list
return json.dumps(data_dict).encode('ascii')
if data_format and data_format.endswith('/xml'):
output = ['<', root_tag, '>\n']
for key in sorted(data_dict):
xml_key = key.replace(' ', '_').lower()
output.extend([
'<', xml_key, '>',
saxutils.escape(str(data_dict[key])),
'</', xml_key, '>\n',
])
output.append('<errors>\n')
for name, status in error_list:
output.extend([
'<object><name>', saxutils.escape(name), '</name><status>',
saxutils.escape(status), '</status></object>\n',
])
output.extend(['</errors>\n</', root_tag, '>\n'])
if six.PY2:
return ''.join(output)
return ''.join(output).encode('utf-8')
output = []
for key in sorted(data_dict):
output.append('%s: %s\n' % (key, data_dict[key]))
output.append('Errors:\n')
output.extend(
'%s, %s\n' % (name, status)
for name, status in error_list)
if six.PY2:
return ''.join(output)
return ''.join(output).encode('utf-8')
def pax_key_to_swift_header(pax_key):
if (pax_key == u"SCHILY.xattr.user.mime_type" or
pax_key == u"LIBARCHIVE.xattr.user.mime_type"):
return "Content-Type"
elif pax_key.startswith(u"SCHILY.xattr.user.meta."):
useful_part = pax_key[len(u"SCHILY.xattr.user.meta."):]
if six.PY2:
return "X-Object-Meta-" + useful_part.encode("utf-8")
return str_to_wsgi("X-Object-Meta-" + useful_part)
elif pax_key.startswith(u"LIBARCHIVE.xattr.user.meta."):
useful_part = pax_key[len(u"LIBARCHIVE.xattr.user.meta."):]
if six.PY2:
return "X-Object-Meta-" + useful_part.encode("utf-8")
return str_to_wsgi("X-Object-Meta-" + useful_part)
else:
# You can get things like atime/mtime/ctime or filesystem ACLs in
# pax headers; those aren't really user metadata. The same goes for
# other, non-user metadata.
return None
class Bulk(object):
def __init__(self, app, conf, max_containers_per_extraction=10000,
max_failed_extractions=1000, max_deletes_per_request=10000,
max_failed_deletes=1000, yield_frequency=10,
delete_concurrency=2, retry_count=0, retry_interval=1.5,
logger=None):
self.app = app
self.logger = logger or get_logger(conf, log_route='bulk')
self.max_containers = max_containers_per_extraction
self.max_failed_extractions = max_failed_extractions
self.max_failed_deletes = max_failed_deletes
self.max_deletes_per_request = max_deletes_per_request
self.yield_frequency = yield_frequency
self.delete_concurrency = min(1000, max(1, delete_concurrency))
self.retry_count = retry_count
self.retry_interval = retry_interval
self.max_path_length = constraints.MAX_OBJECT_NAME_LENGTH \
+ constraints.MAX_CONTAINER_NAME_LENGTH + 2
def create_container(self, req, container_path):
"""
Checks if the container exists and if not try to create it.
        :param container_path: an unquoted path to a container to be created
:returns: True if created container, False if container exists
:raises CreateContainerError: when unable to create container
"""
head_cont_req = make_subrequest(
req.environ, method='HEAD', path=wsgi_quote(container_path),
headers={'X-Auth-Token': req.headers.get('X-Auth-Token')},
swift_source='EA')
resp = head_cont_req.get_response(self.app)
if resp.is_success:
return False
if resp.status_int == HTTP_NOT_FOUND:
create_cont_req = make_subrequest(
req.environ, method='PUT', path=wsgi_quote(container_path),
headers={'X-Auth-Token': req.headers.get('X-Auth-Token')},
swift_source='EA')
resp = create_cont_req.get_response(self.app)
if resp.is_success:
return True
raise CreateContainerError(
"Create Container Failed: " + container_path,
resp.status_int, resp.status)
def get_objs_to_delete(self, req):
"""
Will populate objs_to_delete with data from request input.
        :param req: a Swob request
:returns: a list of the contents of req.body when separated by newline.
:raises HTTPException: on failures
"""
line = b''
data_remaining = True
objs_to_delete = []
if req.content_length is None and \
req.headers.get('transfer-encoding', '').lower() != 'chunked':
raise HTTPLengthRequired(request=req)
while data_remaining:
if b'\n' in line:
obj_to_delete, line = line.split(b'\n', 1)
if six.PY2:
obj_to_delete = wsgi_unquote(obj_to_delete.strip())
else:
# yeah, all this chaining is pretty terrible...
# but it gets even worse trying to use UTF-8 and
# errors='surrogateescape' when dealing with terrible
# input like b'\xe2%98\x83'
obj_to_delete = wsgi_to_str(wsgi_unquote(
bytes_to_wsgi(obj_to_delete.strip())))
objs_to_delete.append({'name': obj_to_delete})
else:
data = req.body_file.read(self.max_path_length)
if data:
line += data
else:
data_remaining = False
if six.PY2:
obj_to_delete = wsgi_unquote(line.strip())
else:
obj_to_delete = wsgi_to_str(wsgi_unquote(
bytes_to_wsgi(line.strip())))
if obj_to_delete:
objs_to_delete.append({'name': obj_to_delete})
if len(objs_to_delete) > self.max_deletes_per_request:
raise HTTPRequestEntityTooLarge(
'Maximum Bulk Deletes: %d per request' %
self.max_deletes_per_request)
if len(line) > self.max_path_length * 2:
raise HTTPBadRequest('Invalid File Name')
return objs_to_delete
def handle_delete_iter(self, req, objs_to_delete=None,
user_agent='BulkDelete', swift_source='BD',
out_content_type='text/plain'):
"""
A generator that can be assigned to a swob Response's app_iter which,
when iterated over, will delete the objects specified in request body.
Will occasionally yield whitespace while request is being processed.
When the request is completed will yield a response body that can be
parsed to determine success. See above documentation for details.
        :param req: a swob Request
        :param objs_to_delete: a list of dictionaries that specifies the
(native string) objects to be deleted. If None, uses
self.get_objs_to_delete to query request.
"""
last_yield = time()
if out_content_type and out_content_type.endswith('/xml'):
to_yield = b'<?xml version="1.0" encoding="UTF-8"?>\n'
else:
to_yield = b' '
separator = b''
failed_files = []
resp_dict = {'Response Status': HTTPOk().status,
'Response Body': '',
'Number Deleted': 0,
'Number Not Found': 0}
req.environ['eventlet.minimum_write_chunk_size'] = 0
try:
if not out_content_type:
raise HTTPNotAcceptable(request=req)
try:
vrs, account, _junk = req.split_path(2, 3, True)
except ValueError:
raise HTTPNotFound(request=req)
vrs = wsgi_to_str(vrs)
account = wsgi_to_str(account)
incoming_format = req.headers.get('Content-Type')
if incoming_format and \
not incoming_format.startswith('text/plain'):
# For now only accept newline separated object names
raise HTTPNotAcceptable(request=req)
if objs_to_delete is None:
objs_to_delete = self.get_objs_to_delete(req)
failed_file_response = {'type': HTTPBadRequest}
def delete_filter(predicate, objs_to_delete):
for obj_to_delete in objs_to_delete:
obj_name = obj_to_delete['name']
if not obj_name:
continue
if not predicate(obj_name):
continue
if obj_to_delete.get('error'):
if obj_to_delete['error']['code'] == HTTP_NOT_FOUND:
resp_dict['Number Not Found'] += 1
else:
failed_files.append([
wsgi_quote(str_to_wsgi(obj_name)),
obj_to_delete['error']['message']])
continue
delete_path = '/'.join(['', vrs, account,
obj_name.lstrip('/')])
if not constraints.check_utf8(delete_path):
failed_files.append([wsgi_quote(str_to_wsgi(obj_name)),
HTTPPreconditionFailed().status])
continue
yield (obj_name, delete_path,
obj_to_delete.get('version_id'))
def objs_then_containers(objs_to_delete):
# process all objects first
yield delete_filter(lambda name: '/' in name.strip('/'),
objs_to_delete)
# followed by containers
yield delete_filter(lambda name: '/' not in name.strip('/'),
objs_to_delete)
def do_delete(obj_name, delete_path, version_id):
delete_obj_req = make_subrequest(
req.environ, method='DELETE',
path=wsgi_quote(str_to_wsgi(delete_path)),
headers={'X-Auth-Token': req.headers.get('X-Auth-Token')},
body='', agent='%(orig)s ' + user_agent,
swift_source=swift_source)
if version_id is None:
delete_obj_req.params = {}
else:
delete_obj_req.params = {'version-id': version_id}
return (delete_obj_req.get_response(self.app), obj_name, 0)
with StreamingPile(self.delete_concurrency) as pile:
for names_to_delete in objs_then_containers(objs_to_delete):
for resp, obj_name, retry in pile.asyncstarmap(
do_delete, names_to_delete):
if last_yield + self.yield_frequency < time():
last_yield = time()
yield to_yield
to_yield, separator = b' ', b'\r\n\r\n'
self._process_delete(resp, pile, obj_name,
resp_dict, failed_files,
failed_file_response, retry)
if len(failed_files) >= self.max_failed_deletes:
# Abort, but drain off the in-progress deletes
for resp, obj_name, retry in pile:
if last_yield + self.yield_frequency < time():
last_yield = time()
yield to_yield
to_yield, separator = b' ', b'\r\n\r\n'
# Don't pass in the pile, as we shouldn't retry
self._process_delete(
resp, None, obj_name, resp_dict,
failed_files, failed_file_response, retry)
msg = 'Max delete failures exceeded'
raise HTTPBadRequest(msg)
if failed_files:
resp_dict['Response Status'] = \
failed_file_response['type']().status
elif not (resp_dict['Number Deleted'] or
resp_dict['Number Not Found']):
resp_dict['Response Status'] = HTTPBadRequest().status
resp_dict['Response Body'] = 'Invalid bulk delete.'
except HTTPException as err:
resp_dict['Response Status'] = err.status
resp_dict['Response Body'] = err.body.decode('utf-8')
except Exception:
self.logger.exception('Error in bulk delete.')
resp_dict['Response Status'] = HTTPServerError().status
yield separator + get_response_body(out_content_type,
resp_dict, failed_files, 'delete')
def handle_extract_iter(self, req, compress_type,
out_content_type='text/plain'):
"""
A generator that can be assigned to a swob Response's app_iter which,
when iterated over, will extract and PUT the objects pulled from the
request body. Will occasionally yield whitespace while request is being
processed. When the request is completed will yield a response body
that can be parsed to determine success. See above documentation for
details.
        :param req: a swob Request
        :param compress_type: specifies the compression type of the tar.
Accepts '', 'gz', or 'bz2'
"""
resp_dict = {'Response Status': HTTPCreated().status,
'Response Body': '', 'Number Files Created': 0}
failed_files = []
last_yield = time()
if out_content_type and out_content_type.endswith('/xml'):
to_yield = b'<?xml version="1.0" encoding="UTF-8"?>\n'
else:
to_yield = b' '
separator = b''
containers_accessed = set()
req.environ['eventlet.minimum_write_chunk_size'] = 0
try:
if not out_content_type:
raise HTTPNotAcceptable(request=req)
if req.content_length is None and \
req.headers.get('transfer-encoding',
'').lower() != 'chunked':
raise HTTPLengthRequired(request=req)
try:
vrs, account, extract_base = req.split_path(2, 3, True)
except ValueError:
raise HTTPNotFound(request=req)
extract_base = extract_base or ''
extract_base = extract_base.rstrip('/')
tar = tarfile.open(mode='r|' + compress_type,
fileobj=req.body_file)
failed_response_type = HTTPBadRequest
containers_created = 0
while True:
if last_yield + self.yield_frequency < time():
last_yield = time()
yield to_yield
to_yield, separator = b' ', b'\r\n\r\n'
tar_info = tar.next()
if tar_info is None or \
len(failed_files) >= self.max_failed_extractions:
break
if tar_info.isfile():
obj_path = tar_info.name
if not six.PY2:
obj_path = obj_path.encode('utf-8', 'surrogateescape')
obj_path = bytes_to_wsgi(obj_path)
if obj_path.startswith('./'):
obj_path = obj_path[2:]
obj_path = obj_path.lstrip('/')
if extract_base:
obj_path = extract_base + '/' + obj_path
if '/' not in obj_path:
continue # ignore base level file
destination = '/'.join(
['', vrs, account, obj_path])
container = obj_path.split('/', 1)[0]
if not constraints.check_utf8(wsgi_to_str(destination)):
failed_files.append(
[wsgi_quote(obj_path[:self.max_path_length]),
HTTPPreconditionFailed().status])
continue
if tar_info.size > constraints.MAX_FILE_SIZE:
failed_files.append([
wsgi_quote(obj_path[:self.max_path_length]),
HTTPRequestEntityTooLarge().status])
continue
container_failure = None
if container not in containers_accessed:
cont_path = '/'.join(['', vrs, account, container])
try:
if self.create_container(req, cont_path):
containers_created += 1
if containers_created > self.max_containers:
raise HTTPBadRequest(
'More than %d containers to create '
'from tar.' % self.max_containers)
except CreateContainerError as err:
# the object PUT to this container still may
# succeed if acls are set
container_failure = [
wsgi_quote(cont_path[:self.max_path_length]),
err.status]
if err.status_int == HTTP_UNAUTHORIZED:
raise HTTPUnauthorized(request=req)
except ValueError:
failed_files.append([
wsgi_quote(obj_path[:self.max_path_length]),
HTTPBadRequest().status])
continue
tar_file = tar.extractfile(tar_info)
create_headers = {
'Content-Length': tar_info.size,
'X-Auth-Token': req.headers.get('X-Auth-Token'),
}
# Copy some whitelisted headers to the subrequest
for k, v in req.headers.items():
if ((k.lower() in ('x-delete-at', 'x-delete-after'))
or is_user_meta('object', k)):
create_headers[k] = v
create_obj_req = make_subrequest(
req.environ, method='PUT',
path=wsgi_quote(destination),
headers=create_headers,
agent='%(orig)s BulkExpand', swift_source='EA')
create_obj_req.environ['wsgi.input'] = tar_file
for pax_key, pax_value in tar_info.pax_headers.items():
header_name = pax_key_to_swift_header(pax_key)
if header_name:
# Both pax_key and pax_value are unicode
# strings; the key is already UTF-8 encoded, but
# we still have to encode the value.
create_obj_req.headers[header_name] = \
pax_value.encode("utf-8")
resp = create_obj_req.get_response(self.app)
containers_accessed.add(container)
if resp.is_success:
resp_dict['Number Files Created'] += 1
else:
if container_failure:
failed_files.append(container_failure)
if resp.status_int == HTTP_UNAUTHORIZED:
failed_files.append([
wsgi_quote(obj_path[:self.max_path_length]),
HTTPUnauthorized().status])
raise HTTPUnauthorized(request=req)
if resp.status_int // 100 == 5:
failed_response_type = HTTPBadGateway
failed_files.append([
wsgi_quote(obj_path[:self.max_path_length]),
resp.status])
if failed_files:
resp_dict['Response Status'] = failed_response_type().status
elif not resp_dict['Number Files Created']:
resp_dict['Response Status'] = HTTPBadRequest().status
resp_dict['Response Body'] = 'Invalid Tar File: No Valid Files'
except HTTPException as err:
resp_dict['Response Status'] = err.status
resp_dict['Response Body'] = err.body.decode('utf-8')
except (tarfile.TarError, zlib.error) as tar_error:
resp_dict['Response Status'] = HTTPBadRequest().status
resp_dict['Response Body'] = 'Invalid Tar File: %s' % tar_error
except Exception:
self.logger.exception('Error in extract archive.')
resp_dict['Response Status'] = HTTPServerError().status
yield separator + get_response_body(
out_content_type, resp_dict, failed_files, 'extract')
def _process_delete(self, resp, pile, obj_name, resp_dict,
failed_files, failed_file_response, retry=0):
if resp.status_int // 100 == 2:
resp_dict['Number Deleted'] += 1
elif resp.status_int == HTTP_NOT_FOUND:
resp_dict['Number Not Found'] += 1
elif resp.status_int == HTTP_UNAUTHORIZED:
failed_files.append([wsgi_quote(str_to_wsgi(obj_name)),
HTTPUnauthorized().status])
elif resp.status_int == HTTP_CONFLICT and pile and \
self.retry_count > 0 and self.retry_count > retry:
retry += 1
sleep(self.retry_interval ** retry)
delete_obj_req = Request.blank(resp.environ['PATH_INFO'],
resp.environ)
def _retry(req, app, obj_name, retry):
return req.get_response(app), obj_name, retry
pile.spawn(_retry, delete_obj_req, self.app, obj_name, retry)
else:
if resp.status_int // 100 == 5:
failed_file_response['type'] = HTTPBadGateway
failed_files.append([wsgi_quote(str_to_wsgi(obj_name)),
resp.status])
@wsgify
def __call__(self, req):
extract_type = req.params.get('extract-archive')
resp = None
if extract_type is not None and req.method == 'PUT':
archive_type = {
'tar': '', 'tar.gz': 'gz',
'tar.bz2': 'bz2'}.get(extract_type.lower().strip('.'))
if archive_type is not None:
resp = HTTPOk(request=req)
try:
out_content_type = req.accept.best_match(
ACCEPTABLE_FORMATS)
except ValueError:
out_content_type = None # Ignore invalid header
if out_content_type:
resp.content_type = out_content_type
resp.app_iter = self.handle_extract_iter(
req, archive_type, out_content_type=out_content_type)
else:
resp = HTTPBadRequest("Unsupported archive format")
if 'bulk-delete' in req.params and req.method in ['POST', 'DELETE']:
resp = HTTPOk(request=req)
try:
out_content_type = req.accept.best_match(ACCEPTABLE_FORMATS)
except ValueError:
out_content_type = None # Ignore invalid header
if out_content_type:
resp.content_type = out_content_type
resp.app_iter = self.handle_delete_iter(
req, out_content_type=out_content_type)
return resp or self.app
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
max_containers_per_extraction = \
int(conf.get('max_containers_per_extraction', 10000))
max_failed_extractions = int(conf.get('max_failed_extractions', 1000))
max_deletes_per_request = int(conf.get('max_deletes_per_request', 10000))
max_failed_deletes = int(conf.get('max_failed_deletes', 1000))
yield_frequency = int(conf.get('yield_frequency', 10))
delete_concurrency = min(1000, max(1, int(
conf.get('delete_concurrency', 2))))
retry_count = int(conf.get('delete_container_retry_count', 0))
retry_interval = 1.5
register_swift_info(
'bulk_upload',
max_containers_per_extraction=max_containers_per_extraction,
max_failed_extractions=max_failed_extractions)
register_swift_info(
'bulk_delete',
max_deletes_per_request=max_deletes_per_request,
max_failed_deletes=max_failed_deletes)
def bulk_filter(app):
return Bulk(
app, conf,
max_containers_per_extraction=max_containers_per_extraction,
max_failed_extractions=max_failed_extractions,
max_deletes_per_request=max_deletes_per_request,
max_failed_deletes=max_failed_deletes,
yield_frequency=yield_frequency,
delete_concurrency=delete_concurrency,
retry_count=retry_count,
retry_interval=retry_interval)
return bulk_filter
| swift-master | swift/common/middleware/bulk.py |
# Copyright (c) 2010-2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This StaticWeb WSGI middleware will serve container data as a static web site
with index file and error file resolution and optional file listings. This mode
is normally only active for anonymous requests. When using keystone for
authentication set ``delay_auth_decision = true`` in the authtoken middleware
configuration in your ``/etc/swift/proxy-server.conf`` file. If you want to
use it with authenticated requests, set the ``X-Web-Mode: true`` header on the
request.
The ``staticweb`` filter should be added to the pipeline in your
``/etc/swift/proxy-server.conf`` file just after any auth middleware. Also, the
configuration section for the ``staticweb`` middleware itself needs to be
added. For example::
[DEFAULT]
...
[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging cache ratelimit tempauth
staticweb proxy-logging proxy-server
...
[filter:staticweb]
use = egg:swift#staticweb
Any publicly readable containers (for example, ``X-Container-Read: .r:*``, see
:ref:`acls` for more information on this) will be checked for
X-Container-Meta-Web-Index and X-Container-Meta-Web-Error header values::
X-Container-Meta-Web-Index <index.name>
X-Container-Meta-Web-Error <error.name.suffix>
If X-Container-Meta-Web-Index is set, any <index.name> files will be served
without having to specify the <index.name> part. For instance, setting
``X-Container-Meta-Web-Index: index.html`` will be able to serve the object
.../pseudo/path/index.html with just .../pseudo/path or .../pseudo/path/
If X-Container-Meta-Web-Error is set, any errors (currently just 401
Unauthorized and 404 Not Found) will instead serve the
.../<status.code><error.name.suffix> object. For instance, setting
``X-Container-Meta-Web-Error: error.html`` will serve .../404error.html for
requests for paths not found.
For pseudo paths that have no <index.name>, this middleware can serve HTML file
listings if you set the ``X-Container-Meta-Web-Listings: true`` metadata item
on the container.
If listings are enabled, the listings can have a custom style sheet by setting
the X-Container-Meta-Web-Listings-CSS header. For instance, setting
``X-Container-Meta-Web-Listings-CSS: listing.css`` will make listings link to
the .../listing.css style sheet. If you "view source" in your browser on a
listing page, you will see the well defined document structure that can be
styled.
By default, the listings will be rendered with a label of
"Listing of /v1/account/container/path". This can be altered by
setting a ``X-Container-Meta-Web-Listings-Label: <label>``. For example,
if the label is set to "example.com", a label of
"Listing of example.com/path" will be used instead.
The content-type of directory marker objects can be modified by setting
the ``X-Container-Meta-Web-Directory-Type`` header. If the header is not set,
application/directory is used by default. Directory marker objects are
0-byte objects that represent directories to create a simulated hierarchical
structure.
Example usage of this middleware via ``swift``:
Make the container publicly readable::
swift post -r '.r:*' container
You should be able to get objects directly, but no index.html resolution or
listings.
Set an index file directive::
swift post -m 'web-index:index.html' container
You should be able to hit paths that have an index.html without needing to
type the index.html part.
Turn on listings::
swift post -r '.r:*,.rlistings' container
swift post -m 'web-listings: true' container
Now you should see object listings for paths and pseudo paths that have no
index.html.
Enable a custom listings style sheet::
swift post -m 'web-listings-css:listings.css' container
Set an error file::
swift post -m 'web-error:error.html' container
Now 401's should load 401error.html, 404's should load 404error.html, etc.
Set Content-Type of directory marker object::
swift post -m 'web-directory-type:text/directory' container
Now 0-byte objects with a content-type of text/directory will be treated
as directories rather than objects.
"""
import json
import six
import time
from six.moves.urllib.parse import urlparse
from swift.common.request_helpers import html_escape
from swift.common.utils import human_readable, split_path, config_true_value, \
quote, get_logger
from swift.common.registry import register_swift_info
from swift.common.wsgi import make_env, WSGIContext
from swift.common.http import is_success, is_redirection, HTTP_NOT_FOUND
from swift.common.swob import Response, HTTPMovedPermanently, HTTPNotFound, \
Request, wsgi_quote, wsgi_to_str, str_to_wsgi
from swift.proxy.controllers.base import get_container_info
class _StaticWebContext(WSGIContext):
"""
The Static Web WSGI middleware filter; serves container data as a
static web site. See `staticweb`_ for an overview.
    This _StaticWebContext is created by StaticWeb for each request that
    might need to be handled; it makes keeping contextual information about
    the request simpler than storing it in the WSGI env.
:param staticweb: The staticweb middleware object in use.
:param version: A WSGI string representation of the swift api version.
:param account: A WSGI string representation of the account name.
:param container: A WSGI string representation of the container name.
:param obj: A WSGI string representation of the object name.
"""
def __init__(self, staticweb, version, account, container, obj):
WSGIContext.__init__(self, staticweb.app)
self.version = version
self.account = account
self.container = container
self.obj = obj
self.app = staticweb.app
self.url_scheme = staticweb.url_scheme
self.url_host = staticweb.url_host
self.agent = '%(orig)s StaticWeb'
# Results from the last call to self._get_container_info.
self._index = self._error = self._listings = self._listings_css = \
self._dir_type = self._listings_label = None
def _error_response(self, response, env, start_response):
"""
Sends the error response to the remote client, possibly resolving a
custom error response body based on x-container-meta-web-error.
:param response: The error response we should default to sending.
:param env: The original request WSGI environment.
:param start_response: The WSGI start_response hook.
"""
if not self._error:
start_response(self._response_status, self._response_headers,
self._response_exc_info)
return response
save_response_status = self._response_status
save_response_headers = self._response_headers
save_response_exc_info = self._response_exc_info
resp = self._app_call(make_env(
env, 'GET', '/%s/%s/%s/%s%s' % (
self.version, self.account, self.container,
self._get_status_int(), self._error),
self.agent, swift_source='SW'))
if is_success(self._get_status_int()):
start_response(save_response_status, self._response_headers,
self._response_exc_info)
return resp
start_response(save_response_status, save_response_headers,
save_response_exc_info)
return response
def _get_container_info(self, env):
"""
Retrieves x-container-meta-web-index, x-container-meta-web-error,
x-container-meta-web-listings, x-container-meta-web-listings-css,
and x-container-meta-web-directory-type from memcache or from the
cluster and stores the result in memcache and in self._index,
self._error, self._listings, self._listings_css and self._dir_type.
:param env: The WSGI environment dict.
:return: The container_info dict.
"""
self._index = self._error = self._listings = self._listings_css = \
self._dir_type = None
container_info = get_container_info(
env, self.app, swift_source='SW')
if is_success(container_info['status']):
meta = container_info.get('meta', {})
self._index = meta.get('web-index', '').strip()
self._error = meta.get('web-error', '').strip()
self._listings = meta.get('web-listings', '').strip()
self._listings_label = meta.get('web-listings-label', '').strip()
self._listings_css = meta.get('web-listings-css', '').strip()
self._dir_type = meta.get('web-directory-type', '').strip()
return container_info
def _listing(self, env, start_response, prefix=None):
"""
Sends an HTML object listing to the remote client.
:param env: The original WSGI environment dict.
:param start_response: The original WSGI start_response hook.
:param prefix: Any WSGI-str prefix desired for the container listing.
"""
label = wsgi_to_str(env['PATH_INFO'])
if self._listings_label:
groups = wsgi_to_str(env['PATH_INFO']).split('/')
label = '{0}/{1}'.format(self._listings_label,
'/'.join(groups[4:]))
if not config_true_value(self._listings):
body = '<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 ' \
'Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">\n' \
'<html>\n' \
'<head>\n' \
'<title>Listing of %s</title>\n' % html_escape(label)
if self._listings_css:
body += ' <link rel="stylesheet" type="text/css" ' \
'href="%s" />\n' % self._build_css_path(prefix or '')
else:
body += ' <style type="text/css">\n' \
' h1 {font-size: 1em; font-weight: bold;}\n' \
' p {font-size: 2}\n' \
' </style>\n'
body += '</head>\n<body>' \
' <h1>Web Listing Disabled</h1>' \
' <p>The owner of this web site has disabled web listing.' \
' <p>If you are the owner of this web site, you can enable' \
' web listing by setting X-Container-Meta-Web-Listings.</p>'
if self._index:
body += '<h1>Index File Not Found</h1>' \
' <p>The owner of this web site has set ' \
' <b>X-Container-Meta-Web-Index: %s</b>. ' \
' However, this file is not found.</p>' % self._index
body += ' </body>\n</html>\n'
resp = HTTPNotFound(body=body)(env, self._start_response)
return self._error_response(resp, env, start_response)
tmp_env = make_env(
env, 'GET', '/%s/%s/%s' % (
self.version, self.account, self.container),
self.agent, swift_source='SW')
tmp_env['QUERY_STRING'] = 'delimiter=/'
if prefix:
tmp_env['QUERY_STRING'] += '&prefix=%s' % wsgi_quote(prefix)
else:
prefix = ''
resp = self._app_call(tmp_env)
if not is_success(self._get_status_int()):
return self._error_response(resp, env, start_response)
listing = None
body = b''.join(resp)
if body:
listing = json.loads(body)
if prefix and not listing:
resp = HTTPNotFound()(env, self._start_response)
return self._error_response(resp, env, start_response)
headers = {'Content-Type': 'text/html; charset=UTF-8'}
body = '<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 ' \
'Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">\n' \
'<html>\n' \
' <head>\n' \
' <title>Listing of %s</title>\n' % \
html_escape(label)
if self._listings_css:
body += ' <link rel="stylesheet" type="text/css" ' \
'href="%s" />\n' % (self._build_css_path(prefix))
else:
body += ' <style type="text/css">\n' \
' h1 {font-size: 1em; font-weight: bold;}\n' \
' th {text-align: left; padding: 0px 1em 0px 1em;}\n' \
' td {padding: 0px 1em 0px 1em;}\n' \
' a {text-decoration: none;}\n' \
' </style>\n'
body += ' </head>\n' \
' <body>\n' \
' <h1 id="title">Listing of %s</h1>\n' \
' <table id="listing">\n' \
' <tr id="heading">\n' \
' <th class="colname">Name</th>\n' \
' <th class="colsize">Size</th>\n' \
' <th class="coldate">Date</th>\n' \
' </tr>\n' % html_escape(label)
if prefix:
body += ' <tr id="parent" class="item">\n' \
' <td class="colname"><a href="../">../</a></td>\n' \
' <td class="colsize"> </td>\n' \
' <td class="coldate"> </td>\n' \
' </tr>\n'
for item in listing:
if 'subdir' in item:
subdir = item['subdir'] if six.PY3 else \
item['subdir'].encode('utf-8')
if prefix:
subdir = subdir[len(wsgi_to_str(prefix)):]
body += ' <tr class="item subdir">\n' \
' <td class="colname"><a href="%s">%s</a></td>\n' \
' <td class="colsize"> </td>\n' \
' <td class="coldate"> </td>\n' \
' </tr>\n' % \
(quote(subdir), html_escape(subdir))
for item in listing:
if 'name' in item:
name = item['name'] if six.PY3 else \
item['name'].encode('utf-8')
if prefix:
name = name[len(wsgi_to_str(prefix)):]
content_type = item['content_type'] if six.PY3 else \
item['content_type'].encode('utf-8')
bytes = human_readable(item['bytes'])
last_modified = (
html_escape(item['last_modified'] if six.PY3 else
item['last_modified'].encode('utf-8')).
split('.')[0].replace('T', ' '))
body += ' <tr class="item %s">\n' \
' <td class="colname"><a href="%s">%s</a></td>\n' \
' <td class="colsize">%s</td>\n' \
' <td class="coldate">%s</td>\n' \
' </tr>\n' % \
(' '.join('type-' + html_escape(t.lower())
for t in content_type.split('/')),
quote(name), html_escape(name),
bytes, last_modified)
body += ' </table>\n' \
' </body>\n' \
'</html>\n'
resp = Response(headers=headers, body=body)
return resp(env, start_response)
def _build_css_path(self, prefix=''):
"""
Constructs a relative path from a given prefix within the container.
URLs and paths starting with '/' are not modified.
:param prefix: The prefix for the container listing.
"""
if self._listings_css.startswith(('/', 'http://', 'https://')):
css_path = quote(self._listings_css, ':/')
else:
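# Relative CSS path: climb one level for every '/' in the prefix so the
# link resolves from the container root, where the listings CSS lives.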
css_path = '../' * prefix.count('/') + quote(self._listings_css)
return css_path
def _redirect_with_slash(self, env_, start_response):
env = {}
env.update(env_)
if self.url_scheme:
env['wsgi.url_scheme'] = self.url_scheme
if self.url_host:
env['HTTP_HOST'] = self.url_host
resp = HTTPMovedPermanently(
location=wsgi_quote(env['PATH_INFO'] + '/'))
return resp(env, start_response)
def handle_container(self, env, start_response):
"""
Handles a possible static web request for a container.
:param env: The original WSGI environment dict.
:param start_response: The original WSGI start_response hook.
"""
container_info = self._get_container_info(env)
req = Request(env)
req.acl = container_info['read_acl']
# we checked earlier that swift.authorize is set in env
aresp = env['swift.authorize'](req)
if aresp:
resp = aresp(env, self._start_response)
return self._error_response(resp, env, start_response)
if not self._listings and not self._index:
if config_true_value(env.get('HTTP_X_WEB_MODE', 'f')):
return HTTPNotFound()(env, start_response)
return self.app(env, start_response)
if not env['PATH_INFO'].endswith('/'):
return self._redirect_with_slash(env, start_response)
if not self._index:
return self._listing(env, start_response)
tmp_env = dict(env)
tmp_env['HTTP_USER_AGENT'] = \
'%s StaticWeb' % env.get('HTTP_USER_AGENT')
tmp_env['swift.source'] = 'SW'
tmp_env['PATH_INFO'] += str_to_wsgi(self._index)
resp = self._app_call(tmp_env)
status_int = self._get_status_int()
if status_int == HTTP_NOT_FOUND:
return self._listing(env, start_response)
elif not is_success(self._get_status_int()) and \
not is_redirection(self._get_status_int()):
return self._error_response(resp, env, start_response)
start_response(self._response_status, self._response_headers,
self._response_exc_info)
return resp
def handle_object(self, env, start_response):
"""
Handles a possible static web request for an object. This object could
resolve into an index or listing request.
:param env: The original WSGI environment dict.
:param start_response: The original WSGI start_response hook.
"""
tmp_env = dict(env)
tmp_env['HTTP_USER_AGENT'] = \
'%s StaticWeb' % env.get('HTTP_USER_AGENT')
tmp_env['swift.source'] = 'SW'
resp = self._app_call(tmp_env)
status_int = self._get_status_int()
self._get_container_info(env)
if is_success(status_int) or is_redirection(status_int):
# Treat directory marker objects as not found
if not self._dir_type:
self._dir_type = 'application/directory'
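# A directory marker is a zero-byte (or single-byte) object whose
# content-type matches the configured (or default) directory type.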
content_length = self._response_header_value('content-length')
content_length = int(content_length) if content_length else 0
if self._response_header_value('content-type') == self._dir_type \
and content_length <= 1:
status_int = HTTP_NOT_FOUND
else:
start_response(self._response_status, self._response_headers,
self._response_exc_info)
return resp
if status_int != HTTP_NOT_FOUND:
# Retaining the previous code's behavior of not using custom error
# pages for non-404 errors.
self._error = None
return self._error_response(resp, env, start_response)
if not self._listings and not self._index:
start_response(self._response_status, self._response_headers,
self._response_exc_info)
return resp
status_int = HTTP_NOT_FOUND
if self._index:
tmp_env = dict(env)
tmp_env['HTTP_USER_AGENT'] = \
'%s StaticWeb' % env.get('HTTP_USER_AGENT')
tmp_env['swift.source'] = 'SW'
if not tmp_env['PATH_INFO'].endswith('/'):
tmp_env['PATH_INFO'] += '/'
tmp_env['PATH_INFO'] += str_to_wsgi(self._index)
resp = self._app_call(tmp_env)
status_int = self._get_status_int()
if is_success(status_int) or is_redirection(status_int):
if not env['PATH_INFO'].endswith('/'):
return self._redirect_with_slash(env, start_response)
start_response(self._response_status, self._response_headers,
self._response_exc_info)
return resp
if status_int == HTTP_NOT_FOUND:
if not env['PATH_INFO'].endswith('/'):
tmp_env = make_env(
env, 'GET', '/%s/%s/%s' % (
self.version, self.account, self.container),
self.agent, swift_source='SW')
tmp_env['QUERY_STRING'] = 'limit=1&delimiter=/&prefix=%s' % (
quote(wsgi_to_str(self.obj) + '/'), )
resp = self._app_call(tmp_env)
body = b''.join(resp)
if not is_success(self._get_status_int()) or not body or \
not json.loads(body):
resp = HTTPNotFound()(env, self._start_response)
return self._error_response(resp, env, start_response)
return self._redirect_with_slash(env, start_response)
return self._listing(env, start_response, self.obj)
class StaticWeb(object):
"""
The Static Web WSGI middleware filter; serves container data as a static
web site. See `staticweb`_ for an overview.
The proxy logs created for any subrequests made will have swift.source set
to "SW".
:param app: The next WSGI application/filter in the paste.deploy pipeline.
:param conf: The filter configuration dict.
"""
def __init__(self, app, conf):
#: The next WSGI application/filter in the paste.deploy pipeline.
self.app = app
#: The filter configuration dict. Only used in tests.
self.conf = conf
self.logger = get_logger(conf, log_route='staticweb')
# We expose a more general "url_base" parameter in case we want
# to incorporate the path prefix later. Currently it is discarded.
url_base = conf.get('url_base', None)
self.url_scheme = None
self.url_host = None
if url_base:
parsed = urlparse(url_base)
self.url_scheme = parsed.scheme
self.url_host = parsed.netloc
def __call__(self, env, start_response):
"""
Main hook into the WSGI paste.deploy filter/app pipeline.
:param env: The WSGI environment dict.
:param start_response: The WSGI start_response hook.
"""
env['staticweb.start_time'] = time.time()
if 'swift.authorize' not in env:
self.logger.warning(
'No authentication middleware authorized request yet. '
'Skipping staticweb')
return self.app(env, start_response)
try:
(version, account, container, obj) = \
split_path(env['PATH_INFO'], 2, 4, True)
except ValueError:
return self.app(env, start_response)
if env['REQUEST_METHOD'] not in ('HEAD', 'GET'):
return self.app(env, start_response)
if env.get('REMOTE_USER') and \
not config_true_value(env.get('HTTP_X_WEB_MODE', 'f')):
return self.app(env, start_response)
if not container:
return self.app(env, start_response)
context = _StaticWebContext(self, version, account, container, obj)
if obj:
return context.handle_object(env, start_response)
return context.handle_container(env, start_response)
def filter_factory(global_conf, **local_conf):
"""Returns a Static Web WSGI filter for use with paste.deploy."""
conf = global_conf.copy()
conf.update(local_conf)
register_swift_info('staticweb')
def staticweb_filter(app):
return StaticWeb(app, conf)
return staticweb_filter
| swift-master | swift/common/middleware/staticweb.py |
# Copyright (c) 2011 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""
FormPost Middleware
Translates a browser form post into a regular Swift object PUT.
The format of the form is::
<form action="<swift-url>" method="POST"
enctype="multipart/form-data">
<input type="hidden" name="redirect" value="<redirect-url>" />
<input type="hidden" name="max_file_size" value="<bytes>" />
<input type="hidden" name="max_file_count" value="<count>" />
<input type="hidden" name="expires" value="<unix-timestamp>" />
<input type="hidden" name="signature" value="<hmac>" />
<input type="file" name="file1" /><br />
<input type="submit" />
</form>
Optionally, if you want the uploaded files to be temporary you can set
x-delete-at or x-delete-after attributes by adding one of these as a
form input::
<input type="hidden" name="x_delete_at" value="<unix-timestamp>" />
<input type="hidden" name="x_delete_after" value="<seconds>" />
If you want to specify the content type or content encoding of the files you
can set content-encoding or content-type by adding them to the form input::
<input type="hidden" name="content-type" value="text/html" />
<input type="hidden" name="content-encoding" value="gzip" />
The above example applies these parameters to all uploaded files. You can also
set the content-type and content-encoding on a per-file basis by adding the
parameters to each part of the upload.
The <swift-url> is the URL of the Swift destination, such as::
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
The name of each file uploaded will be appended to the <swift-url>
given. So, you can upload directly to the root of a container with a
URL like::
https://swift-cluster.example.com/v1/AUTH_account/container/
Optionally, you can include an object prefix to better separate
different users' uploads, such as::
https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
Note the form method must be POST and the enctype must be set as
"multipart/form-data".
The redirect attribute is the URL to redirect the browser to after the upload
completes. This is an optional parameter. If you are uploading the form via an
XMLHttpRequest the redirect should not be included. The URL will have status
and message query parameters added to it, indicating the HTTP status code for
the upload (2xx is success) and a possible message for further information if
there was an error (such as "max_file_size exceeded").
The max_file_size attribute must be included and indicates the
largest single file upload that can be done, in bytes.
The max_file_count attribute must be included and indicates the
maximum number of files that can be uploaded with the form. Include
additional ``<input type="file" name="filexx" />`` attributes if
desired.
The expires attribute is the Unix timestamp before which the form
must be submitted; after that time the form is no longer valid.
The signature attribute is the HMAC signature of the form. Here is
sample code for computing the signature::
import hmac
from hashlib import sha512
from time import time
path = '/v1/account/container/object_prefix'
redirect = 'https://srv.com/some-page' # set to '' if redirect not in form
max_file_size = 104857600
max_file_count = 10
expires = int(time() + 600)
key = 'mykey'
hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect,
max_file_size, max_file_count, expires)
signature = hmac.new(key.encode('utf-8'), hmac_body.encode('utf-8'),
sha512).hexdigest()
(``hmac.new`` requires bytes for both the key and the message on Python 3,
hence the ``encode('utf-8')`` calls above.)
The key is the value of either the account (X-Account-Meta-Temp-URL-Key,
X-Account-Meta-Temp-Url-Key-2) or the container
(X-Container-Meta-Temp-URL-Key, X-Container-Meta-Temp-Url-Key-2) TempURL keys.
Be certain to use the full path, from the /v1/ onward.
Note that x_delete_at and x_delete_after are not used in signature generation
as they are both optional attributes.
The command line tool ``swift-form-signature`` may be used (mostly
just when testing) to compute expires and signature.
Also note that the file attributes must be after the other attributes
in order to be processed correctly. If attributes come after the
file, they won't be sent with the subrequest (there is no way to
parse all the attributes on the server-side without reading the whole
thing into memory -- to service many requests, some with large files,
there just isn't enough memory on the server, so attributes following
the file are simply ignored).
"""
__all__ = ['FormPost', 'filter_factory', 'READ_CHUNK_SIZE', 'MAX_VALUE_LENGTH']
import hmac
import hashlib
from time import time
import six
from six.moves.urllib.parse import quote
from swift.common.constraints import valid_api_version
from swift.common.exceptions import MimeInvalid
from swift.common.middleware.tempurl import get_tempurl_keys_from_metadata
from swift.common.digest import get_allowed_digests, \
extract_digest_and_algorithm, DEFAULT_ALLOWED_DIGESTS
from swift.common.utils import streq_const_time, parse_content_disposition, \
parse_mime_headers, iter_multipart_mime_documents, reiterate, \
closing_if_possible, get_logger
from swift.common.registry import register_swift_info
from swift.common.wsgi import make_pre_authed_env
from swift.common.swob import HTTPUnauthorized, wsgi_to_str, str_to_wsgi
from swift.common.http import is_success
from swift.proxy.controllers.base import get_account_info, get_container_info
#: The size of data to read from the form at any given time.
READ_CHUNK_SIZE = 4096
#: The maximum size of any attribute's value. Any additional data will be
#: truncated.
MAX_VALUE_LENGTH = 4096
class FormInvalid(Exception):
pass
class FormUnauthorized(Exception):
pass
class _CappedFileLikeObject(object):
"""
A file-like object wrapping another file-like object that raises
an EOFError if the amount of data read exceeds a given
max_file_size.
:param fp: The file-like object to wrap.
:param max_file_size: The maximum bytes to read before raising an
EOFError.
"""
def __init__(self, fp, max_file_size):
self.fp = fp
self.max_file_size = max_file_size
self.amount_read = 0
self.file_size_exceeded = False
def read(self, size=None):
ret = self.fp.read(size)
self.amount_read += len(ret)
if self.amount_read > self.max_file_size:
self.file_size_exceeded = True
raise EOFError('max_file_size exceeded')
return ret
def readline(self):
ret = self.fp.readline()
self.amount_read += len(ret)
if self.amount_read > self.max_file_size:
self.file_size_exceeded = True
raise EOFError('max_file_size exceeded')
return ret
class FormPost(object):
"""
FormPost Middleware
See above for a full description.
The proxy logs created for any subrequests made will have swift.source set
to "FP".
:param app: The next WSGI filter or app in the paste.deploy
chain.
:param conf: The configuration dict for the middleware.
"""
def __init__(self, app, conf, logger=None):
#: The next WSGI application/filter in the paste.deploy pipeline.
self.app = app
#: The filter configuration dict.
self.conf = conf
self.logger = logger or get_logger(conf, log_route='formpost')
# Default to DEFAULT_ALLOWED_DIGESTS just so we don't completely
# deprecate sha1 yet; this default set may be tightened later.
self.allowed_digests = conf.get(
'allowed_digests', DEFAULT_ALLOWED_DIGESTS.split())
def __call__(self, env, start_response):
"""
Main hook into the WSGI paste.deploy filter/app pipeline.
:param env: The WSGI environment dict.
:param start_response: The WSGI start_response hook.
:returns: Response as per WSGI.
"""
if env['REQUEST_METHOD'] == 'POST':
try:
content_type, attrs = \
parse_content_disposition(env.get('CONTENT_TYPE') or '')
if content_type == 'multipart/form-data' and \
'boundary' in attrs:
http_user_agent = "%s FormPost" % (
env.get('HTTP_USER_AGENT', ''))
env['HTTP_USER_AGENT'] = http_user_agent.strip()
status, headers, body = self._translate_form(
env, attrs['boundary'])
start_response(status, headers)
return [body]
except MimeInvalid:
body = b'FormPost: invalid starting boundary'
start_response(
'400 Bad Request',
(('Content-Type', 'text/plain'),
('Content-Length', str(len(body)))))
return [body]
except (FormInvalid, EOFError) as err:
body = 'FormPost: %s' % err
if six.PY3:
body = body.encode('utf-8')
start_response(
'400 Bad Request',
(('Content-Type', 'text/plain'),
('Content-Length', str(len(body)))))
return [body]
except FormUnauthorized as err:
message = 'FormPost: %s' % str(err).title()
return HTTPUnauthorized(body=message)(
env, start_response)
return self.app(env, start_response)
def _translate_form(self, env, boundary):
"""
Translates the form data into subrequests and issues a
response.
:param env: The WSGI environment dict.
:param boundary: The MIME type boundary to look for.
:returns: status_line, headers_list, body
"""
keys = self._get_keys(env)
if six.PY3:
boundary = boundary.encode('utf-8')
status = message = ''
attributes = {}
file_attributes = {}
subheaders = []
resp_body = None
file_count = 0
for fp in iter_multipart_mime_documents(
env['wsgi.input'], boundary, read_chunk_size=READ_CHUNK_SIZE):
hdrs = parse_mime_headers(fp)
disp, attrs = parse_content_disposition(
hdrs.get('Content-Disposition', ''))
if disp == 'form-data' and attrs.get('filename'):
file_count += 1
try:
if file_count > int(attributes.get('max_file_count') or 0):
status = '400 Bad Request'
message = 'max file count exceeded'
break
except ValueError:
raise FormInvalid('max_file_count not an integer')
file_attributes = attributes.copy()
file_attributes['filename'] = attrs['filename'] or 'filename'
if 'content-type' not in attributes and 'content-type' in hdrs:
file_attributes['content-type'] = \
hdrs['Content-Type'] or 'application/octet-stream'
if 'content-encoding' not in attributes and \
'content-encoding' in hdrs:
file_attributes['content-encoding'] = \
hdrs['Content-Encoding']
status, subheaders, resp_body = \
self._perform_subrequest(env, file_attributes, fp, keys)
status_code = int(status.split(' ', 1)[0])
if not is_success(status_code):
break
else:
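# Not a file part: read at most MAX_VALUE_LENGTH bytes of this field's
# value and drain (discard) any remainder before moving on.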
data = b''
mxln = MAX_VALUE_LENGTH
while mxln:
chunk = fp.read(mxln)
if not chunk:
break
mxln -= len(chunk)
data += chunk
while fp.read(READ_CHUNK_SIZE):
pass
if six.PY3:
data = data.decode('utf-8')
if 'name' in attrs:
attributes[attrs['name'].lower()] = data.rstrip('\r\n--')
if not status:
status = '400 Bad Request'
message = 'no files to process'
status_code = int(status.split(' ', 1)[0])
headers = [(k, v) for k, v in subheaders
if k.lower().startswith('access-control')]
redirect = attributes.get('redirect')
if not redirect:
body = status
if message:
body = status + '\r\nFormPost: ' + message.title()
if six.PY3:
body = body.encode('utf-8')
if not is_success(status_code) and resp_body:
body = resp_body
headers.extend([('Content-Type', 'text/plain'),
('Content-Length', str(len(body)))])
return status, headers, body
if '?' in redirect:
redirect += '&'
else:
redirect += '?'
redirect += 'status=%s&message=%s' % (quote(str(status_code)),
quote(message))
body = '<html><body><p><a href="%s">' \
'Click to continue...</a></p></body></html>' % redirect
if six.PY3:
body = body.encode('utf-8')
headers.extend(
[('Location', redirect), ('Content-Length', str(len(body)))])
return '303 See Other', headers, body
def _perform_subrequest(self, orig_env, attributes, fp, keys):
"""
Performs the subrequest and returns the response.
:param orig_env: The WSGI environment dict; will only be used
to form a new env for the subrequest.
:param attributes: dict of the attributes of the form so far.
:param fp: The file-like object containing the request body.
:param keys: The account keys to validate the signature with.
:returns: (status_line, headers_list, body)
"""
if not keys:
raise FormUnauthorized('invalid signature')
try:
max_file_size = int(attributes.get('max_file_size') or 0)
except ValueError:
raise FormInvalid('max_file_size not an integer')
subenv = make_pre_authed_env(orig_env, 'PUT', agent=None,
swift_source='FP')
if 'QUERY_STRING' in subenv:
del subenv['QUERY_STRING']
subenv['HTTP_TRANSFER_ENCODING'] = 'chunked'
subenv['wsgi.input'] = _CappedFileLikeObject(fp, max_file_size)
if not subenv['PATH_INFO'].endswith('/') and \
subenv['PATH_INFO'].count('/') < 4:
subenv['PATH_INFO'] += '/'
subenv['PATH_INFO'] += str_to_wsgi(
attributes['filename'] or 'filename')
if 'x_delete_at' in attributes:
try:
subenv['HTTP_X_DELETE_AT'] = int(attributes['x_delete_at'])
except ValueError:
raise FormInvalid('x_delete_at not an integer: '
'Unix timestamp required.')
if 'x_delete_after' in attributes:
try:
subenv['HTTP_X_DELETE_AFTER'] = int(
attributes['x_delete_after'])
except ValueError:
raise FormInvalid('x_delete_after not an integer: '
'Number of seconds required.')
if 'content-type' in attributes:
subenv['CONTENT_TYPE'] = \
attributes['content-type'] or 'application/octet-stream'
if 'content-encoding' in attributes:
subenv['HTTP_CONTENT_ENCODING'] = attributes['content-encoding']
try:
if int(attributes.get('expires') or 0) < time():
raise FormUnauthorized('form expired')
except ValueError:
raise FormInvalid('expires not an integer')
hmac_body = '%s\n%s\n%s\n%s\n%s' % (
wsgi_to_str(orig_env['PATH_INFO']),
attributes.get('redirect') or '',
attributes.get('max_file_size') or '0',
attributes.get('max_file_count') or '0',
attributes.get('expires') or '0')
if six.PY3:
hmac_body = hmac_body.encode('utf-8')
has_valid_sig = False
signature = attributes.get('signature', '')
try:
hash_name, signature = extract_digest_and_algorithm(signature)
except ValueError:
raise FormUnauthorized('invalid signature')
if hash_name not in self.allowed_digests:
raise FormUnauthorized('invalid signature')
hash_algorithm = getattr(hashlib, hash_name) if six.PY2 else hash_name
for key in keys:
# Encode key like in swift.common.utils.get_hmac.
if not isinstance(key, six.binary_type):
key = key.encode('utf8')
sig = hmac.new(key, hmac_body, hash_algorithm).hexdigest()
if streq_const_time(sig, signature):
has_valid_sig = True
if not has_valid_sig:
raise FormUnauthorized('invalid signature')
self.logger.increment('formpost.digests.%s' % hash_name)
substatus = [None]
subheaders = [None]
wsgi_input = subenv['wsgi.input']
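# The capped wsgi.input raises EOFError once max_file_size is exceeded;
# checking the flag here ensures an oversized upload surfaces as a 400
# (via the EOFError handler in __call__) rather than a success.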
def _start_response(status, headers, exc_info=None):
if wsgi_input.file_size_exceeded:
raise EOFError("max_file_size exceeded")
substatus[0] = status
subheaders[0] = headers
# reiterate to ensure the response started,
# but drop any data on the floor
resp = self.app(subenv, _start_response)
with closing_if_possible(reiterate(resp)):
body = b''.join(resp)
return substatus[0], subheaders[0], body
def _get_keys(self, env):
"""
Returns the X-[Account|Container]-Meta-Temp-URL-Key[-2] header values
for the account or container, or an empty list if none are set.
Returns 0-4 elements depending on how many keys are set in the
account's or container's metadata.
Also validates that the request path indicates a valid container;
if not, no keys will be returned.
:param env: The WSGI environment for the request.
:returns: list of tempurl keys
"""
parts = env['PATH_INFO'].split('/', 4)
if len(parts) < 4 or parts[0] or not valid_api_version(parts[1]) \
or not parts[2] or not parts[3]:
return []
account_info = get_account_info(env, self.app, swift_source='FP')
account_keys = get_tempurl_keys_from_metadata(account_info['meta'])
container_info = get_container_info(env, self.app, swift_source='FP')
container_keys = get_tempurl_keys_from_metadata(
container_info.get('meta', []))
return account_keys + container_keys
def filter_factory(global_conf, **local_conf):
"""Returns the WSGI filter for use with paste.deploy."""
conf = global_conf.copy()
conf.update(local_conf)
logger = get_logger(conf, log_route='formpost')
allowed_digests, deprecated_digests = get_allowed_digests(
conf.get('allowed_digests', '').split(), logger)
info = {'allowed_digests': sorted(allowed_digests)}
if deprecated_digests:
info['deprecated_digests'] = sorted(deprecated_digests)
register_swift_info('formpost', **info)
conf.update(info)
return lambda app: FormPost(app, conf)
| swift-master | swift/common/middleware/formpost.py |
# Copyright (c) 2010-2017 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import re
from swift.common.wsgi import WSGIContext
def app_property(name):
return property(lambda self: getattr(self.app, name))
class RewriteContext(WSGIContext):
base_re = None
def __init__(self, app, requested, rewritten):
super(RewriteContext, self).__init__(app)
self.requested = requested
self.rewritten_re = re.compile(self.base_re % re.escape(rewritten))
def handle_request(self, env, start_response):
resp_iter = self._app_call(env)
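# Rewrite Location/Content-Location headers that point at the internal
# (rewritten) path so they refer back to the path the client requested.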
for i, (header, value) in enumerate(self._response_headers):
if header.lower() in ('location', 'content-location'):
self._response_headers[i] = (header, self.rewritten_re.sub(
r'\1%s\2' % self.requested, value))
start_response(self._response_status, self._response_headers,
self._response_exc_info)
return resp_iter
| swift-master | swift/common/middleware/__init__.py |
# Copyright (c) 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Server side copy is a feature that enables users/clients to COPY objects
between accounts and containers without the need to download and then
re-upload objects, thus eliminating additional bandwidth consumption and
also saving time. This may be used when renaming/moving an object which
in Swift is a (COPY + DELETE) operation.
The server side copy middleware should be inserted in the pipeline after auth
and before the quotas and large object middlewares. If it is not present in the
pipeline in the proxy-server configuration file, it will be inserted
automatically. There is no configurable option provided to turn off server
side copy.
--------
Metadata
--------
* All metadata of source object is preserved during object copy.
* One can also provide additional metadata during PUT/COPY request. This will
over-write any existing conflicting keys.
* Server side copy can also be used to change content-type of an existing
object.
-----------
Object Copy
-----------
* The destination container must exist before requesting copy of the object.
* When several replicas exist, the system copies from the most recent replica.
That is, the copy operation behaves as though the X-Newest header is in the
request.
* The request to copy an object should have no body (i.e. content-length of the
request must be zero).
There are two ways in which an object can be copied:
1. Send a PUT request to the new object (destination/target) with an additional
header named ``X-Copy-From`` specifying the source object
(in '/container/object' format). Example::
curl -i -X PUT http://<storage_url>/container1/destination_obj
-H 'X-Auth-Token: <token>'
-H 'X-Copy-From: /container2/source_obj'
-H 'Content-Length: 0'
2. Send a COPY request with an existing object in URL with an additional header
named ``Destination`` specifying the destination/target object
(in '/container/object' format). Example::
curl -i -X COPY http://<storage_url>/container2/source_obj
-H 'X-Auth-Token: <token>'
-H 'Destination: /container1/destination_obj'
-H 'Content-Length: 0'
Note that if the incoming request has some conditional headers (e.g. ``Range``,
``If-Match``), the *source* object will be evaluated for these headers (i.e. if
PUT with both ``X-Copy-From`` and ``Range``, Swift will make a partial copy to
the destination object).
-------------------------
Cross Account Object Copy
-------------------------
Objects can also be copied from one account to another account if the user
has the necessary permissions (i.e. permission to read from container
in source account and permission to write to container in destination account).
Similar to examples mentioned above, there are two ways to copy objects across
accounts:
1. Like the example above, send PUT request to copy object but with an
additional header named ``X-Copy-From-Account`` specifying the source
account. Example::
curl -i -X PUT http://<host>:<port>/v1/AUTH_test1/container/destination_obj
-H 'X-Auth-Token: <token>'
-H 'X-Copy-From: /container/source_obj'
-H 'X-Copy-From-Account: AUTH_test2'
-H 'Content-Length: 0'
2. Like the previous example, send a COPY request but with an additional header
named ``Destination-Account`` specifying the name of destination account.
Example::
curl -i -X COPY http://<host>:<port>/v1/AUTH_test2/container/source_obj
-H 'X-Auth-Token: <token>'
-H 'Destination: /container/destination_obj'
-H 'Destination-Account: AUTH_test1'
-H 'Content-Length: 0'
-------------------
Large Object Copy
-------------------
The best option to copy a large object is to copy segments individually.
To copy the manifest object of a large object, add the query parameter to
the copy request::
?multipart-manifest=get
If a request is sent without the query parameter, an attempt will be made to
copy the whole object but will fail if the object size is
greater than 5GB.
"""
from swift.common.utils import get_logger, config_true_value, FileLikeIter, \
close_if_possible
from swift.common.swob import Request, HTTPPreconditionFailed, \
HTTPRequestEntityTooLarge, HTTPBadRequest, HTTPException, \
wsgi_quote, wsgi_unquote
from swift.common.http import HTTP_MULTIPLE_CHOICES, is_success, HTTP_OK
from swift.common.constraints import check_account_format, MAX_FILE_SIZE
from swift.common.request_helpers import copy_header_subset, remove_items, \
is_sys_meta, is_sys_or_user_meta, is_object_transient_sysmeta, \
check_path_header, OBJECT_SYSMETA_CONTAINER_UPDATE_OVERRIDE_PREFIX
from swift.common.wsgi import WSGIContext, make_subrequest
def _check_copy_from_header(req):
"""
Validate that the value from x-copy-from header is
well formatted. We assume the caller ensures that
x-copy-from header is present in req.headers.
:param req: HTTP request object
:returns: A tuple with container name and object name
:raise HTTPPreconditionFailed: if x-copy-from value
is not well formatted.
"""
return check_path_header(req, 'X-Copy-From', 2,
'X-Copy-From header must be of the form '
'<container name>/<object name>')
def _check_destination_header(req):
"""
Validate that the value from destination header is
well formatted. We assume the caller ensures that
destination header is present in req.headers.
:param req: HTTP request object
:returns: A tuple with container name and object name
:raise HTTPPreconditionFailed: if destination value
is not well formatted.
"""
return check_path_header(req, 'Destination', 2,
'Destination header must be of the form '
'<container name>/<object name>')
def _copy_headers(src, dest):
"""
Will copy desired headers from src to dest.
:param src: an instance of collections.Mapping
:param dest: an instance of collections.Mapping
"""
for k, v in src.items():
if (is_sys_or_user_meta('object', k) or
is_object_transient_sysmeta(k) or
k.lower() == 'x-delete-at'):
dest[k] = v
class ServerSideCopyWebContext(WSGIContext):
def __init__(self, app, logger):
super(ServerSideCopyWebContext, self).__init__(app)
self.app = app
self.logger = logger
def get_source_resp(self, req):
sub_req = make_subrequest(
req.environ, path=wsgi_quote(req.path_info), headers=req.headers,
swift_source='SSC')
return sub_req.get_response(self.app)
def send_put_req(self, req, additional_resp_headers, start_response):
app_resp = self._app_call(req.environ)
self._adjust_put_response(req, additional_resp_headers)
start_response(self._response_status,
self._response_headers,
self._response_exc_info)
return app_resp
def _adjust_put_response(self, req, additional_resp_headers):
if is_success(self._get_status_int()):
for header, value in additional_resp_headers.items():
self._response_headers.append((header, value))
def handle_OPTIONS_request(self, req, start_response):
app_resp = self._app_call(req.environ)
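# Advertise COPY alongside whatever verbs the backend already allows,
# since this middleware implements COPY itself.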
if is_success(self._get_status_int()):
for i, (header, value) in enumerate(self._response_headers):
if header.lower() == 'allow' and 'COPY' not in value:
self._response_headers[i] = ('Allow', value + ', COPY')
if header.lower() == 'access-control-allow-methods' and \
'COPY' not in value:
self._response_headers[i] = \
('Access-Control-Allow-Methods', value + ', COPY')
start_response(self._response_status,
self._response_headers,
self._response_exc_info)
return app_resp
class ServerSideCopyMiddleware(object):
def __init__(self, app, conf):
self.app = app
self.logger = get_logger(conf, log_route="copy")
def __call__(self, env, start_response):
req = Request(env)
try:
(version, account, container, obj) = req.split_path(4, 4, True)
is_obj_req = True
except ValueError:
is_obj_req = False
if not is_obj_req:
# If obj component is not present in req, do not proceed further.
return self.app(env, start_response)
try:
# In some cases, save off original request method since it gets
# mutated into PUT during handling. This way logging can display
# the method the client actually sent.
if req.method == 'PUT' and req.headers.get('X-Copy-From'):
return self.handle_PUT(req, start_response)
elif req.method == 'COPY':
req.environ['swift.orig_req_method'] = req.method
return self.handle_COPY(req, start_response,
account, container, obj)
elif req.method == 'OPTIONS':
# Does not interfere with OPTIONS response from
# (account,container) servers and /info response.
return self.handle_OPTIONS(req, start_response)
except HTTPException as e:
return e(req.environ, start_response)
return self.app(env, start_response)
def handle_COPY(self, req, start_response, account, container, obj):
if not req.headers.get('Destination'):
return HTTPPreconditionFailed(request=req,
body='Destination header required'
)(req.environ, start_response)
dest_account = account
if 'Destination-Account' in req.headers:
dest_account = wsgi_unquote(req.headers.get('Destination-Account'))
dest_account = check_account_format(req, dest_account)
req.headers['X-Copy-From-Account'] = wsgi_quote(account)
account = dest_account
del req.headers['Destination-Account']
dest_container, dest_object = _check_destination_header(req)
source = '/%s/%s' % (container, obj)
container = dest_container
obj = dest_object
# re-write the existing request as a PUT instead of creating a new one
req.method = 'PUT'
# As the path info is updated with the destination container,
# the proxy server app will use the right object controller
# implementation corresponding to the container's policy type.
ver, _junk = req.split_path(1, 2, rest_with_last=True)
req.path_info = '/%s/%s/%s/%s' % (
ver, dest_account, dest_container, dest_object)
req.headers['Content-Length'] = 0
req.headers['X-Copy-From'] = wsgi_quote(source)
del req.headers['Destination']
return self.handle_PUT(req, start_response)
def _get_source_object(self, ssc_ctx, source_path, req):
source_req = req.copy_get()
# make sure the source request uses its own container_info
source_req.headers.pop('X-Backend-Storage-Policy-Index', None)
source_req.path_info = source_path
source_req.headers['X-Newest'] = 'true'
# in case we are copying an SLO manifest, set format=raw parameter
params = source_req.params
if params.get('multipart-manifest') == 'get':
params['format'] = 'raw'
source_req.params = params
source_resp = ssc_ctx.get_source_resp(source_req)
if source_resp.content_length is None:
# This indicates a transfer-encoding: chunked source object,
# which currently only happens because there are more than
# CONTAINER_LISTING_LIMIT segments in a segmented object. In
# this case, we're going to refuse to do the server-side copy.
close_if_possible(source_resp.app_iter)
return HTTPRequestEntityTooLarge(request=req)
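# Likewise refuse to copy a source larger than a single object may be
# (MAX_FILE_SIZE); such objects should be copied segment by segment.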
if source_resp.content_length > MAX_FILE_SIZE:
close_if_possible(source_resp.app_iter)
return HTTPRequestEntityTooLarge(request=req)
return source_resp
def _create_response_headers(self, source_path, source_resp, sink_req):
resp_headers = dict()
acct, path = source_path.split('/', 3)[2:4]
resp_headers['X-Copied-From-Account'] = wsgi_quote(acct)
resp_headers['X-Copied-From'] = wsgi_quote(path)
if 'last-modified' in source_resp.headers:
resp_headers['X-Copied-From-Last-Modified'] = \
source_resp.headers['last-modified']
if 'X-Object-Version-Id' in source_resp.headers:
resp_headers['X-Copied-From-Version-Id'] = \
source_resp.headers['X-Object-Version-Id']
# Existing sys and user meta of source object is added to response
# headers in addition to the new ones.
_copy_headers(sink_req.headers, resp_headers)
return resp_headers
def handle_PUT(self, req, start_response):
if req.content_length:
return HTTPBadRequest(body='Copy requests require a zero byte '
'body', request=req,
content_type='text/plain')(req.environ,
start_response)
# Form the path of source object to be fetched
ver, acct, _rest = req.split_path(2, 3, True)
src_account_name = req.headers.get('X-Copy-From-Account')
if src_account_name:
src_account_name = check_account_format(
req, wsgi_unquote(src_account_name))
else:
src_account_name = acct
src_container_name, src_obj_name = _check_copy_from_header(req)
source_path = '/%s/%s/%s/%s' % (ver, src_account_name,
src_container_name, src_obj_name)
# GET the source object, bail out on error
ssc_ctx = ServerSideCopyWebContext(self.app, self.logger)
source_resp = self._get_source_object(ssc_ctx, source_path, req)
if source_resp.status_int >= HTTP_MULTIPLE_CHOICES:
return source_resp(source_resp.environ, start_response)
# Create a new Request object based on the original request instance.
# This will preserve original request environ including headers.
sink_req = Request.blank(req.path_info, environ=req.environ)
def is_object_sysmeta(k):
return is_sys_meta('object', k)
if config_true_value(req.headers.get('x-fresh-metadata', 'false')):
# x-fresh-metadata only applies to copy, not post-as-copy: ignore
# existing user metadata, update existing sysmeta with new
copy_header_subset(source_resp, sink_req, is_object_sysmeta)
copy_header_subset(req, sink_req, is_object_sysmeta)
else:
# First copy existing sysmeta, user meta and other headers from the
# source to the sink, apart from headers that are conditionally
# copied below and timestamps.
exclude_headers = ('x-static-large-object', 'x-object-manifest',
'etag', 'content-type', 'x-timestamp',
'x-backend-timestamp')
copy_header_subset(source_resp, sink_req,
lambda k: k.lower() not in exclude_headers)
# now update with original req headers
sink_req.headers.update(req.headers)
params = sink_req.params
params_updated = False
if params.get('multipart-manifest') == 'get':
if 'X-Static-Large-Object' in source_resp.headers:
params['multipart-manifest'] = 'put'
if 'X-Object-Manifest' in source_resp.headers:
del params['multipart-manifest']
sink_req.headers['X-Object-Manifest'] = \
source_resp.headers['X-Object-Manifest']
params_updated = True
if 'version-id' in params:
del params['version-id']
params_updated = True
if params_updated:
sink_req.params = params
# Set swift.source, data source, content length and etag
# for the PUT request
sink_req.environ['swift.source'] = 'SSC'
sink_req.environ['wsgi.input'] = FileLikeIter(source_resp.app_iter)
sink_req.content_length = source_resp.content_length
if (source_resp.status_int == HTTP_OK and
'X-Static-Large-Object' not in source_resp.headers and
('X-Object-Manifest' not in source_resp.headers or
req.params.get('multipart-manifest') == 'get')):
# copy source etag so that copied content is verified, unless:
# - not a 200 OK response: source etag may not match the actual
# content, for example with a 206 Partial Content response to a
# ranged request
# - SLO manifest: etag cannot be specified in manifest PUT; SLO
# generates its own etag value which may differ from source
# - SLO: etag in SLO response is not hash of actual content
# - DLO: etag in DLO response is not hash of actual content
sink_req.headers['Etag'] = source_resp.etag
else:
# since we're not copying the source etag, make sure that any
# container update override values are not copied.
remove_items(sink_req.headers, lambda k: k.startswith(
OBJECT_SYSMETA_CONTAINER_UPDATE_OVERRIDE_PREFIX.title()))
# We no longer need these headers
sink_req.headers.pop('X-Copy-From', None)
sink_req.headers.pop('X-Copy-From-Account', None)
# If the copy request does not explicitly override content-type,
# use the one present in the source object.
if not req.headers.get('content-type'):
sink_req.headers['Content-Type'] = \
source_resp.headers['Content-Type']
# Create response headers for PUT response
resp_headers = self._create_response_headers(source_path,
source_resp, sink_req)
put_resp = ssc_ctx.send_put_req(sink_req, resp_headers, start_response)
close_if_possible(source_resp.app_iter)
return put_resp
def handle_OPTIONS(self, req, start_response):
return ServerSideCopyWebContext(self.app, self.logger).\
handle_OPTIONS_request(req, start_response)
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
def copy_filter(app):
return ServerSideCopyMiddleware(app, conf)
return copy_filter
| swift-master | swift/common/middleware/copy.py |
# Copyright (c) 2010-2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import time
from swift import gettext_ as _
import eventlet
from swift.common.utils import cache_from_env, get_logger
from swift.common.registry import register_swift_info
from swift.proxy.controllers.base import get_account_info, get_container_info
from swift.common.constraints import valid_api_version
from swift.common.memcached import MemcacheConnectionError
from swift.common.swob import Request, Response
def interpret_conf_limits(conf, name_prefix, info=None):
"""
Parses general params for rate limits, looking for settings that
start with the provided name_prefix within the provided conf,
and returns lists for both internal use and for /info.
:param conf: conf dict to parse
:param name_prefix: prefix of config params to look for
:param info: if set, also return the raw configured limits for /info
registration
"""
conf_limits = []
for conf_key in conf:
if conf_key.startswith(name_prefix):
cont_size = int(conf_key[len(name_prefix):])
rate = float(conf[conf_key])
conf_limits.append((cont_size, rate))
conf_limits.sort()
ratelimits = []
conf_limits_info = list(conf_limits)
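# Build a piecewise-linear mapping from container size to allowed rate:
# between two configured sizes the rate is interpolated along the line
# joining them; at or beyond the largest size the last rate applies.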
while conf_limits:
cur_size, cur_rate = conf_limits.pop(0)
if conf_limits:
next_size, next_rate = conf_limits[0]
slope = (float(next_rate) - float(cur_rate)) \
/ (next_size - cur_size)
def new_scope(cur_size, slope, cur_rate):
# making new scope for variables
return lambda x: (x - cur_size) * slope + cur_rate
line_func = new_scope(cur_size, slope, cur_rate)
else:
line_func = lambda x: cur_rate
ratelimits.append((cur_size, cur_rate, line_func))
if info is None:
return ratelimits
else:
return ratelimits, conf_limits_info
def get_maxrate(ratelimits, size):
"""
Returns number of requests allowed per second for given size.
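For example (illustrative values, not defaults): with
``container_ratelimit_100 = 100`` and ``container_ratelimit_200 = 50``
configured, a container holding 150 objects is interpolated to an
allowed rate of roughly 75 requests per second, while any size of 200
or more gets the flat 50.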
"""
last_func = None
if size:
size = int(size)
for ratesize, rate, func in ratelimits:
if size < ratesize:
break
last_func = func
if last_func:
return last_func(size)
return None
class MaxSleepTimeHitError(Exception):
pass
class RateLimitMiddleware(object):
"""
Rate limiting middleware
Rate limits requests on both an Account and Container level. Limits are
configurable.
"""
BLACK_LIST_SLEEP = 1
def __init__(self, app, conf, logger=None):
self.app = app
self.logger = logger or get_logger(conf, log_route='ratelimit')
self.memcache_client = None
self.account_ratelimit = float(conf.get('account_ratelimit', 0))
self.max_sleep_time_seconds = \
float(conf.get('max_sleep_time_seconds', 60))
self.log_sleep_time_seconds = \
float(conf.get('log_sleep_time_seconds', 0))
self.clock_accuracy = int(conf.get('clock_accuracy', 1000))
self.rate_buffer_seconds = int(conf.get('rate_buffer_seconds', 5))
self.ratelimit_whitelist = \
[acc.strip() for acc in
conf.get('account_whitelist', '').split(',') if acc.strip()]
if self.ratelimit_whitelist:
self.logger.warning('Option account_whitelist is deprecated. Use '
'an internal client to POST a `X-Account-'
'Sysmeta-Global-Write-Ratelimit: WHITELIST` '
'header to the specific accounts instead.')
self.ratelimit_blacklist = \
[acc.strip() for acc in
conf.get('account_blacklist', '').split(',') if acc.strip()]
if self.ratelimit_blacklist:
self.logger.warning('Option account_blacklist is deprecated. Use '
'an internal client to POST a `X-Account-'
'Sysmeta-Global-Write-Ratelimit: BLACKLIST` '
'header to the specific accounts instead.')
self.container_ratelimits = interpret_conf_limits(
conf, 'container_ratelimit_')
self.container_listing_ratelimits = interpret_conf_limits(
conf, 'container_listing_ratelimit_')
def get_container_size(self, env):
rv = 0
container_info = get_container_info(
env, self.app, swift_source='RL')
if isinstance(container_info, dict):
rv = container_info.get(
'object_count', container_info.get('container_size', 0))
return rv
def get_ratelimitable_key_tuples(self, req, account_name,
container_name=None, obj_name=None,
global_ratelimit=None):
"""
Returns a list of (memcache key, ratelimit) tuples. Keys
should be checked in order.
:param req: swob request
:param account_name: account name from path
:param container_name: container name from path
:param obj_name: object name from path
:param global_ratelimit: this account has an account wide
ratelimit on all writes combined
"""
keys = []
# COPYs are not limited
if self.account_ratelimit and \
account_name and container_name and not obj_name and \
req.method in ('PUT', 'DELETE'):
keys.append(("ratelimit/%s" % account_name,
self.account_ratelimit))
if account_name and container_name and obj_name and \
req.method in ('PUT', 'DELETE', 'POST', 'COPY'):
container_size = self.get_container_size(req.environ)
container_rate = get_maxrate(
self.container_ratelimits, container_size)
if container_rate:
keys.append((
"ratelimit/%s/%s" % (account_name, container_name),
container_rate))
if account_name and container_name and not obj_name and \
req.method == 'GET':
container_size = self.get_container_size(req.environ)
container_rate = get_maxrate(
self.container_listing_ratelimits, container_size)
if container_rate:
keys.append((
"ratelimit_listing/%s/%s" % (account_name, container_name),
container_rate))
if account_name and req.method in ('PUT', 'DELETE', 'POST', 'COPY'):
if global_ratelimit:
try:
global_ratelimit = float(global_ratelimit)
if global_ratelimit > 0:
keys.append((
"ratelimit/global-write/%s" % account_name,
global_ratelimit))
except ValueError:
pass
return keys
def _get_sleep_time(self, key, max_rate):
"""
Returns the amount of time (a float in seconds) that the app
should sleep.
:param key: a memcache key
:param max_rate: maximum rate allowed in requests per second
:raises MaxSleepTimeHitError: if max sleep time is exceeded.
"""
try:
now_m = int(round(time.time() * self.clock_accuracy))
time_per_request_m = int(round(self.clock_accuracy / max_rate))
running_time_m = self.memcache_client.incr(
key, delta=time_per_request_m)
need_to_sleep_m = 0
if (now_m - running_time_m >
self.rate_buffer_seconds * self.clock_accuracy):
next_avail_time = int(now_m + time_per_request_m)
self.memcache_client.set(key, str(next_avail_time),
serialize=False)
else:
need_to_sleep_m = \
max(running_time_m - now_m - time_per_request_m, 0)
max_sleep_m = self.max_sleep_time_seconds * self.clock_accuracy
if max_sleep_m - need_to_sleep_m <= self.clock_accuracy * 0.01:
# would sleep too long: undo our increment so this rejected
# request does not consume a slot
self.memcache_client.decr(key, delta=time_per_request_m)
raise MaxSleepTimeHitError(
"Max Sleep Time Exceeded: %.2f" %
(float(need_to_sleep_m) / self.clock_accuracy))
return float(need_to_sleep_m) / self.clock_accuracy
except MemcacheConnectionError:
return 0
def handle_ratelimit(self, req, account_name, container_name, obj_name):
"""
Performs rate limiting and account white/black listing. Sleeps
if necessary. If self.memcache_client is not set, immediately returns
None.
:param account_name: account name from path
:param container_name: container name from path
:param obj_name: object name from path
"""
if not self.memcache_client:
return None
if req.environ.get('swift.ratelimit.handled'):
return None
req.environ['swift.ratelimit.handled'] = True
try:
account_info = get_account_info(req.environ, self.app,
swift_source='RL')
account_global_ratelimit = \
account_info.get('sysmeta', {}).get('global-write-ratelimit')
except ValueError:
account_global_ratelimit = None
if account_name in self.ratelimit_whitelist or \
account_global_ratelimit == 'WHITELIST':
return None
if account_name in self.ratelimit_blacklist or \
account_global_ratelimit == 'BLACKLIST':
self.logger.error(_('Returning 497 because of blacklisting: %s'),
account_name)
eventlet.sleep(self.BLACK_LIST_SLEEP)
return Response(status='497 Blacklisted',
body='Your account has been blacklisted',
request=req)
for key, max_rate in self.get_ratelimitable_key_tuples(
req, account_name, container_name=container_name,
obj_name=obj_name, global_ratelimit=account_global_ratelimit):
try:
need_to_sleep = self._get_sleep_time(key, max_rate)
if self.log_sleep_time_seconds and \
need_to_sleep > self.log_sleep_time_seconds:
self.logger.warning(
_("Ratelimit sleep log: %(sleep)s for "
"%(account)s/%(container)s/%(object)s"),
{'sleep': need_to_sleep, 'account': account_name,
'container': container_name, 'object': obj_name})
if need_to_sleep > 0:
eventlet.sleep(need_to_sleep)
except MaxSleepTimeHitError as e:
if obj_name:
path = '/'.join((account_name, container_name, obj_name))
else:
path = '/'.join((account_name, container_name))
self.logger.error(
_('Returning 498 for %(meth)s to %(path)s. '
'Ratelimit (Max Sleep) %(e)s'),
{'meth': req.method, 'path': path, 'e': str(e)})
error_resp = Response(status='498 Rate Limited',
body='Slow down', request=req)
return error_resp
return None
def __call__(self, env, start_response):
"""
WSGI entry point.
Wraps env in swob.Request object and passes it down.
:param env: WSGI environment dictionary
:param start_response: WSGI callable
"""
req = Request(env)
if self.memcache_client is None:
self.memcache_client = cache_from_env(env)
if not self.memcache_client:
self.logger.warning(
_('Warning: Cannot ratelimit without a memcached client'))
return self.app(env, start_response)
try:
version, account, container, obj = req.split_path(1, 4, True)
except ValueError:
return self.app(env, start_response)
if not valid_api_version(version):
return self.app(env, start_response)
ratelimit_resp = self.handle_ratelimit(req, account, container, obj)
if ratelimit_resp is None:
return self.app(env, start_response)
else:
return ratelimit_resp(env, start_response)
def filter_factory(global_conf, **local_conf):
"""
paste.deploy app factory for creating WSGI proxy apps.
"""
conf = global_conf.copy()
conf.update(local_conf)
account_ratelimit = float(conf.get('account_ratelimit', 0))
max_sleep_time_seconds = float(conf.get('max_sleep_time_seconds', 60))
container_ratelimits, cont_limit_info = interpret_conf_limits(
conf, 'container_ratelimit_', info=1)
container_listing_ratelimits, cont_list_limit_info = \
interpret_conf_limits(conf, 'container_listing_ratelimit_', info=1)
# not all limits are exposed (intentionally)
register_swift_info('ratelimit',
account_ratelimit=account_ratelimit,
max_sleep_time_seconds=max_sleep_time_seconds,
container_ratelimits=cont_limit_info,
container_listing_ratelimits=cont_list_limit_info)
def limit_filter(app):
return RateLimitMiddleware(app, conf)
return limit_filter
| swift-master | swift/common/middleware/ratelimit.py |
# Copyright (c) 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from swift.common.constraints import valid_api_version
from swift.common.container_sync_realms import ContainerSyncRealms
from swift.common.swob import HTTPBadRequest, HTTPUnauthorized, wsgify
from swift.common.utils import (
config_true_value, get_logger, streq_const_time)
from swift.proxy.controllers.base import get_container_info
from swift.common.registry import register_swift_info
class ContainerSync(object):
"""
WSGI middleware that validates an incoming container sync request
using the container-sync-realms.conf style of container sync.
"""
def __init__(self, app, conf, logger=None):
self.app = app
self.conf = conf
self.logger = logger or get_logger(conf, log_route='container_sync')
self.realms_conf = ContainerSyncRealms(
os.path.join(
conf.get('swift_dir', '/etc/swift'),
'container-sync-realms.conf'),
self.logger)
self.allow_full_urls = config_true_value(
conf.get('allow_full_urls', 'true'))
# configure current realm/cluster for /info
self.realm = self.cluster = None
current = conf.get('current', None)
if current:
try:
self.realm, self.cluster = (p.upper() for p in
current.strip('/').split('/'))
except ValueError:
self.logger.error('Invalid current //REALM/CLUSTER (%s)',
current)
self.register_info()
def register_info(self):
dct = {}
for realm in self.realms_conf.realms():
clusters = self.realms_conf.clusters(realm)
if clusters:
dct[realm] = {'clusters': dict((c, {}) for c in clusters)}
if self.realm and self.cluster:
try:
dct[self.realm]['clusters'][self.cluster]['current'] = True
except KeyError:
self.logger.error('Unknown current //REALM/CLUSTER (%s)',
'//%s/%s' % (self.realm, self.cluster))
register_swift_info('container_sync', realms=dct)
@wsgify
def __call__(self, req):
if req.path == '/info':
# Ensure /info requests get the freshest results
self.register_info()
return self.app
try:
(version, acc, cont, obj) = req.split_path(3, 4, True)
bad_path = False
except ValueError:
bad_path = True
# use of bad_path bool is to avoid recursive tracebacks
if bad_path or not valid_api_version(version):
return self.app
# validate container-sync metadata update
info = get_container_info(
req.environ, self.app, swift_source='CS')
sync_to = req.headers.get('x-container-sync-to')
if req.method in ('PUT', 'POST') and cont and not obj:
versions_cont = info.get(
'sysmeta', {}).get('versions-container')
if sync_to and versions_cont:
raise HTTPBadRequest(
'Cannot configure container sync on a container '
'with object versioning configured.',
request=req)
if not self.allow_full_urls:
if sync_to and not sync_to.startswith('//'):
raise HTTPBadRequest(
body='Full URLs are not allowed for X-Container-Sync-To '
'values. Only realm values of the format '
'//realm/cluster/account/container are allowed.\n',
request=req)
auth = req.headers.get('x-container-sync-auth')
if auth:
valid = False
auth = auth.split()
if len(auth) != 3:
req.environ.setdefault('swift.log_info', []).append(
'cs:not-3-args')
else:
realm, nonce, sig = auth
realm_key = self.realms_conf.key(realm)
realm_key2 = self.realms_conf.key2(realm)
if not realm_key:
req.environ.setdefault('swift.log_info', []).append(
'cs:no-local-realm-key')
else:
user_key = info.get('sync_key')
if not user_key:
req.environ.setdefault('swift.log_info', []).append(
'cs:no-local-user-key')
else:
# x-timestamp headers get shunted by gatekeeper
if 'x-backend-inbound-x-timestamp' in req.headers:
req.headers['x-timestamp'] = req.headers.pop(
'x-backend-inbound-x-timestamp')
expected = self.realms_conf.get_sig(
req.method, req.path,
req.headers.get('x-timestamp', '0'), nonce,
realm_key, user_key)
expected2 = self.realms_conf.get_sig(
req.method, req.path,
req.headers.get('x-timestamp', '0'), nonce,
realm_key2, user_key) if realm_key2 else expected
if not streq_const_time(sig, expected) and \
not streq_const_time(sig, expected2):
req.environ.setdefault(
'swift.log_info', []).append('cs:invalid-sig')
else:
req.environ.setdefault(
'swift.log_info', []).append('cs:valid')
valid = True
if not valid:
exc = HTTPUnauthorized(
body='X-Container-Sync-Auth header not valid; '
'contact cluster operator for support.',
headers={'content-type': 'text/plain'},
request=req)
exc.headers['www-authenticate'] = ' '.join([
'SwiftContainerSync',
exc.www_authenticate().split(None, 1)[1]])
raise exc
else:
req.environ['swift.authorize_override'] = True
# An SLO manifest will already be in the internal manifest
# syntax and might be synced before its segments, so stop SLO
# middleware from performing the usual manifest validation.
req.environ['swift.slo_override'] = True
# Similar arguments for static symlinks
req.environ['swift.symlink_override'] = True
return self.app
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
register_swift_info('container_sync')
def cache_filter(app):
return ContainerSync(app, conf)
return cache_filter
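# --- Illustrative sketch (editor's addition, not part of the original module) ---
# Shows how the X-Container-Sync-Auth signature validated above can be produced,
# assuming the usual container-sync-realms.conf layout of one [REALM] section per
# realm with ``key``/``key2`` and ``cluster_<name>`` options. The realm name "US",
# its key, the nonce, timestamp and per-container sync key below are all made up.
def _sync_sig_sketch():
    import tempfile
    realms_conf_text = (
        '[US]\n'
        'key = realmsecret\n'
        'cluster_dfw1 = http://dfw1.example.com/v1/\n')
    with tempfile.NamedTemporaryFile(mode='w', suffix='.conf') as fp:
        fp.write(realms_conf_text)
        fp.flush()
        realms = ContainerSyncRealms(
            fp.name, get_logger({}, log_route='container_sync_demo'))
        # same call the middleware makes when checking a request's signature
        return realms.get_sig(
            'PUT', '/v1/AUTH_a/c/o', '1700000000.00000', 'somenonce',
            realms.key('US'), 'per-container-sync-key')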
| swift-master | swift/common/middleware/container_sync.py |
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from swift.common.swob import Request, Response
class HealthCheckMiddleware(object):
"""
Healthcheck middleware used for monitoring.
If the path is /healthcheck, it will respond 200 with "OK" as the body.
If the optional config parameter "disable_path" is set, and a file is
present at that path, it will respond 503 with "DISABLED BY FILE" as the
body.
"""
def __init__(self, app, conf):
self.app = app
self.disable_path = conf.get('disable_path', '')
def GET(self, req):
"""Returns a 200 response with "OK" in the body."""
return Response(request=req, body=b"OK", content_type="text/plain")
def DISABLED(self, req):
"""Returns a 503 response with "DISABLED BY FILE" in the body."""
return Response(request=req, status=503, body=b"DISABLED BY FILE",
content_type="text/plain")
def __call__(self, env, start_response):
req = Request(env)
if req.path == '/healthcheck':
handler = self.GET
if self.disable_path and os.path.exists(self.disable_path):
handler = self.DISABLED
return handler(req)(env, start_response)
return self.app(env, start_response)
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
def healthcheck_filter(app):
return HealthCheckMiddleware(app, conf)
return healthcheck_filter
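# --- Illustrative sketch (editor's addition, not part of the original module) ---
# A minimal round trip through the middleware. ``backend`` is a hypothetical
# stand-in for whatever app would normally sit behind the filter; requests to
# /healthcheck are answered directly and never reach it.
def _healthcheck_sketch():
    def backend(env, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'backend reached']

    mw = HealthCheckMiddleware(backend, {})
    ok = Request.blank('/healthcheck').get_response(mw)
    passthrough = Request.blank('/v1/AUTH_test').get_response(mw)
    return ok.body, passthrough.body  # (b'OK', b'backend reached')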
| swift-master | swift/common/middleware/healthcheck.py |
# Copyright (c) 2017 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import six
from xml.etree.cElementTree import Element, SubElement, tostring
from swift.common.constraints import valid_api_version
from swift.common.http import HTTP_NO_CONTENT
from swift.common.request_helpers import get_param
from swift.common.swob import HTTPException, HTTPNotAcceptable, Request, \
RESPONSE_REASONS, HTTPBadRequest, wsgi_quote, wsgi_to_bytes
from swift.common.utils import RESERVED, get_logger, list_from_csv
#: Mapping of query string ``format=`` values to their corresponding
#: content-type values.
FORMAT2CONTENT_TYPE = {'plain': 'text/plain', 'json': 'application/json',
'xml': 'application/xml'}
#: Maximum size of a valid JSON container listing body. If we receive
#: a container listing response larger than this, assume it's a staticweb
#: response and pass it on to the client.
# Default max object length is 1024, default container listing limit is 1e4;
# add a fudge factor for things like hash, last_modified, etc.
MAX_CONTAINER_LISTING_CONTENT_LENGTH = 1024 * 10000 * 2
def get_listing_content_type(req):
"""
Determine the content type to use for an account or container listing
response.
:param req: request object
:returns: content type as a string (e.g. text/plain, application/json)
:raises HTTPNotAcceptable: if the requested content type is not acceptable
:raises HTTPBadRequest: if the 'format' query param is provided and
not valid UTF-8
"""
query_format = get_param(req, 'format')
if query_format:
req.accept = FORMAT2CONTENT_TYPE.get(
query_format.lower(), FORMAT2CONTENT_TYPE['plain'])
try:
out_content_type = req.accept.best_match(
['text/plain', 'application/json', 'application/xml', 'text/xml'])
except ValueError:
raise HTTPBadRequest(request=req, body=b'Invalid Accept header')
if not out_content_type:
raise HTTPNotAcceptable(request=req)
return out_content_type
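# --- Illustrative sketch (editor's addition, not part of the original module) ---
# The ``format`` query parameter overrides the Accept header; both requests
# below are hypothetical examples.
def _listing_content_type_sketch():
    json_req = Request.blank('/v1/AUTH_test/c?format=json')
    xml_req = Request.blank('/v1/AUTH_test/c',
                            headers={'Accept': 'application/xml'})
    return (get_listing_content_type(json_req),   # 'application/json'
            get_listing_content_type(xml_req))    # 'application/xml'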
def to_xml(document_element):
result = tostring(document_element, encoding='UTF-8').replace(
b"<?xml version='1.0' encoding='UTF-8'?>",
b'<?xml version="1.0" encoding="UTF-8"?>', 1)
if not result.startswith(b'<?xml '):
# py3 tostring doesn't (necessarily?) include the XML declaration;
# add it if it's missing.
result = b'<?xml version="1.0" encoding="UTF-8"?>\n' + result
return result
def account_to_xml(listing, account_name):
doc = Element('account', name=account_name)
doc.text = '\n'
for record in listing:
if 'subdir' in record:
name = record.pop('subdir')
sub = SubElement(doc, 'subdir', name=name)
else:
sub = SubElement(doc, 'container')
for field in ('name', 'count', 'bytes', 'last_modified'):
SubElement(sub, field).text = six.text_type(
record.pop(field))
sub.tail = '\n'
return to_xml(doc)
def container_to_xml(listing, base_name):
doc = Element('container', name=base_name)
for record in listing:
if 'subdir' in record:
name = record.pop('subdir')
sub = SubElement(doc, 'subdir', name=name)
SubElement(sub, 'name').text = name
else:
sub = SubElement(doc, 'object')
for field in ('name', 'hash', 'bytes', 'content_type',
'last_modified'):
SubElement(sub, field).text = six.text_type(
record.pop(field))
return to_xml(doc)
def listing_to_text(listing):
def get_lines():
for item in listing:
if 'name' in item:
yield item['name'].encode('utf-8') + b'\n'
else:
yield item['subdir'].encode('utf-8') + b'\n'
return b''.join(get_lines())
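# --- Illustrative sketch (editor's addition, not part of the original module) ---
# Exercising the pure serializer helpers above with a single hand-built container
# listing record (all field values are made up). Copies of each record are passed
# because the XML helpers pop fields out of the record dicts as they go.
def _listing_serializer_sketch():
    listing = [{'name': 'photo.jpg',
                'hash': 'abc123',
                'bytes': 1024,
                'content_type': 'image/jpeg',
                'last_modified': '2023-01-01T00:00:00.000000'}]
    text = listing_to_text([dict(r) for r in listing])
    xml = container_to_xml([dict(r) for r in listing], 'my-container')
    return text, xml  # (b'photo.jpg\n', XML document for container "my-container")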
class ListingFilter(object):
def __init__(self, app, conf, logger=None):
self.app = app
self.logger = logger or get_logger(conf, log_route='listing-filter')
def filter_reserved(self, listing, account, container):
new_listing = []
for entry in list(listing):
for key in ('name', 'subdir'):
value = entry.get(key, '')
if six.PY2:
value = value.encode('utf-8')
if RESERVED in value:
if container:
self.logger.warning(
'Container listing for %s/%s had '
'reserved byte in %s: %r',
wsgi_quote(account), wsgi_quote(container),
key, value)
else:
self.logger.warning(
'Account listing for %s had '
'reserved byte in %s: %r',
wsgi_quote(account), key, value)
break # out of the *key* loop; check next entry
else:
new_listing.append(entry)
return new_listing
def __call__(self, env, start_response):
req = Request(env)
try:
# account and container only
version, acct, cont = req.split_path(2, 3)
except ValueError:
is_account_or_container_req = False
else:
is_account_or_container_req = True
if not is_account_or_container_req:
return self.app(env, start_response)
if not valid_api_version(version) or req.method not in ('GET', 'HEAD'):
return self.app(env, start_response)
# OK, definitely have an account/container request.
# Get the desired content-type, then force it to a JSON request.
try:
out_content_type = get_listing_content_type(req)
except HTTPException as err:
return err(env, start_response)
params = req.params
can_vary = 'format' not in params
params['format'] = 'json'
req.params = params
# Give other middlewares a chance to be in charge
env.setdefault('swift.format_listing', True)
status, headers, resp_iter = req.call_application(self.app)
if not env.get('swift.format_listing'):
start_response(status, headers)
return resp_iter
header_to_index = {}
resp_content_type = resp_length = None
for i, (header, value) in enumerate(headers):
header = header.lower()
if header == 'content-type':
header_to_index[header] = i
resp_content_type = value.partition(';')[0]
elif header == 'content-length':
header_to_index[header] = i
resp_length = int(value)
elif header == 'vary':
header_to_index[header] = i
if not status.startswith(('200 ', '204 ')):
start_response(status, headers)
return resp_iter
if can_vary:
if 'vary' in header_to_index:
value = headers[header_to_index['vary']][1]
if 'accept' not in list_from_csv(value.lower()):
headers[header_to_index['vary']] = (
'Vary', value + ', Accept')
else:
headers.append(('Vary', 'Accept'))
if resp_content_type != 'application/json':
start_response(status, headers)
return resp_iter
if resp_length is None or \
resp_length > MAX_CONTAINER_LISTING_CONTENT_LENGTH:
start_response(status, headers)
return resp_iter
def set_header(header, value):
if value is None:
del headers[header_to_index[header]]
else:
headers[header_to_index[header]] = (
headers[header_to_index[header]][0], str(value))
if req.method == 'HEAD':
set_header('content-type', out_content_type + '; charset=utf-8')
set_header('content-length', None) # don't know, can't determine
start_response(status, headers)
return resp_iter
body = b''.join(resp_iter)
try:
listing = json.loads(body)
# Do a couple sanity checks
if not isinstance(listing, list):
raise ValueError
if not all(isinstance(item, dict) for item in listing):
raise ValueError
except ValueError:
# Static web listing that's returning invalid JSON?
# Just pass it straight through; that's about all we *can* do.
start_response(status, headers)
return [body]
if not req.allow_reserved_names:
listing = self.filter_reserved(listing, acct, cont)
try:
if out_content_type.endswith('/xml'):
if cont:
body = container_to_xml(
listing, wsgi_to_bytes(cont).decode('utf-8'))
else:
body = account_to_xml(
listing, wsgi_to_bytes(acct).decode('utf-8'))
elif out_content_type == 'text/plain':
body = listing_to_text(listing)
else:
body = json.dumps(listing).encode('ascii')
except KeyError:
# listing was in a bad format -- funky static web listing??
start_response(status, headers)
return [body]
if not body:
status = '%s %s' % (HTTP_NO_CONTENT,
RESPONSE_REASONS[HTTP_NO_CONTENT][0])
set_header('content-type', out_content_type + '; charset=utf-8')
set_header('content-length', len(body))
start_response(status, headers)
return [body]
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
def listing_filter(app):
return ListingFilter(app, conf)
return listing_filter
| swift-master | swift/common/middleware/listing_formats.py |
# Copyright (c) 2013 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from swift.common.swob import Request, Response
from swift.common.registry import register_swift_info
class CrossDomainMiddleware(object):
"""
Cross domain middleware used to respond to requests for cross domain
policy information.
If the path is ``/crossdomain.xml`` it will respond with an xml cross
domain policy document. This allows web pages hosted elsewhere to use
client side technologies such as Flash, Java and Silverlight to interact
with the Swift API.
To enable this middleware, add it to the pipeline in your proxy-server.conf
file. It should be added before any authentication (e.g., tempauth or
keystone) middleware. In this example ellipsis (...) indicate other
middleware you may have chosen to use:
.. code:: cfg
[pipeline:main]
pipeline = ... crossdomain ... authtoken ... proxy-server
And add a filter section, such as:
.. code:: cfg
[filter:crossdomain]
use = egg:swift#crossdomain
cross_domain_policy = <allow-access-from domain="*.example.com" />
<allow-access-from domain="www.example.com" secure="false" />
For continuation lines, put some whitespace before the continuation
text. Ensure you put a completely blank line to terminate the
``cross_domain_policy`` value.
The ``cross_domain_policy`` name/value is optional. If omitted, the policy
defaults as if you had specified:
.. code:: cfg
cross_domain_policy = <allow-access-from domain="*" secure="false" />
.. note::
The default policy is very permissive; this is appropriate
for most public cloud deployments, but may not be appropriate
for all deployments. See also:
`CWE-942 <https://cwe.mitre.org/data/definitions/942.html>`__
"""
def __init__(self, app, conf, *args, **kwargs):
self.app = app
self.conf = conf
default_domain_policy = '<allow-access-from domain="*"' \
' secure="false" />'
self.cross_domain_policy = self.conf.get('cross_domain_policy',
default_domain_policy)
def GET(self, req):
"""Returns a 200 response with cross domain policy information """
body = '<?xml version="1.0"?>\n' \
'<!DOCTYPE cross-domain-policy SYSTEM ' \
'"http://www.adobe.com/xml/dtds/cross-domain-policy.dtd" >\n' \
'<cross-domain-policy>\n' \
'%s\n' \
'</cross-domain-policy>' % self.cross_domain_policy
return Response(request=req, body=body.encode('utf-8'),
content_type="application/xml")
def __call__(self, env, start_response):
req = Request(env)
if req.path == '/crossdomain.xml' and req.method == 'GET':
return self.GET(req)(env, start_response)
else:
return self.app(env, start_response)
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
register_swift_info('crossdomain')
def crossdomain_filter(app):
return CrossDomainMiddleware(app, conf)
return crossdomain_filter
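# --- Illustrative sketch (editor's addition, not part of the original module) ---
# Fetching the policy document from the middleware directly, with the default
# (permissive) policy and a no-op backend app standing in for the pipeline.
def _crossdomain_sketch():
    def backend(env, start_response):
        start_response('404 Not Found', [])
        return [b'']

    mw = CrossDomainMiddleware(backend, {})
    resp = Request.blank('/crossdomain.xml').get_response(mw)
    return resp.content_type, resp.body  # ('application/xml', the policy document)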
| swift-master | swift/common/middleware/crossdomain.py |
# Copyright (c) 2022 NVIDIA
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import time
from collections import defaultdict
from swift.common.request_helpers import split_and_validate_path
from swift.common.swob import Request, HTTPTooManyBackendRequests, \
HTTPException
from swift.common.utils import get_logger, non_negative_float, \
EventletRateLimiter
RATE_LIMITED_METHODS = ('GET', 'HEAD', 'PUT', 'POST', 'DELETE', 'UPDATE',
'REPLICATE')
class BackendRateLimitMiddleware(object):
"""
Backend rate-limiting middleware.
Rate-limits requests to backend storage node devices. Each device is
independently rate-limited. All requests with a 'GET', 'HEAD', 'PUT',
'POST', 'DELETE', 'UPDATE' or 'REPLICATE' method are included in a device's
rate limit.
If a request would cause the rate-limit to be exceeded then a response with
a 529 status code is returned.
"""
def __init__(self, app, conf, logger=None):
self.app = app
self.logger = logger or get_logger(conf, log_route='backend_ratelimit')
self.requests_per_device_per_second = non_negative_float(
conf.get('requests_per_device_per_second', 0.0))
self.requests_per_device_rate_buffer = non_negative_float(
conf.get('requests_per_device_rate_buffer', 1.0))
# map device -> RateLimiter
self.rate_limiters = defaultdict(
lambda: EventletRateLimiter(
max_rate=self.requests_per_device_per_second,
rate_buffer=self.requests_per_device_rate_buffer,
running_time=time.time(),
burst_after_idle=True))
def __call__(self, env, start_response):
"""
WSGI entry point.
:param env: WSGI environment dictionary
:param start_response: WSGI callable
"""
req = Request(env)
handler = self.app
if req.method in RATE_LIMITED_METHODS:
try:
device, partition, _ = split_and_validate_path(req, 1, 3, True)
int(partition) # check it's a valid partition
except (ValueError, HTTPException):
# request may not have device/partition e.g. a healthcheck req
pass
else:
rate_limiter = self.rate_limiters[device]
if not rate_limiter.is_allowed():
self.logger.increment('backend.ratelimit')
handler = HTTPTooManyBackendRequests()
return handler(env, start_response)
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
def backend_ratelimit_filter(app):
return BackendRateLimitMiddleware(app, conf)
return backend_ratelimit_filter
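# --- Illustrative sketch (editor's addition, not part of the original module) ---
# Wiring the filter around a trivial object-server stand-in and sending it a
# backend-style path (/<device>/<partition>/...). The device name, partition and
# rate below are made up; once a device's per-second budget is exhausted, further
# requests receive 529 responses instead of reaching the stub.
def _backend_ratelimit_sketch():
    def object_server_stub(env, start_response):
        start_response('200 OK', [])
        return [b'']

    mw = filter_factory({}, requests_per_device_per_second='5')(
        object_server_stub)
    resp = Request.blank('/sda1/123/AUTH_a/c/o').get_response(mw)
    return resp.status_int  # 200 while within the device's rate budget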
| swift-master | swift/common/middleware/backend_ratelimit.py |
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import six
from six.moves.urllib.parse import unquote, urlparse
def clean_acl(name, value):
"""
Returns a cleaned ACL header value, validating that it meets the formatting
requirements for standard Swift ACL strings.
The ACL format is::
[item[,item...]]
Each item can be a group name to give access to or a referrer designation
to grant or deny based on the HTTP Referer header.
The referrer designation format is::
.r:[-]value
The ``.r`` can also be ``.ref``, ``.referer``, or ``.referrer``; though it
will be shortened to just ``.r`` for decreased character count usage.
The value can be ``*`` to specify any referrer host is allowed access, a
specific host name like ``www.example.com``, or if it has a leading period
``.`` or leading ``*.`` it is a domain name specification, like
``.example.com`` or ``*.example.com``. The leading minus sign ``-``
indicates referrer hosts that should be denied access.
Referrer access is applied in the order they are specified. For example,
.r:.example.com,.r:-thief.example.com would allow all hosts ending with
.example.com except for the specific host thief.example.com.
Example valid ACLs::
.r:*
.r:*,.r:-.thief.com
.r:*,.r:.example.com,.r:-thief.example.com
.r:*,.r:-.thief.com,bobs_account,sues_account:sue
bobs_account,sues_account:sue
Example invalid ACLs::
.r:
.r:-
By default, allowing read access via .r will not allow listing objects in
the container -- just retrieving objects from the container. To turn on
listings, use the .rlistings directive.
Also, .r designations aren't allowed in headers whose names include the
word 'write'.
ACLs that are "messy" will be cleaned up. Examples:
====================== ======================
Original Cleaned
---------------------- ----------------------
``bob, sue`` ``bob,sue``
``bob , sue`` ``bob,sue``
``bob,,,sue`` ``bob,sue``
``.referrer : *`` ``.r:*``
``.ref:*.example.com`` ``.r:.example.com``
``.r:*, .rlistings`` ``.r:*,.rlistings``
====================== ======================
:param name: The name of the header being cleaned, such as X-Container-Read
or X-Container-Write.
:param value: The value of the header being cleaned.
:returns: The value, cleaned of extraneous formatting.
:raises ValueError: If the value does not meet the ACL formatting
requirements; the error message will indicate why.
"""
name = name.lower()
values = []
for raw_value in value.split(','):
raw_value = raw_value.strip()
if not raw_value:
continue
if ':' not in raw_value:
values.append(raw_value)
continue
first, second = (v.strip() for v in raw_value.split(':', 1))
if not first or not first.startswith('.'):
values.append(raw_value)
elif first in ('.r', '.ref', '.referer', '.referrer'):
if 'write' in name:
raise ValueError('Referrers not allowed in write ACL: '
'%s' % repr(raw_value))
negate = False
if second and second.startswith('-'):
negate = True
second = second[1:].strip()
if second and second != '*' and second.startswith('*'):
second = second[1:].strip()
if not second or second == '.':
raise ValueError('No host/domain value after referrer '
'designation in ACL: %s' % repr(raw_value))
values.append('.r:%s%s' % ('-' if negate else '', second))
else:
raise ValueError('Unknown designator %s in ACL: %s' %
(repr(first), repr(raw_value)))
return ','.join(values)
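# --- Illustrative sketch (editor's addition, not part of the original module) ---
# A few of the clean-ups documented above, using made-up header values.
def _clean_acl_sketch():
    read = clean_acl('x-container-read',
                     '.referrer : *, .r:-.thief.com , bobs_account,, sue')
    # read == '.r:*,.r:-.thief.com,bobs_account,sue'
    write_error = None
    try:
        clean_acl('x-container-write', '.r:*')
    except ValueError as err:
        write_error = str(err)  # referrers are rejected in write ACLs
    return read, write_error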
def format_acl_v1(groups=None, referrers=None, header_name=None):
"""
Returns a standard Swift ACL string for the given inputs.
Caller is responsible for ensuring that :referrers: parameter is only given
if the ACL is being generated for X-Container-Read. (X-Container-Write
and the account ACL headers don't support referrers.)
:param groups: a list of groups (and/or members in most auth systems) to
grant access
:param referrers: a list of referrer designations (without the leading .r:)
:param header_name: (optional) header name of the ACL we're preparing, for
clean_acl; if None, returned ACL won't be cleaned
:returns: a Swift ACL string for use in X-Container-{Read,Write},
X-Account-Access-Control, etc.
"""
groups, referrers = groups or [], referrers or []
referrers = ['.r:%s' % r for r in referrers]
result = ','.join(groups + referrers)
return (clean_acl(header_name, result) if header_name else result)
def format_acl_v2(acl_dict):
r"""
Returns a version-2 Swift ACL JSON string.
HTTP headers for Version 2 ACLs have the following form:
Header-Name: {"arbitrary":"json","encoded":"string"}
JSON will be forced ASCII (containing six-char \uNNNN sequences rather
than UTF-8; UTF-8 is valid JSON but clients vary in their support for
UTF-8 headers), and without extraneous whitespace.
Advantages over V1: forward compatibility (new keys don't cause parsing
exceptions); Unicode support; no reserved words (you can have a user
named .rlistings if you want).
:param acl_dict: dict of arbitrary data to put in the ACL; see specific
auth systems such as tempauth for supported values
:returns: a JSON string which encodes the ACL
"""
return json.dumps(acl_dict, ensure_ascii=True, separators=(',', ':'),
sort_keys=True)
def format_acl(version=1, **kwargs):
"""
Compatibility wrapper to help migrate ACL syntax from version 1 to 2.
Delegates to the appropriate version-specific format_acl method, defaulting
to version 1 for backward compatibility.
:param kwargs: keyword args appropriate for the selected ACL syntax version
(see :func:`format_acl_v1` or :func:`format_acl_v2`)
"""
if version == 1:
return format_acl_v1(
groups=kwargs.get('groups'), referrers=kwargs.get('referrers'),
header_name=kwargs.get('header_name'))
elif version == 2:
return format_acl_v2(kwargs.get('acl_dict'))
raise ValueError("Invalid ACL version: %r" % version)
def parse_acl_v1(acl_string):
"""
Parses a standard Swift ACL string into a referrers list and groups list.
See :func:`clean_acl` for documentation of the standard Swift ACL format.
:param acl_string: The standard Swift ACL string to parse.
:returns: A tuple of (referrers, groups) where referrers is a list of
referrer designations (without the leading .r:) and groups is a
list of groups to allow access.
"""
referrers = []
groups = []
if acl_string:
for value in acl_string.split(','):
if value.startswith('.r:'):
referrers.append(value[len('.r:'):])
else:
groups.append(unquote(value))
return referrers, groups
def parse_acl_v2(data):
"""
Parses a version-2 Swift ACL string and returns a dict of ACL info.
:param data: string containing the ACL data in JSON format
:returns: A dict (possibly empty) containing ACL info, e.g.:
{"groups": [...], "referrers": [...]}
:returns: None if data is None, is not valid JSON or does not parse
as a dict
:returns: empty dictionary if data is an empty string
"""
if data is None:
return None
if data == '':
return {}
try:
result = json.loads(data)
return (result if type(result) is dict else None)
except ValueError:
return None
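# --- Illustrative sketch (editor's addition, not part of the original module) ---
# A version-2 (JSON) account ACL round trip; the user names are made up.
def _acl_v2_roundtrip_sketch():
    header_value = format_acl_v2({'admin': ['alice'],
                                  'read-only': ['bob']})
    # header_value == '{"admin":["alice"],"read-only":["bob"]}'
    return parse_acl_v2(header_value)  # back to the original dict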
def parse_acl(*args, **kwargs):
"""
Compatibility wrapper to help migrate ACL syntax from version 1 to 2.
Delegates to the appropriate version-specific parse_acl method, attempting
to determine the version from the types of args/kwargs.
:param args: positional args for the selected ACL syntax version
:param kwargs: keyword args for the selected ACL syntax version
(see :func:`parse_acl_v1` or :func:`parse_acl_v2`)
:returns: the return value of :func:`parse_acl_v1` or :func:`parse_acl_v2`
"""
version = kwargs.pop('version', None)
if version in (1, None):
return parse_acl_v1(*args)
elif version == 2:
return parse_acl_v2(*args, **kwargs)
else:
raise ValueError('Unknown ACL version: parse_acl(%r, %r)' %
(args, kwargs))
def referrer_allowed(referrer, referrer_acl):
"""
Returns True if the referrer should be allowed based on the referrer_acl
list (as returned by :func:`parse_acl`).
See :func:`clean_acl` for documentation of the standard Swift ACL format.
:param referrer: The value of the HTTP Referer header.
:param referrer_acl: The list of referrer designations as returned by
:func:`parse_acl`.
:returns: True if the referrer should be allowed; False if not.
"""
allow = False
if referrer_acl:
rhost = urlparse(referrer or '').hostname or 'unknown'
for mhost in referrer_acl:
if mhost.startswith('-'):
mhost = mhost[1:]
if mhost == rhost or (mhost.startswith('.') and
rhost.endswith(mhost)):
allow = False
elif mhost == '*' or mhost == rhost or \
(mhost.startswith('.') and rhost.endswith(mhost)):
allow = True
return allow
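# --- Illustrative sketch (editor's addition, not part of the original module) ---
# Parsing a version-1 read ACL and applying its referrer rules; the hosts and
# account name are made up. Later referrer entries override earlier ones, so the
# explicit deny for thief.example.com wins over the .example.com allow.
def _referrer_acl_sketch():
    referrers, groups = parse_acl_v1(
        '.r:.example.com,.r:-thief.example.com,bobs_account')
    return (referrer_allowed('http://www.example.com/page', referrers),    # True
            referrer_allowed('http://thief.example.com/page', referrers),  # False
            groups)                                                        # ['bobs_account']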
def acls_from_account_info(info):
"""
Extract the account ACLs from the given account_info, and return the ACLs.
:param info: a dict of the form returned by get_account_info
:returns: None (no ACL system metadata is set), or a dict of the form::
{'admin': [...], 'read-write': [...], 'read-only': [...]}
:raises ValueError: on a syntactically invalid header
"""
acl = parse_acl(
version=2, data=info.get('sysmeta', {}).get('core-access-control'))
if acl is None:
return None
admin_members = acl.get('admin', [])
readwrite_members = acl.get('read-write', [])
readonly_members = acl.get('read-only', [])
if not any((admin_members, readwrite_members, readonly_members)):
return None
acls = {
'admin': admin_members,
'read-write': readwrite_members,
'read-only': readonly_members,
}
if six.PY2:
for k in ('admin', 'read-write', 'read-only'):
acls[k] = [v.encode('utf8') for v in acls[k]]
return acls
| swift-master | swift/common/middleware/acl.py |
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from swift.common.memcached import load_memcache
from swift.common.utils import get_logger
class MemcacheMiddleware(object):
"""
Caching middleware that manages caching in swift.
"""
def __init__(self, app, conf):
self.app = app
self.logger = get_logger(conf, log_route='memcache')
self.memcache = load_memcache(conf, self.logger)
def __call__(self, env, start_response):
env['swift.cache'] = self.memcache
return self.app(env, start_response)
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
def cache_filter(app):
return MemcacheMiddleware(app, conf)
return cache_filter
| swift-master | swift/common/middleware/memcache.py |
# Copyright (c) 2011-2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Test authentication and authorization system.
Add to your pipeline in proxy-server.conf, such as::
[pipeline:main]
pipeline = catch_errors cache tempauth proxy-server
Set account auto creation to true in proxy-server.conf::
[app:proxy-server]
account_autocreate = true
And add a tempauth filter section, such as::
[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
# To allow accounts/users with underscores you can base64 encode them.
# Here is the account "under_score" and username "a_b" (note the lack
# of padding equal signs):
user64_dW5kZXJfc2NvcmU_YV9i = testing4
See the proxy-server.conf-sample for more information.
Account/User List
^^^^^^^^^^^^^^^^^
All accounts/users are listed in the filter section. The format is::
user_<account>_<user> = <key> [group] [group] [...] [storage_url]
If you want to be able to include underscores in the ``<account>`` or
``<user>`` portions, you can base64 encode them (with *no* equal signs)
in a line like this::
user64_<account_b64>_<user_b64> = <key> [group] [...] [storage_url]
There are three special groups:
* ``.reseller_admin`` -- can do anything to any account for this auth
* ``.reseller_reader`` -- can GET/HEAD anything in any account for this auth
* ``.admin`` -- can do anything within the account
If none of these groups are specified, the user can only access
containers that have been explicitly allowed for them by a ``.admin`` or
``.reseller_admin``.
The trailing optional ``storage_url`` allows you to specify an alternate
URL to hand back to the user upon authentication. If not specified, this
defaults to::
$HOST/v1/<reseller_prefix>_<account>
Where ``$HOST`` will do its best to resolve to what the requester would
need to use to reach this host, ``<reseller_prefix>`` is from this section,
and ``<account>`` is from the ``user_<account>_<user>`` name. Note that
``$HOST`` cannot possibly handle when you have a load balancer in front of
it that does https while TempAuth itself runs with http; in such a case,
you'll have to specify the ``storage_url_scheme`` configuration value as
an override.
Multiple Reseller Prefix Items
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The reseller prefix specifies which parts of the account namespace this
middleware is responsible for managing authentication and authorization.
By default, the prefix is ``AUTH`` so accounts and tokens are prefixed
by ``AUTH_``. When a request's token and/or path start with ``AUTH_``, this
middleware knows it is responsible.
We allow the reseller prefix to be a list. In tempauth, the first item
in the list is used as the prefix for tokens and user groups. The
other prefixes provide alternate accounts that users can access. For
example if the reseller prefix list is ``AUTH, OTHER``, a user with
admin access to ``AUTH_account`` also has admin access to
``OTHER_account``.
Required Group
^^^^^^^^^^^^^^
The group ``.admin`` is normally needed to access an account (ACLs provide
an additional way to access an account). You can specify the
``require_group`` parameter. This means that you also need the named group
to access an account. If you have several reseller prefix items, prefix
the ``require_group`` parameter with the appropriate prefix.
X-Service-Token
^^^^^^^^^^^^^^^
If an ``X-Service-Token`` is presented in the request headers, the groups
derived from the token are appended to the roles derived from
``X-Auth-Token``. If ``X-Auth-Token`` is missing or invalid,
``X-Service-Token`` is not processed.
The ``X-Service-Token`` is useful when combined with multiple reseller
prefix items. In the following configuration, accounts prefixed
``SERVICE_`` are only accessible if ``X-Auth-Token`` is from the end-user
and ``X-Service-Token`` is from the ``glance`` user::
[filter:tempauth]
use = egg:swift#tempauth
reseller_prefix = AUTH, SERVICE
SERVICE_require_group = .service
user_admin_admin = admin .admin .reseller_admin
user_joeacct_joe = joepw .admin
user_maryacct_mary = marypw .admin
user_glance_glance = glancepw .service
The name ``.service`` is an example. Unlike ``.admin``, ``.reseller_admin``,
``.reseller_reader`` it is not a reserved name.
Please note that ACLs can be set on service accounts and are matched
against the identity validated by ``X-Auth-Token``. As such ACLs can grant
access to a service account's container without needing to provide a
service token, just like any other cross-reseller request using ACLs.
Account ACLs
^^^^^^^^^^^^
If a swift_owner issues a POST or PUT to the account with the
``X-Account-Access-Control`` header set in the request, then this may
allow certain types of access for additional users.
* Read-Only: Users with read-only access can list containers in the
account, list objects in any container, retrieve objects, and view
unprivileged account/container/object metadata.
* Read-Write: Users with read-write access can (in addition to the
read-only privileges) create objects, overwrite existing objects,
create new containers, and set unprivileged container/object
metadata.
* Admin: Users with admin access are swift_owners and can perform
any action, including viewing/setting privileged metadata (e.g.
changing account ACLs).
To generate headers for setting an account ACL::
from swift.common.middleware.acl import format_acl
acl_data = { 'admin': ['alice'], 'read-write': ['bob', 'carol'] }
header_value = format_acl(version=2, acl_dict=acl_data)
To generate a curl command line from the above::
token=...
storage_url=...
python -c '
from swift.common.middleware.acl import format_acl
acl_data = { 'admin': ['alice'], 'read-write': ['bob', 'carol'] }
headers = {'X-Account-Access-Control':
format_acl(version=2, acl_dict=acl_data)}
header_str = ' '.join(["-H '%s: %s'" % (k, v)
for k, v in headers.items()])
print('curl -D- -X POST -H "x-auth-token: $token" %s '
'$storage_url' % header_str)
'
"""
from __future__ import print_function
import json
from time import time
from traceback import format_exc
from uuid import uuid4
import base64
from eventlet import Timeout
import six
from swift.common.memcached import MemcacheConnectionError
from swift.common.swob import Response, Request, wsgi_to_str
from swift.common.swob import HTTPBadRequest, HTTPForbidden, HTTPNotFound, \
HTTPUnauthorized, HTTPMethodNotAllowed, HTTPServiceUnavailable
from swift.common.request_helpers import get_sys_meta_prefix
from swift.common.middleware.acl import (
clean_acl, parse_acl, referrer_allowed, acls_from_account_info)
from swift.common.utils import cache_from_env, get_logger, \
split_path, config_true_value
from swift.common.registry import register_swift_info
from swift.common.utils import config_read_reseller_options, quote
from swift.proxy.controllers.base import get_account_info
DEFAULT_TOKEN_LIFE = 86400
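# --- Illustrative sketch (editor's addition, not part of the original module) ---
# A minimal token-granting round trip against the TempAuth class defined just
# below (the names only need to resolve when the sketch is called, so defining it
# above the class is harmless). ``_SketchCache`` is a hypothetical in-memory
# stand-in for the memcache client the cache middleware would normally place in
# the WSGI environment; the account, user and key are made up.
class _SketchCache(object):
    def __init__(self):
        self.store = {}

    def get(self, key):
        return self.store.get(key)

    def set(self, key, value, time=0, raise_on_error=False):
        self.store[key] = value


def _tempauth_token_sketch():
    def backend(env, start_response):
        start_response('204 No Content', [])
        return [b'']

    auth = TempAuth(backend, {'user_test_tester': 'testing .admin'})
    req = Request.blank('/auth/v1.0',
                        headers={'X-Auth-User': 'test:tester',
                                 'X-Auth-Key': 'testing'})
    req.environ['swift.cache'] = _SketchCache()
    resp = req.get_response(auth)
    # expected: 200 with X-Auth-Token and X-Storage-Url headers set
    return resp.status_int, resp.headers.get('X-Auth-Token')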
class TempAuth(object):
"""
:param app: The next WSGI app in the pipeline
:param conf: The dict of configuration values from the Paste config file
"""
def __init__(self, app, conf):
self.app = app
self.conf = conf
self.reseller_prefixes, self.account_rules = \
config_read_reseller_options(conf, dict(require_group=''))
self.reseller_prefix = self.reseller_prefixes[0]
statsd_tail_prefix = 'tempauth.%s' % (
self.reseller_prefix if self.reseller_prefix else 'NONE',)
self.logger = get_logger(conf, log_route='tempauth',
statsd_tail_prefix=statsd_tail_prefix)
self.log_headers = config_true_value(conf.get('log_headers', 'f'))
self.auth_prefix = conf.get('auth_prefix', '/auth/')
if not self.auth_prefix or not self.auth_prefix.strip('/'):
self.logger.warning('Rewriting invalid auth prefix "%s" to '
'"/auth/" (Non-empty auth prefix path '
'is required)' % self.auth_prefix)
self.auth_prefix = '/auth/'
if not self.auth_prefix.startswith('/'):
self.auth_prefix = '/' + self.auth_prefix
if not self.auth_prefix.endswith('/'):
self.auth_prefix += '/'
self.token_life = int(conf.get('token_life', DEFAULT_TOKEN_LIFE))
self.allow_overrides = config_true_value(
conf.get('allow_overrides', 't'))
self.storage_url_scheme = conf.get('storage_url_scheme', 'default')
self.users = {}
for conf_key in conf:
if conf_key.startswith(('user_', 'user64_')):
try:
account, username = conf_key.split('_', 1)[1].split('_')
except ValueError:
raise ValueError("key %s was provided in an "
"invalid format" % conf_key)
if conf_key.startswith('user64_'):
# Because trailing equal signs would screw up config file
# parsing, we auto-pad with '=' chars.
account += '=' * (len(account) % 4)
account = base64.b64decode(account)
username += '=' * (len(username) % 4)
username = base64.b64decode(username)
if not six.PY2:
account = account.decode('utf8')
username = username.decode('utf8')
values = conf[conf_key].split()
if not values:
raise ValueError('%s has no key set' % conf_key)
key = values.pop(0)
if values and ('://' in values[-1] or '$HOST' in values[-1]):
url = values.pop()
else:
url = '$HOST/v1/%s%s' % (
self.reseller_prefix, quote(account))
self.users[account + ':' + username] = {
'key': key, 'url': url, 'groups': values}
def __call__(self, env, start_response):
"""
Accepts a standard WSGI application call, authenticating the request
and installing callback hooks for authorization and ACL header
validation. For an authenticated request, REMOTE_USER will be set to a
comma separated list of the user's groups.
With a non-empty reseller prefix, acts as the definitive auth service
for just tokens and accounts that begin with that prefix, but will deny
requests outside this prefix if no other auth middleware overrides it.
With an empty reseller prefix, acts as the definitive auth service only
for tokens that validate to a non-empty set of groups. For all other
requests, acts as the fallback auth service when no other auth
middleware overrides it.
Alternatively, if the request matches the self.auth_prefix, the request
will be routed through the internal auth request handler (self.handle).
This is to handle granting tokens, etc.
"""
if self.allow_overrides and env.get('swift.authorize_override', False):
return self.app(env, start_response)
if env.get('PATH_INFO', '').startswith(self.auth_prefix):
return self.handle(env, start_response)
s3 = env.get('s3api.auth_details') or env.get('swift3.auth_details')
token = env.get('HTTP_X_AUTH_TOKEN', env.get('HTTP_X_STORAGE_TOKEN'))
service_token = env.get('HTTP_X_SERVICE_TOKEN')
if s3 or (token and token.startswith(self.reseller_prefix)):
# Note: Empty reseller_prefix will match all tokens.
groups = self.get_groups(env, token)
if service_token:
service_groups = self.get_groups(env, service_token)
if groups and service_groups:
groups += ',' + service_groups
if groups:
group_list = groups.split(',', 2)
if len(group_list) > 1:
user = group_list[1]
else:
user = group_list[0]
trans_id = env.get('swift.trans_id')
self.logger.debug('User: %s uses token %s (trans_id %s)' %
(user, 's3' if s3 else token, trans_id))
env['REMOTE_USER'] = groups
env['swift.authorize'] = self.authorize
env['swift.clean_acl'] = clean_acl
if '.reseller_admin' in groups:
env['reseller_request'] = True
else:
# Unauthorized token
if self.reseller_prefix and not s3:
# Because I know I'm the definitive auth for this token, I
# can deny it outright.
self.logger.increment('unauthorized')
try:
vrs, realm, rest = split_path(env['PATH_INFO'],
2, 3, True)
except ValueError:
realm = 'unknown'
return HTTPUnauthorized(headers={
'Www-Authenticate': 'Swift realm="%s"' % realm})(
env, start_response)
# Because I'm not certain if I'm the definitive auth for empty
# reseller_prefixed tokens, I won't overwrite swift.authorize.
elif 'swift.authorize' not in env:
env['swift.authorize'] = self.denied_response
else:
if self._is_definitive_auth(env.get('PATH_INFO', '')):
# Handle anonymous access to accounts I'm the definitive
# auth for.
env['swift.authorize'] = self.authorize
env['swift.clean_acl'] = clean_acl
elif self.reseller_prefix == '':
# Because I'm not certain if I'm the definitive auth, I won't
# overwrite swift.authorize.
if 'swift.authorize' not in env:
env['swift.authorize'] = self.authorize
env['swift.clean_acl'] = clean_acl
else:
# Not my token, not my account, I can't authorize this request,
# deny all is a good idea if not already set...
if 'swift.authorize' not in env:
env['swift.authorize'] = self.denied_response
return self.app(env, start_response)
def _is_definitive_auth(self, path):
"""
Determine if we are the definitive auth
Determines if we are the definitive auth for a given path.
If the account name is prefixed with something matching one
of the reseller_prefix items, then we are the auth (return True)
Non-matching: we are not the auth.
However, one of the reseller_prefix items can be blank. If
so, we cannot always be definitive, so return False.
:param path: A path (e.g., /v1/AUTH_joesaccount/c/o)
:return: True if we are the definitive auth
"""
try:
version, account, rest = split_path(path, 1, 3, True)
except ValueError:
return False
if account:
return bool(self._get_account_prefix(account))
return False
def _non_empty_reseller_prefixes(self):
return iter([pre for pre in self.reseller_prefixes if pre != ''])
def _get_account_prefix(self, account):
"""
Get the prefix of an account
Determines which reseller prefix matches the account and returns
that prefix. If account does not start with one of the known
reseller prefixes, returns None.
:param account: Account name (e.g., AUTH_joesaccount) or None
:return: The prefix string (examples: 'AUTH_', 'SERVICE_', '')
If we can't match the prefix of the account, return None
"""
if account is None:
return None
# Empty prefix matches everything, so try to match others first
for prefix in self._non_empty_reseller_prefixes():
if account.startswith(prefix):
return prefix
if '' in self.reseller_prefixes:
return ''
return None
def _dot_account(self, account):
"""
Detect if account starts with dot character after the prefix
:param account: account in path (e.g., AUTH_joesaccount)
:return:True if name starts with dot character
"""
prefix = self._get_account_prefix(account)
return prefix is not None and account[len(prefix)] == '.'
def _get_user_groups(self, account, account_user, account_id):
"""
:param account: example: test
:param account_user: example: test:tester
:param account_id: example: AUTH_test
:return: a comma separated string of group names. The group names are
as follows: account,account_user,groups...
If .admin is in the groups, this is replaced by all the
possible account ids. For example, for user joe, account acct
and resellers AUTH_, OTHER_, the returned string is as
follows: acct,acct:joe,AUTH_acct,OTHER_acct
"""
groups = [account, account_user]
groups.extend(self.users[account_user]['groups'])
if '.admin' in groups:
groups.remove('.admin')
for prefix in self._non_empty_reseller_prefixes():
groups.append('%s%s' % (prefix, account))
if account_id not in groups:
groups.append(account_id)
groups = ','.join(groups)
return groups
def get_groups(self, env, token):
"""
Get groups for the given token.
:param env: The current WSGI environment dictionary.
:param token: Token to validate and return a group string for.
:returns: None if the token is invalid or a string containing a comma
separated list of groups the authenticated user is a member
of. The first group in the list is also considered a unique
identifier for that user.
"""
groups = None
memcache_client = cache_from_env(env)
if not memcache_client:
raise Exception('Memcache required')
memcache_token_key = '%s/token/%s' % (self.reseller_prefix, token)
cached_auth_data = memcache_client.get(memcache_token_key)
if cached_auth_data:
expires, groups = cached_auth_data
if expires < time():
groups = None
elif six.PY2:
groups = groups.encode('utf8')
s3_auth_details = env.get('s3api.auth_details') or\
env.get('swift3.auth_details')
if s3_auth_details:
if 'check_signature' not in s3_auth_details:
self.logger.warning(
'Swift3 did not provide a check_signature function; '
'upgrade Swift3 if you want to use it with tempauth')
return None
account_user = s3_auth_details['access_key']
if account_user not in self.users:
return None
user = self.users[account_user]
account = account_user.split(':', 1)[0]
account_id = user['url'].rsplit('/', 1)[-1]
if not s3_auth_details['check_signature'](user['key']):
return None
env['PATH_INFO'] = env['PATH_INFO'].replace(
account_user, account_id, 1)
groups = self._get_user_groups(account, account_user, account_id)
return groups
def account_acls(self, req):
"""
Return a dict of ACL data from the account server via get_account_info.
Auth systems may define their own format, serialization, structure,
and capabilities implemented in the ACL headers and persisted in the
sysmeta data. However, auth systems are strongly encouraged to be
interoperable with Tempauth.
Account ACLs are set and retrieved via the header
X-Account-Access-Control
For header format and syntax, see:
* :func:`swift.common.middleware.acl.parse_acl()`
* :func:`swift.common.middleware.acl.format_acl()`
"""
info = get_account_info(req.environ, self.app, swift_source='TA')
try:
acls = acls_from_account_info(info)
except ValueError as e1:
self.logger.warning("Invalid ACL stored in metadata: %r" % e1)
return None
except NotImplementedError as e2:
self.logger.warning(
"ACL version exceeds middleware version: %r"
% e2)
return None
return acls
def extract_acl_and_report_errors(self, req):
"""
Return a user-readable string indicating the errors in the input ACL,
or None if there are no errors.
"""
acl_header = 'x-account-access-control'
acl_data = wsgi_to_str(req.headers.get(acl_header))
result = parse_acl(version=2, data=acl_data)
if result is None:
return 'Syntax error in input (%r)' % acl_data
tempauth_acl_keys = 'admin read-write read-only'.split()
for key in result:
# While it is possible to construct auth systems that collaborate
# on ACLs, TempAuth is not such an auth system. At this point,
# it thinks it is authoritative.
if key not in tempauth_acl_keys:
return "Key %s not recognized" % json.dumps(key)
for key in tempauth_acl_keys:
if key not in result:
continue
if not isinstance(result[key], list):
return "Value for key %s must be a list" % json.dumps(key)
for grantee in result[key]:
if not isinstance(grantee, six.string_types):
return "Elements of %s list must be strings" % json.dumps(
key)
# Everything looks fine, no errors found
internal_hdr = get_sys_meta_prefix('account') + 'core-access-control'
req.headers[internal_hdr] = req.headers.pop(acl_header)
return None
def authorize(self, req):
"""
Returns None if the request is authorized to continue or a standard
WSGI response callable if not.
"""
try:
_junk, account, container, obj = req.split_path(1, 4, True)
except ValueError:
self.logger.increment('errors')
return HTTPNotFound(request=req)
if self._get_account_prefix(account) is None:
self.logger.debug("Account name: %s doesn't start with "
"reseller_prefix(s): %s."
% (account, ','.join(self.reseller_prefixes)))
return self.denied_response(req)
# At this point, TempAuth is convinced that it is authoritative.
# If you are sending an ACL header, it must be syntactically valid
# according to TempAuth's rules for ACL syntax.
acl_data = req.headers.get('x-account-access-control')
if acl_data is not None:
error = self.extract_acl_and_report_errors(req)
if error:
msg = 'X-Account-Access-Control invalid: %s\n\nInput: %s\n' % (
error, acl_data)
headers = [('Content-Type', 'text/plain; charset=UTF-8')]
return HTTPBadRequest(request=req, headers=headers, body=msg)
user_groups = (req.remote_user or '').split(',')
account_user = user_groups[1] if len(user_groups) > 1 else None
if '.reseller_admin' in user_groups and \
account not in self.reseller_prefixes and \
not self._dot_account(account):
req.environ['swift_owner'] = True
self.logger.debug("User %s has reseller admin authorizing."
% account_user)
return None
if '.reseller_reader' in user_groups and \
account not in self.reseller_prefixes and \
not self._dot_account(account) and \
req.method in ('GET', 'HEAD'):
self.logger.debug("User %s has reseller reader authorizing."
% account_user)
return None
if wsgi_to_str(account) in user_groups and \
(req.method not in ('DELETE', 'PUT') or container):
# The user is admin for the account and is not trying to do an
# account DELETE or PUT
account_prefix = self._get_account_prefix(account)
require_group = self.account_rules.get(account_prefix).get(
'require_group')
if require_group and require_group in user_groups:
req.environ['swift_owner'] = True
self.logger.debug("User %s has admin and %s group."
" Authorizing." % (account_user,
require_group))
return None
elif not require_group:
req.environ['swift_owner'] = True
self.logger.debug("User %s has admin authorizing."
% account_user)
return None
if (req.environ.get('swift_sync_key')
and (req.environ['swift_sync_key'] ==
req.headers.get('x-container-sync-key', None))
and 'x-timestamp' in req.headers):
self.logger.debug("Allow request with container sync-key: %s."
% req.environ['swift_sync_key'])
return None
if req.method == 'OPTIONS':
# allow OPTIONS requests to proceed as normal
self.logger.debug("Allow OPTIONS request.")
return None
referrers, groups = parse_acl(getattr(req, 'acl', None))
if referrer_allowed(req.referer, referrers):
if obj or '.rlistings' in groups:
self.logger.debug("Allow authorizing %s via referer ACL."
% req.referer)
return None
for user_group in user_groups:
if user_group in groups:
self.logger.debug("User %s allowed in ACL: %s authorizing."
% (account_user, user_group))
return None
# Check for access via X-Account-Access-Control
acct_acls = self.account_acls(req)
if acct_acls:
# At least one account ACL is set in this account's sysmeta data,
# so we should see whether this user is authorized by the ACLs.
user_group_set = set(user_groups)
if user_group_set.intersection(acct_acls['admin']):
req.environ['swift_owner'] = True
self.logger.debug('User %s allowed by X-Account-Access-Control'
' (admin)' % account_user)
return None
if (user_group_set.intersection(acct_acls['read-write']) and
(container or req.method in ('GET', 'HEAD'))):
# The RW ACL allows all operations to containers/objects, but
# only GET/HEAD to accounts (and OPTIONS, above)
self.logger.debug('User %s allowed by X-Account-Access-Control'
' (read-write)' % account_user)
return None
if (user_group_set.intersection(acct_acls['read-only']) and
req.method in ('GET', 'HEAD')):
self.logger.debug('User %s allowed by X-Account-Access-Control'
' (read-only)' % account_user)
return None
return self.denied_response(req)
def denied_response(self, req):
"""
Returns a standard WSGI response callable with the status of 403 or 401
depending on whether the REMOTE_USER is set or not.
"""
if req.remote_user:
self.logger.increment('forbidden')
return HTTPForbidden(request=req)
else:
self.logger.increment('unauthorized')
return HTTPUnauthorized(request=req)
def handle(self, env, start_response):
"""
WSGI entry point for auth requests (ones that match the
self.auth_prefix).
Wraps env in swob.Request object and passes it down.
:param env: WSGI environment dictionary
:param start_response: WSGI callable
"""
try:
req = Request(env)
if self.auth_prefix:
req.path_info_pop()
if 'x-storage-token' in req.headers and \
'x-auth-token' not in req.headers:
req.headers['x-auth-token'] = req.headers['x-storage-token']
return self.handle_request(req)(env, start_response)
except (Exception, Timeout):
print("EXCEPTION IN handle: %s: %s" % (format_exc(), env))
self.logger.increment('errors')
start_response('500 Server Error',
[('Content-Type', 'text/plain')])
return [b'Internal server error.\n']
def handle_request(self, req):
"""
Entry point for auth requests (ones that match the self.auth_prefix).
Should return a WSGI-style callable (such as swob.Response).
:param req: swob.Request object
"""
req.start_time = time()
handler = None
if req.method != 'GET':
req.response = HTTPMethodNotAllowed(request=req)
return req.response
try:
version, account, user, _junk = split_path(req.path_info,
1, 4, True)
except ValueError:
self.logger.increment('errors')
return HTTPNotFound(request=req)
if version in ('v1', 'v1.0', 'auth'):
if req.method == 'GET':
handler = self.handle_get_token
if not handler:
self.logger.increment('errors')
req.response = HTTPBadRequest(request=req)
else:
req.response = handler(req)
return req.response
def _create_new_token(self, memcache_client,
account, account_user, account_id):
# Generate new token
token = '%stk%s' % (self.reseller_prefix, uuid4().hex)
expires = time() + self.token_life
groups = self._get_user_groups(account, account_user, account_id)
# Save token
memcache_token_key = '%s/token/%s' % (self.reseller_prefix, token)
memcache_client.set(memcache_token_key, (expires, groups),
time=float(expires - time()),
raise_on_error=True)
# Record the token with the user info for future use.
memcache_user_key = \
'%s/user/%s' % (self.reseller_prefix, account_user)
memcache_client.set(memcache_user_key, token,
time=float(expires - time()),
raise_on_error=True)
return token, expires
def handle_get_token(self, req):
"""
Handles the various `request for token and service end point(s)` calls.
There are several formats to support the various auth servers used in the
past. Examples::
GET <auth-prefix>/v1/<act>/auth
X-Auth-User: <act>:<usr> or X-Storage-User: <usr>
X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/auth
X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
X-Auth-Key: <key> or X-Storage-Pass: <key>
GET <auth-prefix>/v1.0
X-Auth-User: <act>:<usr> or X-Storage-User: <act>:<usr>
X-Auth-Key: <key> or X-Storage-Pass: <key>
On successful authentication, the response will have X-Auth-Token and
X-Storage-Token set to the token to use with Swift and X-Storage-URL
set to the URL to the default Swift cluster to use.
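As a rough client-side sketch (the host and credentials below are
placeholders, not part of this module), a token could be requested with
only the standard library::
    import urllib.request
    req = urllib.request.Request(
        'http://127.0.0.1:8080/auth/v1.0',
        headers={'X-Auth-User': 'test:tester', 'X-Auth-Key': 'testing'})
    resp = urllib.request.urlopen(req)
    token = resp.headers['X-Auth-Token']
    storage_url = resp.headers['X-Storage-URL']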
:param req: The swob.Request to process.
:returns: swob.Response, 2xx on success with data set as explained
above.
"""
# Validate the request info
try:
pathsegs = split_path(req.path_info, 1, 3, True)
except ValueError:
self.logger.increment('errors')
return HTTPNotFound(request=req)
if pathsegs[0] == 'v1' and pathsegs[2] == 'auth':
account = pathsegs[1]
user = req.headers.get('x-storage-user')
if not user:
user = req.headers.get('x-auth-user')
if not user or ':' not in user:
self.logger.increment('token_denied')
auth = 'Swift realm="%s"' % account
return HTTPUnauthorized(request=req,
headers={'Www-Authenticate': auth})
account2, user = user.split(':', 1)
if wsgi_to_str(account) != account2:
self.logger.increment('token_denied')
auth = 'Swift realm="%s"' % account
return HTTPUnauthorized(request=req,
headers={'Www-Authenticate': auth})
key = req.headers.get('x-storage-pass')
if not key:
key = req.headers.get('x-auth-key')
elif pathsegs[0] in ('auth', 'v1.0'):
user = req.headers.get('x-auth-user')
if not user:
user = req.headers.get('x-storage-user')
if not user or ':' not in user:
self.logger.increment('token_denied')
auth = 'Swift realm="unknown"'
return HTTPUnauthorized(request=req,
headers={'Www-Authenticate': auth})
account, user = user.split(':', 1)
key = req.headers.get('x-auth-key')
if not key:
key = req.headers.get('x-storage-pass')
else:
return HTTPBadRequest(request=req)
unauthed_headers = {
'Www-Authenticate': 'Swift realm="%s"' % (account or 'unknown'),
}
if not all((account, user, key)):
self.logger.increment('token_denied')
return HTTPUnauthorized(request=req, headers=unauthed_headers)
# Authenticate user
account = wsgi_to_str(account)
user = wsgi_to_str(user)
key = wsgi_to_str(key)
account_user = account + ':' + user
if account_user not in self.users:
self.logger.increment('token_denied')
return HTTPUnauthorized(request=req, headers=unauthed_headers)
if self.users[account_user]['key'] != key:
self.logger.increment('token_denied')
return HTTPUnauthorized(request=req, headers=unauthed_headers)
account_id = self.users[account_user]['url'].rsplit('/', 1)[-1]
# Get memcache client
memcache_client = cache_from_env(req.environ)
if not memcache_client:
raise Exception('Memcache required')
# See if a token already exists and hasn't expired
token = None
memcache_user_key = '%s/user/%s' % (self.reseller_prefix, account_user)
candidate_token = memcache_client.get(memcache_user_key)
if candidate_token:
memcache_token_key = \
'%s/token/%s' % (self.reseller_prefix, candidate_token)
cached_auth_data = memcache_client.get(memcache_token_key)
if cached_auth_data:
expires, old_groups = cached_auth_data
old_groups = [group.encode('utf8') if six.PY2 else group
for group in old_groups.split(',')]
new_groups = self._get_user_groups(account, account_user,
account_id)
if expires > time() and \
set(old_groups) == set(new_groups.split(',')):
token = candidate_token
# Create a new token if one didn't exist
if not token:
try:
token, expires = self._create_new_token(
memcache_client, account, account_user, account_id)
except MemcacheConnectionError:
return HTTPServiceUnavailable(request=req)
resp = Response(request=req, headers={
'x-auth-token': token, 'x-storage-token': token,
'x-auth-token-expires': str(int(expires - time()))})
url = self.users[account_user]['url'].replace('$HOST', resp.host_url)
if self.storage_url_scheme != 'default':
url = self.storage_url_scheme + ':' + url.split(':', 1)[1]
resp.headers['x-storage-url'] = url
return resp
def filter_factory(global_conf, **local_conf):
"""Returns a WSGI filter app for use with paste.deploy."""
conf = global_conf.copy()
conf.update(local_conf)
register_swift_info('tempauth', account_acls=True)
def auth_filter(app):
return TempAuth(app, conf)
return auth_filter
| swift-master | swift/common/middleware/tempauth.py |
# Copyright (c) 2011-2014 Greg Holt
# Copyright (c) 2012-2013 John Dickinson
# Copyright (c) 2012 Felipe Reyes
# Copyright (c) 2012 Peter Portante
# Copyright (c) 2012 Victor Rodionov
# Copyright (c) 2013-2014 Samuel Merritt
# Copyright (c) 2013 Chuck Thier
# Copyright (c) 2013 David Goetz
# Copyright (c) 2013 Dirk Mueller
# Copyright (c) 2013 Donagh McCabe
# Copyright (c) 2013 Fabien Boucher
# Copyright (c) 2013 Greg Lange
# Copyright (c) 2013 Kun Huang
# Copyright (c) 2013 Richard Hawkins
# Copyright (c) 2013 Tong Li
# Copyright (c) 2013 ZhiQiang Fan
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""
TempURL Middleware
Allows the creation of URLs to provide temporary access to objects.
For example, a website may wish to provide a link to download a large
object in Swift, but the Swift account has no public access. The
website can generate a URL that will provide GET access for a limited
time to the resource. When the web browser user clicks on the link,
the browser will download the object directly from Swift, obviating
the need for the website to act as a proxy for the request.
If the user were to share the link with all their friends, or
accidentally post it on a forum, the direct access would still be
limited to the expiration time set when the website created the link.
Beyond that, the middleware provides the ability to create URLs containing
signatures that are valid for all objects sharing a common prefix. These
prefix-based URLs are useful for sharing a set of objects.
Restrictions can also be placed on the IP addresses from which the resource
may be accessed. This can be useful for locking down where the URLs can be
used.
------------
Client Usage
------------
To create temporary URLs, first an ``X-Account-Meta-Temp-URL-Key``
header must be set on the Swift account. Then, an HMAC (RFC 2104)
signature is generated using the HTTP method to allow (``GET``, ``PUT``,
``DELETE``, etc.), the Unix timestamp until which the access should be allowed,
the full path to the object, and the key set on the account.
The digest algorithm to be used may be configured by the operator. By default,
HMAC-SHA256 and HMAC-SHA512 are supported. Check the
``tempurl.allowed_digests`` entry in the cluster's capabilities response to
see which algorithms are supported by your deployment; see
:doc:`api/discoverability` for more information. On older clusters,
the ``tempurl`` key may be present while the ``allowed_digests`` subkey
is not; in this case, only HMAC-SHA1 is supported.
For example, here is code generating the signature for a ``GET`` for 60
seconds on ``/v1/AUTH_account/container/object``::
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
Be certain to use the full path, from the ``/v1/`` onward.
Let's say ``sig`` ends up equaling
``732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b`` and
``expires`` ends up ``1512508563``. Then, for example, the website could
provide a link to::
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563
For longer hashes, a hex encoding becomes unwieldy. Base64 encoding is also
supported, and indicated by prefixing the signature with ``"<digest name>:"``.
This is *required* for HMAC-SHA512 signatures. For example, comparable code
for generating a HMAC-SHA512 signature would be::
import base64
import hmac
from hashlib import sha512
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
key = b'mykey'
hmac_body = '%s\n%s\n%s' % (method, expires, path)
sig = 'sha512:' + base64.urlsafe_b64encode(hmac.new(
key, hmac_body.encode('ascii'), sha512).digest()).decode('ascii')
Supposing that ``sig`` ends up equaling
``sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvROQ4jtYH4nRAmm
5ErY2X11Yc1Yhy2OMCyN3yueeXg==`` and ``expires`` ends up
``1516741234``, then the website could provide a link to::
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=sha512:ZrSijn0GyDhsv1ltIj9hWUTrbAeE45NcKXyBaz7aPbSMvRO
Q4jtYH4nRAmm5ErY2X11Yc1Yhy2OMCyN3yueeXg==&
temp_url_expires=1516741234
You may also use ISO 8601 UTC timestamps with the format
``"%Y-%m-%dT%H:%M:%SZ"`` instead of UNIX timestamps in the URL
(but NOT in the code above for generating the signature!).
So, the above HMAC-SHA256 URL could also be formulated as::
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=2017-12-05T21:16:03Z
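For instance, a small sketch of converting the integer expiry used when
signing into the ISO 8601 form accepted in the URL (the signature itself is
still computed from the integer)::
    from time import gmtime, strftime
    expires = 1512508563
    url_expires = strftime('%Y-%m-%dT%H:%M:%SZ', gmtime(expires))
    # url_expires == '2017-12-05T21:16:03Z'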
If a prefix-based signature with the prefix ``pre`` is desired, set path to::
path = 'prefix:/v1/AUTH_account/container/pre'
The generated signature would be valid for all objects starting
with ``pre``. The middleware detects a prefix-based temporary URL by
a query parameter called ``temp_url_prefix``. So, if ``sig`` and ``expires``
ended up as above, the following URL would be valid::
https://swift-cluster.example.com/v1/AUTH_account/container/pre/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
Another valid URL::
https://swift-cluster.example.com/v1/AUTH_account/container/pre/
subfolder/another_object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&
temp_url_prefix=pre
If you wish to lock down the IP range from which the resource can be accessed
to the single IP ``1.2.3.4``::
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.4'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
The generated signature would only be valid from the IP ``1.2.3.4``. The
middleware detects an IP-based temporary URL by a query parameter called
``temp_url_ip_range``. So, if ``sig`` and ``expires`` ended up as above,
the following URL would be valid::
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=3f48476acaf5ec272acd8e99f7b5bad96c52ddba53ed27c60613711774a06f0c&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.4
Similarly, to lock down access to the IP range ``1.2.3.X``, that is, from
``1.2.3.0`` through ``1.2.3.255``::
import hmac
from hashlib import sha256
from time import time
method = 'GET'
expires = int(time() + 60)
path = '/v1/AUTH_account/container/object'
ip_range = '1.2.3.0/24'
key = b'mykey'
hmac_body = 'ip=%s\n%s\n%s\n%s' % (ip_range, method, expires, path)
sig = hmac.new(key, hmac_body.encode('ascii'), sha256).hexdigest()
Then the following URL would be valid::
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=6ff81256b8a3ba11d239da51a703b9c06a56ffddeb8caab74ca83af8f73c9c83&
temp_url_expires=1648082711&
temp_url_ip_range=1.2.3.0/24
Any alteration of the resource path or query arguments of a temporary URL
would result in ``401 Unauthorized``. Similarly, a ``PUT`` where ``GET`` was
the allowed method would be rejected with ``401 Unauthorized``.
However, ``HEAD`` is allowed if ``GET``, ``PUT``, or ``POST`` is allowed.
Using this in combination with browser form post translation
middleware could also allow direct-from-browser uploads to specific
locations in Swift.
TempURL supports both account and container level keys. Each allows up to two
keys to be set, allowing key rotation without invalidating all existing
temporary URLs. Account keys are specified by ``X-Account-Meta-Temp-URL-Key``
and ``X-Account-Meta-Temp-URL-Key-2``, while container keys are specified by
``X-Container-Meta-Temp-URL-Key`` and ``X-Container-Meta-Temp-URL-Key-2``.
Signatures are checked against account and container keys, if
present.
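As a rough sketch, the account-level key could be set with a simple
authenticated POST (the cluster URL and token below are placeholders)::
    import urllib.request
    req = urllib.request.Request(
        'https://swift-cluster.example.com/v1/AUTH_account', method='POST',
        headers={'X-Auth-Token': '<token>',
                 'X-Account-Meta-Temp-URL-Key': 'mykey'})
    urllib.request.urlopen(req)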
With ``GET`` TempURLs, a ``Content-Disposition`` header will be set on the
response so that browsers will interpret this as a file attachment to
be saved. The filename chosen is based on the object name, but you
can override this with a filename query parameter. Modifying the
above example::
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&filename=My+Test+File.pdf
If you do not want the object to be downloaded, you can cause
``Content-Disposition: inline`` to be set on the response by adding the
``inline`` parameter to the query string, like so::
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline
In some cases, the client might not be able to present the content of the
object, but you may still want the content to be saved locally with a specific
filename. You can cause ``Content-Disposition: inline; filename=...`` to be
set on the response by adding the ``inline&filename=...`` parameters to the
query string, like so::
https://swift-cluster.example.com/v1/AUTH_account/container/object?
temp_url_sig=732fcac368abb10c78a4cbe95c3fab7f311584532bf779abd5074e13cbe8b88b&
temp_url_expires=1512508563&inline&filename=My+Test+File.pdf
---------------------
Cluster Configuration
---------------------
This middleware understands the following configuration settings:
``incoming_remove_headers``
A whitespace-delimited list of the headers to remove from
incoming requests. Names may optionally end with ``*`` to
indicate a prefix match. ``incoming_allow_headers`` is a
list of exceptions to these removals.
Default: ``x-timestamp``
``incoming_allow_headers``
A whitespace-delimited list of the headers allowed as
exceptions to ``incoming_remove_headers``. Names may
optionally end with ``*`` to indicate a prefix match.
Default: None
``outgoing_remove_headers``
A whitespace-delimited list of the headers to remove from
outgoing responses. Names may optionally end with ``*`` to
indicate a prefix match. ``outgoing_allow_headers`` is a
list of exceptions to these removals.
Default: ``x-object-meta-*``
``outgoing_allow_headers``
A whitespace-delimited list of the headers allowed as
exceptions to ``outgoing_remove_headers``. Names may
optionally end with ``*`` to indicate a prefix match.
Default: ``x-object-meta-public-*``
``methods``
A whitespace delimited list of request methods that are
allowed to be used with a temporary URL.
Default: ``GET HEAD PUT POST DELETE``
``allowed_digests``
A whitespace delimited list of digest algorithms that are allowed
to be used when calculating the signature for a temporary URL.
Default: ``sha256 sha512``
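An illustrative (not authoritative) ``proxy-server.conf`` filter section
using these defaults, assuming the standard ``egg:swift#tempurl`` entry
point, could look like::
    [filter:tempurl]
    use = egg:swift#tempurl
    methods = GET HEAD PUT POST DELETE
    allowed_digests = sha256 sha512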
"""
__all__ = ['TempURL', 'filter_factory',
'DEFAULT_INCOMING_REMOVE_HEADERS',
'DEFAULT_INCOMING_ALLOW_HEADERS',
'DEFAULT_OUTGOING_REMOVE_HEADERS',
'DEFAULT_OUTGOING_ALLOW_HEADERS']
from calendar import timegm
import six
from os.path import basename
from time import time, strftime, strptime, gmtime
from ipaddress import ip_address, ip_network
from six.moves.urllib.parse import parse_qs
from six.moves.urllib.parse import urlencode
from swift.proxy.controllers.base import get_account_info, get_container_info
from swift.common.header_key_dict import HeaderKeyDict
from swift.common.digest import get_allowed_digests, \
extract_digest_and_algorithm, DEFAULT_ALLOWED_DIGESTS, get_hmac
from swift.common.swob import header_to_environ_key, HTTPUnauthorized, \
HTTPBadRequest, wsgi_to_str
from swift.common.utils import split_path, get_valid_utf8_str, \
streq_const_time, quote, get_logger
from swift.common.registry import register_swift_info, register_sensitive_param
DISALLOWED_INCOMING_HEADERS = 'x-object-manifest x-symlink-target'
#: Default headers to remove from incoming requests. Simply a whitespace
#: delimited list of header names and names can optionally end with '*' to
#: indicate a prefix match. DEFAULT_INCOMING_ALLOW_HEADERS is a list of
#: exceptions to these removals.
DEFAULT_INCOMING_REMOVE_HEADERS = 'x-timestamp'
#: Default headers as exceptions to DEFAULT_INCOMING_REMOVE_HEADERS. Simply a
#: whitespace delimited list of header names and names can optionally end with
#: '*' to indicate a prefix match.
DEFAULT_INCOMING_ALLOW_HEADERS = ''
#: Default headers to remove from outgoing responses. Simply a whitespace
#: delimited list of header names and names can optionally end with '*' to
#: indicate a prefix match. DEFAULT_OUTGOING_ALLOW_HEADERS is a list of
#: exceptions to these removals.
DEFAULT_OUTGOING_REMOVE_HEADERS = 'x-object-meta-*'
#: Default headers as exceptions to DEFAULT_OUTGOING_REMOVE_HEADERS. Simply a
#: whitespace delimited list of header names and names can optionally end with
#: '*' to indicate a prefix match.
DEFAULT_OUTGOING_ALLOW_HEADERS = 'x-object-meta-public-*'
CONTAINER_SCOPE = 'container'
ACCOUNT_SCOPE = 'account'
EXPIRES_ISO8601_FORMAT = '%Y-%m-%dT%H:%M:%SZ'
def get_tempurl_keys_from_metadata(meta):
"""
Extracts the tempurl keys from metadata.
:param meta: account metadata
:returns: list of keys found (possibly empty if no keys set)
Example:
meta = get_account_info(...)['meta']
keys = get_tempurl_keys_from_metadata(meta)
"""
return [(get_valid_utf8_str(value) if six.PY2 else value)
for key, value in meta.items()
if key.lower() in ('temp-url-key', 'temp-url-key-2')]
def disposition_format(disposition_type, filename):
# Content-Disposition in HTTP is defined in
# https://tools.ietf.org/html/rfc6266 and references
# https://tools.ietf.org/html/rfc5987#section-3.2
# to explain the filename*= encoding format. The summary
# is that it's the charset, then an optional (and empty) language
# then the filename. Looks funny, but it's right.
return '''%s; filename="%s"; filename*=UTF-8''%s''' % (
disposition_type, quote(filename, safe=' /'), quote(filename))
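# Illustrative only: with a hypothetical filename, the format built above
# yields roughly:
#   disposition_format('attachment', 'My Test File.pdf')
#   -> attachment; filename="My Test File.pdf"; filename*=UTF-8''My%20Test%20File.pdf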
def authorize_same_account(account_to_match):
def auth_callback_same_account(req):
try:
_ver, acc, _rest = req.split_path(2, 3, True)
except ValueError:
return HTTPUnauthorized(request=req)
if wsgi_to_str(acc) == account_to_match:
return None
else:
return HTTPUnauthorized(request=req)
return auth_callback_same_account
def authorize_same_container(account_to_match, container_to_match):
def auth_callback_same_container(req):
try:
_ver, acc, con, _rest = req.split_path(3, 4, True)
except ValueError:
return HTTPUnauthorized(request=req)
if wsgi_to_str(acc) == account_to_match and \
wsgi_to_str(con) == container_to_match:
return None
else:
return HTTPUnauthorized(request=req)
return auth_callback_same_container
class TempURL(object):
"""
WSGI Middleware to grant temporary URLs specific access to Swift
resources. See the overview for more information.
The proxy logs created for any subrequests made will have swift.source set
to "TU".
:param app: The next WSGI filter or app in the paste.deploy
chain.
:param conf: The configuration dict for the middleware.
"""
def __init__(self, app, conf, logger=None):
#: The next WSGI application/filter in the paste.deploy pipeline.
self.app = app
#: The filter configuration dict.
self.conf = conf
self.logger = logger or get_logger(conf, log_route='tempurl')
self.allowed_digests = conf.get(
'allowed_digests', DEFAULT_ALLOWED_DIGESTS.split())
self.disallowed_headers = set(
header_to_environ_key(h)
for h in DISALLOWED_INCOMING_HEADERS.split())
headers = [header_to_environ_key(h)
for h in conf.get('incoming_remove_headers',
DEFAULT_INCOMING_REMOVE_HEADERS.split())]
#: Headers to remove from incoming requests. Uppercase WSGI env style,
#: like `HTTP_X_PRIVATE`.
self.incoming_remove_headers = \
[h for h in headers if not h.endswith('*')]
#: Header with match prefixes to remove from incoming requests.
#: Uppercase WSGI env style, like `HTTP_X_SENSITIVE_*`.
self.incoming_remove_headers_startswith = \
[h[:-1] for h in headers if h.endswith('*')]
headers = [header_to_environ_key(h)
for h in conf.get('incoming_allow_headers',
DEFAULT_INCOMING_ALLOW_HEADERS.split())]
#: Headers to allow in incoming requests. Uppercase WSGI env style,
#: like `HTTP_X_MATCHES_REMOVE_PREFIX_BUT_OKAY`.
self.incoming_allow_headers = \
[h for h in headers if not h.endswith('*')]
#: Header with match prefixes to allow in incoming requests. Uppercase
#: WSGI env style, like `HTTP_X_MATCHES_REMOVE_PREFIX_BUT_OKAY_*`.
self.incoming_allow_headers_startswith = \
[h[:-1] for h in headers if h.endswith('*')]
headers = [h.title()
for h in conf.get('outgoing_remove_headers',
DEFAULT_OUTGOING_REMOVE_HEADERS.split())]
#: Headers to remove from outgoing responses. Lowercase, like
#: `x-account-meta-temp-url-key`.
self.outgoing_remove_headers = \
[h for h in headers if not h.endswith('*')]
#: Header with match prefixes to remove from outgoing responses.
#: Lowercase, like `x-account-meta-private-*`.
self.outgoing_remove_headers_startswith = \
[h[:-1] for h in headers if h.endswith('*')]
headers = [h.title()
for h in conf.get('outgoing_allow_headers',
DEFAULT_OUTGOING_ALLOW_HEADERS.split())]
#: Headers to allow in outgoing responses. Lowercase, like
#: `x-matches-remove-prefix-but-okay`.
self.outgoing_allow_headers = \
[h for h in headers if not h.endswith('*')]
#: Header with match prefixes to allow in outgoing responses.
#: Lowercase, like `x-matches-remove-prefix-but-okay-*`.
self.outgoing_allow_headers_startswith = \
[h[:-1] for h in headers if h.endswith('*')]
#: HTTP user agent to use for subrequests.
self.agent = '%(orig)s TempURL'
def __call__(self, env, start_response):
"""
Main hook into the WSGI paste.deploy filter/app pipeline.
:param env: The WSGI environment dict.
:param start_response: The WSGI start_response hook.
:returns: Response as per WSGI.
"""
if env['REQUEST_METHOD'] == 'OPTIONS':
return self.app(env, start_response)
info = self._get_temp_url_info(env)
temp_url_sig, temp_url_expires, temp_url_prefix, filename, \
inline_disposition, temp_url_ip_range = info
if temp_url_sig is None and temp_url_expires is None:
return self.app(env, start_response)
if not temp_url_sig or not temp_url_expires:
return self._invalid(env, start_response)
try:
hash_algorithm, temp_url_sig = extract_digest_and_algorithm(
temp_url_sig)
except ValueError:
return self._invalid(env, start_response)
if hash_algorithm not in self.allowed_digests:
return self._invalid(env, start_response)
account, container, obj = self._get_path_parts(env)
if not account:
return self._invalid(env, start_response)
if temp_url_ip_range:
client_address = env.get('REMOTE_ADDR')
if client_address is None:
return self._invalid(env, start_response)
try:
allowed_ip_ranges = ip_network(six.u(temp_url_ip_range))
if ip_address(six.u(client_address)) not in allowed_ip_ranges:
return self._invalid(env, start_response)
except ValueError:
return self._invalid(env, start_response)
keys = self._get_keys(env)
if not keys:
return self._invalid(env, start_response)
if temp_url_prefix is None:
path = '/v1/%s/%s/%s' % (account, container, obj)
else:
if not obj.startswith(temp_url_prefix):
return self._invalid(env, start_response)
path = 'prefix:/v1/%s/%s/%s' % (account, container,
temp_url_prefix)
if env['REQUEST_METHOD'] == 'HEAD':
hmac_vals = [
hmac for method in ('HEAD', 'GET', 'POST', 'PUT')
for hmac in self._get_hmacs(
env, temp_url_expires, path, keys, hash_algorithm,
request_method=method, ip_range=temp_url_ip_range)]
else:
hmac_vals = self._get_hmacs(
env, temp_url_expires, path, keys, hash_algorithm,
ip_range=temp_url_ip_range)
is_valid_hmac = False
hmac_scope = None
for hmac, scope in hmac_vals:
# While it's true that we short-circuit, this doesn't affect the
# timing-attack resistance since the only way this will
# short-circuit is when a valid signature is passed in.
if streq_const_time(temp_url_sig, hmac):
is_valid_hmac = True
hmac_scope = scope
break
if not is_valid_hmac:
return self._invalid(env, start_response)
self.logger.increment('tempurl.digests.%s' % hash_algorithm)
# disallowed headers prevent accidentally allowing upload of a pointer
# to data that the PUT tempurl would not otherwise allow access for.
# It should be safe to provide a GET tempurl for data that an
# untrusted client just uploaded with a PUT tempurl.
resp = self._clean_disallowed_headers(env, start_response)
if resp:
return resp
self._clean_incoming_headers(env)
if hmac_scope == ACCOUNT_SCOPE:
env['swift.authorize'] = authorize_same_account(account)
else:
env['swift.authorize'] = authorize_same_container(account,
container)
env['swift.authorize_override'] = True
env['REMOTE_USER'] = '.wsgi.tempurl'
qs = {'temp_url_sig': temp_url_sig,
'temp_url_expires': temp_url_expires}
if temp_url_prefix is not None:
qs['temp_url_prefix'] = temp_url_prefix
if filename:
qs['filename'] = filename
env['QUERY_STRING'] = urlencode(qs)
def _start_response(status, headers, exc_info=None):
headers = self._clean_outgoing_headers(headers)
if env['REQUEST_METHOD'] in ('GET', 'HEAD') and status[0] == '2':
# figure out the right value for content-disposition
# 1) use the value from the query string
# 2) use the value from the object metadata
# 3) use the object name (default)
out_headers = []
existing_disposition = None
for h, v in headers:
if h.lower() != 'content-disposition':
out_headers.append((h, v))
else:
existing_disposition = v
if inline_disposition:
if filename:
disposition_value = disposition_format('inline',
filename)
else:
disposition_value = 'inline'
elif filename:
disposition_value = disposition_format('attachment',
filename)
elif existing_disposition:
disposition_value = existing_disposition
else:
name = basename(wsgi_to_str(env['PATH_INFO']).rstrip('/'))
disposition_value = disposition_format('attachment',
name)
# this is probably just paranoia, I couldn't actually get a
# newline into existing_disposition
value = disposition_value.replace('\n', '%0A')
out_headers.append(('Content-Disposition', value))
# include Expires header for better cache-control
out_headers.append(('Expires', strftime(
"%a, %d %b %Y %H:%M:%S GMT",
gmtime(temp_url_expires))))
headers = out_headers
return start_response(status, headers, exc_info)
return self.app(env, _start_response)
def _get_path_parts(self, env):
"""
Return the account, container and object name for the request,
if it's an object request and one of the configured methods;
otherwise, None is returned.
:param env: The WSGI environment for the request.
:returns: (Account str, container str, object str) or
(None, None, None).
"""
if env['REQUEST_METHOD'] in self.conf['methods']:
try:
ver, acc, cont, obj = split_path(env['PATH_INFO'], 4, 4, True)
except ValueError:
return (None, None, None)
if ver == 'v1' and obj.strip('/'):
return (wsgi_to_str(acc), wsgi_to_str(cont), wsgi_to_str(obj))
return (None, None, None)
def _get_temp_url_info(self, env):
"""
Returns the provided temporary URL parameters (sig, expires, prefix,
temp_url_ip_range), if given and syntactically valid.
Either sig, expires or prefix could be None if not provided.
If provided, expires is also converted to an int if possible or 0
if not, and checked for expiration (returns 0 if expired).
:param env: The WSGI environment for the request.
:returns: (sig, expires, prefix, filename, inline,
temp_url_ip_range) as described above.
"""
temp_url_sig = temp_url_expires = temp_url_prefix = filename =\
inline = None
temp_url_ip_range = None
qs = parse_qs(env.get('QUERY_STRING', ''), keep_blank_values=True)
if 'temp_url_ip_range' in qs:
temp_url_ip_range = qs['temp_url_ip_range'][0]
if 'temp_url_sig' in qs:
temp_url_sig = qs['temp_url_sig'][0]
if 'temp_url_expires' in qs:
try:
temp_url_expires = int(qs['temp_url_expires'][0])
except ValueError:
try:
temp_url_expires = timegm(strptime(
qs['temp_url_expires'][0],
EXPIRES_ISO8601_FORMAT))
except ValueError:
temp_url_expires = 0
if temp_url_expires < time():
temp_url_expires = 0
if 'temp_url_prefix' in qs:
temp_url_prefix = qs['temp_url_prefix'][0]
if 'filename' in qs:
filename = qs['filename'][0]
if 'inline' in qs:
inline = True
return (temp_url_sig, temp_url_expires, temp_url_prefix, filename,
inline, temp_url_ip_range)
def _get_keys(self, env):
"""
Returns the X-[Account|Container]-Meta-Temp-URL-Key[-2] header values
for the account or container, or an empty list if none are set. Each
value comes as a 2-tuple (key, scope), where scope is either
CONTAINER_SCOPE or ACCOUNT_SCOPE.
Returns 0-4 elements depending on how many keys are set in the
account's or container's metadata.
:param env: The WSGI environment for the request.
:returns: [
(X-Account-Meta-Temp-URL-Key str value, ACCOUNT_SCOPE) if set,
(X-Account-Meta-Temp-URL-Key-2 str value, ACCOUNT_SCOPE) if set,
(X-Container-Meta-Temp-URL-Key str value, CONTAINER_SCOPE) if set,
(X-Container-Meta-Temp-URL-Key-2 str value, CONTAINER_SCOPE) if set,
]
"""
account_info = get_account_info(env, self.app, swift_source='TU')
account_keys = get_tempurl_keys_from_metadata(account_info['meta'])
container_info = get_container_info(env, self.app, swift_source='TU')
container_keys = get_tempurl_keys_from_metadata(
container_info.get('meta', []))
return ([(ak, ACCOUNT_SCOPE) for ak in account_keys] +
[(ck, CONTAINER_SCOPE) for ck in container_keys])
def _get_hmacs(self, env, expires, path, scoped_keys, hash_algorithm,
request_method=None, ip_range=None):
"""
:param env: The WSGI environment for the request.
:param expires: Unix timestamp as an int for when the URL
expires.
:param path: The path which is used for hashing.
:param scoped_keys: (key, scope) tuples like _get_keys() returns
:param hash_algorithm: The hash algorithm to use.
:param request_method: Optional override of the request in
the WSGI env. For example, if a HEAD
does not match, you may wish to
override with GET to still allow the
HEAD.
:param ip_range: The IP range from which the resource is allowed
to be accessed.
:returns: a list of (hmac, scope) 2-tuples
"""
if not request_method:
request_method = env['REQUEST_METHOD']
return [
(get_hmac(
request_method, path, expires, key,
digest=hash_algorithm, ip_range=ip_range
), scope)
for (key, scope) in scoped_keys]
def _invalid(self, env, start_response):
"""
Performs the necessary steps to indicate a WSGI 401
Unauthorized response to the request.
:param env: The WSGI environment for the request.
:param start_response: The WSGI start_response hook.
:returns: 401 response as per WSGI.
"""
if env['REQUEST_METHOD'] == 'HEAD':
body = None
else:
body = '401 Unauthorized: Temp URL invalid\n'
return HTTPUnauthorized(body=body)(env, start_response)
def _clean_disallowed_headers(self, env, start_response):
"""
Validate the absence of disallowed headers for "unsafe" operations.
:returns: None for safe operations or swob.HTTPBadRequest if the
request includes disallowed headers.
"""
if env['REQUEST_METHOD'] in ('GET', 'HEAD', 'OPTIONS'):
return
for h in env:
if h in self.disallowed_headers:
return HTTPBadRequest(
body='The header %r is not allowed in this tempurl' %
h[len('HTTP_'):].title().replace('_', '-'))(
env, start_response)
def _clean_incoming_headers(self, env):
"""
Removes any headers from the WSGI environment as per the
middleware configuration for incoming requests.
:param env: The WSGI environment for the request.
"""
for h in list(env.keys()):
if h in self.incoming_allow_headers:
continue
for p in self.incoming_allow_headers_startswith:
if h.startswith(p):
break
else:
if h in self.incoming_remove_headers:
del env[h]
continue
for p in self.incoming_remove_headers_startswith:
if h.startswith(p):
del env[h]
break
def _clean_outgoing_headers(self, headers):
"""
Removes any headers as per the middleware configuration for
outgoing responses.
:param headers: A WSGI start_response style list of headers,
[('header1', 'value'), ('header2', 'value'),
...]
:returns: The same headers list, but with some headers
removed as per the middleware configuration for
outgoing responses.
"""
headers = HeaderKeyDict(headers)
for h in list(headers.keys()):
if h in self.outgoing_allow_headers:
continue
for p in self.outgoing_allow_headers_startswith:
if h.startswith(p):
break
else:
if h in self.outgoing_remove_headers:
del headers[h]
continue
for p in self.outgoing_remove_headers_startswith:
if h.startswith(p):
del headers[h]
break
return list(headers.items())
def filter_factory(global_conf, **local_conf):
"""Returns the WSGI filter for use with paste.deploy."""
conf = global_conf.copy()
conf.update(local_conf)
logger = get_logger(conf, log_route='tempurl')
defaults = {
'methods': 'GET HEAD PUT POST DELETE',
'incoming_remove_headers': DEFAULT_INCOMING_REMOVE_HEADERS,
'incoming_allow_headers': DEFAULT_INCOMING_ALLOW_HEADERS,
'outgoing_remove_headers': DEFAULT_OUTGOING_REMOVE_HEADERS,
'outgoing_allow_headers': DEFAULT_OUTGOING_ALLOW_HEADERS,
}
info_conf = {k: conf.get(k, v).split() for k, v in defaults.items()}
allowed_digests, deprecated_digests = get_allowed_digests(
conf.get('allowed_digests', '').split(), logger)
info_conf['allowed_digests'] = sorted(allowed_digests)
if deprecated_digests:
info_conf['deprecated_digests'] = sorted(deprecated_digests)
register_swift_info('tempurl', **info_conf)
conf.update(info_conf)
register_sensitive_param('temp_url_sig')
return lambda app: TempURL(app, conf, logger)
| swift-master | swift/common/middleware/tempurl.py |
# Copyright (c) 2010-2020 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This middleware fixes the Etag header of responses so that it is RFC compliant.
`RFC 7232 <https://tools.ietf.org/html/rfc7232#section-2.3>`__ specifies that
the value of the Etag header must be double quoted.
It must be placed at the beginning of the pipeline, right after cache::
[pipeline:main]
pipeline = ... cache etag-quoter ...
[filter:etag-quoter]
use = egg:swift#etag_quoter
Set ``X-Account-Rfc-Compliant-Etags: true`` at the account
level to have any Etags in object responses be double quoted, as in
``"d41d8cd98f00b204e9800998ecf8427e"``. Alternatively, you may
only fix Etags in a single container by setting
``X-Container-Rfc-Compliant-Etags: true`` on the container.
This may be necessary for Swift to work properly with some CDNs.
Either option may also be explicitly *disabled*, so you may enable quoted
Etags account-wide as above but turn them off for individual containers
with ``X-Container-Rfc-Compliant-Etags: false``. This may be
useful if some subset of applications expect Etags to be bare MD5s.
"""
from swift.common.constraints import valid_api_version
from swift.common.http import is_success
from swift.common.swob import Request
from swift.common.utils import config_true_value
from swift.common.registry import register_swift_info
from swift.proxy.controllers.base import get_account_info, get_container_info
class EtagQuoterMiddleware(object):
def __init__(self, app, conf):
self.app = app
self.conf = conf
def __call__(self, env, start_response):
req = Request(env)
try:
version, account, container, obj = req.split_path(
2, 4, rest_with_last=True)
is_swifty_request = valid_api_version(version)
except ValueError:
is_swifty_request = False
if not is_swifty_request:
return self.app(env, start_response)
if not obj:
typ = 'Container' if container else 'Account'
client_header = 'X-%s-Rfc-Compliant-Etags' % typ
sysmeta_header = 'X-%s-Sysmeta-Rfc-Compliant-Etags' % typ
if client_header in req.headers:
if req.headers[client_header]:
req.headers[sysmeta_header] = config_true_value(
req.headers[client_header])
else:
req.headers[sysmeta_header] = ''
if req.headers.get(client_header.replace('X-', 'X-Remove-', 1)):
req.headers[sysmeta_header] = ''
def translating_start_response(status, headers, exc_info=None):
return start_response(status, [
(client_header if h.title() == sysmeta_header else h,
v) for h, v in headers
], exc_info)
return self.app(env, translating_start_response)
container_info = get_container_info(env, self.app, 'EQ')
if not container_info or not is_success(container_info['status']):
return self.app(env, start_response)
flag = container_info.get('sysmeta', {}).get('rfc-compliant-etags')
if flag is None:
account_info = get_account_info(env, self.app, 'EQ')
if not account_info or not is_success(account_info['status']):
return self.app(env, start_response)
flag = account_info.get('sysmeta', {}).get(
'rfc-compliant-etags')
if flag is None:
flag = self.conf.get('enable_by_default', 'false')
if not config_true_value(flag):
return self.app(env, start_response)
status, headers, resp_iter = req.call_application(self.app)
headers = [
(header, value) if header.lower() != 'etag' or (
value.startswith(('"', 'W/"')) and value.endswith('"'))
else (header, '"%s"' % value)
for header, value in headers]
start_response(status, headers)
return resp_iter
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
register_swift_info(
'etag_quoter', enable_by_default=config_true_value(
conf.get('enable_by_default', 'false')))
def etag_quoter_filter(app):
return EtagQuoterMiddleware(app, conf)
return etag_quoter_filter
| swift-master | swift/common/middleware/etag_quoter.py |
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
The ``gatekeeper`` middleware imposes restrictions on the headers that
may be included with requests and responses. Request headers are filtered
to remove headers that should never be generated by a client. Similarly,
response headers are filtered to remove private headers that should
never be passed to a client.
The ``gatekeeper`` middleware must always be present in the proxy server
wsgi pipeline. It should be configured close to the start of the pipeline
specified in ``/etc/swift/proxy-server.conf``, immediately after catch_errors
and before any other middleware. It is essential that it is configured ahead
of all middlewares using system metadata in order that they function
correctly.
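For example, an illustrative pipeline placing gatekeeper immediately after
catch_errors (other middleware names are only placeholders)::
    [pipeline:main]
    pipeline = catch_errors gatekeeper ... cache ... proxy-server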
If ``gatekeeper`` middleware is not configured in the pipeline then it will be
automatically inserted close to the start of the pipeline by the proxy server.
"""
from swift.common.swob import Request
from swift.common.utils import get_logger, config_true_value
from swift.common.request_helpers import (
remove_items, get_sys_meta_prefix, OBJECT_TRANSIENT_SYSMETA_PREFIX
)
from six.moves.urllib.parse import urlsplit
import re
#: A list of python regular expressions that will be used to
#: match against inbound request headers. Matching headers will
#: be removed from the request.
# Exclude headers starting with a sysmeta prefix.
# Exclude headers starting with object transient system metadata prefix.
# Exclude headers starting with an internal backend header prefix.
# If adding to this list, note that these are regex patterns,
# so use a trailing $ to constrain to an exact header match
# rather than prefix match.
inbound_exclusions = [get_sys_meta_prefix('account'),
get_sys_meta_prefix('container'),
get_sys_meta_prefix('object'),
OBJECT_TRANSIENT_SYSMETA_PREFIX,
'x-backend']
#: A list of python regular expressions that will be used to
#: match against outbound response headers. Matching headers will
#: be removed from the response.
outbound_exclusions = inbound_exclusions
def make_exclusion_test(exclusions):
expr = '|'.join(exclusions)
test = re.compile(expr, re.IGNORECASE)
return test.match
class GatekeeperMiddleware(object):
def __init__(self, app, conf):
self.app = app
self.logger = get_logger(conf, log_route='gatekeeper')
self.inbound_condition = make_exclusion_test(inbound_exclusions)
self.outbound_condition = make_exclusion_test(outbound_exclusions)
self.shunt_x_timestamp = config_true_value(
conf.get('shunt_inbound_x_timestamp', 'true'))
self.allow_reserved_names_header = config_true_value(
conf.get('allow_reserved_names_header', 'false'))
def __call__(self, env, start_response):
req = Request(env)
removed = remove_items(req.headers, self.inbound_condition)
if removed:
self.logger.debug('removed request headers: %s' % removed)
if 'X-Timestamp' in req.headers and self.shunt_x_timestamp:
ts = req.headers.pop('X-Timestamp')
req.headers['X-Backend-Inbound-X-Timestamp'] = ts
# log in a similar format as the removed headers
self.logger.debug('shunted request headers: %s' %
[('X-Timestamp', ts)])
if 'X-Allow-Reserved-Names' in req.headers \
and self.allow_reserved_names_header:
req.headers['X-Backend-Allow-Reserved-Names'] = \
req.headers.pop('X-Allow-Reserved-Names')
def gatekeeper_response(status, response_headers, exc_info=None):
def fixed_response_headers():
def relative_path(value):
parsed = urlsplit(value)
new_path = parsed.path
if parsed.query:
new_path += ('?%s' % parsed.query)
if parsed.fragment:
new_path += ('#%s' % parsed.fragment)
return new_path
if not env.get('swift.leave_relative_location'):
return response_headers
else:
return [
(k, v) if k.lower() != 'location' else
(k, relative_path(v)) for (k, v) in response_headers
]
response_headers = fixed_response_headers()
removed = [(header, value) for header, value in response_headers
if self.outbound_condition(header)]
if removed:
self.logger.debug('removed response headers: %s' % removed)
new_headers = [
(header, value) for header, value in response_headers
if not self.outbound_condition(header)]
return start_response(status, new_headers, exc_info)
return start_response(status, response_headers, exc_info)
return self.app(env, gatekeeper_response)
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
def gatekeeper_filter(app):
return GatekeeperMiddleware(app, conf)
return gatekeeper_filter
| swift-master | swift/common/middleware/gatekeeper.py |
# Copyright (c) 2010-2011 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Logging middleware for the Swift proxy.
This serves as both the default logging implementation and an example of how
to plug in your own logging format/method.
The logging format implemented below is as follows::
client_ip remote_addr end_time.datetime method path protocol
status_int referer user_agent auth_token bytes_recvd bytes_sent
client_etag transaction_id headers request_time source log_info
start_time end_time policy_index
These values are space-separated, and each is url-encoded, so that they can
be separated with a simple .split()
* remote_addr is the contents of the REMOTE_ADDR environment variable, while
client_ip is swift's best guess at the end-user IP, extracted variously
from the X-Forwarded-For header, X-Cluster-Ip header, or the REMOTE_ADDR
environment variable.
* source (swift.source in the WSGI environment) indicates the code
that generated the request, such as most middleware. (See below for
more detail.)
* log_info (swift.log_info in the WSGI environment) is for additional
information that could prove quite useful, such as any x-delete-at
value or other "behind the scenes" activity that might not
otherwise be detectable from the plain log information. Code that
wishes to add additional log information should use code like
``env.setdefault('swift.log_info', []).append(your_info)`` so as to
not disturb others' log information.
* Values that are missing (e.g. due to a header not being present) or zero
are generally represented by a single hyphen ('-').
.. note::
The message format may be configured using the ``log_msg_template`` option,
allowing fields to be added, removed, re-ordered, and even anonymized. For
more information, see https://docs.openstack.org/swift/latest/logs.html
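For example, an illustrative override shortening the template (all field
names used here also appear in the default template)::
    [filter:proxy-logging]
    log_msg_template = {client_ip} {method} {path} {status_int} {request_time}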
The proxy-logging can be used twice in the proxy server's pipeline when there
is middleware installed that can return custom responses that don't follow the
standard pipeline to the proxy server.
For example, with staticweb, the middleware might intercept a request to
/v1/AUTH_acc/cont/, make a subrequest to the proxy to retrieve
/v1/AUTH_acc/cont/index.html and, in effect, respond to the client's original
request using the 2nd request's body. In this instance the subrequest will be
logged by the rightmost middleware (with a swift.source set) and the outgoing
request (with body overridden) will be logged by leftmost middleware.
Requests that follow the normal pipeline (use the same wsgi environment
throughout) will not be double logged because an environment variable
(swift.proxy_access_log_made) is checked/set when a log is made.
All middleware making subrequests should take care to set swift.source when
needed. With the doubled proxy logs, any consumer/processor of swift's proxy
logs should look at the swift.source field, the rightmost log value, to decide
if this is a middleware subrequest or not. A log processor calculating
bandwidth usage will want to only sum up logs with no swift.source.
"""
import os
import time
from swift.common.middleware.catch_errors import enforce_byte_count
from swift.common.swob import Request
from swift.common.utils import (get_logger, get_remote_client,
config_true_value, reiterate,
close_if_possible, cap_length,
InputProxy, list_from_csv, get_policy_index,
split_path, StrAnonymizer, StrFormatTime,
LogStringFormatter)
from swift.common.storage_policy import POLICIES
from swift.common.registry import get_sensitive_headers, \
get_sensitive_params, register_sensitive_header
class ProxyLoggingMiddleware(object):
"""
Middleware that logs Swift proxy requests in the swift log format.
"""
def __init__(self, app, conf, logger=None):
self.app = app
self.pid = os.getpid()
self.log_formatter = LogStringFormatter(default='-', quote=True)
self.log_msg_template = conf.get(
'log_msg_template', (
'{client_ip} {remote_addr} {end_time.datetime} {method} '
'{path} {protocol} {status_int} {referer} {user_agent} '
'{auth_token} {bytes_recvd} {bytes_sent} {client_etag} '
'{transaction_id} {headers} {request_time} {source} '
'{log_info} {start_time} {end_time} {policy_index}'))
# The salt is only used in StrAnonymizer. This class requires bytes,
# convert it now to prevent useless conversion later.
self.anonymization_method = conf.get('log_anonymization_method', 'md5')
self.anonymization_salt = conf.get('log_anonymization_salt', '')
self.log_hdrs = config_true_value(conf.get(
'access_log_headers',
conf.get('log_headers', 'no')))
log_hdrs_only = list_from_csv(conf.get(
'access_log_headers_only', ''))
self.log_hdrs_only = [x.title() for x in log_hdrs_only]
# The leading access_* check is in case someone assumes that
# log_statsd_valid_http_methods behaves like the other log_statsd_*
# settings.
self.valid_methods = conf.get(
'access_log_statsd_valid_http_methods',
conf.get('log_statsd_valid_http_methods',
'GET,HEAD,POST,PUT,DELETE,COPY,OPTIONS'))
self.valid_methods = [m.strip().upper() for m in
self.valid_methods.split(',') if m.strip()]
access_log_conf = {}
for key in ('log_facility', 'log_name', 'log_level', 'log_udp_host',
'log_udp_port', 'log_statsd_host', 'log_statsd_port',
'log_statsd_default_sample_rate',
'log_statsd_sample_rate_factor',
'log_statsd_metric_prefix'):
value = conf.get('access_' + key, conf.get(key, None))
if value:
access_log_conf[key] = value
self.access_logger = logger or get_logger(
access_log_conf,
log_route=conf.get('access_log_route', 'proxy-access'),
statsd_tail_prefix='proxy-server')
self.reveal_sensitive_prefix = int(
conf.get('reveal_sensitive_prefix', 16))
self.check_log_msg_template_validity()
def check_log_msg_template_validity(self):
replacements = {
# Time information
'end_time': StrFormatTime(1000001),
'start_time': StrFormatTime(1000000),
# Information worth anonymizing
'client_ip': StrAnonymizer('1.2.3.4', self.anonymization_method,
self.anonymization_salt),
'remote_addr': StrAnonymizer('4.3.2.1', self.anonymization_method,
self.anonymization_salt),
'domain': StrAnonymizer('', self.anonymization_method,
self.anonymization_salt),
'path': StrAnonymizer('/', self.anonymization_method,
self.anonymization_salt),
'referer': StrAnonymizer('ref', self.anonymization_method,
self.anonymization_salt),
'user_agent': StrAnonymizer('swift', self.anonymization_method,
self.anonymization_salt),
'headers': StrAnonymizer('header', self.anonymization_method,
self.anonymization_salt),
'client_etag': StrAnonymizer('etag', self.anonymization_method,
self.anonymization_salt),
'account': StrAnonymizer('a', self.anonymization_method,
self.anonymization_salt),
'container': StrAnonymizer('c', self.anonymization_method,
self.anonymization_salt),
'object': StrAnonymizer('', self.anonymization_method,
self.anonymization_salt),
# Other information
'method': 'GET',
'protocol': '',
'status_int': '0',
'auth_token': '1234...',
'bytes_recvd': '1',
'bytes_sent': '0',
'transaction_id': 'tx1234',
'request_time': '0.05',
'source': '',
'log_info': '',
'policy_index': '',
'ttfb': '0.05',
'pid': '42',
'wire_status_int': '200',
}
try:
self.log_formatter.format(self.log_msg_template, **replacements)
except Exception as e:
raise ValueError('Cannot interpolate log_msg_template: %s' % e)
def method_from_req(self, req):
return req.environ.get('swift.orig_req_method', req.method)
def req_already_logged(self, env):
return env.get('swift.proxy_access_log_made')
def mark_req_logged(self, env):
env['swift.proxy_access_log_made'] = True
def obscure_sensitive(self, value):
return cap_length(value, self.reveal_sensitive_prefix)
def obscure_req(self, req):
for header in get_sensitive_headers():
if header in req.headers:
req.headers[header] = \
self.obscure_sensitive(req.headers[header])
obscure_params = get_sensitive_params()
new_params = []
any_obscured = False
for k, v in req.params.items():
if k in obscure_params:
new_params.append((k, self.obscure_sensitive(v)))
any_obscured = True
else:
new_params.append((k, v))
if any_obscured:
req.params = new_params
def log_request(self, req, status_int, bytes_received, bytes_sent,
start_time, end_time, resp_headers=None, ttfb=0,
wire_status_int=None):
"""
Log a request.
:param req: swob.Request object for the request
:param status_int: integer code for the response status
:param bytes_received: bytes successfully read from the request body
:param bytes_sent: bytes yielded to the WSGI server
:param start_time: timestamp request started
:param end_time: timestamp request completed
:param resp_headers: dict of the response headers
:param wire_status_int: the on the wire status int
"""
self.obscure_req(req)
domain = req.environ.get('HTTP_HOST',
req.environ.get('SERVER_NAME', None))
if ':' in domain:
domain, port = domain.rsplit(':', 1)
resp_headers = resp_headers or {}
logged_headers = None
if self.log_hdrs:
if self.log_hdrs_only:
logged_headers = '\n'.join('%s: %s' % (k, v)
for k, v in req.headers.items()
if k in self.log_hdrs_only)
else:
logged_headers = '\n'.join('%s: %s' % (k, v)
for k, v in req.headers.items())
method = self.method_from_req(req)
duration_time_str = "%.4f" % (end_time - start_time)
policy_index = get_policy_index(req.headers, resp_headers)
acc, cont, obj = None, None, None
swift_path = req.environ.get('swift.backend_path', req.path)
if swift_path.startswith('/v1/'):
_, acc, cont, obj = split_path(swift_path, 1, 4, True)
replacements = {
# Time information
'end_time': StrFormatTime(end_time),
'start_time': StrFormatTime(start_time),
# Information worth anonymizing
'client_ip': StrAnonymizer(get_remote_client(req),
self.anonymization_method,
self.anonymization_salt),
'remote_addr': StrAnonymizer(req.remote_addr,
self.anonymization_method,
self.anonymization_salt),
'domain': StrAnonymizer(domain, self.anonymization_method,
self.anonymization_salt),
'path': StrAnonymizer(req.path_qs, self.anonymization_method,
self.anonymization_salt),
'referer': StrAnonymizer(req.referer, self.anonymization_method,
self.anonymization_salt),
'user_agent': StrAnonymizer(req.user_agent,
self.anonymization_method,
self.anonymization_salt),
'headers': StrAnonymizer(logged_headers, self.anonymization_method,
self.anonymization_salt),
'client_etag': StrAnonymizer(req.headers.get('etag'),
self.anonymization_method,
self.anonymization_salt),
'account': StrAnonymizer(acc, self.anonymization_method,
self.anonymization_salt),
'container': StrAnonymizer(cont, self.anonymization_method,
self.anonymization_salt),
'object': StrAnonymizer(obj, self.anonymization_method,
self.anonymization_salt),
# Other information
'method': method,
'protocol':
req.environ.get('SERVER_PROTOCOL'),
'status_int': status_int,
'auth_token':
req.headers.get('x-auth-token'),
'bytes_recvd': bytes_received,
'bytes_sent': bytes_sent,
'transaction_id': req.environ.get('swift.trans_id'),
'request_time': duration_time_str,
'source': req.environ.get('swift.source'),
'log_info':
','.join(req.environ.get('swift.log_info', '')),
'policy_index': policy_index,
'ttfb': ttfb,
'pid': self.pid,
'wire_status_int': wire_status_int or status_int,
}
self.access_logger.info(
self.log_formatter.format(self.log_msg_template,
**replacements))
# Log timing and bytes-transferred data to StatsD
metric_name = self.statsd_metric_name(req, status_int, method)
metric_name_policy = self.statsd_metric_name_policy(req, status_int,
method,
policy_index)
# Only log data for valid controllers (or SOS) to keep the metric count
# down (egregious errors will get logged by the proxy server itself).
if metric_name:
self.access_logger.timing(metric_name + '.timing',
(end_time - start_time) * 1000)
self.access_logger.update_stats(metric_name + '.xfer',
bytes_received + bytes_sent)
if metric_name_policy:
self.access_logger.timing(metric_name_policy + '.timing',
(end_time - start_time) * 1000)
self.access_logger.update_stats(metric_name_policy + '.xfer',
bytes_received + bytes_sent)
def get_metric_name_type(self, req):
swift_path = req.environ.get('swift.backend_path', req.path)
if swift_path.startswith('/v1/'):
try:
stat_type = [None, 'account', 'container',
'object'][swift_path.strip('/').count('/')]
except IndexError:
stat_type = 'object'
else:
stat_type = req.environ.get('swift.source')
return stat_type
def statsd_metric_name(self, req, status_int, method):
stat_type = self.get_metric_name_type(req)
if stat_type is None:
return None
stat_method = method if method in self.valid_methods \
else 'BAD_METHOD'
return '.'.join((stat_type, stat_method, str(status_int)))
def statsd_metric_name_policy(self, req, status_int, method, policy_index):
if policy_index is None:
return None
stat_type = self.get_metric_name_type(req)
if stat_type == 'object':
stat_method = method if method in self.valid_methods \
else 'BAD_METHOD'
# The policy may not exist
policy = POLICIES.get_by_index(policy_index)
if policy:
return '.'.join((stat_type, 'policy', str(policy_index),
stat_method, str(status_int)))
else:
return None
else:
return None
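    # For example, with this naming scheme a GET of an object in storage
    # policy 1 that returns 200 produces "object.GET.200" from
    # statsd_metric_name() and "object.policy.1.GET.200" from
    # statsd_metric_name_policy().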
def __call__(self, env, start_response):
if self.req_already_logged(env):
return self.app(env, start_response)
self.mark_req_logged(env)
start_response_args = [None]
input_proxy = InputProxy(env['wsgi.input'])
env['wsgi.input'] = input_proxy
start_time = time.time()
def my_start_response(status, headers, exc_info=None):
start_response_args[0] = (status, list(headers), exc_info)
def status_int_for_logging(start_status, client_disconnect=False):
# log disconnected clients as '499' status code
if client_disconnect or input_proxy.client_disconnect:
return 499
return start_status
def iter_response(iterable):
iterator = reiterate(iterable)
content_length = None
for h, v in start_response_args[0][1]:
if h.lower() == 'content-length':
content_length = int(v)
break
elif h.lower() == 'transfer-encoding':
break
else:
if isinstance(iterator, list):
content_length = sum(len(i) for i in iterator)
start_response_args[0][1].append(
('Content-Length', str(content_length)))
req = Request(env)
method = self.method_from_req(req)
if method == 'HEAD':
content_length = 0
if content_length is not None:
iterator = enforce_byte_count(iterator, content_length)
wire_status_int = int(start_response_args[0][0].split(' ', 1)[0])
resp_headers = dict(start_response_args[0][1])
start_response(*start_response_args[0])
# Log timing information for time-to-first-byte (GET requests only)
ttfb = 0.0
if method == 'GET':
policy_index = get_policy_index(req.headers, resp_headers)
metric_name = self.statsd_metric_name(
req, wire_status_int, method)
metric_name_policy = self.statsd_metric_name_policy(
req, wire_status_int, method, policy_index)
ttfb = time.time() - start_time
if metric_name:
self.access_logger.timing(
metric_name + '.first-byte.timing', ttfb * 1000)
if metric_name_policy:
self.access_logger.timing(
metric_name_policy + '.first-byte.timing', ttfb * 1000)
bytes_sent = 0
client_disconnect = False
start_status = wire_status_int
try:
for chunk in iterator:
bytes_sent += len(chunk)
yield chunk
except GeneratorExit: # generator was closed before we finished
client_disconnect = True
raise
except Exception:
start_status = 500
raise
finally:
status_int = status_int_for_logging(
start_status, client_disconnect)
self.log_request(
req, status_int, input_proxy.bytes_received, bytes_sent,
start_time, time.time(), resp_headers=resp_headers,
ttfb=ttfb, wire_status_int=wire_status_int)
close_if_possible(iterator)
try:
iterable = self.app(env, my_start_response)
except Exception:
req = Request(env)
status_int = status_int_for_logging(500)
self.log_request(
req, status_int, input_proxy.bytes_received, 0, start_time,
time.time())
raise
else:
return iter_response(iterable)
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
# Normally it would be the middleware that uses the header that
# would register it, but because there could be 3rd party auth middlewares
# that use 'x-auth-token' or 'x-storage-token' we special case it here.
register_sensitive_header('x-auth-token')
register_sensitive_header('x-storage-token')
def proxy_logger(app):
return ProxyLoggingMiddleware(app, conf)
return proxy_logger
| swift-master | swift/common/middleware/proxy_logging.py |
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
CNAME Lookup Middleware
Middleware that translates an unknown domain in the host header to
something that ends with the configured storage_domain by looking up
the given domain's CNAME record in DNS.
This middleware will continue to follow a CNAME chain in DNS until it finds
a record ending in the configured storage domain or it reaches the configured
maximum lookup depth. If a match is found, the environment's Host header is
rewritten and the request is passed further down the WSGI chain.
"""
import six
from swift import gettext_ as _
try:
import dns.resolver
import dns.exception
except ImportError:
# catch this to allow docs to be built without the dependency
MODULE_DEPENDENCY_MET = False
else: # executed if the try block finishes with no errors
MODULE_DEPENDENCY_MET = True
from swift.common.middleware import RewriteContext
from swift.common.swob import Request, HTTPBadRequest, \
str_to_wsgi, wsgi_to_str
from swift.common.utils import cache_from_env, get_logger, is_valid_ip, \
list_from_csv, parse_socket_string
from swift.common.registry import register_swift_info
def lookup_cname(domain, resolver): # pragma: no cover
"""
Given a domain, returns its DNS CNAME mapping and DNS ttl.
:param domain: domain to query on
:param resolver: dns.resolver.Resolver() instance used for executing DNS
queries
:returns: (ttl, result)
"""
try:
answer = resolver.query(domain, 'CNAME').rrset
ttl = answer.ttl
result = list(answer.items)[0].to_text()
result = result.rstrip('.')
return ttl, result
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
# As the memcache lib returns None when nothing is found in cache,
# returning false helps to distinguish between "nothing in cache"
# (None) and "nothing to cache" (False).
return 60, False
except (dns.exception.DNSException):
return 0, None
class _CnameLookupContext(RewriteContext):
base_re = r'^(https?://)%s(/.*)?$'
class CNAMELookupMiddleware(object):
"""
CNAME Lookup Middleware
See above for a full description.
:param app: The next WSGI filter or app in the paste.deploy
chain.
:param conf: The configuration dict for the middleware.
"""
def __init__(self, app, conf):
if not MODULE_DEPENDENCY_MET:
# reraise the exception if the dependency wasn't met
raise ImportError('dnspython is required for this module')
self.app = app
storage_domain = conf.get('storage_domain', 'example.com')
self.storage_domain = ['.' + s for s in
list_from_csv(storage_domain)
if not s.startswith('.')]
self.storage_domain += [s for s in list_from_csv(storage_domain)
if s.startswith('.')]
self.lookup_depth = int(conf.get('lookup_depth', '1'))
nameservers = list_from_csv(conf.get('nameservers'))
try:
for i, server in enumerate(nameservers):
ip_or_host, maybe_port = nameservers[i] = \
parse_socket_string(server, None)
if not is_valid_ip(ip_or_host):
raise ValueError
if maybe_port is not None:
int(maybe_port)
except ValueError:
raise ValueError('Invalid cname_lookup/nameservers configuration '
'found. All nameservers must be valid IPv4 or '
'IPv6, followed by an optional :<integer> port.')
self.resolver = dns.resolver.Resolver()
if nameservers:
self.resolver.nameservers = [ip for (ip, port) in nameservers]
self.resolver.nameserver_ports = {
ip: int(port) for (ip, port) in nameservers
if port is not None}
self.memcache = None
self.logger = get_logger(conf, log_route='cname-lookup')
def _domain_endswith_in_storage_domain(self, a_domain):
a_domain = '.' + a_domain
for domain in self.storage_domain:
if a_domain.endswith(domain):
return True
return False
def __call__(self, env, start_response):
if not self.storage_domain:
return self.app(env, start_response)
if 'HTTP_HOST' in env:
requested_host = env['HTTP_HOST']
else:
requested_host = env['SERVER_NAME']
given_domain = wsgi_to_str(requested_host)
port = ''
if ':' in given_domain:
given_domain, port = given_domain.rsplit(':', 1)
if is_valid_ip(given_domain):
return self.app(env, start_response)
a_domain = given_domain
if not self._domain_endswith_in_storage_domain(a_domain):
if self.memcache is None:
self.memcache = cache_from_env(env)
error = True
for tries in range(self.lookup_depth):
found_domain = None
if self.memcache:
memcache_key = ''.join(['cname-', a_domain])
found_domain = self.memcache.get(memcache_key)
if six.PY2 and found_domain:
found_domain = found_domain.encode('utf-8')
if found_domain is None:
ttl, found_domain = lookup_cname(a_domain, self.resolver)
if self.memcache and ttl > 0:
memcache_key = ''.join(['cname-', given_domain])
self.memcache.set(memcache_key, found_domain,
time=ttl)
if not found_domain or found_domain == a_domain:
# no CNAME records or we're at the last lookup
error = True
found_domain = None
break
elif self._domain_endswith_in_storage_domain(found_domain):
# Found it!
self.logger.info(
_('Mapped %(given_domain)s to %(found_domain)s') %
{'given_domain': given_domain,
'found_domain': found_domain})
if port:
env['HTTP_HOST'] = ':'.join([
str_to_wsgi(found_domain), port])
else:
env['HTTP_HOST'] = str_to_wsgi(found_domain)
error = False
break
else:
# try one more deep in the chain
self.logger.debug(
_('Following CNAME chain for '
'%(given_domain)s to %(found_domain)s') %
{'given_domain': given_domain,
'found_domain': found_domain})
a_domain = found_domain
if error:
if found_domain:
msg = 'CNAME lookup failed after %d tries' % \
self.lookup_depth
else:
msg = 'CNAME lookup failed to resolve to a valid domain'
resp = HTTPBadRequest(request=Request(env), body=msg,
content_type='text/plain')
return resp(env, start_response)
else:
context = _CnameLookupContext(self.app, requested_host,
env['HTTP_HOST'])
return context.handle_request(env, start_response)
return self.app(env, start_response)
def filter_factory(global_conf, **local_conf): # pragma: no cover
conf = global_conf.copy()
conf.update(local_conf)
register_swift_info('cname_lookup',
lookup_depth=int(conf.get('lookup_depth', '1')))
def cname_filter(app):
return CNAMELookupMiddleware(app, conf)
return cname_filter
| swift-master | swift/common/middleware/cname_lookup.py |
# Copyright (c) 2013 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
``account_quotas`` is a middleware which blocks write requests (PUT, POST) if a
given account quota (in bytes) is exceeded while DELETE requests are still
allowed.
``account_quotas`` uses the ``x-account-meta-quota-bytes`` metadata entry to
store the overall account quota. Write requests to this metadata entry are
only permitted for resellers. There is no overall account quota limit if
``x-account-meta-quota-bytes`` is not set.
Additionally, account quotas may be set for each storage policy, using metadata
of the form ``x-account-quota-bytes-policy-<policy name>``. Again, only
resellers may update these metadata, and there will be no limit for a
particular policy if the corresponding metadata is not set.
.. note::
Per-policy quotas need not sum to the overall account quota, and the sum of
all :ref:`container_quotas` for a given policy need not sum to the account's
policy quota.
The ``account_quotas`` middleware should be added to the pipeline in your
``/etc/swift/proxy-server.conf`` file just after any auth middleware.
For example::
[pipeline:main]
pipeline = catch_errors cache tempauth account_quotas proxy-server
[filter:account_quotas]
use = egg:swift#account_quotas
To set the quota on an account::
swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret \
post -m quota-bytes:10000
Remove the quota::
swift -A http://127.0.0.1:8080/auth/v1.0 -U account:reseller -K secret \
post -m quota-bytes:
The same limitations apply for the account quotas as for the container quotas.
For example, when uploading an object without a content-length header the proxy
server doesn't know the final size of the currently uploaded object and the
upload will be allowed if the current account size is within the quota.
Due to eventual consistency, further uploads might be possible until the
account size has been updated.
"""
from swift.common.swob import HTTPForbidden, HTTPBadRequest, \
HTTPRequestEntityTooLarge, wsgify
from swift.common.registry import register_swift_info
from swift.common.storage_policy import POLICIES
from swift.proxy.controllers.base import get_account_info, get_container_info
class AccountQuotaMiddleware(object):
"""Account quota middleware
See above for a full description.
"""
def __init__(self, app, *args, **kwargs):
self.app = app
def handle_account(self, request):
if request.method in ("POST", "PUT"):
# account request, so we pay attention to the quotas
new_quotas = {}
new_quotas[None] = request.headers.get(
'X-Account-Meta-Quota-Bytes')
if request.headers.get(
'X-Remove-Account-Meta-Quota-Bytes'):
new_quotas[None] = 0 # X-Remove dominates if both are present
for policy in POLICIES:
tail = 'Account-Quota-Bytes-Policy-%s' % policy.name
if request.headers.get('X-Remove-' + tail):
new_quotas[policy.idx] = 0
else:
quota = request.headers.pop('X-' + tail, None)
new_quotas[policy.idx] = quota
if request.environ.get('reseller_request') is True:
if any(quota and not quota.isdigit()
for quota in new_quotas.values()):
return HTTPBadRequest()
for idx, quota in new_quotas.items():
if idx is None:
continue # For legacy reasons, it's in user meta
hdr = 'X-Account-Sysmeta-Quota-Bytes-Policy-%d' % idx
request.headers[hdr] = quota
elif any(quota is not None for quota in new_quotas.values()):
# deny quota set for non-reseller
return HTTPForbidden()
resp = request.get_response(self.app)
# Non-resellers can't update quotas, but they *can* see them
for policy in POLICIES:
infix = 'Quota-Bytes-Policy'
value = resp.headers.get('X-Account-Sysmeta-%s-%d' % (
infix, policy.idx))
if value:
resp.headers['X-Account-%s-%s' % (infix, policy.name)] = value
return resp
@wsgify
def __call__(self, request):
try:
ver, account, container, obj = request.split_path(
2, 4, rest_with_last=True)
except ValueError:
return self.app
if not container:
return self.handle_account(request)
# container or object request; even if the quota headers are set
# in the request, they're meaningless
if not (request.method == "PUT" and obj):
return self.app
# OK, object PUT
if request.environ.get('reseller_request') is True:
# but resellers aren't constrained by quotas :-)
return self.app
# Object PUT request
content_length = (request.content_length or 0)
account_info = get_account_info(request.environ, self.app,
swift_source='AQ')
if not account_info:
return self.app
try:
quota = int(account_info['meta'].get('quota-bytes', -1))
except ValueError:
quota = -1
if quota >= 0:
new_size = int(account_info['bytes']) + content_length
if quota < new_size:
resp = HTTPRequestEntityTooLarge(body='Upload exceeds quota.')
if 'swift.authorize' in request.environ:
orig_authorize = request.environ['swift.authorize']
def reject_authorize(*args, **kwargs):
aresp = orig_authorize(*args, **kwargs)
if aresp:
return aresp
return resp
request.environ['swift.authorize'] = reject_authorize
else:
return resp
container_info = get_container_info(request.environ, self.app,
swift_source='AQ')
if not container_info:
return self.app
policy_idx = container_info['storage_policy']
sysmeta_key = 'quota-bytes-policy-%s' % policy_idx
try:
policy_quota = int(account_info['sysmeta'].get(sysmeta_key, -1))
except ValueError:
policy_quota = -1
if policy_quota >= 0:
policy_stats = account_info['storage_policies'].get(policy_idx, {})
new_size = int(policy_stats.get('bytes', 0)) + content_length
if policy_quota < new_size:
resp = HTTPRequestEntityTooLarge(
body='Upload exceeds policy quota.')
if 'swift.authorize' in request.environ:
orig_authorize = request.environ['swift.authorize']
def reject_authorize(*args, **kwargs):
aresp = orig_authorize(*args, **kwargs)
if aresp:
return aresp
return resp
request.environ['swift.authorize'] = reject_authorize
else:
return resp
return self.app
def filter_factory(global_conf, **local_conf):
"""Returns a WSGI filter app for use with paste.deploy."""
register_swift_info('account_quotas')
def account_quota_filter(app):
return AccountQuotaMiddleware(app)
return account_quota_filter
| swift-master | swift/common/middleware/account_quotas.py |
# Copyright (c) 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
'''
Created on February 27, 2012
A filter that disallows any paths that contain defined forbidden characters or
that exceed a defined length.
Place early in the proxy-server pipeline after the left-most occurrence of the
``proxy-logging`` middleware (if present) and before the final
``proxy-logging`` middleware (if present) or the ``proxy-server`` app itself,
e.g.::
[pipeline:main]
pipeline = catch_errors healthcheck proxy-logging name_check cache \
ratelimit tempauth sos proxy-logging proxy-server
[filter:name_check]
use = egg:swift#name_check
forbidden_chars = '"`<>
maximum_length = 255
There are default settings for forbidden_chars (FORBIDDEN_CHARS) and
maximum_length (MAX_LENGTH).
The filter returns HTTPBadRequest if the path is invalid.
@author: eamonn-otoole
'''
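# A small illustration (not part of the middleware) of how the default
# character and regexp checks behave against a few example paths; the length
# check is omitted for brevity.
def _example_name_check(paths):
    import re
    forbidden_chars = "\'\"`<>"
    forbidden_regexp = re.compile(r"/\./|/\.\./|/\.$|/\.\.$")
    results = {}
    for path in paths:
        rejected = (any(c in path for c in forbidden_chars)
                    or forbidden_regexp.search(path) is not None)
        results[path] = 'rejected' if rejected else 'accepted'
    return results
# _example_name_check(['/v1/AUTH_a/c/ok', '/v1/AUTH_a/c/bad"name',
#                      '/v1/AUTH_a/c/..'])
# accepts the first path and rejects the other two.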
import re
from swift.common.utils import get_logger
from swift.common.registry import register_swift_info
from swift.common.swob import Request, HTTPBadRequest
FORBIDDEN_CHARS = "\'\"`<>"
MAX_LENGTH = 255
FORBIDDEN_REGEXP = r"/\./|/\.\./|/\.$|/\.\.$"
class NameCheckMiddleware(object):
def __init__(self, app, conf):
self.app = app
self.conf = conf
self.forbidden_chars = self.conf.get('forbidden_chars',
FORBIDDEN_CHARS)
self.maximum_length = int(self.conf.get('maximum_length', MAX_LENGTH))
self.forbidden_regexp = self.conf.get('forbidden_regexp',
FORBIDDEN_REGEXP)
if self.forbidden_regexp:
self.forbidden_regexp_compiled = re.compile(self.forbidden_regexp)
else:
self.forbidden_regexp_compiled = None
self.logger = get_logger(self.conf, log_route='name_check')
self.register_info()
def register_info(self):
register_swift_info('name_check',
forbidden_chars=self.forbidden_chars,
maximum_length=self.maximum_length,
forbidden_regexp=self.forbidden_regexp
)
def check_character(self, req):
'''
Checks req.path for any forbidden characters
Returns True if there are any forbidden characters
Returns False if there aren't any forbidden characters
'''
self.logger.debug("name_check: path %s" % req.path)
self.logger.debug("name_check: self.forbidden_chars %s" %
self.forbidden_chars)
return any((c in req.path_info) for c in self.forbidden_chars)
def check_length(self, req):
'''
Checks that req.path doesn't exceed the defined maximum length
Returns True if the length exceeds the maximum
Returns False if the length is <= the maximum
'''
length = len(req.path_info)
return length > self.maximum_length
def check_regexp(self, req):
'''
Checks that req.path doesn't contain a substring matching regexps.
        Returns True if there is a forbidden substring
        Returns False if there is no forbidden substring
'''
if self.forbidden_regexp_compiled is None:
return False
self.logger.debug("name_check: path %s" % req.path)
self.logger.debug("name_check: self.forbidden_regexp %s" %
self.forbidden_regexp)
match = self.forbidden_regexp_compiled.search(req.path_info)
return (match is not None)
def __call__(self, env, start_response):
req = Request(env)
if self.check_character(req):
return HTTPBadRequest(
request=req,
body=("Object/Container/Account name contains forbidden "
"chars from %s"
% self.forbidden_chars))(env, start_response)
elif self.check_length(req):
return HTTPBadRequest(
request=req,
body=("Object/Container/Account name longer than the "
"allowed maximum "
"%s" % self.maximum_length))(env, start_response)
elif self.check_regexp(req):
return HTTPBadRequest(
request=req,
body=("Object/Container/Account name contains a forbidden "
"substring from regular expression %s"
% self.forbidden_regexp))(env, start_response)
else:
# Pass on to downstream WSGI component
return self.app(env, start_response)
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
def name_check_filter(app):
return NameCheckMiddleware(app, conf)
return name_check_filter
| swift-master | swift/common/middleware/name_check.py |
# Copyright (c) 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
List endpoints for an object, account or container.
This middleware makes it possible to integrate swift with software
that relies on data locality information to avoid network overhead,
such as Hadoop.
Using the original API, answers requests of the form::
/endpoints/{account}/{container}/{object}
/endpoints/{account}/{container}
/endpoints/{account}
/endpoints/v1/{account}/{container}/{object}
/endpoints/v1/{account}/{container}
/endpoints/v1/{account}
with a JSON-encoded list of endpoints of the form::
http://{server}:{port}/{dev}/{part}/{acc}/{cont}/{obj}
http://{server}:{port}/{dev}/{part}/{acc}/{cont}
http://{server}:{port}/{dev}/{part}/{acc}
correspondingly, e.g.::
http://10.1.1.1:6200/sda1/2/a/c2/o1
http://10.1.1.1:6200/sda1/2/a/c2
http://10.1.1.1:6200/sda1/2/a
Using the v2 API, answers requests of the form::
/endpoints/v2/{account}/{container}/{object}
/endpoints/v2/{account}/{container}
/endpoints/v2/{account}
with a JSON-encoded dictionary containing a key 'endpoints' that maps to a list
of endpoints having the same form as described above, and a key 'headers' that
maps to a dictionary of headers that should be sent with a request made to
the endpoints, e.g.::
{ "endpoints": {"http://10.1.1.1:6210/sda1/2/a/c3/o1",
"http://10.1.1.1:6230/sda3/2/a/c3/o1",
"http://10.1.1.1:6240/sda4/2/a/c3/o1"},
"headers": {"X-Backend-Storage-Policy-Index": "1"}}
In this example, the 'headers' dictionary indicates that requests to the
endpoint URLs should include the header 'X-Backend-Storage-Policy-Index: 1'
because the object's container is using storage policy index 1.
The '/endpoints/' path is customizable ('list_endpoints_path'
configuration parameter).
Intended for consumption by third-party services living inside the
cluster (as the endpoints make sense only inside the cluster behind
the firewall); potentially written in a different language.
This is why it's provided as a REST API and not just a Python API:
to avoid requiring clients to write their own ring parsers in their
languages, and to avoid the necessity to distribute the ring file
to clients and keep it up-to-date.
Note that the call is not authenticated, which means that a proxy
with this middleware enabled should not be open to an untrusted
environment (everyone can query the locality data using this middleware).
"""
import json
from six.moves.urllib.parse import quote, unquote
from swift.common.ring import Ring
from swift.common.utils import get_logger, split_path
from swift.common.swob import Request, Response
from swift.common.swob import HTTPBadRequest, HTTPMethodNotAllowed
from swift.common.storage_policy import POLICIES
from swift.proxy.controllers.base import get_container_info
RESPONSE_VERSIONS = (1.0, 2.0)
class ListEndpointsMiddleware(object):
"""
List endpoints for an object, account or container.
See above for a full description.
Uses configuration parameter `swift_dir` (default `/etc/swift`).
:param app: The next WSGI filter or app in the paste.deploy
chain.
:param conf: The configuration dict for the middleware.
"""
def __init__(self, app, conf):
self.app = app
self.logger = get_logger(conf, log_route='endpoints')
self.swift_dir = conf.get('swift_dir', '/etc/swift')
self.account_ring = Ring(self.swift_dir, ring_name='account')
self.container_ring = Ring(self.swift_dir, ring_name='container')
self.endpoints_path = conf.get('list_endpoints_path', '/endpoints/')
if not self.endpoints_path.endswith('/'):
self.endpoints_path += '/'
self.default_response_version = 1.0
self.response_map = {
1.0: self.v1_format_response,
2.0: self.v2_format_response,
}
def get_object_ring(self, policy_idx):
"""
Get the ring object to use to handle a request based on its policy.
:policy_idx: policy index as defined in swift.conf
:returns: appropriate ring object
"""
return POLICIES.get_object_ring(policy_idx, self.swift_dir)
def _parse_version(self, raw_version):
err_msg = 'Unsupported version %r' % raw_version
try:
version = float(raw_version.lstrip('v'))
except ValueError:
raise ValueError(err_msg)
if not any(version == v for v in RESPONSE_VERSIONS):
raise ValueError(err_msg)
return version
def _parse_path(self, request):
"""
Parse path parts of request into a tuple of version, account,
container, obj. Unspecified container or obj is filled in as
None; account is required; version is always returned as a
float using the configured default response version if not
specified in the request.
:param request: the swob request
:returns: parsed path parts as a tuple with version filled in as
configured default response version if not specified.
:raises ValueError: if path is invalid, message will say why.
"""
clean_path = request.path[len(self.endpoints_path) - 1:]
# try to peel off version
try:
raw_version, rest = split_path(clean_path, 1, 2, True)
except ValueError:
raise ValueError('No account specified')
try:
version = self._parse_version(raw_version)
except ValueError:
if raw_version.startswith('v') and '_' not in raw_version:
# looks more like an invalid version than an account
raise
# probably no version specified, but if the client really
# said /endpoints/v_3/account they'll probably be sorta
# confused by the useless response and lack of error.
version = self.default_response_version
rest = clean_path
else:
rest = '/' + rest if rest else '/'
try:
account, container, obj = split_path(rest, 1, 3, True)
except ValueError:
raise ValueError('No account specified')
return version, account, container, obj
def v1_format_response(self, req, endpoints, **kwargs):
return Response(json.dumps(endpoints),
content_type='application/json')
def v2_format_response(self, req, endpoints, storage_policy_index,
**kwargs):
resp = {
'endpoints': endpoints,
'headers': {},
}
if storage_policy_index is not None:
resp['headers'][
'X-Backend-Storage-Policy-Index'] = str(storage_policy_index)
return Response(json.dumps(resp),
content_type='application/json')
def __call__(self, env, start_response):
request = Request(env)
if not request.path.startswith(self.endpoints_path):
return self.app(env, start_response)
if request.method != 'GET':
return HTTPMethodNotAllowed(
req=request, headers={"Allow": "GET"})(env, start_response)
try:
version, account, container, obj = self._parse_path(request)
except ValueError as err:
return HTTPBadRequest(str(err))(env, start_response)
account = unquote(account)
if container is not None:
container = unquote(container)
if obj is not None:
obj = unquote(obj)
storage_policy_index = None
if obj is not None:
container_info = get_container_info(
{'PATH_INFO': '/v1/%s/%s' % (account, container)},
self.app, swift_source='LE')
storage_policy_index = container_info['storage_policy']
obj_ring = self.get_object_ring(storage_policy_index)
partition, nodes = obj_ring.get_nodes(
account, container, obj)
endpoint_template = 'http://{ip}:{port}/{device}/{partition}/' + \
'{account}/{container}/{obj}'
elif container is not None:
partition, nodes = self.container_ring.get_nodes(
account, container)
endpoint_template = 'http://{ip}:{port}/{device}/{partition}/' + \
'{account}/{container}'
else:
partition, nodes = self.account_ring.get_nodes(
account)
endpoint_template = 'http://{ip}:{port}/{device}/{partition}/' + \
'{account}'
endpoints = []
for node in nodes:
endpoint = endpoint_template.format(
ip=node['ip'],
port=node['port'],
device=node['device'],
partition=partition,
account=quote(account),
container=quote(container or ''),
obj=quote(obj or ''))
endpoints.append(endpoint)
resp = self.response_map[version](
request, endpoints=endpoints,
storage_policy_index=storage_policy_index)
return resp(env, start_response)
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
def list_endpoints_filter(app):
return ListEndpointsMiddleware(app, conf)
return list_endpoints_filter
| swift-master | swift/common/middleware/list_endpoints.py |
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
The ``container_quotas`` middleware implements simple quotas that can be
imposed on swift containers by a user with the ability to set container
metadata, most likely the account administrator. This can be useful for
limiting the scope of containers that are delegated to non-admin users, exposed
to ``formpost`` uploads, or just as a self-imposed sanity check.
Any object PUT operations that exceed these quotas return a 413 response
(request entity too large) with a descriptive body.
Quotas are subject to several limitations: eventual consistency, the timeliness
of the cached container_info (60 second ttl by default), and the inability to
reject chunked transfer uploads that exceed the quota (though once the quota
is exceeded, new chunked transfers will be refused).
Quotas are set by adding meta values to the container, and are validated when
set:
+---------------------------------------------+-------------------------------+
|Metadata | Use |
+=============================================+===============================+
| X-Container-Meta-Quota-Bytes | Maximum size of the |
| | container, in bytes. |
+---------------------------------------------+-------------------------------+
| X-Container-Meta-Quota-Count | Maximum object count of the |
| | container. |
+---------------------------------------------+-------------------------------+
The ``container_quotas`` middleware should be added to the pipeline in your
``/etc/swift/proxy-server.conf`` file just after any auth middleware.
For example::
[pipeline:main]
pipeline = catch_errors cache tempauth container_quotas proxy-server
[filter:container_quotas]
use = egg:swift#container_quotas
"""
from swift.common.http import is_success
from swift.common.swob import HTTPRequestEntityTooLarge, HTTPBadRequest, \
wsgify
from swift.common.registry import register_swift_info
from swift.proxy.controllers.base import get_container_info
class ContainerQuotaMiddleware(object):
def __init__(self, app, *args, **kwargs):
self.app = app
def bad_response(self, req, container_info):
# 401 if the user couldn't have PUT this object in the first place.
# This prevents leaking the container's existence to unauthed users.
if 'swift.authorize' in req.environ:
req.acl = container_info['write_acl']
aresp = req.environ['swift.authorize'](req)
if aresp:
return aresp
return HTTPRequestEntityTooLarge(body='Upload exceeds quota.')
@wsgify
def __call__(self, req):
try:
(version, account, container, obj) = req.split_path(3, 4, True)
except ValueError:
return self.app
# verify new quota headers are properly formatted
if not obj and req.method in ('PUT', 'POST'):
val = req.headers.get('X-Container-Meta-Quota-Bytes')
if val and not val.isdigit():
return HTTPBadRequest(body='Invalid bytes quota.')
val = req.headers.get('X-Container-Meta-Quota-Count')
if val and not val.isdigit():
return HTTPBadRequest(body='Invalid count quota.')
# check user uploads against quotas
elif obj and req.method in ('PUT'):
container_info = get_container_info(
req.environ, self.app, swift_source='CQ')
if not container_info or not is_success(container_info['status']):
# this will hopefully 404 later
return self.app
if 'quota-bytes' in container_info.get('meta', {}) and \
'bytes' in container_info and \
container_info['meta']['quota-bytes'].isdigit():
content_length = (req.content_length or 0)
new_size = int(container_info['bytes']) + content_length
if int(container_info['meta']['quota-bytes']) < new_size:
return self.bad_response(req, container_info)
if 'quota-count' in container_info.get('meta', {}) and \
'object_count' in container_info and \
container_info['meta']['quota-count'].isdigit():
new_count = int(container_info['object_count']) + 1
if int(container_info['meta']['quota-count']) < new_count:
return self.bad_response(req, container_info)
return self.app
def filter_factory(global_conf, **local_conf):
register_swift_info('container_quotas')
def container_quota_filter(app):
return ContainerQuotaMiddleware(app)
return container_quota_filter
| swift-master | swift/common/middleware/container_quotas.py |
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Domain Remap Middleware
Middleware that translates container and account parts of a domain to path
parameters that the proxy server understands.
Translation is only performed when the request URL's host domain matches one of
a list of domains. This list may be configured by the option
``storage_domain``, and defaults to the single domain ``example.com``.
If not already present, a configurable ``path_root``, which defaults to ``v1``,
will be added to the start of the translated path.
For example, with the default configuration::
container.AUTH-account.example.com/object
container.AUTH-account.example.com/v1/object
would both be translated to::
container.AUTH-account.example.com/v1/AUTH_account/container/object
and::
AUTH-account.example.com/container/object
AUTH-account.example.com/v1/container/object
would both be translated to::
AUTH-account.example.com/v1/AUTH_account/container/object
Additionally, translation is only performed when the account name in the
translated path starts with a reseller prefix matching one of a list configured
by the option ``reseller_prefixes``, or when no match is found but a
``default_reseller_prefix`` has been configured.
The ``reseller_prefixes`` list defaults to the single prefix ``AUTH``. The
``default_reseller_prefix`` is not configured by default.
Browsers can convert a host header to lowercase, so the middleware checks that
the reseller prefix on the account name is the correct case. This is done by
comparing the items in the ``reseller_prefixes`` config option to the found
prefix. If they match except for case, the item from ``reseller_prefixes`` will
be used instead of the found reseller prefix. The middleware will also replace
any hyphen ('-') in the account name with an underscore ('_').
For example, with the default configuration::
auth-account.example.com/container/object
AUTH-account.example.com/container/object
auth_account.example.com/container/object
AUTH_account.example.com/container/object
would all be translated to::
<unchanged>.example.com/v1/AUTH_account/container/object
When no match is found in ``reseller_prefixes``, the
``default_reseller_prefix`` config option is used. When no
``default_reseller_prefix`` is configured, any request with an account prefix
not in the ``reseller_prefixes`` list will be ignored by this middleware.
For example, with ``default_reseller_prefix = AUTH``::
account.example.com/container/object
would be translated to::
account.example.com/v1/AUTH_account/container/object
Note that this middleware requires that container names and account names
(except as described above) must be DNS-compatible. This means that the account
name created in the system and the containers created by users cannot exceed 63
characters or have UTF-8 characters. These are restrictions over and above what
Swift requires and are not explicitly checked. Simply put, this middleware
will do a best-effort attempt to derive account and container names from
elements in the domain name and put those derived values into the URL path
(leaving the ``Host`` header unchanged).
Also note that using :doc:`overview_container_sync` with remapped domain names
is not advised. With :doc:`overview_container_sync`, you should use the true
storage end points as sync destinations.
"""
from swift.common.middleware import RewriteContext
from swift.common.swob import Request, HTTPBadRequest, wsgi_quote
from swift.common.utils import config_true_value, list_from_csv
from swift.common.registry import register_swift_info
class _DomainRemapContext(RewriteContext):
base_re = r'^(https?://[^/]+)%s(.*)$'
class DomainRemapMiddleware(object):
"""
Domain Remap Middleware
See above for a full description.
:param app: The next WSGI filter or app in the paste.deploy
chain.
:param conf: The configuration dict for the middleware.
"""
def __init__(self, app, conf):
self.app = app
storage_domain = conf.get('storage_domain', 'example.com')
self.storage_domain = ['.' + s for s in
list_from_csv(storage_domain)
if not s.startswith('.')]
self.storage_domain += [s for s in list_from_csv(storage_domain)
if s.startswith('.')]
self.path_root = conf.get('path_root', 'v1').strip('/') + '/'
prefixes = conf.get('reseller_prefixes', 'AUTH')
self.reseller_prefixes = list_from_csv(prefixes)
self.reseller_prefixes_lower = [x.lower()
for x in self.reseller_prefixes]
self.default_reseller_prefix = conf.get('default_reseller_prefix')
self.mangle_client_paths = config_true_value(
conf.get('mangle_client_paths'))
def __call__(self, env, start_response):
if not self.storage_domain:
return self.app(env, start_response)
if 'HTTP_HOST' in env:
given_domain = env['HTTP_HOST']
else:
given_domain = env['SERVER_NAME']
port = ''
if ':' in given_domain:
given_domain, port = given_domain.rsplit(':', 1)
storage_domain = next((domain for domain in self.storage_domain
if given_domain.endswith(domain)), None)
if storage_domain:
parts_to_parse = given_domain[:-len(storage_domain)]
parts_to_parse = parts_to_parse.strip('.').split('.')
len_parts_to_parse = len(parts_to_parse)
if len_parts_to_parse == 2:
container, account = parts_to_parse
elif len_parts_to_parse == 1:
container, account = None, parts_to_parse[0]
else:
resp = HTTPBadRequest(request=Request(env),
body=b'Bad domain in host header',
content_type='text/plain')
return resp(env, start_response)
if len(self.reseller_prefixes) > 0:
if '_' not in account and '-' in account:
account = account.replace('-', '_', 1)
account_reseller_prefix = account.split('_', 1)[0].lower()
if account_reseller_prefix in self.reseller_prefixes_lower:
prefix_index = self.reseller_prefixes_lower.index(
account_reseller_prefix)
real_prefix = self.reseller_prefixes[prefix_index]
if not account.startswith(real_prefix):
account_suffix = account[len(real_prefix):]
account = real_prefix + account_suffix
elif self.default_reseller_prefix:
# account prefix is not in config list. Add default one.
account = "%s_%s" % (self.default_reseller_prefix, account)
else:
# account prefix is not in config list. bail.
return self.app(env, start_response)
requested_path = env['PATH_INFO']
path = requested_path[1:]
new_path_parts = ['', self.path_root[:-1], account]
if container:
new_path_parts.append(container)
if self.mangle_client_paths and (path + '/').startswith(
self.path_root):
path = path[len(self.path_root):]
new_path_parts.append(path)
new_path = '/'.join(new_path_parts)
env['PATH_INFO'] = new_path
context = _DomainRemapContext(
self.app, wsgi_quote(requested_path), wsgi_quote(new_path))
return context.handle_request(env, start_response)
return self.app(env, start_response)
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
register_swift_info(
'domain_remap',
default_reseller_prefix=conf.get('default_reseller_prefix'))
def domain_filter(app):
return DomainRemapMiddleware(app, conf)
return domain_filter
| swift-master | swift/common/middleware/domain_remap.py |
# Copyright (c) 2010-2012 OpenStack, LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Profiling middleware for Swift Servers.
The current implementation is based on the eventlet-aware profiler. (In the
future, more profilers could be added to collect more data for analysis.)
It profiles all incoming requests and accumulates CPU timing statistics
for performance tuning and optimization. A mini web UI is also
provided for profiling data analysis. It can be accessed from the URLs
below.
Index page for browse profile data::
http://SERVER_IP:PORT/__profile__
List all profiles to return profile ids in json format::
http://SERVER_IP:PORT/__profile__/
http://SERVER_IP:PORT/__profile__/all
Retrieve specific profile data in different formats::
http://SERVER_IP:PORT/__profile__/PROFILE_ID?format=[default|json|csv|ods]
http://SERVER_IP:PORT/__profile__/current?format=[default|json|csv|ods]
http://SERVER_IP:PORT/__profile__/all?format=[default|json|csv|ods]
Retrieve metrics from specific function in json format::
http://SERVER_IP:PORT/__profile__/PROFILE_ID/NFL?format=json
http://SERVER_IP:PORT/__profile__/current/NFL?format=json
http://SERVER_IP:PORT/__profile__/all/NFL?format=json
NFL is defined by the concatenation of the file name, function name and the
first line number.
e.g.::
account.py:50(GETorHEAD)
or with full path:
opt/stack/swift/swift/proxy/controllers/account.py:50(GETorHEAD)
A list of URL examples:
http://localhost:8080/__profile__ (proxy server)
http://localhost:6200/__profile__/all (object server)
http://localhost:6201/__profile__/current (container server)
http://localhost:6202/__profile__/12345?format=json (account server)
The profiling middleware can be configured in the paste file for WSGI servers
such as the proxy, account, container and object servers. Please refer to the
sample configuration files in the etc directory.
The profiling data is provided in four formats: binary (by default),
json, csv and an ods spreadsheet, which requires installing the odfpy library::
sudo pip install odfpy
There's also a simple visualization capability which is enabled by using the
matplotlib toolkit. It must also be installed if you want to use
it to visualize statistics data::
sudo apt-get install python-matplotlib
"""
import os
import sys
import time
from eventlet import greenthread, GreenPool, patcher
import eventlet.green.profile as eprofile
import six
from six.moves import urllib
from swift import gettext_ as _
from swift.common.utils import get_logger, config_true_value
from swift.common.swob import Request
from swift.common.middleware.x_profile.exceptions import MethodNotAllowed
from swift.common.middleware.x_profile.exceptions import NotFoundException
from swift.common.middleware.x_profile.exceptions import ProfileException
from swift.common.middleware.x_profile.html_viewer import HTMLViewer
from swift.common.middleware.x_profile.profile_model import ProfileLog
DEFAULT_PROFILE_PREFIX = '/tmp/log/swift/profile/default.profile'
# unwind the iterator; it may call start_response, do lots of work, etc
PROFILE_EXEC_EAGER = """
app_iter = self.app(environ, start_response)
app_iter_ = list(app_iter)
if hasattr(app_iter, 'close'):
app_iter.close()
"""
# don't unwind the iterator (don't consume resources)
PROFILE_EXEC_LAZY = """
app_iter_ = self.app(environ, start_response)
"""
if six.PY3:
thread = patcher.original('_thread') # non-monkeypatched module needed
else:
thread = patcher.original('thread') # non-monkeypatched module needed
# This monkey patch fixes the problem that the eventlet profile tool
# cannot accumulate profiling results across multiple calls
# of runcall and runctx.
def new_setup(self):
self._has_setup = True
self.cur = None
self.timings = {}
self.current_tasklet = greenthread.getcurrent()
self.thread_id = thread.get_ident()
self.simulate_call("profiler")
def new_runctx(self, cmd, globals, locals):
if not getattr(self, '_has_setup', False):
self._setup()
try:
return self.base.runctx(self, cmd, globals, locals)
finally:
self.TallyTimings()
def new_runcall(self, func, *args, **kw):
if not getattr(self, '_has_setup', False):
self._setup()
try:
return self.base.runcall(self, func, *args, **kw)
finally:
self.TallyTimings()
class ProfileMiddleware(object):
def __init__(self, app, conf):
self.app = app
self.logger = get_logger(conf, log_route='profile')
self.log_filename_prefix = conf.get('log_filename_prefix',
DEFAULT_PROFILE_PREFIX)
dirname = os.path.dirname(self.log_filename_prefix)
        # Note: creating this directory may fail with permission denied;
        # it is better to create it and grant access to the current
        # user in advance.
if not os.path.exists(dirname):
os.makedirs(dirname)
self.dump_interval = float(conf.get('dump_interval', 5.0))
self.dump_timestamp = config_true_value(conf.get(
'dump_timestamp', 'no'))
self.flush_at_shutdown = config_true_value(conf.get(
'flush_at_shutdown', 'no'))
self.path = conf.get('path', '__profile__').replace('/', '')
self.unwind = config_true_value(conf.get('unwind', 'no'))
self.profile_module = conf.get('profile_module',
'eventlet.green.profile')
self.profiler = get_profiler(self.profile_module)
self.profile_log = ProfileLog(self.log_filename_prefix,
self.dump_timestamp)
self.viewer = HTMLViewer(self.path, self.profile_module,
self.profile_log)
self.dump_pool = GreenPool(1000)
self.last_dump_at = None
def __del__(self):
if self.flush_at_shutdown:
self.profile_log.clear(str(os.getpid()))
def _combine_body_qs(self, request):
wsgi_input = request.environ['wsgi.input']
query_dict = request.params
qs_in_body = wsgi_input.read().decode('utf-8')
query_dict.update(urllib.parse.parse_qs(qs_in_body,
keep_blank_values=True,
strict_parsing=False))
return query_dict
def dump_checkpoint(self):
current_time = time.time()
if self.last_dump_at is None or self.last_dump_at +\
self.dump_interval < current_time:
self.dump_pool.spawn_n(self.profile_log.dump_profile,
self.profiler, os.getpid())
self.last_dump_at = current_time
def __call__(self, environ, start_response):
request = Request(environ)
path_entry = request.path_info.split('/')
# hijack favicon request sent by browser so that it doesn't
# invoke profiling hook and contaminate the data.
if path_entry[1] == 'favicon.ico':
start_response('200 OK', [])
return ''
elif path_entry[1] == self.path:
try:
self.dump_checkpoint()
query_dict = self._combine_body_qs(request)
content, headers = self.viewer.render(request.url,
request.method,
path_entry,
query_dict,
self.renew_profile)
start_response('200 OK', headers)
if isinstance(content, six.text_type):
content = content.encode('utf-8')
return [content]
except MethodNotAllowed as mx:
start_response('405 Method Not Allowed', [])
return '%s' % mx
except NotFoundException as nx:
start_response('404 Not Found', [])
return '%s' % nx
except ProfileException as pf:
start_response('500 Internal Server Error', [])
return '%s' % pf
except Exception as ex:
start_response('500 Internal Server Error', [])
return _('Error on render profiling results: %s') % ex
else:
_locals = locals()
code = self.unwind and PROFILE_EXEC_EAGER or\
PROFILE_EXEC_LAZY
self.profiler.runctx(code, globals(), _locals)
app_iter = _locals['app_iter_']
self.dump_checkpoint()
return app_iter
def renew_profile(self):
self.profiler = get_profiler(self.profile_module)
def get_profiler(profile_module):
if profile_module == 'eventlet.green.profile':
eprofile.Profile._setup = new_setup
eprofile.Profile.runctx = new_runctx
eprofile.Profile.runcall = new_runcall
# hacked method to import profile module supported in python 2.6
__import__(profile_module)
return sys.modules[profile_module].Profile()
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
def profile_filter(app):
return ProfileMiddleware(app, conf)
return profile_filter
| swift-master | swift/common/middleware/xprofile.py |
# Copyright (c) 2018 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""
Middleware that will provide Static Large Object (SLO) support.
This feature is very similar to Dynamic Large Object (DLO) support in that
it allows the user to upload many objects concurrently and afterwards
download them as a single object. It is different in that it does not rely
on eventually consistent container listings to do so. Instead, a user
defined manifest of the object segments is used.
----------------------
Uploading the Manifest
----------------------
After the user has uploaded the objects to be concatenated, a manifest is
uploaded. The request must be a ``PUT`` with the query parameter::
?multipart-manifest=put
The body of this request will be an ordered list of segment descriptions in
JSON format. The data to be supplied for each segment is either:
=========== ========================================================
Key Description
=========== ========================================================
path the path to the segment object (not including account)
/container/object_name
etag (optional) the ETag given back when the segment object
was PUT
size_bytes (optional) the size of the complete segment object in
bytes
range (optional) the (inclusive) range within the object to
use as a segment. If omitted, the entire object is used
=========== ========================================================
Or:
=========== ========================================================
Key Description
=========== ========================================================
data base64-encoded data to be returned
=========== ========================================================
.. note::
At least one object-backed segment must be included. If you'd like
to create a manifest consisting purely of data segments, consider
uploading a normal object instead.
The format of the list will be::
[{"path": "/cont/object",
"etag": "etagoftheobjectsegment",
"size_bytes": 10485760,
"range": "1048576-2097151"},
{"data": base64.b64encode("interstitial data")},
{"path": "/cont/another-object", ...},
...]
The number of object-backed segments is limited to ``max_manifest_segments``
(configurable in proxy-server.conf, default 1000). Each segment must be at
least 1 byte. On upload, the middleware will head every object-backed segment
passed in to verify:
1. the segment exists (i.e. the ``HEAD`` was successful);
2. the segment meets minimum size requirements;
3. if the user provided a non-null ``etag``, the etag matches;
4. if the user provided a non-null ``size_bytes``, the size_bytes matches; and
5. if the user provided a ``range``, it is a singular, syntactically correct
range that is satisfiable given the size of the object referenced.
For inlined data segments, the middleware verifies each is valid, non-empty
base64-encoded binary data. Note that data segments *do not* count against
``max_manifest_segments``.
Note that the ``etag`` and ``size_bytes`` keys are optional; if omitted, the
verification is not performed. If any of the objects fail to verify (not
found, size/etag mismatch, below minimum size, invalid range) then the user
will receive a 4xx error response. If everything does match, the user will
receive a 2xx response and the SLO object is ready for downloading.
Note that large manifests may take a long time to verify; historically,
clients would need to use a long read timeout for the connection to give
Swift enough time to send a final ``201 Created`` or ``400 Bad Request``
response. Now, clients should use the query parameters::
?multipart-manifest=put&heartbeat=on
to request that Swift send an immediate ``202 Accepted`` response and periodic
whitespace to keep the connection alive. A final response code will appear in
the body. The format of the response body defaults to text/plain but can be
either json or xml depending on the ``Accept`` header. An example body is as
follows::
Response Status: 201 Created
Response Body:
Etag: "8f481cede6d2ddc07cb36aa084d9a64d"
Last Modified: Wed, 25 Oct 2017 17:08:55 GMT
Errors:
Or, as a json response::
{"Response Status": "201 Created",
"Response Body": "",
"Etag": "\"8f481cede6d2ddc07cb36aa084d9a64d\"",
"Last Modified": "Wed, 25 Oct 2017 17:08:55 GMT",
"Errors": []}
Behind the scenes, on success, a JSON manifest generated from the user input is
sent to object servers with an extra ``X-Static-Large-Object: True`` header
and a modified ``Content-Type``. The items in this manifest will include the
``etag`` and ``size_bytes`` for each segment, regardless of whether the client
specified them for verification. The parameter ``swift_bytes=$total_size`` will
be appended to the existing ``Content-Type``, where ``$total_size`` is the sum
of all the included segments' ``size_bytes``. This extra parameter will be
hidden from the user.
Manifest files can reference objects in separate containers, which will improve
concurrent upload speed. Objects can be referenced by multiple manifests. The
segments of an SLO manifest can even be other SLO manifests. Treat them as any
other object, i.e. use the ``Etag`` and ``Content-Length`` given on the ``PUT``
of the sub-SLO in the manifest to the parent SLO.
While uploading a manifest, a user can send ``Etag`` for verification. It needs
to be md5 of the segments' etags, if there is no range specified. For example,
if the manifest to be uploaded looks like this::
[{"path": "/cont/object1",
"etag": "etagoftheobjectsegment1",
"size_bytes": 10485760},
{"path": "/cont/object2",
"etag": "etagoftheobjectsegment2",
"size_bytes": 10485760}]
The Etag of the above manifest would be md5 of ``etagoftheobjectsegment1`` and
``etagoftheobjectsegment2``. This could be computed in the following way::
echo -n 'etagoftheobjectsegment1etagoftheobjectsegment2' | md5sum
If a manifest to be uploaded with a segment range looks like this::
[{"path": "/cont/object1",
"etag": "etagoftheobjectsegmentone",
"size_bytes": 10485760,
"range": "1-2"},
{"path": "/cont/object2",
"etag": "etagoftheobjectsegmenttwo",
"size_bytes": 10485760,
"range": "3-4"}]
While computing the Etag of the above manifest, internally each segment's etag
will be taken in the form of ``etagvalue:rangevalue;``. Hence the Etag of the
above manifest would be::
echo -n 'etagoftheobjectsegmentone:1-2;etagoftheobjectsegmenttwo:3-4;' \
| md5sum
For the purposes of Etag computations, inlined data segments are considered to
have an etag of the md5 of the raw data (i.e., *not* base64-encoded).
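Putting these rules together, a rough sketch of the client-side computation for
a manifest that mixes a ranged object segment with an inline data segment (all
values illustrative) could be::
    import base64
    import hashlib
    pieces = [
        # object-backed segment with a range: '<etag>:<range>;'
        'etagoftheobjectsegmentone:1-2;',
        # inline data segment: md5 of the *decoded* bytes
        hashlib.md5(base64.b64decode('aGVsbG8=')).hexdigest(),
    ]
    slo_etag = hashlib.md5(''.join(pieces).encode('ascii')).hexdigest()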
-------------------
Range Specification
-------------------
Users can include an optional ``range`` field in a segment description to
specify which bytes from the underlying object should be used for the segment
data. Only one range may be specified per segment.
.. note::
The ``etag`` and ``size_bytes`` fields still describe the backing object
as a whole.
If a user uploads this manifest::
[{"path": "/con/obj_seg_1", "size_bytes": 2097152, "range": "0-1048576"},
{"path": "/con/obj_seg_2", "size_bytes": 2097152,
"range": "512-1550000"},
{"path": "/con/obj_seg_1", "size_bytes": 2097152, "range": "-2048"}]
The resulting object will consist of the first 1048577 bytes of /con/obj_seg_1
(offsets 0 through 1048576, inclusive), followed by the bytes at offsets 512
through 1550000 (inclusive) of /con/obj_seg_2, and finally the last 2048 bytes
of /con/obj_seg_1 (offsets 2095104 through 2097151).
.. note::
The minimum sized range is 1 byte. This is the same as the minimum
segment size.
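Internally, ranges are parsed and normalized against the segment's actual
length using the same ``swob.Range`` machinery that this module imports; as a
quick sketch, a suffix range resolves as follows (the length is illustrative)::
    from swift.common.swob import Range
    rng = Range('bytes=-2048')
    # ranges_for_length returns (start, end) pairs with an *exclusive* end
    print(rng.ranges_for_length(2097152))  # [(2095104, 2097152)]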
-------------------------
Inline Data Specification
-------------------------
When uploading a manifest, users can include 'data' segments that should
be included along with objects. The data in these segments must be
base64-encoded binary data and will be included in the etag of the
resulting large object exactly as if that data had been uploaded and
referenced as separate objects.
.. note::
This feature is primarily aimed at reducing the need for storing
many tiny objects, and as such any supplied data must fit within
the maximum manifest size (default is 8MiB). This maximum size
can be configured via ``max_manifest_size`` in proxy-server.conf.
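For instance, a client could build a manifest that mixes an object-backed
segment with a small piece of inline data like this (a sketch; the path is
illustrative)::
    import base64
    import json
    manifest = [
        {"path": "/cont/object", "etag": None, "size_bytes": None},
        {"data": base64.b64encode(b"hello").decode("ascii")},
    ]
    body = json.dumps(manifest)  # used as the body of the manifest PUT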
-------------------------
Retrieving a Large Object
-------------------------
A ``GET`` request to the manifest object will return the concatenation of the
objects from the manifest much like DLO. If any of the segments from the
manifest are not found or their ``Etag``/``Content-Length`` have changed since
upload, the connection will drop. In this case a ``409 Conflict`` will be
logged in the proxy logs and the user will receive incomplete results. Note
that this will be enforced regardless of whether the user performed per-segment
validation during upload.
The headers from this ``GET`` or ``HEAD`` request will return the metadata
attached to the manifest object itself with some exceptions:
===================== ==================================================
Header Value
===================== ==================================================
Content-Length the total size of the SLO (the sum of the sizes of
the segments in the manifest)
X-Static-Large-Object the string "True"
Etag the etag of the SLO (generated the same way as DLO)
===================== ==================================================
A ``GET`` request with the query parameter::
?multipart-manifest=get
will return a transformed version of the original manifest, containing
additional fields and different key names. For example, the first manifest in
the example above would look like this::
[{"name": "/cont/object",
"hash": "etagoftheobjectsegment",
"bytes": 10485760,
"range": "1048576-2097151"}, ...]
As you can see, some of the fields are renamed compared to the put request:
*path* is *name*, *etag* is *hash*, *size_bytes* is *bytes*. The *range* field
remains the same (if present).
A GET request with the query parameters::
?multipart-manifest=get&format=raw
will return the contents of the original manifest as it was sent by the client.
The main purpose of both calls is debugging.
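As a hedged client-side sketch (host, path, and token are illustrative;
``requests`` is assumed), retrieving the stored manifest rather than the
concatenated object might look like::
    import requests
    resp = requests.get(
        "https://swift.example.com/v1/AUTH_test/cont/bigobj"
        "?multipart-manifest=get",
        headers={"X-Auth-Token": "<token>"})
    segments = resp.json()  # list of {"name", "hash", "bytes", ...} entries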
When the manifest object is uploaded, you are more or less guaranteed that
every segment in the manifest existed and matched its specification at that
time. However, nothing prevents a user from later breaking the SLO download by
deleting or replacing a segment referenced in the manifest; it is left to the
user to use caution in handling the segments.
-----------------------
Deleting a Large Object
-----------------------
A ``DELETE`` request will just delete the manifest object itself. The segment
data referenced by the manifest will remain unchanged.
A ``DELETE`` with a query parameter::
?multipart-manifest=delete
will delete all the segments referenced in the manifest and then the manifest
itself. The failure response will be similar to the bulk delete middleware.
A ``DELETE`` with the query parameters::
?multipart-manifest=delete&async=yes
will schedule all the segments referenced in the manifest to be deleted
asynchronously and then delete the manifest itself. Note that segments will
continue to appear in listings and be counted for quotas until they are
cleaned up by the object-expirer. This option is only available when all
segments are in the same container and none of them are nested SLOs.
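As a client-side sketch (names and token illustrative; ``requests`` assumed),
deleting the referenced segments along with the manifest might look like::
    import requests
    requests.delete(
        "https://swift.example.com/v1/AUTH_test/cont/bigobj"
        "?multipart-manifest=delete",
        headers={"X-Auth-Token": "<token>"})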
------------------------
Modifying a Large Object
------------------------
``PUT`` and ``POST`` requests will work as expected; for example, a ``PUT``
will simply overwrite the manifest object.
------------------
Container Listings
------------------
In a container listing the size listed for SLO manifest objects will be the
``total_size`` of the concatenated segments in the manifest. The overall
``X-Container-Bytes-Used`` for the container (and subsequently for the account)
will not reflect ``total_size`` of the manifest but the actual size of the JSON
data stored. The reason for this somewhat confusing discrepancy is that we want the
container listing to reflect the size of the manifest object when it is
downloaded. We do not, however, want to count the bytes-used twice (for both
the manifest and the segments it's referring to) in the container and account
metadata which can be used for stats and billing purposes.
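For illustration only, a container-listing entry for an SLO manifest processed
by this middleware might look roughly like this (all values invented)::
    [{"name": "bigobj",
      "bytes": 20971520,
      "hash": "8f481cede6d2ddc07cb36aa084d9a64d",
      "slo_etag": "\"d41d8cd98f00b204e9800998ecf8427e\"",
      "content_type": "application/octet-stream",
      "last_modified": "2017-10-25T17:08:55.000000"}]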
"""
import base64
from cgi import parse_header
from collections import defaultdict
from datetime import datetime
import json
import mimetypes
import re
import time
import six
from swift.cli.container_deleter import make_delete_jobs
from swift.common.exceptions import ListingIterError, SegmentError
from swift.common.middleware.listing_formats import \
MAX_CONTAINER_LISTING_CONTENT_LENGTH
from swift.common.swob import Request, HTTPBadRequest, HTTPServerError, \
HTTPMethodNotAllowed, HTTPRequestEntityTooLarge, HTTPLengthRequired, \
HTTPOk, HTTPPreconditionFailed, HTTPException, HTTPNotFound, \
HTTPUnauthorized, HTTPConflict, HTTPUnprocessableEntity, \
HTTPServiceUnavailable, Response, Range, normalize_etag, \
RESPONSE_REASONS, str_to_wsgi, bytes_to_wsgi, wsgi_to_str, wsgi_quote
from swift.common.utils import get_logger, config_true_value, \
get_valid_utf8_str, override_bytes_from_content_type, split_path, \
RateLimitedIterator, quote, close_if_possible, closing_if_possible, \
LRUCache, StreamingPile, strict_b64decode, Timestamp, drain_and_close, \
get_expirer_container, md5
from swift.common.registry import register_swift_info
from swift.common.request_helpers import SegmentedIterable, \
get_sys_meta_prefix, update_etag_is_at_header, resolve_etag_is_at_header, \
get_container_update_override_key, update_ignore_range_header
from swift.common.constraints import check_utf8, AUTO_CREATE_ACCOUNT_PREFIX
from swift.common.http import HTTP_NOT_FOUND, HTTP_UNAUTHORIZED, is_success
from swift.common.wsgi import WSGIContext, make_subrequest, make_env, \
make_pre_authed_request
from swift.common.middleware.bulk import get_response_body, \
ACCEPTABLE_FORMATS, Bulk
from swift.proxy.controllers.base import get_container_info
DEFAULT_RATE_LIMIT_UNDER_SIZE = 1024 ** 2 # 1 MiB
DEFAULT_MAX_MANIFEST_SEGMENTS = 1000
DEFAULT_MAX_MANIFEST_SIZE = 8 * (1024 ** 2) # 8 MiB
DEFAULT_YIELD_FREQUENCY = 10
SLO_KEYS = {
# required: optional
'data': set(),
'path': {'range', 'etag', 'size_bytes'},
}
SYSMETA_SLO_ETAG = get_sys_meta_prefix('object') + 'slo-etag'
SYSMETA_SLO_SIZE = get_sys_meta_prefix('object') + 'slo-size'
def parse_and_validate_input(req_body, req_path):
"""
Given a request body, parses it and returns a list of dictionaries.
The output structure is nearly the same as the input structure, but it
is not an exact copy. Given a valid object-backed input dictionary
``d_in``, its corresponding output dictionary ``d_out`` will be as follows:
* d_out['etag'] == d_in['etag']
* d_out['path'] == d_in['path']
* d_in['size_bytes'] can be a string ("12") or an integer (12), but
d_out['size_bytes'] is an integer.
* (optional) d_in['range'] is a string of the form "M-N", "M-", or
"-N", where M and N are non-negative integers. d_out['range'] is the
corresponding swob.Range object. If d_in does not have a key
'range', neither will d_out.
Inlined data dictionaries will have any extraneous padding stripped.
:raises: HTTPException on parse errors or semantic errors (e.g. bogus
JSON structure, syntactically invalid ranges)
:returns: a list of dictionaries on success
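A rough usage sketch (values illustrative; the path refers to a hypothetical
object)::
    parse_and_validate_input(
        '[{"path": "/cont/obj", "size_bytes": "12"}]',
        '/v1/AUTH_test/cont/manifest')
    # -> [{'path': '/cont/obj', 'size_bytes': 12}]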
"""
try:
parsed_data = json.loads(req_body)
except ValueError:
raise HTTPBadRequest("Manifest must be valid JSON.\n")
if not isinstance(parsed_data, list):
raise HTTPBadRequest("Manifest must be a list.\n")
# If we got here, req_path refers to an object, so this won't ever raise
# ValueError.
vrs, account, _junk = split_path(req_path, 3, 3, True)
errors = []
for seg_index, seg_dict in enumerate(parsed_data):
if not isinstance(seg_dict, dict):
errors.append(b"Index %d: not a JSON object" % seg_index)
continue
for required in SLO_KEYS:
if required in seg_dict:
segment_type = required
break
else:
errors.append(
b"Index %d: expected keys to include one of %s"
% (seg_index,
b" or ".join(repr(required) for required in SLO_KEYS)))
continue
allowed_keys = SLO_KEYS[segment_type].union([segment_type])
extraneous_keys = [k for k in seg_dict if k not in allowed_keys]
if extraneous_keys:
errors.append(
b"Index %d: extraneous keys %s"
% (seg_index,
b", ".join(json.dumps(ek).encode('ascii')
for ek in sorted(extraneous_keys))))
continue
if segment_type == 'path':
if not isinstance(seg_dict['path'], six.string_types):
errors.append(b"Index %d: \"path\" must be a string" %
seg_index)
continue
if not (seg_dict.get('etag') is None or
isinstance(seg_dict['etag'], six.string_types)):
errors.append(b'Index %d: "etag" must be a string or null '
b'(if provided)' % seg_index)
continue
if '/' not in seg_dict['path'].strip('/'):
errors.append(
b"Index %d: path does not refer to an object. Path must "
b"be of the form /container/object." % seg_index)
continue
seg_size = seg_dict.get('size_bytes')
if seg_size is not None:
try:
seg_size = int(seg_size)
seg_dict['size_bytes'] = seg_size
except (TypeError, ValueError):
errors.append(b"Index %d: invalid size_bytes" % seg_index)
continue
if seg_size < 1 and seg_index != (len(parsed_data) - 1):
errors.append(b"Index %d: too small; each segment must be "
b"at least 1 byte."
% (seg_index,))
continue
obj_path = '/'.join(['', vrs, account,
quote(seg_dict['path'].lstrip('/'))])
if req_path == obj_path:
errors.append(
b"Index %d: manifest must not include itself as a segment"
% seg_index)
continue
if seg_dict.get('range'):
try:
seg_dict['range'] = Range('bytes=%s' % seg_dict['range'])
except ValueError:
errors.append(b"Index %d: invalid range" % seg_index)
continue
if len(seg_dict['range'].ranges) > 1:
errors.append(b"Index %d: multiple ranges "
b"(only one allowed)" % seg_index)
continue
# If the user *told* us the object's size, we can check range
# satisfiability right now. If they lied about the size, we'll
# fail that validation later.
if (seg_size is not None and 1 != len(
seg_dict['range'].ranges_for_length(seg_size))):
errors.append(b"Index %d: unsatisfiable range" % seg_index)
continue
elif segment_type == 'data':
# Validate that the supplied data is non-empty and base64-encoded
try:
data = strict_b64decode(seg_dict['data'])
except ValueError:
errors.append(
b"Index %d: data must be valid base64" % seg_index)
continue
if len(data) < 1:
errors.append(b"Index %d: too small; each segment must be "
b"at least 1 byte."
% (seg_index,))
continue
# re-encode to normalize padding
seg_dict['data'] = base64.b64encode(data).decode('ascii')
if parsed_data and all('data' in d for d in parsed_data):
errors.append(b"Inline data segments require at least one "
b"object-backed segment.")
if errors:
error_message = b"".join(e + b"\n" for e in errors)
raise HTTPBadRequest(error_message,
headers={"Content-Type": "text/plain"})
return parsed_data
class SloGetContext(WSGIContext):
max_slo_recursion_depth = 10
def __init__(self, slo):
self.slo = slo
super(SloGetContext, self).__init__(slo.app)
def _fetch_sub_slo_segments(self, req, version, acc, con, obj):
"""
Fetch the submanifest, parse it, and return it.
Raise exception on failures.
:param req: the upstream request
:param version: the API version string (e.g. 'v1')
:param acc: account name, as a native string
:param con: container name, as a native string
:param obj: object name, as a native string
"""
sub_req = make_subrequest(
req.environ,
path=wsgi_quote('/'.join([
'', str_to_wsgi(version),
str_to_wsgi(acc), str_to_wsgi(con), str_to_wsgi(obj)])),
method='GET',
headers={'x-auth-token': req.headers.get('x-auth-token')},
agent='%(orig)s SLO MultipartGET', swift_source='SLO')
sub_resp = sub_req.get_response(self.slo.app)
if not sub_resp.is_success:
# Error message should be short
body = sub_resp.body
if not six.PY2:
body = body.decode('utf-8')
msg = ('while fetching %s, GET of submanifest %s '
'failed with status %d (%s)')
raise ListingIterError(msg % (
req.path, sub_req.path, sub_resp.status_int,
body if len(body) <= 60 else body[:57] + '...'))
try:
with closing_if_possible(sub_resp.app_iter):
return json.loads(b''.join(sub_resp.app_iter))
except ValueError as err:
raise ListingIterError(
'while fetching %s, JSON-decoding of submanifest %s '
'failed with %s' % (req.path, sub_req.path, err))
def _segment_path(self, version, account, seg_dict):
return "/{ver}/{acc}/{conobj}".format(
ver=version, acc=account,
conobj=seg_dict['name'].lstrip('/')
)
def _segment_length(self, seg_dict):
"""
Returns the number of bytes that will be fetched from the specified
segment on a plain GET request for this SLO manifest.
"""
if 'raw_data' in seg_dict:
return len(seg_dict['raw_data'])
seg_range = seg_dict.get('range')
if seg_range is not None:
# The range is of the form N-M, where N and M are both non-negative
# decimal integers. We know this because this middleware is the
# only thing that creates the SLO manifests stored in the
# cluster.
range_start, range_end = [int(x) for x in seg_range.split('-')]
return (range_end - range_start) + 1
else:
return int(seg_dict['bytes'])
def _segment_listing_iterator(self, req, version, account, segments,
byteranges):
for seg_dict in segments:
if config_true_value(seg_dict.get('sub_slo')):
override_bytes_from_content_type(seg_dict,
logger=self.slo.logger)
# We handle the range stuff here so that we can be smart about
# skipping unused submanifests. For example, if our first segment is a
# submanifest referencing 50 MiB total, but start_byte falls in
# the 51st MiB, then we can avoid fetching the first submanifest.
#
# If we were to make SegmentedIterable handle all the range
# calculations, we would be unable to make this optimization.
total_length = sum(self._segment_length(seg) for seg in segments)
if not byteranges:
byteranges = [(0, total_length - 1)]
# Cache segments from sub-SLOs in case more than one byterange
# includes data from a particular sub-SLO. We only cache a few sets
# of segments so that a malicious user cannot build a giant SLO tree
# and then GET it to run the proxy out of memory.
#
# LRUCache is a little awkward to use this way, but it beats doing
# things manually.
#
# 20 is sort of an arbitrary choice; it's twice our max recursion
# depth, so we know this won't expand memory requirements by too
# much.
cached_fetch_sub_slo_segments = \
LRUCache(maxsize=20)(self._fetch_sub_slo_segments)
for first_byte, last_byte in byteranges:
byterange_listing_iter = self._byterange_listing_iterator(
req, version, account, segments, first_byte, last_byte,
cached_fetch_sub_slo_segments)
for seg_info in byterange_listing_iter:
yield seg_info
def _byterange_listing_iterator(self, req, version, account, segments,
first_byte, last_byte,
cached_fetch_sub_slo_segments,
recursion_depth=1):
last_sub_path = None
for seg_dict in segments:
if 'data' in seg_dict:
seg_dict['raw_data'] = strict_b64decode(seg_dict.pop('data'))
seg_length = self._segment_length(seg_dict)
if first_byte >= seg_length:
# don't need any bytes from this segment
first_byte -= seg_length
last_byte -= seg_length
continue
if last_byte < 0:
# no bytes are needed from this or any future segment
return
if 'raw_data' in seg_dict:
yield dict(seg_dict,
first_byte=max(0, first_byte),
last_byte=min(seg_length - 1, last_byte))
first_byte -= seg_length
last_byte -= seg_length
continue
seg_range = seg_dict.get('range')
if seg_range is None:
range_start, range_end = 0, seg_length - 1
else:
# This simple parsing of the range is valid because we already
# validated and supplied concrete values for the range
# during SLO manifest creation
range_start, range_end = map(int, seg_range.split('-'))
if config_true_value(seg_dict.get('sub_slo')):
# Do this check here so that we can avoid fetching this last
# manifest before raising the exception
if recursion_depth >= self.max_slo_recursion_depth:
raise ListingIterError(
"While processing manifest %r, "
"max recursion depth was exceeded" % req.path)
if six.PY2:
sub_path = get_valid_utf8_str(seg_dict['name'])
else:
sub_path = seg_dict['name']
sub_cont, sub_obj = split_path(sub_path, 2, 2, True)
if last_sub_path != sub_path:
sub_segments = cached_fetch_sub_slo_segments(
req, version, account, sub_cont, sub_obj)
last_sub_path = sub_path
# Use the existing machinery to slice into the sub-SLO.
for sub_seg_dict in self._byterange_listing_iterator(
req, version, account, sub_segments,
# This adjusts first_byte and last_byte to be
# relative to the sub-SLO.
range_start + max(0, first_byte),
min(range_end, range_start + last_byte),
cached_fetch_sub_slo_segments,
recursion_depth=recursion_depth + 1):
yield sub_seg_dict
else:
if six.PY2 and isinstance(seg_dict['name'], six.text_type):
seg_dict['name'] = seg_dict['name'].encode("utf-8")
yield dict(seg_dict,
first_byte=max(0, first_byte) + range_start,
last_byte=min(range_end, range_start + last_byte))
first_byte -= seg_length
last_byte -= seg_length
def _need_to_refetch_manifest(self, req):
"""
Just because a response shows that an object is a SLO manifest does not
mean that response's body contains the entire SLO manifest. If it
doesn't, we need to make a second request to actually get the whole
thing.
Note: this assumes that X-Static-Large-Object has already been found.
"""
if req.method == 'HEAD':
# We've already looked for SYSMETA_SLO_ETAG/SIZE in the response
# and didn't find them. We have to fetch the whole manifest and
# recompute.
return True
response_status = int(self._response_status[:3])
# These are based on etag, and the SLO's etag is almost certainly not
# the manifest object's etag. Still, it's highly likely that the
# submitted If-None-Match won't match the manifest object's etag, so
# we can avoid re-fetching the manifest if we got a successful
# response.
if ((req.if_match or req.if_none_match) and
not is_success(response_status)):
return True
if req.range and response_status in (206, 416):
content_range = ''
for header, value in self._response_headers:
if header.lower() == 'content-range':
content_range = value
break
# e.g. Content-Range: bytes 0-14289/14290
match = re.match(r'bytes (\d+)-(\d+)/(\d+)$', content_range)
if not match:
# Malformed or missing, so we don't know what we got.
return True
first_byte, last_byte, length = [int(x) for x in match.groups()]
# If and only if we actually got back the full manifest body, then
# we can avoid re-fetching the object.
got_everything = (first_byte == 0 and last_byte == length - 1)
return not got_everything
return False
def handle_slo_get_or_head(self, req, start_response):
"""
Takes a request and a start_response callable and does the normal WSGI
thing with them. Returns an iterator suitable for sending up the WSGI
chain.
:param req: :class:`~swift.common.swob.Request` object; is a ``GET`` or
``HEAD`` request aimed at what may (or may not) be a static
large object manifest.
:param start_response: WSGI start_response callable
"""
if req.params.get('multipart-manifest') != 'get':
# If this object is an SLO manifest, we may have saved off the
# large object etag during the original PUT. Send an
# X-Backend-Etag-Is-At header so that, if the SLO etag *was*
# saved, we can trust the object-server to respond appropriately
# to If-Match/If-None-Match requests.
update_etag_is_at_header(req, SYSMETA_SLO_ETAG)
# Tell the object server that if it's a manifest,
# we want the whole thing
update_ignore_range_header(req, 'X-Static-Large-Object')
resp_iter = self._app_call(req.environ)
# make sure this response is for a static large object manifest
slo_marker = slo_etag = slo_size = slo_timestamp = None
for header, value in self._response_headers:
header = header.lower()
if header == SYSMETA_SLO_ETAG:
slo_etag = value
elif header == SYSMETA_SLO_SIZE:
slo_size = value
elif (header == 'x-static-large-object' and
config_true_value(value)):
slo_marker = value
elif header == 'x-backend-timestamp':
slo_timestamp = value
if slo_marker and slo_etag and slo_size and slo_timestamp:
break
if not slo_marker:
# Not a static large object manifest. Just pass it through.
start_response(self._response_status,
self._response_headers,
self._response_exc_info)
return resp_iter
# Handle pass-through request for the manifest itself
if req.params.get('multipart-manifest') == 'get':
if req.params.get('format') == 'raw':
resp_iter = self.convert_segment_listing(
self._response_headers, resp_iter)
else:
new_headers = []
for header, value in self._response_headers:
if header.lower() == 'content-type':
new_headers.append(('Content-Type',
'application/json; charset=utf-8'))
else:
new_headers.append((header, value))
self._response_headers = new_headers
start_response(self._response_status,
self._response_headers,
self._response_exc_info)
return resp_iter
is_conditional = self._response_status.startswith(('304', '412')) and (
req.if_match or req.if_none_match)
if slo_etag and slo_size and (
req.method == 'HEAD' or is_conditional):
# Since we have length and etag, we can respond immediately
resp = Response(
status=self._response_status,
headers=self._response_headers,
app_iter=resp_iter,
request=req,
conditional_etag=resolve_etag_is_at_header(
req, self._response_headers),
conditional_response=True)
resp.headers.update({
'Etag': '"%s"' % slo_etag,
'X-Manifest-Etag': self._response_header_value('etag'),
'Content-Length': slo_size,
})
return resp(req.environ, start_response)
if self._need_to_refetch_manifest(req):
req.environ['swift.non_client_disconnect'] = True
close_if_possible(resp_iter)
del req.environ['swift.non_client_disconnect']
get_req = make_subrequest(
req.environ, method='GET',
headers={'x-auth-token': req.headers.get('x-auth-token')},
agent='%(orig)s SLO MultipartGET', swift_source='SLO')
resp_iter = self._app_call(get_req.environ)
slo_marker = config_true_value(self._response_header_value(
'x-static-large-object'))
if not slo_marker: # will also catch non-2xx responses
got_timestamp = self._response_header_value(
'x-backend-timestamp') or '0'
if Timestamp(got_timestamp) >= Timestamp(slo_timestamp):
# We've got a newer response available, so serve that.
# Note that if there's data, it's going to be a 200 now,
# not a 206, and we're not going to drop bytes in the
# proxy on the client's behalf. Fortunately, the RFC is
# pretty forgiving for a server; there's no guarantee that
# a Range header will be respected.
resp = Response(
status=self._response_status,
headers=self._response_headers,
app_iter=resp_iter,
request=req,
conditional_etag=resolve_etag_is_at_header(
req, self._response_headers),
conditional_response=is_success(
int(self._response_status[:3])))
return resp(req.environ, start_response)
else:
# We saw newer data that indicated it's an SLO, but
# couldn't fetch the whole thing; 503 seems reasonable?
close_if_possible(resp_iter)
raise HTTPServiceUnavailable(request=req)
# NB: we might have gotten an out-of-date manifest -- that's OK;
# we'll just try to serve the old data
# Any Content-Range from a manifest is almost certainly wrong for the
# full large object.
resp_headers = [(h, v) for h, v in self._response_headers
if not h.lower() == 'content-range']
response = self.get_or_head_response(
req, resp_headers, resp_iter)
return response(req.environ, start_response)
def convert_segment_listing(self, resp_headers, resp_iter):
"""
Converts the manifest data to match with the format
that was put in through ?multipart-manifest=put
:param resp_headers: response headers
:param resp_iter: a response iterable
"""
segments = self._get_manifest_read(resp_iter)
for seg_dict in segments:
if 'data' in seg_dict:
continue
seg_dict.pop('content_type', None)
seg_dict.pop('last_modified', None)
seg_dict.pop('sub_slo', None)
seg_dict['path'] = seg_dict.pop('name', None)
seg_dict['size_bytes'] = seg_dict.pop('bytes', None)
seg_dict['etag'] = seg_dict.pop('hash', None)
json_data = json.dumps(segments, sort_keys=True) # convert to string
if six.PY3:
json_data = json_data.encode('utf-8')
new_headers = []
for header, value in resp_headers:
if header.lower() == 'content-length':
new_headers.append(('Content-Length', len(json_data)))
elif header.lower() == 'etag':
new_headers.append(
('Etag', md5(json_data, usedforsecurity=False)
.hexdigest()))
else:
new_headers.append((header, value))
self._response_headers = new_headers
return [json_data]
def _get_manifest_read(self, resp_iter):
with closing_if_possible(resp_iter):
resp_body = b''.join(resp_iter)
try:
segments = json.loads(resp_body)
except ValueError:
segments = []
return segments
def get_or_head_response(self, req, resp_headers, resp_iter):
segments = self._get_manifest_read(resp_iter)
slo_etag = None
content_length = None
response_headers = []
for header, value in resp_headers:
lheader = header.lower()
if lheader == 'etag':
response_headers.append(('X-Manifest-Etag', value))
elif lheader != 'content-length':
response_headers.append((header, value))
if lheader == SYSMETA_SLO_ETAG:
slo_etag = value
elif lheader == SYSMETA_SLO_SIZE:
# it's from sysmeta, so we don't worry about non-integer
# values here
content_length = int(value)
# Prep to calculate content_length & etag if necessary
if slo_etag is None:
calculated_etag = md5(usedforsecurity=False)
if content_length is None:
calculated_content_length = 0
for seg_dict in segments:
# Decode any inlined data; it's important that we do this *before*
# calculating the segment length and etag
if 'data' in seg_dict:
seg_dict['raw_data'] = base64.b64decode(seg_dict.pop('data'))
if slo_etag is None:
if 'raw_data' in seg_dict:
r = md5(seg_dict['raw_data'],
usedforsecurity=False).hexdigest()
elif seg_dict.get('range'):
r = '%s:%s;' % (seg_dict['hash'], seg_dict['range'])
else:
r = seg_dict['hash']
calculated_etag.update(r.encode('ascii'))
if content_length is None:
if config_true_value(seg_dict.get('sub_slo')):
override_bytes_from_content_type(
seg_dict, logger=self.slo.logger)
calculated_content_length += self._segment_length(seg_dict)
if slo_etag is None:
slo_etag = calculated_etag.hexdigest()
if content_length is None:
content_length = calculated_content_length
response_headers.append(('Content-Length', str(content_length)))
response_headers.append(('Etag', '"%s"' % slo_etag))
if req.method == 'HEAD':
return self._manifest_head_response(req, response_headers)
else:
return self._manifest_get_response(
req, content_length, response_headers, segments)
def _manifest_head_response(self, req, response_headers):
conditional_etag = resolve_etag_is_at_header(req, response_headers)
return HTTPOk(request=req, headers=response_headers, body=b'',
conditional_etag=conditional_etag,
conditional_response=True)
def _manifest_get_response(self, req, content_length, response_headers,
segments):
if req.range:
byteranges = [
# For some reason, swob.Range.ranges_for_length adds 1 to the
# last byte's position.
(start, end - 1) for start, end
in req.range.ranges_for_length(content_length)]
else:
byteranges = []
ver, account, _junk = req.split_path(3, 3, rest_with_last=True)
account = wsgi_to_str(account)
plain_listing_iter = self._segment_listing_iterator(
req, ver, account, segments, byteranges)
def ratelimit_predicate(seg_dict):
if 'raw_data' in seg_dict:
return False # it's already in memory anyway
start = seg_dict.get('start_byte') or 0
end = seg_dict.get('end_byte')
if end is None:
end = int(seg_dict['bytes']) - 1
is_small = (end - start + 1) < self.slo.rate_limit_under_size
return is_small
ratelimited_listing_iter = RateLimitedIterator(
plain_listing_iter,
self.slo.rate_limit_segments_per_sec,
limit_after=self.slo.rate_limit_after_segment,
ratelimit_if=ratelimit_predicate)
# data segments are already in the correct format, but object-backed
# segments need a path key added
segment_listing_iter = (
seg_dict if 'raw_data' in seg_dict else
dict(seg_dict, path=self._segment_path(ver, account, seg_dict))
for seg_dict in ratelimited_listing_iter)
segmented_iter = SegmentedIterable(
req, self.slo.app, segment_listing_iter,
name=req.path, logger=self.slo.logger,
ua_suffix="SLO MultipartGET",
swift_source="SLO",
max_get_time=self.slo.max_get_time)
try:
segmented_iter.validate_first_segment()
except (ListingIterError, SegmentError):
# Copy from the SLO explanation in top of this file.
# If any of the segments from the manifest are not found or
# their Etag/Content Length no longer match the connection
# will drop. In this case a 409 Conflict will be logged in
# the proxy logs and the user will receive incomplete results.
return HTTPConflict(request=req)
conditional_etag = resolve_etag_is_at_header(req, response_headers)
response = Response(request=req, content_length=content_length,
headers=response_headers,
conditional_response=True,
conditional_etag=conditional_etag,
app_iter=segmented_iter)
return response
class StaticLargeObject(object):
"""
StaticLargeObject Middleware
See above for a full description.
The proxy logs created for any subrequests made will have swift.source set
to "SLO".
:param app: The next WSGI filter or app in the paste.deploy chain.
:param conf: The configuration dict for the middleware.
:param max_manifest_segments: The maximum number of segments allowed in
newly-created static large objects.
:param max_manifest_size: The maximum size (in bytes) of newly-created
static-large-object manifests.
:param yield_frequency: If the client included ``heartbeat=on`` in the
query parameters when creating a new static large
object, the period of time to wait between sending
whitespace to keep the connection alive.
"""
def __init__(self, app, conf,
max_manifest_segments=DEFAULT_MAX_MANIFEST_SEGMENTS,
max_manifest_size=DEFAULT_MAX_MANIFEST_SIZE,
yield_frequency=DEFAULT_YIELD_FREQUENCY,
allow_async_delete=True):
self.conf = conf
self.app = app
self.logger = get_logger(conf, log_route='slo')
self.max_manifest_segments = max_manifest_segments
self.max_manifest_size = max_manifest_size
self.yield_frequency = yield_frequency
self.allow_async_delete = allow_async_delete
self.max_get_time = int(self.conf.get('max_get_time', 86400))
self.rate_limit_under_size = int(self.conf.get(
'rate_limit_under_size', DEFAULT_RATE_LIMIT_UNDER_SIZE))
self.rate_limit_after_segment = int(self.conf.get(
'rate_limit_after_segment', '10'))
self.rate_limit_segments_per_sec = int(self.conf.get(
'rate_limit_segments_per_sec', '1'))
self.concurrency = min(1000, max(0, int(self.conf.get(
'concurrency', '2'))))
delete_concurrency = int(self.conf.get(
'delete_concurrency', self.concurrency))
self.bulk_deleter = Bulk(
app, {},
max_deletes_per_request=float('inf'),
delete_concurrency=delete_concurrency,
logger=self.logger)
# Need to know how to expire things to do async deletes
if conf.get('auto_create_account_prefix'):
# proxy app will log about how this should get moved to swift.conf
prefix = conf['auto_create_account_prefix']
else:
prefix = AUTO_CREATE_ACCOUNT_PREFIX
self.expiring_objects_account = prefix + (
conf.get('expiring_objects_account_name') or 'expiring_objects')
self.expiring_objects_container_divisor = int(
conf.get('expiring_objects_container_divisor', 86400))
def handle_multipart_get_or_head(self, req, start_response):
"""
Handles the GET or HEAD of a SLO manifest.
The response body (only on GET, of course) will consist of the
concatenation of the segments.
:param req: a :class:`~swift.common.swob.Request` with a path
referencing an object
:param start_response: WSGI start_response callable
:raises HTTPException: on errors
"""
return SloGetContext(self).handle_slo_get_or_head(req, start_response)
def handle_multipart_put(self, req, start_response):
"""
Will handle the PUT of a SLO manifest.
Heads every object in the manifest to check that it is valid and, if so,
saves a manifest generated from the user input. Uses WSGIContext to
call self and start_response and returns a WSGI iterator.
:param req: a :class:`~swift.common.swob.Request` with an obj in path
:param start_response: WSGI start_response callable
:raises HTTPException: on errors
"""
vrs, account, container, obj = req.split_path(4, rest_with_last=True)
if req.headers.get('X-Copy-From'):
raise HTTPMethodNotAllowed(
'Multipart Manifest PUTs cannot be COPY requests')
if req.content_length is None:
if req.headers.get('transfer-encoding', '').lower() != 'chunked':
raise HTTPLengthRequired(request=req)
else:
if req.content_length > self.max_manifest_size:
raise HTTPRequestEntityTooLarge(
"Manifest File > %d bytes" % self.max_manifest_size)
parsed_data = parse_and_validate_input(
req.body_file.read(self.max_manifest_size),
wsgi_to_str(req.path))
problem_segments = []
object_segments = [seg for seg in parsed_data if 'path' in seg]
if len(object_segments) > self.max_manifest_segments:
raise HTTPRequestEntityTooLarge(
'Number of object-backed segments must be <= %d' %
self.max_manifest_segments)
try:
out_content_type = req.accept.best_match(ACCEPTABLE_FORMATS)
except ValueError:
out_content_type = 'text/plain' # Ignore invalid header
if not out_content_type:
out_content_type = 'text/plain'
data_for_storage = [None] * len(parsed_data)
total_size = 0
path2indices = defaultdict(list)
for index, seg_dict in enumerate(parsed_data):
if 'data' in seg_dict:
data_for_storage[index] = seg_dict
total_size += len(base64.b64decode(seg_dict['data']))
else:
path2indices[seg_dict['path']].append(index)
def do_head(obj_name):
if six.PY2:
obj_path = '/'.join(['', vrs, account,
get_valid_utf8_str(obj_name).lstrip('/')])
else:
obj_path = '/'.join(['', vrs, account,
str_to_wsgi(obj_name.lstrip('/'))])
obj_path = wsgi_quote(obj_path)
sub_req = make_subrequest(
req.environ, path=obj_path + '?', # kill the query string
method='HEAD',
headers={'x-auth-token': req.headers.get('x-auth-token')},
agent='%(orig)s SLO MultipartPUT', swift_source='SLO')
return obj_name, sub_req.get_response(self)
def validate_seg_dict(seg_dict, head_seg_resp, allow_empty_segment):
obj_name = seg_dict['path']
if not head_seg_resp.is_success:
problem_segments.append([quote(obj_name),
head_seg_resp.status])
return 0, None
segment_length = head_seg_resp.content_length
if seg_dict.get('range'):
# Since we now know the length, we can normalize the
# range. We know that there is exactly one range
# requested since we checked that earlier in
# parse_and_validate_input().
ranges = seg_dict['range'].ranges_for_length(
head_seg_resp.content_length)
if not ranges:
problem_segments.append([quote(obj_name),
'Unsatisfiable Range'])
elif ranges == [(0, head_seg_resp.content_length)]:
# Just one range, and it exactly matches the object.
# Why'd we do this again?
del seg_dict['range']
segment_length = head_seg_resp.content_length
else:
rng = ranges[0]
seg_dict['range'] = '%d-%d' % (rng[0], rng[1] - 1)
segment_length = rng[1] - rng[0]
if segment_length < 1 and not allow_empty_segment:
problem_segments.append(
[quote(obj_name),
'Too small; each segment must be at least 1 byte.'])
_size_bytes = seg_dict.get('size_bytes')
size_mismatch = (
_size_bytes is not None and
_size_bytes != head_seg_resp.content_length
)
if size_mismatch:
problem_segments.append([quote(obj_name), 'Size Mismatch'])
_etag = seg_dict.get('etag')
etag_mismatch = (
_etag is not None and
_etag != head_seg_resp.etag
)
if etag_mismatch:
problem_segments.append([quote(obj_name), 'Etag Mismatch'])
if head_seg_resp.last_modified:
last_modified = head_seg_resp.last_modified
else:
# shouldn't happen
last_modified = datetime.now()
last_modified_formatted = last_modified.strftime(
'%Y-%m-%dT%H:%M:%S.%f'
)
seg_data = {
'name': '/' + seg_dict['path'].lstrip('/'),
'bytes': head_seg_resp.content_length,
'hash': head_seg_resp.etag,
'content_type': head_seg_resp.content_type,
'last_modified': last_modified_formatted
}
if seg_dict.get('range'):
seg_data['range'] = seg_dict['range']
if config_true_value(
head_seg_resp.headers.get('X-Static-Large-Object')):
seg_data['sub_slo'] = True
return segment_length, seg_data
heartbeat = config_true_value(req.params.get('heartbeat'))
separator = b''
if heartbeat:
# Apparently some ways of deploying require that this happen
# *before* the return? Not sure why.
req.environ['eventlet.minimum_write_chunk_size'] = 0
start_response('202 Accepted', [ # NB: not 201 !
('Content-Type', out_content_type),
])
separator = b'\r\n\r\n'
def resp_iter(total_size=total_size):
# wsgi won't propagate start_response calls until some data has
# been yielded so make sure first heartbeat is sent immediately
if heartbeat:
yield b' '
last_yield_time = time.time()
with StreamingPile(self.concurrency) as pile:
for obj_name, resp in pile.asyncstarmap(do_head, (
(path, ) for path in path2indices)):
now = time.time()
if heartbeat and (now - last_yield_time >
self.yield_frequency):
# Make sure we've called start_response before
# sending data
yield b' '
last_yield_time = now
for i in path2indices[obj_name]:
segment_length, seg_data = validate_seg_dict(
parsed_data[i], resp,
allow_empty_segment=(i == len(parsed_data) - 1))
data_for_storage[i] = seg_data
total_size += segment_length
# Middleware left of SLO can add a callback to the WSGI
# environment to perform additional validation and/or
# manipulation on the manifest that will be written.
hook = req.environ.get('swift.callback.slo_manifest_hook')
if hook:
more_problems = hook(data_for_storage)
if more_problems:
problem_segments.extend(more_problems)
if problem_segments:
err = HTTPBadRequest(content_type=out_content_type)
resp_dict = {}
if heartbeat:
resp_dict['Response Status'] = err.status
err_body = err.body.decode('utf-8')
resp_dict['Response Body'] = err_body or '\n'.join(
RESPONSE_REASONS.get(err.status_int, ['']))
else:
start_response(err.status,
[(h, v) for h, v in err.headers.items()
if h.lower() != 'content-length'])
yield separator + get_response_body(
out_content_type, resp_dict, problem_segments, 'upload')
return
slo_etag = md5(usedforsecurity=False)
for seg_data in data_for_storage:
if 'data' in seg_data:
raw_data = base64.b64decode(seg_data['data'])
r = md5(raw_data, usedforsecurity=False).hexdigest()
elif seg_data.get('range'):
r = '%s:%s;' % (seg_data['hash'], seg_data['range'])
else:
r = seg_data['hash']
slo_etag.update(r.encode('ascii') if six.PY3 else r)
slo_etag = slo_etag.hexdigest()
client_etag = normalize_etag(req.headers.get('Etag'))
if client_etag and client_etag != slo_etag:
err = HTTPUnprocessableEntity(request=req)
if heartbeat:
resp_dict = {}
resp_dict['Response Status'] = err.status
err_body = err.body
if six.PY3 and isinstance(err_body, bytes):
err_body = err_body.decode('utf-8', errors='replace')
resp_dict['Response Body'] = err_body or '\n'.join(
RESPONSE_REASONS.get(err.status_int, ['']))
yield separator + get_response_body(
out_content_type, resp_dict, problem_segments,
'upload')
else:
for chunk in err(req.environ, start_response):
yield chunk
return
json_data = json.dumps(data_for_storage)
if six.PY3:
json_data = json_data.encode('utf-8')
req.body = json_data
req.headers.update({
SYSMETA_SLO_ETAG: slo_etag,
SYSMETA_SLO_SIZE: total_size,
'X-Static-Large-Object': 'True',
'Etag': md5(json_data, usedforsecurity=False).hexdigest(),
})
# Ensure container listings have both etags. However, if any
# middleware to the left of us touched the base value, trust them.
override_header = get_container_update_override_key('etag')
val, sep, params = req.headers.get(
override_header, '').partition(';')
req.headers[override_header] = '%s; slo_etag=%s' % (
(val or req.headers['Etag']) + sep + params, slo_etag)
env = req.environ
if not env.get('CONTENT_TYPE'):
guessed_type, _junk = mimetypes.guess_type(
wsgi_to_str(req.path_info))
env['CONTENT_TYPE'] = (guessed_type or
'application/octet-stream')
env['swift.content_type_overridden'] = True
env['CONTENT_TYPE'] += ";swift_bytes=%d" % total_size
resp = req.get_response(self.app)
resp_dict = {'Response Status': resp.status}
if resp.is_success:
resp.etag = slo_etag
resp_dict['Etag'] = resp.headers['Etag']
resp_dict['Last Modified'] = resp.headers['Last-Modified']
if heartbeat:
resp_body = resp.body
if six.PY3 and isinstance(resp_body, bytes):
resp_body = resp_body.decode('utf-8')
resp_dict['Response Body'] = resp_body
yield separator + get_response_body(
out_content_type, resp_dict, [], 'upload')
else:
for chunk in resp(req.environ, start_response):
yield chunk
return resp_iter()
def get_segments_to_delete_iter(self, req):
"""
A generator function to be used to delete all the segments and
sub-segments referenced in a manifest.
:param req: a :class:`~swift.common.swob.Request` with an SLO manifest
in path
:raises HTTPPreconditionFailed: on invalid UTF8 in request path
:raises HTTPBadRequest: on too many buffered sub segments and
on invalid SLO manifest path
"""
if not check_utf8(wsgi_to_str(req.path_info)):
raise HTTPPreconditionFailed(
request=req, body='Invalid UTF8 or contains NULL')
vrs, account, container, obj = req.split_path(4, 4, True)
if six.PY2:
obj_path = ('/%s/%s' % (container, obj)).decode('utf-8')
else:
obj_path = '/%s/%s' % (wsgi_to_str(container), wsgi_to_str(obj))
segments = [{
'sub_slo': True,
'name': obj_path}]
if 'version-id' in req.params:
segments[0]['version_id'] = req.params['version-id']
while segments:
# We chose not to set the limit at max_manifest_segments
# in case this value was decreased by operators.
# Still, it is important to set a limit to avoid this list
# growing too large and causing OOM failures.
# x10 is a best guess as to how much operators might change
# the value of max_manifest_segments.
if len(segments) > self.max_manifest_segments * 10:
raise HTTPBadRequest(
'Too many buffered slo segments to delete.')
seg_data = segments.pop(0)
if 'data' in seg_data:
continue
if seg_data.get('sub_slo'):
try:
segments.extend(
self.get_slo_segments(seg_data['name'], req))
except HTTPException as err:
# allow bulk delete response to report errors
err_body = err.body
if six.PY3 and isinstance(err_body, bytes):
err_body = err_body.decode('utf-8', errors='replace')
seg_data['error'] = {'code': err.status_int,
'message': err_body}
# add manifest back to be deleted after segments
seg_data['sub_slo'] = False
segments.append(seg_data)
else:
if six.PY2:
seg_data['name'] = seg_data['name'].encode('utf-8')
yield seg_data
def get_slo_segments(self, obj_name, req):
"""
Performs a :class:`~swift.common.swob.Request` and returns the SLO
manifest's segments.
:param obj_name: the name of the object being deleted,
as ``/container/object``
:param req: the base :class:`~swift.common.swob.Request`
:raises HTTPServerError: if unable to load obj_name or
the SLO manifest data.
:raises HTTPBadRequest: if the object is not an SLO manifest
:raises HTTPNotFound: if the SLO manifest is not found
:returns: SLO manifest's segments
"""
vrs, account, _junk = req.split_path(2, 3, True)
new_env = req.environ.copy()
new_env['REQUEST_METHOD'] = 'GET'
del(new_env['wsgi.input'])
new_env['QUERY_STRING'] = 'multipart-manifest=get'
if 'version-id' in req.params:
new_env['QUERY_STRING'] += \
'&version-id=' + req.params['version-id']
new_env['CONTENT_LENGTH'] = 0
new_env['HTTP_USER_AGENT'] = \
'%s MultipartDELETE' % new_env.get('HTTP_USER_AGENT')
new_env['swift.source'] = 'SLO'
if six.PY2:
new_env['PATH_INFO'] = (
'/%s/%s/%s' % (vrs, account,
obj_name.lstrip('/').encode('utf-8'))
)
else:
new_env['PATH_INFO'] = (
'/%s/%s/%s' % (vrs, account, str_to_wsgi(obj_name.lstrip('/')))
)
# Just request the last byte of non-SLO objects so we don't waste
# a bunch of resources in drain_and_close() below
manifest_req = Request.blank('', new_env, range='bytes=-1')
update_ignore_range_header(manifest_req, 'X-Static-Large-Object')
resp = manifest_req.get_response(self.app)
if resp.is_success and config_true_value(resp.headers.get(
'X-Static-Large-Object')) and len(resp.body) == 1:
# pre-2.24.0 object-server
manifest_req = Request.blank('', new_env)
resp = manifest_req.get_response(self.app)
if resp.is_success:
if config_true_value(resp.headers.get('X-Static-Large-Object')):
try:
return json.loads(resp.body)
except ValueError:
raise HTTPServerError('Unable to load SLO manifest')
else:
# Drain and close GET request (prevents socket leaks)
drain_and_close(resp)
raise HTTPBadRequest('Not an SLO manifest')
elif resp.status_int == HTTP_NOT_FOUND:
raise HTTPNotFound('SLO manifest not found')
elif resp.status_int == HTTP_UNAUTHORIZED:
raise HTTPUnauthorized('401 Unauthorized')
else:
raise HTTPServerError('Unable to load SLO manifest or segment.')
def handle_async_delete(self, req):
if not check_utf8(wsgi_to_str(req.path_info)):
raise HTTPPreconditionFailed(
request=req, body='Invalid UTF8 or contains NULL')
vrs, account, container, obj = req.split_path(4, 4, True)
if six.PY2:
obj_path = ('/%s/%s' % (container, obj)).decode('utf-8')
else:
obj_path = '/%s/%s' % (wsgi_to_str(container), wsgi_to_str(obj))
segments = [seg for seg in self.get_slo_segments(obj_path, req)
if 'data' not in seg]
if not segments:
# Degenerate case: just delete the manifest
return self.app
segment_containers, segment_objects = zip(*(
split_path(seg['name'], 2, 2, True) for seg in segments))
segment_containers = set(segment_containers)
if len(segment_containers) > 1:
container_csv = ', '.join(
'"%s"' % quote(c) for c in segment_containers)
raise HTTPBadRequest('All segments must be in one container. '
'Found segments in %s' % container_csv)
if any(seg.get('sub_slo') for seg in segments):
raise HTTPBadRequest('No segments may be large objects.')
# Auth checks
segment_container = segment_containers.pop()
if 'swift.authorize' in req.environ:
container_info = get_container_info(
req.environ, self.app, swift_source='SLO')
req.acl = container_info.get('write_acl')
aresp = req.environ['swift.authorize'](req)
req.acl = None
if aresp:
return aresp
if bytes_to_wsgi(segment_container.encode('utf-8')) != container:
path = '/%s/%s/%s' % (vrs, account, bytes_to_wsgi(
segment_container.encode('utf-8')))
seg_container_info = get_container_info(
make_env(req.environ, path=path, swift_source='SLO'),
self.app, swift_source='SLO')
req.acl = seg_container_info.get('write_acl')
aresp = req.environ['swift.authorize'](req)
req.acl = None
if aresp:
return aresp
# Did our sanity checks; schedule segments to be deleted
ts = req.ensure_x_timestamp()
expirer_jobs = make_delete_jobs(
wsgi_to_str(account), segment_container, segment_objects, ts)
expirer_cont = get_expirer_container(
ts, self.expiring_objects_container_divisor,
wsgi_to_str(account), wsgi_to_str(container), wsgi_to_str(obj))
enqueue_req = make_pre_authed_request(
req.environ,
method='UPDATE',
path="/v1/%s/%s" % (self.expiring_objects_account, expirer_cont),
body=json.dumps(expirer_jobs),
headers={'Content-Type': 'application/json',
'X-Backend-Storage-Policy-Index': '0',
'X-Backend-Allow-Private-Methods': 'True'},
)
resp = enqueue_req.get_response(self.app)
if not resp.is_success:
self.logger.error(
'Failed to enqueue expiration entries: %s\n%s',
resp.status, resp.body)
return HTTPServiceUnavailable()
# consume the response (should be short)
drain_and_close(resp)
# Finally, delete the manifest
return self.app
def handle_multipart_delete(self, req):
"""
Will delete all the segments in the SLO manifest and then, if
successful, will delete the manifest file.
:param req: a :class:`~swift.common.swob.Request` with an obj in path
:returns: swob.Response whose app_iter is set to Bulk.handle_delete_iter
"""
if self.allow_async_delete and config_true_value(
req.params.get('async')):
return self.handle_async_delete(req)
req.headers['Content-Type'] = None # Ignore content-type from client
resp = HTTPOk(request=req)
try:
out_content_type = req.accept.best_match(ACCEPTABLE_FORMATS)
except ValueError:
out_content_type = None # Ignore invalid header
if out_content_type:
resp.content_type = out_content_type
resp.app_iter = self.bulk_deleter.handle_delete_iter(
req, objs_to_delete=self.get_segments_to_delete_iter(req),
user_agent='MultipartDELETE', swift_source='SLO',
out_content_type=out_content_type)
return resp
def handle_container_listing(self, req, start_response):
resp = req.get_response(self.app)
if not resp.is_success or resp.content_type != 'application/json':
return resp(req.environ, start_response)
if resp.content_length is None or \
resp.content_length > MAX_CONTAINER_LISTING_CONTENT_LENGTH:
return resp(req.environ, start_response)
try:
listing = json.loads(resp.body)
except ValueError:
return resp(req.environ, start_response)
for item in listing:
if 'subdir' in item:
continue
etag, params = parse_header(item['hash'])
if 'slo_etag' in params:
item['slo_etag'] = '"%s"' % params.pop('slo_etag')
item['hash'] = etag + ''.join(
'; %s=%s' % kv for kv in params.items())
resp.body = json.dumps(listing).encode('ascii')
return resp(req.environ, start_response)
def __call__(self, env, start_response):
"""
WSGI entry point
"""
if env.get('swift.slo_override'):
return self.app(env, start_response)
req = Request(env)
try:
vrs, account, container, obj = req.split_path(3, 4, True)
is_cont_or_obj_req = True
except ValueError:
is_cont_or_obj_req = False
if not is_cont_or_obj_req:
return self.app(env, start_response)
if not obj:
if req.method == 'GET':
return self.handle_container_listing(req, start_response)
return self.app(env, start_response)
try:
if req.method == 'PUT' and \
req.params.get('multipart-manifest') == 'put':
return self.handle_multipart_put(req, start_response)
if req.method == 'DELETE' and \
req.params.get('multipart-manifest') == 'delete':
return self.handle_multipart_delete(req)(env, start_response)
if req.method == 'GET' or req.method == 'HEAD':
return self.handle_multipart_get_or_head(req, start_response)
if 'X-Static-Large-Object' in req.headers:
raise HTTPBadRequest(
request=req,
body='X-Static-Large-Object is a reserved header. '
'To create a static large object add query param '
'multipart-manifest=put.')
except HTTPException as err_resp:
return err_resp(env, start_response)
return self.app(env, start_response)
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
max_manifest_segments = int(conf.get('max_manifest_segments',
DEFAULT_MAX_MANIFEST_SEGMENTS))
max_manifest_size = int(conf.get('max_manifest_size',
DEFAULT_MAX_MANIFEST_SIZE))
yield_frequency = int(conf.get('yield_frequency',
DEFAULT_YIELD_FREQUENCY))
allow_async_delete = config_true_value(conf.get('allow_async_delete',
'true'))
register_swift_info('slo',
max_manifest_segments=max_manifest_segments,
max_manifest_size=max_manifest_size,
yield_frequency=yield_frequency,
# this used to be configurable; report it as 1 for
# clients that might still care
min_segment_size=1,
allow_async_delete=allow_async_delete)
def slo_filter(app):
return StaticLargeObject(
app, conf,
max_manifest_segments=max_manifest_segments,
max_manifest_size=max_manifest_size,
yield_frequency=yield_frequency,
allow_async_delete=allow_async_delete)
return slo_filter
| swift-master | swift/common/middleware/slo.py |
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import errno
import json
import os
import time
from resource import getpagesize
from swift import __version__ as swiftver
from swift import gettext_ as _
from swift.common.constraints import check_mount
from swift.common.storage_policy import POLICIES
from swift.common.swob import Request, Response
from swift.common.utils import get_logger, SWIFT_CONF_FILE, md5_hash_for_file
from swift.common.recon import RECON_OBJECT_FILE, RECON_CONTAINER_FILE, \
RECON_ACCOUNT_FILE, RECON_DRIVE_FILE, RECON_RELINKER_FILE, \
DEFAULT_RECON_CACHE_PATH
class ReconMiddleware(object):
"""
Recon middleware used for monitoring.
/recon/load|mem|async... will return various system metrics.
Needs to be added to the pipeline and requires a filter
declaration in the [account|container|object]-server conf file:
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
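For example, a monitoring script might query a storage node's recon
endpoint like this (host and port are illustrative; assumes Python 3,
but any HTTP client works)::
    import json
    from urllib.request import urlopen
    with urlopen('http://127.0.0.1:6200/recon/load') as resp:
        print(json.load(resp))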
"""
def __init__(self, app, conf, *args, **kwargs):
self.app = app
self.devices = conf.get('devices', '/srv/node')
swift_dir = conf.get('swift_dir', '/etc/swift')
self.logger = get_logger(conf, log_route='recon')
self.recon_cache_path = conf.get('recon_cache_path',
DEFAULT_RECON_CACHE_PATH)
self.object_recon_cache = os.path.join(self.recon_cache_path,
RECON_OBJECT_FILE)
self.container_recon_cache = os.path.join(self.recon_cache_path,
RECON_CONTAINER_FILE)
self.account_recon_cache = os.path.join(self.recon_cache_path,
RECON_ACCOUNT_FILE)
self.drive_recon_cache = os.path.join(self.recon_cache_path,
RECON_DRIVE_FILE)
self.relink_recon_cache = os.path.join(self.recon_cache_path,
RECON_RELINKER_FILE)
self.account_ring_path = os.path.join(swift_dir, 'account.ring.gz')
self.container_ring_path = os.path.join(swift_dir, 'container.ring.gz')
self.rings = [self.account_ring_path, self.container_ring_path]
# include all object ring files (for all policies)
for policy in POLICIES:
self.rings.append(os.path.join(swift_dir,
policy.ring_name + '.ring.gz'))
def _from_recon_cache(self, cache_keys, cache_file, openr=open,
ignore_missing=False):
"""retrieve values from a recon cache file
:param cache_keys: list of cache items to retrieve
:param cache_file: cache file to retrieve items from.
:param openr: open to use [for unittests]
:param ignore_missing: Some recon stats are very temporary; in that
case it is better not to log when items are missing.
:return: dict of cache items and their values, with None for any item
not found
"""
try:
with openr(cache_file, 'r') as f:
recondata = json.load(f)
return {key: recondata.get(key) for key in cache_keys}
except IOError as err:
if err.errno == errno.ENOENT and ignore_missing:
pass
else:
self.logger.exception(_('Error reading recon cache file'))
except ValueError:
self.logger.exception(_('Error parsing recon cache file'))
except Exception:
self.logger.exception(_('Error retrieving recon data'))
return dict((key, None) for key in cache_keys)
def get_version(self):
"""get swift version"""
verinfo = {'version': swiftver}
return verinfo
def get_mounted(self, openr=open):
"""get ALL mounted fs from /proc/mounts"""
mounts = []
with openr('/proc/mounts', 'r') as procmounts:
for line in procmounts:
mount = {}
mount['device'], mount['path'], opt1, opt2, opt3, \
opt4 = line.rstrip().split()
mounts.append(mount)
return mounts
def get_load(self, openr=open):
"""get info from /proc/loadavg"""
loadavg = {}
with openr('/proc/loadavg', 'r') as f:
onemin, fivemin, ftmin, tasks, procs = f.read().rstrip().split()
loadavg['1m'] = float(onemin)
loadavg['5m'] = float(fivemin)
loadavg['15m'] = float(ftmin)
loadavg['tasks'] = tasks
loadavg['processes'] = int(procs)
return loadavg
def get_mem(self, openr=open):
"""get info from /proc/meminfo"""
meminfo = {}
with openr('/proc/meminfo', 'r') as memlines:
for i in memlines:
entry = i.rstrip().split(":")
meminfo[entry[0]] = entry[1].strip()
return meminfo
def get_async_info(self):
"""get # of async pendings"""
return self._from_recon_cache(['async_pending', 'async_pending_last'],
self.object_recon_cache)
def get_driveaudit_error(self):
"""get # of drive audit errors"""
return self._from_recon_cache(['drive_audit_errors'],
self.drive_recon_cache)
def get_sharding_info(self):
"""get sharding info"""
return self._from_recon_cache(["sharding_stats",
"sharding_time",
"sharding_last"],
self.container_recon_cache)
def get_replication_info(self, recon_type):
"""get replication info"""
replication_list = ['replication_time',
'replication_stats',
'replication_last']
if recon_type == 'account':
return self._from_recon_cache(replication_list,
self.account_recon_cache)
elif recon_type == 'container':
return self._from_recon_cache(replication_list,
self.container_recon_cache)
elif recon_type == 'object':
replication_list += ['object_replication_time',
'object_replication_last']
return self._from_recon_cache(replication_list,
self.object_recon_cache)
else:
return None
def get_reconstruction_info(self):
"""get reconstruction info"""
reconstruction_list = ['object_reconstruction_last',
'object_reconstruction_time']
return self._from_recon_cache(reconstruction_list,
self.object_recon_cache)
def get_device_info(self):
"""get devices"""
try:
return {self.devices: os.listdir(self.devices)}
except Exception:
self.logger.exception(_('Error listing devices'))
return {self.devices: None}
def get_updater_info(self, recon_type):
"""get updater info"""
if recon_type == 'container':
return self._from_recon_cache(['container_updater_sweep'],
self.container_recon_cache)
elif recon_type == 'object':
return self._from_recon_cache(['object_updater_sweep'],
self.object_recon_cache)
else:
return None
def get_expirer_info(self, recon_type):
"""get expirer info"""
if recon_type == 'object':
return self._from_recon_cache(['object_expiration_pass',
'expired_last_pass'],
self.object_recon_cache)
def get_auditor_info(self, recon_type):
"""get auditor info"""
if recon_type == 'account':
return self._from_recon_cache(['account_audits_passed',
'account_auditor_pass_completed',
'account_audits_since',
'account_audits_failed'],
self.account_recon_cache)
elif recon_type == 'container':
return self._from_recon_cache(['container_audits_passed',
'container_auditor_pass_completed',
'container_audits_since',
'container_audits_failed'],
self.container_recon_cache)
elif recon_type == 'object':
return self._from_recon_cache(['object_auditor_stats_ALL',
'object_auditor_stats_ZBF'],
self.object_recon_cache)
else:
return None
def get_unmounted(self):
"""list unmounted (failed?) devices"""
mountlist = []
for entry in os.listdir(self.devices):
if not os.path.isdir(os.path.join(self.devices, entry)):
continue
try:
check_mount(self.devices, entry)
except OSError as err:
mounted = str(err)
except ValueError:
mounted = False
else:
continue
mountlist.append({'device': entry, 'mounted': mounted})
return mountlist
def get_diskusage(self):
"""get disk utilization statistics"""
devices = []
for entry in os.listdir(self.devices):
if not os.path.isdir(os.path.join(self.devices, entry)):
continue
try:
check_mount(self.devices, entry)
except OSError as err:
devices.append({'device': entry, 'mounted': str(err),
'size': '', 'used': '', 'avail': ''})
except ValueError:
devices.append({'device': entry, 'mounted': False,
'size': '', 'used': '', 'avail': ''})
else:
path = os.path.join(self.devices, entry)
disk = os.statvfs(path)
capacity = disk.f_bsize * disk.f_blocks
available = disk.f_bsize * disk.f_bavail
used = disk.f_bsize * (disk.f_blocks - disk.f_bavail)
devices.append({'device': entry, 'mounted': True,
'size': capacity, 'used': used,
'avail': available})
return devices
def get_ring_md5(self):
"""get all ring md5sum's"""
sums = {}
for ringfile in self.rings:
if os.path.exists(ringfile):
try:
sums[ringfile] = md5_hash_for_file(ringfile)
except IOError as err:
sums[ringfile] = None
if err.errno != errno.ENOENT:
self.logger.exception(_('Error reading ringfile'))
return sums
def get_swift_conf_md5(self):
"""get md5 of swift.conf"""
hexsum = None
try:
hexsum = md5_hash_for_file(SWIFT_CONF_FILE)
except IOError as err:
if err.errno != errno.ENOENT:
self.logger.exception(_('Error reading swift.conf'))
return {SWIFT_CONF_FILE: hexsum}
def get_quarantine_count(self):
"""get obj/container/account quarantine counts"""
qcounts = {"objects": 0, "containers": 0, "accounts": 0,
"policies": {}}
qdir = "quarantined"
for device in os.listdir(self.devices):
qpath = os.path.join(self.devices, device, qdir)
if os.path.exists(qpath):
for qtype in os.listdir(qpath):
qtgt = os.path.join(qpath, qtype)
linkcount = os.lstat(qtgt).st_nlink
if linkcount > 2:
if qtype.startswith('objects'):
if '-' in qtype:
pkey = qtype.split('-', 1)[1]
else:
pkey = '0'
qcounts['policies'].setdefault(pkey,
{'objects': 0})
qcounts['policies'][pkey]['objects'] \
+= linkcount - 2
qcounts['objects'] += linkcount - 2
else:
qcounts[qtype] += linkcount - 2
return qcounts
def get_socket_info(self, openr=open):
"""
        get info from /proc/net/sockstat and sockstat6

        Note: The mem value is actually kernel pages, but we return bytes
        allocated based on the system's page size.
"""
sockstat = {}
try:
with openr('/proc/net/sockstat', 'r') as proc_sockstat:
for entry in proc_sockstat:
if entry.startswith("TCP: inuse"):
tcpstats = entry.split()
sockstat['tcp_in_use'] = int(tcpstats[2])
sockstat['orphan'] = int(tcpstats[4])
sockstat['time_wait'] = int(tcpstats[6])
sockstat['tcp_mem_allocated_bytes'] = \
int(tcpstats[10]) * getpagesize()
except IOError as e:
if e.errno != errno.ENOENT:
raise
try:
with openr('/proc/net/sockstat6', 'r') as proc_sockstat6:
for entry in proc_sockstat6:
if entry.startswith("TCP6: inuse"):
sockstat['tcp6_in_use'] = int(entry.split()[2])
except IOError as e:
if e.errno != errno.ENOENT:
raise
return sockstat
def get_time(self):
"""get current time"""
return time.time()
def get_relinker_info(self):
"""get relinker info, if any"""
stat_keys = ['devices', 'workers']
return self._from_recon_cache(stat_keys,
self.relink_recon_cache,
ignore_missing=True)
def GET(self, req):
root, rcheck, rtype = req.split_path(1, 3, True)
all_rtypes = ['account', 'container', 'object']
if rcheck == "mem":
content = self.get_mem()
elif rcheck == "load":
content = self.get_load()
elif rcheck == "async":
content = self.get_async_info()
elif rcheck == 'replication' and rtype in all_rtypes:
content = self.get_replication_info(rtype)
elif rcheck == 'replication' and rtype is None:
# handle old style object replication requests
content = self.get_replication_info('object')
elif rcheck == "devices":
content = self.get_device_info()
elif rcheck == "updater" and rtype in ['container', 'object']:
content = self.get_updater_info(rtype)
elif rcheck == "auditor" and rtype in all_rtypes:
content = self.get_auditor_info(rtype)
elif rcheck == "expirer" and rtype == 'object':
content = self.get_expirer_info(rtype)
elif rcheck == "mounted":
content = self.get_mounted()
elif rcheck == "unmounted":
content = self.get_unmounted()
elif rcheck == "diskusage":
content = self.get_diskusage()
elif rcheck == "ringmd5":
content = self.get_ring_md5()
elif rcheck == "swiftconfmd5":
content = self.get_swift_conf_md5()
elif rcheck == "quarantined":
content = self.get_quarantine_count()
elif rcheck == "sockstat":
content = self.get_socket_info()
elif rcheck == "version":
content = self.get_version()
elif rcheck == "driveaudit":
content = self.get_driveaudit_error()
elif rcheck == "time":
content = self.get_time()
elif rcheck == "sharding":
content = self.get_sharding_info()
elif rcheck == "relinker":
content = self.get_relinker_info()
elif rcheck == "reconstruction" and rtype == 'object':
content = self.get_reconstruction_info()
else:
content = "Invalid path: %s" % req.path
return Response(request=req, status="404 Not Found",
body=content, content_type="text/plain")
if content is not None:
return Response(request=req, body=json.dumps(content),
content_type="application/json")
else:
return Response(request=req, status="500 Server Error",
body="Internal server error.",
content_type="text/plain")
def __call__(self, env, start_response):
req = Request(env)
if req.path.startswith('/recon/'):
return self.GET(req)(env, start_response)
else:
return self.app(env, start_response)
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
def recon_filter(app):
return ReconMiddleware(app, conf)
return recon_filter
| swift-master | swift/common/middleware/recon.py |
# Copyright (c) 2010-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
The s3api middleware will emulate the S3 REST api on top of swift.
To enable this middleware in your configuration, add the s3api middleware
in front of the auth middleware. See ``proxy-server.conf-sample`` for more
detail and configurable options.
To set up your client, ensure you are using the tempauth or keystone auth
system for your swift project.

When your swift runs in a SAIO environment, make sure the tempauth middleware
is configured in ``proxy-server.conf``. The access key will be the
concatenation of the account and user strings, e.g. test:tester, and the
secret access key is the account password. The host should also point to the
swift storage hostname.

An example tempauth configuration:
.. code-block:: ini
[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing
An example client using tempauth with the python boto library is as follows:
.. code-block:: python
    from boto.s3.connection import S3Connection, OrdinaryCallingFormat

    connection = S3Connection(
        aws_access_key_id='test:tester',
        aws_secret_access_key='testing',
        port=8080,
        host='127.0.0.1',
        is_secure=False,
        calling_format=OrdinaryCallingFormat())
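
A roughly equivalent client can also be written with the newer boto3 library;
the following is only a sketch and assumes the same SAIO endpoint and tempauth
credentials as above:

.. code-block:: python

    import boto3

    s3 = boto3.client(
        's3',
        aws_access_key_id='test:tester',
        aws_secret_access_key='testing',
        endpoint_url='http://127.0.0.1:8080')
    s3.list_buckets()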
If you are using keystone auth, you need EC2 credentials, which can be
downloaded from the API Endpoints tab of the dashboard or created with the
openstack ec2 command.

Here is how to create an EC2 credential:
.. code-block:: console
# openstack ec2 credentials create
+------------+---------------------------------------------------+
| Field | Value |
+------------+---------------------------------------------------+
| access | c2e30f2cd5204b69a39b3f1130ca8f61 |
| links | {u'self': u'http://controller:5000/v3/......'} |
| project_id | 407731a6c2d0425c86d1e7f12a900488 |
| secret | baab242d192a4cd6b68696863e07ed59 |
| trust_id | None |
| user_id | 00f0ee06afe74f81b410f3fe03d34fbc |
+------------+---------------------------------------------------+
An example client using keystone auth with the python boto library is as
follows:
.. code-block:: python
    from boto.s3.connection import S3Connection, OrdinaryCallingFormat

    connection = S3Connection(
        aws_access_key_id='c2e30f2cd5204b69a39b3f1130ca8f61',
        aws_secret_access_key='baab242d192a4cd6b68696863e07ed59',
        port=8080,
        host='127.0.0.1',
        is_secure=False,
        calling_format=OrdinaryCallingFormat())
----------
Deployment
----------
Proxy-Server Setting
^^^^^^^^^^^^^^^^^^^^
Set s3api before your auth in your pipeline in ``proxy-server.conf`` file.
To enable all of the compatibility currently supported, make sure that the
bulk, slo, and auth middlewares are also included in your proxy pipeline
setting.
Using tempauth, the minimum example config is:
.. code-block:: ini
[pipeline:main]
pipeline = proxy-logging cache s3api tempauth bulk slo proxy-logging \
proxy-server
When using keystone, the config will be:
.. code-block:: ini
[pipeline:main]
pipeline = proxy-logging cache authtoken s3api s3token keystoneauth bulk \
slo proxy-logging proxy-server
Finally, add the s3api middleware section:
.. code-block:: ini
[filter:s3api]
use = egg:swift#s3api
.. note::
   ``keystonemiddleware.authtoken`` can be located before/after s3api, but
   we recommend putting it before s3api because, when authtoken is placed
   after s3api, both authtoken and s3token will validate the token against
   keystone (i.e. authenticate twice). Also, in the
   ``keystonemiddleware.authtoken`` middleware, you should set the
   ``delay_auth_decision`` option to ``True``.
-----------
Constraints
-----------
Currently, the s3api is being ported from https://github.com/openstack/swift3
so any existing issues in swift3 still remain. Please review the descriptions
in the example ``proxy-server.conf`` and understand what each option does
before enabling it.
-------------
Supported API
-------------
Compatibility will continue to be improved upstream; you can keep an eye on
compatibility via a check tool built by SwiftStack. See
https://github.com/swiftstack/s3compat for details.
"""
from cgi import parse_header
import json
from paste.deploy import loadwsgi
from six.moves.urllib.parse import parse_qs
from swift.common.constraints import valid_api_version
from swift.common.middleware.listing_formats import \
MAX_CONTAINER_LISTING_CONTENT_LENGTH
from swift.common.wsgi import PipelineWrapper, loadcontext, WSGIContext
from swift.common.middleware import app_property
from swift.common.middleware.s3api.exception import NotS3Request, \
InvalidSubresource
from swift.common.middleware.s3api.s3request import get_request_class
from swift.common.middleware.s3api.s3response import ErrorResponse, \
InternalError, MethodNotAllowed, S3ResponseBase, S3NotImplemented
from swift.common.utils import get_logger, config_true_value, \
config_positive_int_value, split_path, closing_if_possible, list_from_csv
from swift.common.middleware.s3api.utils import Config
from swift.common.middleware.s3api.acl_handlers import get_acl_handler
from swift.common.registry import register_swift_info, \
register_sensitive_header, register_sensitive_param
class ListingEtagMiddleware(object):
def __init__(self, app):
self.app = app
# Pass these along so get_container_info will have the configured
# odds to skip cache
_pipeline_final_app = app_property('_pipeline_final_app')
_pipeline_request_logging_app = app_property(
'_pipeline_request_logging_app')
def __call__(self, env, start_response):
# a lot of this is cribbed from listing_formats / swob.Request
if env['REQUEST_METHOD'] != 'GET':
# Nothing to translate
return self.app(env, start_response)
try:
v, a, c = split_path(env.get('SCRIPT_NAME', '') +
env['PATH_INFO'], 3, 3)
if not valid_api_version(v):
raise ValueError
except ValueError:
is_container_req = False
else:
is_container_req = True
if not is_container_req:
# pass through
return self.app(env, start_response)
ctx = WSGIContext(self.app)
resp_iter = ctx._app_call(env)
content_type = content_length = cl_index = None
for index, (header, value) in enumerate(ctx._response_headers):
header = header.lower()
if header == 'content-type':
content_type = value.split(';', 1)[0].strip()
if content_length:
break
elif header == 'content-length':
cl_index = index
try:
content_length = int(value)
except ValueError:
pass # ignore -- we'll bail later
if content_type:
break
if content_type != 'application/json' or content_length is None or \
content_length > MAX_CONTAINER_LISTING_CONTENT_LENGTH:
start_response(ctx._response_status, ctx._response_headers,
ctx._response_exc_info)
return resp_iter
# We've done our sanity checks, slurp the response into memory
with closing_if_possible(resp_iter):
body = b''.join(resp_iter)
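        # For example (illustrative values), a listing entry whose 'hash' is
        #   'd41d8cd98f00b204e9800998ecf8427e; s3_etag=abc-2'
        # is rewritten below so that item['s3_etag'] becomes '"abc-2"' and
        # item['hash'] drops the s3_etag parameter.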
try:
listing = json.loads(body)
for item in listing:
if 'subdir' in item:
continue
value, params = parse_header(item['hash'])
if 's3_etag' in params:
item['s3_etag'] = '"%s"' % params.pop('s3_etag')
item['hash'] = value + ''.join(
'; %s=%s' % kv for kv in params.items())
except (TypeError, KeyError, ValueError):
# If anything goes wrong above, drop back to original response
start_response(ctx._response_status, ctx._response_headers,
ctx._response_exc_info)
return [body]
body = json.dumps(listing).encode('ascii')
ctx._response_headers[cl_index] = (
ctx._response_headers[cl_index][0],
str(len(body)),
)
start_response(ctx._response_status, ctx._response_headers,
ctx._response_exc_info)
return [body]
class S3ApiMiddleware(object):
"""S3Api: S3 compatibility middleware"""
def __init__(self, app, wsgi_conf, *args, **kwargs):
self.app = app
self.conf = Config()
# Set default values if they are not configured
self.conf.allow_no_owner = config_true_value(
wsgi_conf.get('allow_no_owner', False))
self.conf.location = wsgi_conf.get('location', 'us-east-1')
self.conf.dns_compliant_bucket_names = config_true_value(
wsgi_conf.get('dns_compliant_bucket_names', True))
self.conf.max_bucket_listing = config_positive_int_value(
wsgi_conf.get('max_bucket_listing', 1000))
self.conf.max_parts_listing = config_positive_int_value(
wsgi_conf.get('max_parts_listing', 1000))
self.conf.max_multi_delete_objects = config_positive_int_value(
wsgi_conf.get('max_multi_delete_objects', 1000))
self.conf.multi_delete_concurrency = config_positive_int_value(
wsgi_conf.get('multi_delete_concurrency', 2))
self.conf.s3_acl = config_true_value(
wsgi_conf.get('s3_acl', False))
self.conf.storage_domains = list_from_csv(
wsgi_conf.get('storage_domain', ''))
self.conf.auth_pipeline_check = config_true_value(
wsgi_conf.get('auth_pipeline_check', True))
self.conf.max_upload_part_num = config_positive_int_value(
wsgi_conf.get('max_upload_part_num', 1000))
self.conf.check_bucket_owner = config_true_value(
wsgi_conf.get('check_bucket_owner', False))
self.conf.force_swift_request_proxy_log = config_true_value(
wsgi_conf.get('force_swift_request_proxy_log', False))
self.conf.allow_multipart_uploads = config_true_value(
wsgi_conf.get('allow_multipart_uploads', True))
self.conf.min_segment_size = config_positive_int_value(
wsgi_conf.get('min_segment_size', 5242880))
self.conf.allowable_clock_skew = config_positive_int_value(
wsgi_conf.get('allowable_clock_skew', 15 * 60))
self.conf.cors_preflight_allow_origin = list_from_csv(wsgi_conf.get(
'cors_preflight_allow_origin', ''))
if '*' in self.conf.cors_preflight_allow_origin and \
len(self.conf.cors_preflight_allow_origin) > 1:
raise ValueError('if cors_preflight_allow_origin should include '
'all domains, * must be the only entry')
self.conf.ratelimit_as_client_error = config_true_value(
wsgi_conf.get('ratelimit_as_client_error', False))
self.logger = get_logger(
wsgi_conf, log_route='s3api', statsd_tail_prefix='s3api')
self.check_pipeline(wsgi_conf)
def is_s3_cors_preflight(self, env):
if env['REQUEST_METHOD'] != 'OPTIONS' or not env.get('HTTP_ORIGIN'):
# Not a CORS preflight
return False
acrh = env.get('HTTP_ACCESS_CONTROL_REQUEST_HEADERS', '').lower()
if 'authorization' in acrh and \
not env['PATH_INFO'].startswith(('/v1/', '/v1.0/')):
return True
q = parse_qs(env.get('QUERY_STRING', ''))
if 'AWSAccessKeyId' in q or 'X-Amz-Credential' in q:
return True
# Not S3, apparently
return False
def __call__(self, env, start_response):
origin = env.get('HTTP_ORIGIN')
if self.conf.cors_preflight_allow_origin and \
self.is_s3_cors_preflight(env):
# I guess it's likely going to be an S3 request? *shrug*
if self.conf.cors_preflight_allow_origin != ['*'] and \
origin not in self.conf.cors_preflight_allow_origin:
start_response('401 Unauthorized', [
('Allow', 'GET, HEAD, PUT, POST, DELETE, OPTIONS'),
])
return [b'']
headers = [
('Allow', 'GET, HEAD, PUT, POST, DELETE, OPTIONS'),
('Access-Control-Allow-Origin', origin),
('Access-Control-Allow-Methods',
'GET, HEAD, PUT, POST, DELETE, OPTIONS'),
('Vary', 'Origin, Access-Control-Request-Headers'),
]
acrh = set(list_from_csv(
env.get('HTTP_ACCESS_CONTROL_REQUEST_HEADERS', '').lower()))
if acrh:
headers.append((
'Access-Control-Allow-Headers',
', '.join(acrh)))
start_response('200 OK', headers)
return [b'']
try:
req_class = get_request_class(env, self.conf.s3_acl)
req = req_class(env, self.app, self.conf)
resp = self.handle_request(req)
except NotS3Request:
resp = self.app
except InvalidSubresource as e:
self.logger.debug(e.cause)
except ErrorResponse as err_resp:
self.logger.increment(err_resp.metric_name)
if isinstance(err_resp, InternalError):
self.logger.exception(err_resp)
resp = err_resp
except Exception as e:
self.logger.exception(e)
resp = InternalError(reason=str(e))
if isinstance(resp, S3ResponseBase) and 'swift.trans_id' in env:
resp.headers['x-amz-id-2'] = env['swift.trans_id']
resp.headers['x-amz-request-id'] = env['swift.trans_id']
if 's3api.backend_path' in env and 'swift.backend_path' not in env:
env['swift.backend_path'] = env['s3api.backend_path']
return resp(env, start_response)
def handle_request(self, req):
self.logger.debug('Calling S3Api Middleware')
try:
controller = req.controller(self.app, self.conf, self.logger)
except S3NotImplemented:
            # TODO: Probably we should distinguish the error type to log
            # this warning
self.logger.warning('multipart: No SLO middleware in pipeline')
raise
acl_handler = get_acl_handler(req.controller_name)(req, self.logger)
req.set_acl_handler(acl_handler)
if hasattr(controller, req.method):
handler = getattr(controller, req.method)
if not getattr(handler, 'publicly_accessible', False):
raise MethodNotAllowed(req.method,
req.controller.resource_type())
res = handler(req)
else:
raise MethodNotAllowed(req.method,
req.controller.resource_type())
return res
def check_pipeline(self, wsgi_conf):
"""
Check that proxy-server.conf has an appropriate pipeline for s3api.
"""
if wsgi_conf.get('__file__', None) is None:
return
ctx = loadcontext(loadwsgi.APP, wsgi_conf['__file__'])
pipeline = str(PipelineWrapper(ctx)).split(' ')
        # Stay compatible with 3rd party middleware.
self.check_filter_order(pipeline, ['s3api', 'proxy-server'])
auth_pipeline = pipeline[pipeline.index('s3api') + 1:
pipeline.index('proxy-server')]
# Check SLO middleware
if self.conf.allow_multipart_uploads and 'slo' not in auth_pipeline:
self.conf.allow_multipart_uploads = False
self.logger.warning('s3api middleware requires SLO middleware '
'to support multi-part upload, please add it '
'in pipeline')
if not self.conf.auth_pipeline_check:
self.logger.debug('Skip pipeline auth check.')
return
if 'tempauth' in auth_pipeline:
self.logger.debug('Use tempauth middleware.')
elif 'keystoneauth' in auth_pipeline:
self.check_filter_order(
auth_pipeline,
['s3token', 'keystoneauth'])
self.logger.debug('Use keystone middleware.')
elif len(auth_pipeline):
self.logger.debug('Use third party(unknown) auth middleware.')
else:
raise ValueError('Invalid pipeline %r: expected auth between '
's3api and proxy-server ' % pipeline)
def check_filter_order(self, pipeline, required_filters):
"""
Check that required filters are present in order in the pipeline.
"""
indexes = []
missing_filters = []
for required_filter in required_filters:
try:
indexes.append(pipeline.index(required_filter))
except ValueError as e:
self.logger.debug(e)
missing_filters.append(required_filter)
if missing_filters:
raise ValueError('Invalid pipeline %r: missing filters %r' % (
pipeline, missing_filters))
if indexes != sorted(indexes):
raise ValueError('Invalid pipeline %r: expected filter %s' % (
pipeline, ' before '.join(required_filters)))
def filter_factory(global_conf, **local_conf):
"""Standard filter factory to use the middleware with paste.deploy"""
conf = global_conf.copy()
conf.update(local_conf)
register_swift_info(
's3api',
# TODO: make default values as variables
max_bucket_listing=int(conf.get('max_bucket_listing', 1000)),
max_parts_listing=int(conf.get('max_parts_listing', 1000)),
max_upload_part_num=int(conf.get('max_upload_part_num', 1000)),
max_multi_delete_objects=int(
conf.get('max_multi_delete_objects', 1000)),
allow_multipart_uploads=config_true_value(
conf.get('allow_multipart_uploads', True)),
min_segment_size=int(conf.get('min_segment_size', 5242880)),
s3_acl=config_true_value(conf.get('s3_acl', False)),
)
register_sensitive_header('authorization')
register_sensitive_param('Signature')
register_sensitive_param('X-Amz-Signature')
def s3api_filter(app):
return S3ApiMiddleware(ListingEtagMiddleware(app), conf)
return s3api_filter
| swift-master | swift/common/middleware/s3api/s3api.py |
# Copyright (c) 2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from swift.common.middleware.s3api.exception import ACLError
from swift.common.middleware.s3api.etree import fromstring, XMLSyntaxError, \
DocumentInvalid, XMLNS_XSI
from swift.common.middleware.s3api.s3response import S3NotImplemented, \
MalformedACLError, InvalidArgument
def swift_acl_translate(acl, group='', user='', xml=False):
"""
Takes an S3 style ACL and returns a list of header/value pairs that
implement that ACL in Swift, or "NotImplemented" if there isn't a way to do
that yet.
"""
swift_acl = {}
swift_acl['public-read'] = [['X-Container-Read', '.r:*,.rlistings']]
# Swift does not support public write:
# https://answers.launchpad.net/swift/+question/169541
swift_acl['public-read-write'] = [['X-Container-Write', '.r:*'],
['X-Container-Read',
'.r:*,.rlistings']]
# TODO: if there's a way to get group and user, this should work for
# private:
# swift_acl['private'] = \
# [['HTTP_X_CONTAINER_WRITE', group + ':' + user], \
# ['HTTP_X_CONTAINER_READ', group + ':' + user]]
swift_acl['private'] = [['X-Container-Write', '.'],
['X-Container-Read', '.']]
# Swift doesn't have per-object ACLs, so this is best-effort
swift_acl['bucket-owner-full-control'] = swift_acl['private']
swift_acl['bucket-owner-read'] = swift_acl['private']
if xml:
# We are working with XML and need to parse it
try:
elem = fromstring(acl, 'AccessControlPolicy')
except (XMLSyntaxError, DocumentInvalid):
raise MalformedACLError()
acl = 'unknown'
for grant in elem.findall('./AccessControlList/Grant'):
permission = grant.find('./Permission').text
grantee = grant.find('./Grantee').get('{%s}type' % XMLNS_XSI)
if permission == "FULL_CONTROL" and grantee == 'CanonicalUser' and\
acl != 'public-read' and acl != 'public-read-write':
acl = 'private'
elif permission == "READ" and grantee == 'Group' and\
acl != 'public-read-write':
acl = 'public-read'
elif permission == "WRITE" and grantee == 'Group':
acl = 'public-read-write'
else:
acl = 'unsupported'
if acl in ('authenticated-read', 'log-delivery-write'):
raise S3NotImplemented()
elif acl not in swift_acl:
raise ACLError()
return swift_acl[acl]
def handle_acl_header(req):
"""
Handle the x-amz-acl header.

    Note that this header is currently used only for the normal ACL case;
    it is not implemented for s3acl.

    TODO: add a translation from swift ACLs such as x-container-read to s3acl
"""
amz_acl = req.environ['HTTP_X_AMZ_ACL']
# Translate the Amazon ACL to something that can be
# implemented in Swift, 501 otherwise. Swift uses POST
# for ACLs, whereas S3 uses PUT.
del req.environ['HTTP_X_AMZ_ACL']
if req.query_string:
req.query_string = ''
try:
translated_acl = swift_acl_translate(amz_acl)
except ACLError:
raise InvalidArgument('x-amz-acl', amz_acl)
for header, acl in translated_acl:
req.headers[header] = acl
| swift-master | swift/common/middleware/s3api/acl_utils.py |
# Copyright (c) 2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
class S3Exception(Exception):
pass
class NotS3Request(S3Exception):
pass
class ACLError(S3Exception):
pass
class InvalidSubresource(S3Exception):
def __init__(self, resource, cause):
self.resource = resource
self.cause = cause
| swift-master | swift/common/middleware/s3api/exception.py |
# Copyright (c) 2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import re
try:
from collections.abc import MutableMapping
except ImportError:
from collections import MutableMapping # py2
from functools import partial
from swift.common import header_key_dict
from swift.common import swob
from swift.common.utils import config_true_value
from swift.common.request_helpers import is_sys_meta
from swift.common.middleware.s3api.utils import snake_to_camel, \
sysmeta_prefix, sysmeta_header
from swift.common.middleware.s3api.etree import Element, SubElement, tostring
from swift.common.middleware.versioned_writes.object_versioning import \
DELETE_MARKER_CONTENT_TYPE
class HeaderKeyDict(header_key_dict.HeaderKeyDict):
"""
    Similar to Swift's normal HeaderKeyDict class, but its key names are
    normalized as S3 clients expect.
"""
@staticmethod
def _title(s):
s = header_key_dict.HeaderKeyDict._title(s)
if s.lower() == 'etag':
# AWS Java SDK expects only 'ETag'.
return 'ETag'
if s.lower().startswith('x-amz-'):
# AWS headers returned by S3 are lowercase.
return swob.bytes_to_wsgi(swob.wsgi_to_bytes(s).lower())
return s
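    # For example: _title('etag') -> 'ETag', _title('x-amz-request-id') ->
    # 'x-amz-request-id', while other headers keep the usual capitalization,
    # e.g. _title('content-length') -> 'Content-Length'.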
def translate_swift_to_s3(key, val):
_key = swob.bytes_to_wsgi(swob.wsgi_to_bytes(key).lower())
def translate_meta_key(_key):
if not _key.startswith('x-object-meta-'):
return _key
# Note that AWS allows user-defined metadata with underscores in the
# header, while WSGI (and other protocols derived from CGI) does not
# differentiate between an underscore and a dash. Fortunately,
# eventlet exposes the raw headers from the client, so we could
# translate '_' to '=5F' on the way in. Now, we translate back.
return 'x-amz-meta-' + _key[14:].replace('=5f', '_')
if _key.startswith('x-object-meta-'):
return translate_meta_key(_key), val
elif _key in ('content-length', 'content-type',
'content-range', 'content-encoding',
'content-disposition', 'content-language',
'etag', 'last-modified', 'x-robots-tag',
'cache-control', 'expires'):
return key, val
elif _key == 'x-object-version-id':
return 'x-amz-version-id', val
elif _key == 'x-copied-from-version-id':
return 'x-amz-copy-source-version-id', val
elif _key == 'x-backend-content-type' and \
val == DELETE_MARKER_CONTENT_TYPE:
return 'x-amz-delete-marker', 'true'
elif _key == 'access-control-expose-headers':
exposed_headers = val.split(', ')
exposed_headers.extend([
'x-amz-request-id',
'x-amz-id-2',
])
return 'access-control-expose-headers', ', '.join(
translate_meta_key(h) for h in exposed_headers)
elif _key == 'access-control-allow-methods':
methods = val.split(', ')
try:
methods.remove('COPY') # that's not a thing in S3
except ValueError:
pass # not there? don't worry about it
return key, ', '.join(methods)
elif _key.startswith('access-control-'):
return key, val
# else, drop the header
return None
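# For example (per the mapping above):
#   translate_swift_to_s3('x-object-meta-color', 'blue')
#     -> ('x-amz-meta-color', 'blue')
#   translate_swift_to_s3('x-object-version-id', 'null')
#     -> ('x-amz-version-id', 'null')
#   translate_swift_to_s3('x-object-sysmeta-foo', 'bar')
#     -> None (the header is dropped)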
class S3ResponseBase(object):
"""
Base class for swift3 responses.
"""
pass
class S3Response(S3ResponseBase, swob.Response):
"""
Similar to the Response class in Swift, but uses our HeaderKeyDict for
    headers instead of Swift's HeaderKeyDict. This also translates
    Swift-specific headers to S3 headers.
"""
def __init__(self, *args, **kwargs):
swob.Response.__init__(self, *args, **kwargs)
s3_sysmeta_headers = swob.HeaderKeyDict()
sw_headers = swob.HeaderKeyDict()
headers = HeaderKeyDict()
self.is_slo = False
def is_swift3_sysmeta(sysmeta_key, server_type):
swift3_sysmeta_prefix = (
'x-%s-sysmeta-swift3' % server_type).lower()
return sysmeta_key.lower().startswith(swift3_sysmeta_prefix)
def is_s3api_sysmeta(sysmeta_key, server_type):
s3api_sysmeta_prefix = sysmeta_prefix(_server_type).lower()
return sysmeta_key.lower().startswith(s3api_sysmeta_prefix)
for key, val in self.headers.items():
if is_sys_meta('object', key) or is_sys_meta('container', key):
_server_type = key.split('-')[1]
if is_swift3_sysmeta(key, _server_type):
# To be compatible with older swift3, translate swift3
# sysmeta to s3api sysmeta here
key = sysmeta_prefix(_server_type) + \
key[len('x-%s-sysmeta-swift3-' % _server_type):]
if key not in s3_sysmeta_headers:
# To avoid overwrite s3api sysmeta by older swift3
# sysmeta set the key only when the key does not exist
s3_sysmeta_headers[key] = val
elif is_s3api_sysmeta(key, _server_type):
s3_sysmeta_headers[key] = val
else:
sw_headers[key] = val
else:
sw_headers[key] = val
# Handle swift headers
for key, val in sw_headers.items():
s3_pair = translate_swift_to_s3(key, val)
if s3_pair is None:
continue
headers[s3_pair[0]] = s3_pair[1]
self.is_slo = config_true_value(sw_headers.get(
'x-static-large-object'))
# Check whether we stored the AWS-style etag on upload
override_etag = s3_sysmeta_headers.get(
sysmeta_header('object', 'etag'))
if override_etag not in (None, ''):
# Multipart uploads in AWS have ETags like
# <MD5(part_etag1 || ... || part_etagN)>-<number of parts>
headers['etag'] = override_etag
elif self.is_slo and 'etag' in headers:
# Many AWS clients use the presence of a '-' to decide whether
# to attempt client-side download validation, so even if we
# didn't store the AWS-style header, tack on a '-N'. (Use 'N'
# because we don't actually know how many parts there are.)
headers['etag'] += '-N'
self.headers = headers
if self.etag:
            # reassign through the etag property setter, which adds the
            # double quotes to the etag header
self.etag = self.etag
# Used for pure swift header handling at the request layer
self.sw_headers = sw_headers
self.sysmeta_headers = s3_sysmeta_headers
@classmethod
def from_swift_resp(cls, sw_resp):
"""
Create a new S3 response object based on the given Swift response.
"""
if sw_resp.app_iter:
body = None
app_iter = sw_resp.app_iter
else:
body = sw_resp.body
app_iter = None
resp = cls(status=sw_resp.status, headers=sw_resp.headers,
request=sw_resp.request, body=body, app_iter=app_iter,
conditional_response=sw_resp.conditional_response)
resp.environ.update(sw_resp.environ)
return resp
def append_copy_resp_body(self, controller_name, last_modified):
elem = Element('Copy%sResult' % controller_name)
SubElement(elem, 'LastModified').text = last_modified
SubElement(elem, 'ETag').text = '"%s"' % self.etag
self.headers['Content-Type'] = 'application/xml'
self.body = tostring(elem)
self.etag = None
HTTPOk = partial(S3Response, status=200)
HTTPCreated = partial(S3Response, status=201)
HTTPAccepted = partial(S3Response, status=202)
HTTPNoContent = partial(S3Response, status=204)
HTTPPartialContent = partial(S3Response, status=206)
class ErrorResponse(S3ResponseBase, swob.HTTPException):
"""
S3 error object.
Reference information about S3 errors is available at:
http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
"""
_status = ''
_msg = ''
_code = ''
xml_declaration = True
def __init__(self, msg=None, reason=None, *args, **kwargs):
if msg:
self._msg = msg
if not self._code:
self._code = self.__class__.__name__
self.reason = reason
self.info = kwargs.copy()
for reserved_key in ('headers', 'body'):
if self.info.get(reserved_key):
del(self.info[reserved_key])
swob.HTTPException.__init__(
self, status=kwargs.pop('status', self._status),
app_iter=self._body_iter(),
content_type='application/xml', *args,
**kwargs)
self.headers = HeaderKeyDict(self.headers)
@property
def metric_name(self):
parts = [str(self.status_int), self._code]
if self.reason:
parts.append(self.reason)
metric = '.'.join(parts)
return metric.replace(' ', '_')
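    # For example, a plain NoSuchKey error produces the metric name
    # '404.NoSuchKey'; if a reason such as 'missing-segment' were supplied,
    # it would become '404.NoSuchKey.missing-segment'.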
def _body_iter(self):
error_elem = Element('Error')
SubElement(error_elem, 'Code').text = self._code
SubElement(error_elem, 'Message').text = self._msg
if 'swift.trans_id' in self.environ:
request_id = self.environ['swift.trans_id']
SubElement(error_elem, 'RequestId').text = request_id
self._dict_to_etree(error_elem, self.info)
yield tostring(error_elem, use_s3ns=False,
xml_declaration=self.xml_declaration)
def _dict_to_etree(self, parent, d):
for key, value in d.items():
tag = re.sub(r'\W', '', snake_to_camel(key))
elem = SubElement(parent, tag)
if isinstance(value, (dict, MutableMapping)):
self._dict_to_etree(elem, value)
else:
if isinstance(value, (int, float, bool)):
value = str(value)
try:
elem.text = value
except ValueError:
# We set an invalid string for XML.
elem.text = '(invalid string)'
class AccessDenied(ErrorResponse):
_status = '403 Forbidden'
_msg = 'Access Denied.'
class AccountProblem(ErrorResponse):
_status = '403 Forbidden'
_msg = 'There is a problem with your AWS account that prevents the ' \
'operation from completing successfully.'
class AmbiguousGrantByEmailAddress(ErrorResponse):
_status = '400 Bad Request'
_msg = 'The e-mail address you provided is associated with more than ' \
'one account.'
class AuthorizationHeaderMalformed(ErrorResponse):
_status = '400 Bad Request'
_msg = 'The authorization header is malformed; the authorization ' \
'header requires three components: Credential, SignedHeaders, ' \
'and Signature.'
class AuthorizationQueryParametersError(ErrorResponse):
_status = '400 Bad Request'
class BadDigest(ErrorResponse):
_status = '400 Bad Request'
_msg = 'The Content-MD5 you specified did not match what we received.'
class BucketAlreadyExists(ErrorResponse):
_status = '409 Conflict'
_msg = 'The requested bucket name is not available. The bucket ' \
'namespace is shared by all users of the system. Please select a ' \
'different name and try again.'
def __init__(self, bucket, msg=None, *args, **kwargs):
ErrorResponse.__init__(self, msg, bucket_name=bucket, *args, **kwargs)
class BucketAlreadyOwnedByYou(ErrorResponse):
_status = '409 Conflict'
_msg = 'Your previous request to create the named bucket succeeded and ' \
'you already own it.'
def __init__(self, bucket, msg=None, *args, **kwargs):
ErrorResponse.__init__(self, msg, bucket_name=bucket, *args, **kwargs)
class BucketNotEmpty(ErrorResponse):
_status = '409 Conflict'
_msg = 'The bucket you tried to delete is not empty'
class VersionedBucketNotEmpty(BucketNotEmpty):
_msg = 'The bucket you tried to delete is not empty. ' \
'You must delete all versions in the bucket.'
_code = 'BucketNotEmpty'
class CredentialsNotSupported(ErrorResponse):
_status = '400 Bad Request'
_msg = 'This request does not support credentials.'
class CrossLocationLoggingProhibited(ErrorResponse):
_status = '403 Forbidden'
_msg = 'Cross location logging not allowed. Buckets in one geographic ' \
'location cannot log information to a bucket in another location.'
class EntityTooSmall(ErrorResponse):
_status = '400 Bad Request'
_msg = 'Your proposed upload is smaller than the minimum allowed object ' \
'size.'
class EntityTooLarge(ErrorResponse):
_status = '400 Bad Request'
_msg = 'Your proposed upload exceeds the maximum allowed object size.'
class ExpiredToken(ErrorResponse):
_status = '400 Bad Request'
_msg = 'The provided token has expired.'
class IllegalVersioningConfigurationException(ErrorResponse):
_status = '400 Bad Request'
_msg = 'The Versioning configuration specified in the request is invalid.'
class IncompleteBody(ErrorResponse):
_status = '400 Bad Request'
_msg = 'You did not provide the number of bytes specified by the ' \
'Content-Length HTTP header.'
class IncorrectNumberOfFilesInPostRequest(ErrorResponse):
_status = '400 Bad Request'
_msg = 'POST requires exactly one file upload per request.'
class InlineDataTooLarge(ErrorResponse):
_status = '400 Bad Request'
_msg = 'Inline data exceeds the maximum allowed size.'
class InternalError(ErrorResponse):
_status = '500 Internal Server Error'
_msg = 'We encountered an internal error. Please try again.'
def __str__(self):
return '%s: %s (%s)' % (
self.__class__.__name__, self.status, self._msg)
class InvalidAccessKeyId(ErrorResponse):
_status = '403 Forbidden'
_msg = 'The AWS Access Key Id you provided does not exist in our records.'
class InvalidArgument(ErrorResponse):
_status = '400 Bad Request'
_msg = 'Invalid Argument.'
def __init__(self, name, value, msg=None, *args, **kwargs):
ErrorResponse.__init__(self, msg, argument_name=name,
argument_value=value, *args, **kwargs)
class InvalidBucketName(ErrorResponse):
_status = '400 Bad Request'
_msg = 'The specified bucket is not valid.'
def __init__(self, bucket, msg=None, *args, **kwargs):
ErrorResponse.__init__(self, msg, bucket_name=bucket, *args, **kwargs)
class InvalidBucketState(ErrorResponse):
_status = '409 Conflict'
_msg = 'The request is not valid with the current state of the bucket.'
class InvalidDigest(ErrorResponse):
_status = '400 Bad Request'
    _msg = 'The Content-MD5 you specified was invalid.'
class InvalidLocationConstraint(ErrorResponse):
_status = '400 Bad Request'
_msg = 'The specified location constraint is not valid.'
class InvalidObjectState(ErrorResponse):
_status = '403 Forbidden'
_msg = 'The operation is not valid for the current state of the object.'
class InvalidPart(ErrorResponse):
_status = '400 Bad Request'
_msg = 'One or more of the specified parts could not be found. The part ' \
'might not have been uploaded, or the specified entity tag might ' \
'not have matched the part\'s entity tag.'
class InvalidPartOrder(ErrorResponse):
_status = '400 Bad Request'
    _msg = 'The list of parts was not in ascending order. The parts list ' \
           'must be specified in order by part number.'
class InvalidPayer(ErrorResponse):
_status = '403 Forbidden'
_msg = 'All access to this object has been disabled.'
class InvalidPolicyDocument(ErrorResponse):
_status = '400 Bad Request'
_msg = 'The content of the form does not meet the conditions specified ' \
'in the policy document.'
class InvalidRange(ErrorResponse):
_status = '416 Requested Range Not Satisfiable'
_msg = 'The requested range cannot be satisfied.'
class InvalidRequest(ErrorResponse):
_status = '400 Bad Request'
_msg = 'Invalid Request.'
class InvalidSecurity(ErrorResponse):
_status = '403 Forbidden'
_msg = 'The provided security credentials are not valid.'
class InvalidSOAPRequest(ErrorResponse):
_status = '400 Bad Request'
_msg = 'The SOAP request body is invalid.'
class InvalidStorageClass(ErrorResponse):
_status = '400 Bad Request'
_msg = 'The storage class you specified is not valid.'
class InvalidTargetBucketForLogging(ErrorResponse):
_status = '400 Bad Request'
_msg = 'The target bucket for logging does not exist, is not owned by ' \
'you, or does not have the appropriate grants for the ' \
'log-delivery group.'
def __init__(self, bucket, msg=None, *args, **kwargs):
ErrorResponse.__init__(self, msg, target_bucket=bucket, *args,
**kwargs)
class InvalidToken(ErrorResponse):
_status = '400 Bad Request'
_msg = 'The provided token is malformed or otherwise invalid.'
class InvalidURI(ErrorResponse):
_status = '400 Bad Request'
_msg = 'Couldn\'t parse the specified URI.'
def __init__(self, uri, msg=None, *args, **kwargs):
ErrorResponse.__init__(self, msg, uri=uri, *args, **kwargs)
class KeyTooLongError(ErrorResponse):
_status = '400 Bad Request'
_msg = 'Your key is too long.'
class MalformedACLError(ErrorResponse):
_status = '400 Bad Request'
_msg = 'The XML you provided was not well-formed or did not validate ' \
'against our published schema.'
class MalformedPOSTRequest(ErrorResponse):
_status = '400 Bad Request'
_msg = 'The body of your POST request is not well-formed ' \
'multipart/form-data.'
class MalformedXML(ErrorResponse):
_status = '400 Bad Request'
_msg = 'The XML you provided was not well-formed or did not validate ' \
'against our published schema'
class MaxMessageLengthExceeded(ErrorResponse):
_status = '400 Bad Request'
_msg = 'Your request was too big.'
class MaxPostPreDataLengthExceededError(ErrorResponse):
_status = '400 Bad Request'
_msg = 'Your POST request fields preceding the upload file were too large.'
class MetadataTooLarge(ErrorResponse):
_status = '400 Bad Request'
_msg = 'Your metadata headers exceed the maximum allowed metadata size.'
class MethodNotAllowed(ErrorResponse):
_status = '405 Method Not Allowed'
_msg = 'The specified method is not allowed against this resource.'
def __init__(self, method, resource_type, msg=None, *args, **kwargs):
ErrorResponse.__init__(self, msg, method=method,
resource_type=resource_type, *args, **kwargs)
class MissingContentLength(ErrorResponse):
_status = '411 Length Required'
_msg = 'You must provide the Content-Length HTTP header.'
class MissingRequestBodyError(ErrorResponse):
_status = '400 Bad Request'
_msg = 'Request body is empty.'
class MissingSecurityElement(ErrorResponse):
_status = '400 Bad Request'
_msg = 'The SOAP 1.1 request is missing a security element.'
class MissingSecurityHeader(ErrorResponse):
_status = '400 Bad Request'
_msg = 'Your request was missing a required header.'
class NoLoggingStatusForKey(ErrorResponse):
_status = '400 Bad Request'
_msg = 'There is no such thing as a logging status sub-resource for a key.'
class NoSuchBucket(ErrorResponse):
_status = '404 Not Found'
_msg = 'The specified bucket does not exist.'
def __init__(self, bucket, msg=None, *args, **kwargs):
if not bucket:
raise InternalError()
ErrorResponse.__init__(self, msg, bucket_name=bucket, *args, **kwargs)
class NoSuchKey(ErrorResponse):
_status = '404 Not Found'
_msg = 'The specified key does not exist.'
def __init__(self, key, msg=None, *args, **kwargs):
if not key:
raise InternalError()
ErrorResponse.__init__(self, msg, key=key, *args, **kwargs)
class NoSuchLifecycleConfiguration(ErrorResponse):
_status = '404 Not Found'
    _msg = 'The lifecycle configuration does not exist.'
class NoSuchUpload(ErrorResponse):
_status = '404 Not Found'
_msg = 'The specified multipart upload does not exist. The upload ID ' \
'might be invalid, or the multipart upload might have been ' \
'aborted or completed.'
class NoSuchVersion(ErrorResponse):
_status = '404 Not Found'
_msg = 'The specified version does not exist.'
def __init__(self, key, version_id, msg=None, *args, **kwargs):
if not key:
raise InternalError()
ErrorResponse.__init__(self, msg, key=key, version_id=version_id,
*args, **kwargs)
# NotImplemented is a python built-in constant. Use S3NotImplemented instead.
class S3NotImplemented(ErrorResponse):
_status = '501 Not Implemented'
_msg = 'Not implemented.'
_code = 'NotImplemented'
class NotSignedUp(ErrorResponse):
_status = '403 Forbidden'
_msg = 'Your account is not signed up for the Amazon S3 service.'
class NotSuchBucketPolicy(ErrorResponse):
_status = '404 Not Found'
_msg = 'The specified bucket does not have a bucket policy.'
class OperationAborted(ErrorResponse):
_status = '409 Conflict'
_msg = 'A conflicting conditional operation is currently in progress ' \
'against this resource. Please try again.'
class PermanentRedirect(ErrorResponse):
_status = '301 Moved Permanently'
_msg = 'The bucket you are attempting to access must be addressed using ' \
'the specified endpoint. Please send all future requests to this ' \
'endpoint.'
class PreconditionFailed(ErrorResponse):
_status = '412 Precondition Failed'
_msg = 'At least one of the preconditions you specified did not hold.'
class Redirect(ErrorResponse):
_status = '307 Moved Temporarily'
_msg = 'Temporary redirect.'
class RestoreAlreadyInProgress(ErrorResponse):
_status = '409 Conflict'
_msg = 'Object restore is already in progress.'
class RequestIsNotMultiPartContent(ErrorResponse):
_status = '400 Bad Request'
_msg = 'Bucket POST must be of the enclosure-type multipart/form-data.'
class RequestTimeout(ErrorResponse):
_status = '400 Bad Request'
_msg = 'Your socket connection to the server was not read from or ' \
'written to within the timeout period.'
class RequestTimeTooSkewed(ErrorResponse):
_status = '403 Forbidden'
_msg = 'The difference between the request time and the current time ' \
'is too large.'
class RequestTorrentOfBucketError(ErrorResponse):
_status = '400 Bad Request'
_msg = 'Requesting the torrent file of a bucket is not permitted.'
class SignatureDoesNotMatch(ErrorResponse):
_status = '403 Forbidden'
_msg = 'The request signature we calculated does not match the ' \
'signature you provided. Check your key and signing method.'
class ServiceUnavailable(ErrorResponse):
_status = '503 Service Unavailable'
_msg = 'Please reduce your request rate.'
class SlowDown(ErrorResponse):
_status = '503 Slow Down'
_msg = 'Please reduce your request rate.'
class TemporaryRedirect(ErrorResponse):
_status = '307 Moved Temporarily'
_msg = 'You are being redirected to the bucket while DNS updates.'
class TokenRefreshRequired(ErrorResponse):
_status = '400 Bad Request'
_msg = 'The provided token must be refreshed.'
class TooManyBuckets(ErrorResponse):
_status = '400 Bad Request'
_msg = 'You have attempted to create more buckets than allowed.'
class UnexpectedContent(ErrorResponse):
_status = '400 Bad Request'
_msg = 'This request does not support content.'
class UnresolvableGrantByEmailAddress(ErrorResponse):
_status = '400 Bad Request'
_msg = 'The e-mail address you provided does not match any account on ' \
'record.'
class UserKeyMustBeSpecified(ErrorResponse):
_status = '400 Bad Request'
_msg = 'The bucket POST must contain the specified field name. If it is ' \
'specified, please check the order of the fields.'
class BrokenMPU(ErrorResponse):
# This is very much a Swift-ism, and we wish we didn't need it
_status = '409 Conflict'
_msg = 'Multipart upload has broken segment data.'
| swift-master | swift/common/middleware/s3api/s3response.py |
swift-master | swift/common/middleware/s3api/__init__.py |
|
# Copyright 2012 OpenStack Foundation
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# Copyright 2011,2012 Akira YOSHIYAMA <[email protected]>
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This source code is based ./auth_token.py and ./ec2_token.py.
# See them for their copyright.
"""
-------------------
S3 Token Middleware
-------------------
s3token middleware is for authentication with s3api + keystone.
This middleware:
* Gets a request from the s3api middleware with an S3 Authorization
access key.
* Validates s3 token with Keystone.
* Transforms the account name to AUTH_%(tenant_name).
* Optionally retrieves and caches the secret from keystone
  to validate the signature locally.
.. note::
If upgrading from swift3, the ``auth_version`` config option has been
removed, and the ``auth_uri`` option now includes the Keystone API
version. If you previously had a configuration like
.. code-block:: ini
[filter:s3token]
use = egg:swift3#s3token
auth_uri = https://keystonehost:35357
auth_version = 3
you should now use
.. code-block:: ini
[filter:s3token]
use = egg:swift#s3token
auth_uri = https://keystonehost:35357/v3
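
Beyond ``auth_uri``, this filter understands a handful of other options; the
values shown below are the module defaults and are listed only for
illustration:

.. code-block:: ini

    [filter:s3token]
    use = egg:swift#s3token
    auth_uri = https://keystonehost:35357/v3
    reseller_prefix = AUTH
    delay_auth_decision = false
    http_timeout = 10.0
    secret_cache_duration = 0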
"""
import base64
import json
from keystoneclient.v3 import client as keystone_client
from keystoneauth1 import session as keystone_session
from keystoneauth1 import loading as keystone_loading
import requests
import six
from six.moves import urllib
from swift.common.swob import Request, HTTPBadRequest, HTTPUnauthorized, \
HTTPException
from swift.common.utils import config_true_value, split_path, get_logger, \
cache_from_env, append_underscore
from swift.common.wsgi import ConfigFileError
PROTOCOL_NAME = 'S3 Token Authentication'
# Headers to purge if they came from (or may have come from) the client
KEYSTONE_AUTH_HEADERS = (
'X-Identity-Status', 'X-Service-Identity-Status',
'X-Domain-Id', 'X-Service-Domain-Id',
'X-Domain-Name', 'X-Service-Domain-Name',
'X-Project-Id', 'X-Service-Project-Id',
'X-Project-Name', 'X-Service-Project-Name',
'X-Project-Domain-Id', 'X-Service-Project-Domain-Id',
'X-Project-Domain-Name', 'X-Service-Project-Domain-Name',
'X-User-Id', 'X-Service-User-Id',
'X-User-Name', 'X-Service-User-Name',
'X-User-Domain-Id', 'X-Service-User-Domain-Id',
'X-User-Domain-Name', 'X-Service-User-Domain-Name',
'X-Roles', 'X-Service-Roles',
'X-Is-Admin-Project',
'X-Service-Catalog',
# Deprecated headers, too...
'X-Tenant-Id',
'X-Tenant-Name',
'X-Tenant',
'X-User',
'X-Role',
)
def parse_v2_response(token):
access_info = token['access']
headers = {
'X-Identity-Status': 'Confirmed',
'X-Roles': ','.join(r['name']
for r in access_info['user']['roles']),
'X-User-Id': access_info['user']['id'],
'X-User-Name': access_info['user']['name'],
'X-Tenant-Id': access_info['token']['tenant']['id'],
'X-Tenant-Name': access_info['token']['tenant']['name'],
'X-Project-Id': access_info['token']['tenant']['id'],
'X-Project-Name': access_info['token']['tenant']['name'],
}
return headers, access_info['token']['tenant']
def parse_v3_response(token):
token = token['token']
headers = {
'X-Identity-Status': 'Confirmed',
'X-Roles': ','.join(r['name']
for r in token['roles']),
'X-User-Id': token['user']['id'],
'X-User-Name': token['user']['name'],
'X-User-Domain-Id': token['user']['domain']['id'],
'X-User-Domain-Name': token['user']['domain']['name'],
'X-Tenant-Id': token['project']['id'],
'X-Tenant-Name': token['project']['name'],
'X-Project-Id': token['project']['id'],
'X-Project-Name': token['project']['name'],
'X-Project-Domain-Id': token['project']['domain']['id'],
'X-Project-Domain-Name': token['project']['domain']['name'],
}
return headers, token['project']
class S3Token(object):
"""Middleware that handles S3 authentication."""
def __init__(self, app, conf):
"""Common initialization code."""
self._app = app
self._logger = get_logger(
conf, log_route=conf.get('log_name', 's3token'))
self._logger.debug('Starting the %s component', PROTOCOL_NAME)
self._timeout = float(conf.get('http_timeout', '10.0'))
if not (0 < self._timeout <= 60):
raise ValueError('http_timeout must be between 0 and 60 seconds')
self._reseller_prefix = append_underscore(
conf.get('reseller_prefix', 'AUTH'))
self._delay_auth_decision = config_true_value(
conf.get('delay_auth_decision'))
# where to find the auth service (we use this to validate tokens)
self._request_uri = conf.get('auth_uri', '').rstrip('/') + '/s3tokens'
parsed = urllib.parse.urlsplit(self._request_uri)
if not parsed.scheme or not parsed.hostname:
raise ConfigFileError(
'Invalid auth_uri; must include scheme and host')
if parsed.scheme not in ('http', 'https'):
raise ConfigFileError(
'Invalid auth_uri; scheme must be http or https')
if parsed.query or parsed.fragment or '@' in parsed.netloc:
raise ConfigFileError('Invalid auth_uri; must not include '
'username, query, or fragment')
# SSL
insecure = config_true_value(conf.get('insecure'))
cert_file = conf.get('certfile')
key_file = conf.get('keyfile')
if insecure:
self._verify = False
elif cert_file and key_file:
self._verify = (cert_file, key_file)
elif cert_file:
self._verify = cert_file
else:
self._verify = None
self._secret_cache_duration = int(conf.get('secret_cache_duration', 0))
if self._secret_cache_duration < 0:
raise ValueError('secret_cache_duration must be non-negative')
if self._secret_cache_duration:
try:
auth_plugin = keystone_loading.get_plugin_loader(
conf.get('auth_type', 'password'))
available_auth_options = auth_plugin.get_options()
auth_options = {}
for option in available_auth_options:
name = option.name.replace('-', '_')
value = conf.get(name)
if value:
auth_options[name] = value
auth = auth_plugin.load_from_options(**auth_options)
session = keystone_session.Session(auth=auth)
self.keystoneclient = keystone_client.Client(
session=session,
region_name=conf.get('region_name'))
self._logger.info("Caching s3tokens for %s seconds",
self._secret_cache_duration)
except Exception:
self._logger.warning("Unable to load keystone auth_plugin. "
"Secret caching will be unavailable.",
exc_info=True)
self.keystoneclient = None
self._secret_cache_duration = 0
def _deny_request(self, code):
error_cls, message = {
'AccessDenied': (HTTPUnauthorized, 'Access denied'),
'InvalidURI': (HTTPBadRequest,
'Could not parse the specified URI'),
}[code]
resp = error_cls(content_type='text/xml')
error_msg = ('<?xml version="1.0" encoding="UTF-8"?>\r\n'
'<Error>\r\n <Code>%s</Code>\r\n '
'<Message>%s</Message>\r\n</Error>\r\n' %
(code, message))
if six.PY3:
error_msg = error_msg.encode()
resp.body = error_msg
return resp
def _json_request(self, creds_json):
headers = {'Content-Type': 'application/json'}
try:
response = requests.post(self._request_uri,
headers=headers, data=creds_json,
verify=self._verify,
timeout=self._timeout)
except requests.exceptions.RequestException as e:
self._logger.info('HTTP connection exception: %s', e)
raise self._deny_request('InvalidURI')
if response.status_code < 200 or response.status_code >= 300:
self._logger.debug('Keystone reply error: status=%s reason=%s',
response.status_code, response.reason)
raise self._deny_request('AccessDenied')
return response
def __call__(self, environ, start_response):
"""Handle incoming request. authenticate and send downstream."""
req = Request(environ)
self._logger.debug('Calling S3Token middleware.')
# Always drop auth headers if we're first in the pipeline
if 'keystone.token_info' not in req.environ:
req.headers.update({h: None for h in KEYSTONE_AUTH_HEADERS})
try:
parts = split_path(urllib.parse.unquote(req.path), 1, 4, True)
version, account, container, obj = parts
except ValueError:
msg = 'Not a path query: %s, skipping.' % req.path
self._logger.debug(msg)
return self._app(environ, start_response)
# Read request signature and access id.
s3_auth_details = req.environ.get('s3api.auth_details')
if not s3_auth_details:
msg = 'No authorization details from s3api. skipping.'
self._logger.debug(msg)
return self._app(environ, start_response)
access = s3_auth_details['access_key']
if isinstance(access, six.binary_type):
access = access.decode('utf-8')
signature = s3_auth_details['signature']
if isinstance(signature, six.binary_type):
signature = signature.decode('utf-8')
string_to_sign = s3_auth_details['string_to_sign']
if isinstance(string_to_sign, six.text_type):
string_to_sign = string_to_sign.encode('utf-8')
token = base64.urlsafe_b64encode(string_to_sign)
if isinstance(token, six.binary_type):
token = token.decode('ascii')
        # NOTE(chmou): This handles the special case with nova when the
        # s3_affix_tenant option is set. We force the connection to a
        # different account than the one that authenticated. Before anyone
        # starts worrying about security, note that we still connect with the
        # username/token specified by the user, but instead of connecting to
        # the user's own account we force it to another account. In a normal
        # scenario, if that user doesn't have the reseller right the request
        # will simply fail, but since the reseller account can connect to
        # every account it is allowed by the swift_auth middleware.
force_tenant = None
if ':' in access:
access, force_tenant = access.split(':')
# Authenticate request.
creds = {'credentials': {'access': access,
'token': token,
'signature': signature}}
memcache_client = None
memcache_token_key = 's3secret/%s' % access
if self._secret_cache_duration > 0:
memcache_client = cache_from_env(environ)
cached_auth_data = None
if memcache_client:
cached_auth_data = memcache_client.get(memcache_token_key)
if cached_auth_data:
if len(cached_auth_data) == 4:
# Old versions of swift may have cached token, too,
# but we don't need it
headers, _token, tenant, secret = cached_auth_data
else:
headers, tenant, secret = cached_auth_data
if s3_auth_details['check_signature'](secret):
self._logger.debug("Cached creds valid")
else:
self._logger.debug("Cached creds invalid")
cached_auth_data = None
if not cached_auth_data:
creds_json = json.dumps(creds)
self._logger.debug('Connecting to Keystone sending this JSON: %s',
creds_json)
# NOTE(vish): We could save a call to keystone by having
# keystone return token, tenant, user, and roles
# from this call.
#
            # NOTE(chmou): We still have the same problem; we would need to
            #              change token_auth to detect whether we have
            #              already been identified, skip the second query,
            #              and just pass the request through to swiftauth in
            #              that case.
try:
# NB: requests.Response, not swob.Response
resp = self._json_request(creds_json)
except HTTPException as e_resp:
if self._delay_auth_decision:
msg = ('Received error, deferring rejection based on '
'error: %s')
self._logger.debug(msg, e_resp.status)
return self._app(environ, start_response)
else:
msg = 'Received error, rejecting request with error: %s'
self._logger.debug(msg, e_resp.status)
# NB: swob.Response, not requests.Response
return e_resp(environ, start_response)
self._logger.debug('Keystone Reply: Status: %d, Output: %s',
resp.status_code, resp.content)
try:
token = resp.json()
if 'access' in token:
headers, tenant = parse_v2_response(token)
elif 'token' in token:
headers, tenant = parse_v3_response(token)
else:
raise ValueError
if memcache_client:
user_id = headers.get('X-User-Id')
if not user_id:
raise ValueError
try:
cred_ref = self.keystoneclient.ec2.get(
user_id=user_id,
access=access)
memcache_client.set(
memcache_token_key,
(headers, tenant, cred_ref.secret),
time=self._secret_cache_duration)
self._logger.debug("Cached keystone credentials")
except Exception:
self._logger.warning("Unable to cache secret",
exc_info=True)
# Populate the environment similar to auth_token,
# so we don't have to contact Keystone again.
#
# Note that although the strings are unicode following json
# deserialization, Swift's HeaderEnvironProxy handles ensuring
# they're stored as native strings
req.environ['keystone.token_info'] = token
except (ValueError, KeyError, TypeError):
if self._delay_auth_decision:
error = ('Error on keystone reply: %d %s - '
'deferring rejection downstream')
self._logger.debug(error, resp.status_code, resp.content)
return self._app(environ, start_response)
else:
error = ('Error on keystone reply: %d %s - '
'rejecting request')
self._logger.debug(error, resp.status_code, resp.content)
return self._deny_request('InvalidURI')(
environ, start_response)
req.headers.update(headers)
tenant_to_connect = force_tenant or tenant['id']
if six.PY2 and isinstance(tenant_to_connect, six.text_type):
tenant_to_connect = tenant_to_connect.encode('utf-8')
self._logger.debug('Connecting with tenant: %s', tenant_to_connect)
new_tenant_name = '%s%s' % (self._reseller_prefix, tenant_to_connect)
environ['PATH_INFO'] = environ['PATH_INFO'].replace(
account, new_tenant_name, 1)
return self._app(environ, start_response)
def filter_factory(global_conf, **local_conf):
"""Returns a WSGI filter app for use with paste.deploy."""
conf = global_conf.copy()
conf.update(local_conf)
def auth_filter(app):
return S3Token(app, conf)
return auth_filter
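# A minimal, illustrative proxy-server.conf snippet for this filter; the
# option names follow the conf.get() calls above, but check
# proxy-server.conf-sample for the authoritative settings:
#
#   [filter:s3token]
#   use = egg:swift#s3token
#   auth_uri = http://keystonehost:5000/v3
#   reseller_prefix = AUTH_
#   delay_auth_decision = False
#   http_timeout = 10.0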
| swift-master | swift/common/middleware/s3api/s3token.py |
# Copyright (c) 2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import base64
import binascii
from collections import defaultdict, OrderedDict
from email.header import Header
from hashlib import sha1, sha256
import hmac
import re
import six
# pylint: disable-msg=import-error
from six.moves.urllib.parse import quote, unquote, parse_qsl
import string
from swift.common.utils import split_path, json, close_if_possible, md5, \
streq_const_time
from swift.common.registry import get_swift_info
from swift.common import swob
from swift.common.http import HTTP_OK, HTTP_CREATED, HTTP_ACCEPTED, \
HTTP_NO_CONTENT, HTTP_UNAUTHORIZED, HTTP_FORBIDDEN, HTTP_NOT_FOUND, \
HTTP_CONFLICT, HTTP_UNPROCESSABLE_ENTITY, HTTP_REQUEST_ENTITY_TOO_LARGE, \
HTTP_PARTIAL_CONTENT, HTTP_NOT_MODIFIED, HTTP_PRECONDITION_FAILED, \
HTTP_REQUESTED_RANGE_NOT_SATISFIABLE, HTTP_LENGTH_REQUIRED, \
HTTP_BAD_REQUEST, HTTP_REQUEST_TIMEOUT, HTTP_SERVICE_UNAVAILABLE, \
HTTP_TOO_MANY_REQUESTS, HTTP_RATE_LIMITED, is_success, \
HTTP_CLIENT_CLOSED_REQUEST
from swift.common.constraints import check_utf8
from swift.proxy.controllers.base import get_container_info
from swift.common.request_helpers import check_path_header
from swift.common.middleware.s3api.controllers import ServiceController, \
ObjectController, AclController, MultiObjectDeleteController, \
LocationController, LoggingStatusController, PartController, \
UploadController, UploadsController, VersioningController, \
UnsupportedController, S3AclController, BucketController, \
TaggingController
from swift.common.middleware.s3api.s3response import AccessDenied, \
InvalidArgument, InvalidDigest, BucketAlreadyOwnedByYou, \
RequestTimeTooSkewed, S3Response, SignatureDoesNotMatch, \
BucketAlreadyExists, BucketNotEmpty, EntityTooLarge, \
InternalError, NoSuchBucket, NoSuchKey, PreconditionFailed, InvalidRange, \
MissingContentLength, InvalidStorageClass, S3NotImplemented, InvalidURI, \
MalformedXML, InvalidRequest, RequestTimeout, InvalidBucketName, \
BadDigest, AuthorizationHeaderMalformed, SlowDown, \
AuthorizationQueryParametersError, ServiceUnavailable, BrokenMPU
from swift.common.middleware.s3api.exception import NotS3Request
from swift.common.middleware.s3api.utils import utf8encode, \
S3Timestamp, mktime, MULTIUPLOAD_SUFFIX
from swift.common.middleware.s3api.subresource import decode_acl, encode_acl
from swift.common.middleware.s3api.utils import sysmeta_header, \
validate_bucket_name, Config
from swift.common.middleware.s3api.acl_utils import handle_acl_header
# List of sub-resources that must be maintained as part of the HMAC
# signature string.
ALLOWED_SUB_RESOURCES = sorted([
'acl', 'delete', 'lifecycle', 'location', 'logging', 'notification',
'partNumber', 'policy', 'requestPayment', 'torrent', 'uploads', 'uploadId',
'versionId', 'versioning', 'versions', 'website',
'response-cache-control', 'response-content-disposition',
'response-content-encoding', 'response-content-language',
'response-content-type', 'response-expires', 'cors', 'tagging', 'restore'
])
MAX_32BIT_INT = 2147483647
SIGV2_TIMESTAMP_FORMAT = '%Y-%m-%dT%H:%M:%S'
SIGV4_X_AMZ_DATE_FORMAT = '%Y%m%dT%H%M%SZ'
SERVICE = 's3' # useful for mocking out in tests
def _header_strip(value):
# S3 seems to strip *all* control characters
if value is None:
return None
stripped = _header_strip.re.sub('', value)
if value and not stripped:
# If there's nothing left after stripping,
# behave as though it wasn't provided
return None
return stripped
_header_strip.re = re.compile('^[\x00-\x20]*|[\x00-\x20]*$')
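# Illustrative examples (not part of the original module):
#   _header_strip('  value \x00')  -> 'value'
#   _header_strip('\x01\x02')      -> None  (nothing left after stripping)
#   _header_strip(None)            -> None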
def _header_acl_property(resource):
"""
Set and retrieve the acl in self.headers
"""
def getter(self):
return getattr(self, '_%s' % resource)
def setter(self, value):
self.headers.update(encode_acl(resource, value))
setattr(self, '_%s' % resource, value)
def deleter(self):
self.headers[sysmeta_header(resource, 'acl')] = ''
return property(getter, setter, deleter,
doc='Get and set the %s acl property' % resource)
class HashingInput(object):
"""
wsgi.input wrapper to verify the hash of the input as it's read.
"""
def __init__(self, reader, content_length, hasher, expected_hex_hash):
self._input = reader
self._to_read = content_length
self._hasher = hasher()
self._expected = expected_hex_hash
def read(self, size=None):
chunk = self._input.read(size)
self._hasher.update(chunk)
self._to_read -= len(chunk)
short_read = bool(chunk) if size is None else (len(chunk) < size)
if self._to_read < 0 or (short_read and self._to_read) or (
self._to_read == 0 and
self._hasher.hexdigest() != self._expected):
self.close()
# Since we don't return the last chunk, the PUT never completes
raise swob.HTTPUnprocessableEntity(
                'The X-Amz-Content-SHA256 you specified did not match '
'what we received.')
return chunk
def close(self):
close_if_possible(self._input)
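# Illustrative usage (not part of the original module): wrap a request body
# so that reads are checksummed and a mismatch surfaces as a 422, mirroring
# how SigV4Mixin._canonical_request() installs it below:
#   env['wsgi.input'] = HashingInput(env['wsgi.input'], content_length,
#                                    sha256, expected_sha256_hexdigest)
# where content_length and expected_sha256_hexdigest are hypothetical values
# taken from the Content-Length and X-Amz-Content-SHA256 headers.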
class SigV4Mixin(object):
"""
A request class mixin to provide S3 signature v4 functionality
"""
def check_signature(self, secret):
secret = utf8encode(secret)
user_signature = self.signature
derived_secret = b'AWS4' + secret
for scope_piece in self.scope.values():
derived_secret = hmac.new(
derived_secret, scope_piece.encode('utf8'), sha256).digest()
valid_signature = hmac.new(
derived_secret, self.string_to_sign, sha256).hexdigest()
return streq_const_time(user_signature, valid_signature)
@property
def _is_query_auth(self):
return 'X-Amz-Credential' in self.params
@property
def timestamp(self):
"""
Return timestamp string according to the auth type
        The difference from v2 is that v4 requires 'X-Amz-Date' even when
        query auth is used.
"""
if not self._timestamp:
try:
if self._is_query_auth and 'X-Amz-Date' in self.params:
# NOTE(andrey-mp): Date in Signature V4 has different
# format
timestamp = mktime(
self.params['X-Amz-Date'], SIGV4_X_AMZ_DATE_FORMAT)
else:
if self.headers.get('X-Amz-Date'):
timestamp = mktime(
self.headers.get('X-Amz-Date'),
SIGV4_X_AMZ_DATE_FORMAT)
else:
timestamp = mktime(self.headers.get('Date'))
except (ValueError, TypeError):
raise AccessDenied('AWS authentication requires a valid Date '
'or x-amz-date header',
reason='invalid_date')
if timestamp < 0:
raise AccessDenied('AWS authentication requires a valid Date '
'or x-amz-date header',
reason='invalid_date')
try:
self._timestamp = S3Timestamp(timestamp)
except ValueError:
# Must be far-future; blame clock skew
raise RequestTimeTooSkewed()
return self._timestamp
def _validate_expire_param(self):
"""
Validate X-Amz-Expires in query parameter
:raises: AccessDenied
:raises: AuthorizationQueryParametersError
"""
err = None
try:
expires = int(self.params['X-Amz-Expires'])
except KeyError:
raise AccessDenied(reason='invalid_expires')
except ValueError:
err = 'X-Amz-Expires should be a number'
else:
if expires < 0:
err = 'X-Amz-Expires must be non-negative'
elif expires >= 2 ** 63:
err = 'X-Amz-Expires should be a number'
elif expires > 604800:
err = ('X-Amz-Expires must be less than a week (in seconds); '
'that is, the given X-Amz-Expires must be less than '
'604800 seconds')
if err:
raise AuthorizationQueryParametersError(err)
if int(self.timestamp) + expires < S3Timestamp.now():
raise AccessDenied('Request has expired', reason='expired')
def _parse_credential(self, credential_string):
parts = credential_string.split("/")
# credential must be in following format:
# <access-key-id>/<date>/<AWS-region>/<AWS-service>/aws4_request
if not parts[0] or len(parts) != 5:
raise AccessDenied(reason='invalid_credential')
return dict(zip(['access', 'date', 'region', 'service', 'terminal'],
parts))
def _parse_query_authentication(self):
"""
Parse v4 query authentication
- version 4:
'X-Amz-Credential' and 'X-Amz-Signature' should be in param
:raises: AccessDenied
:raises: AuthorizationHeaderMalformed
"""
if self.params.get('X-Amz-Algorithm') != 'AWS4-HMAC-SHA256':
raise InvalidArgument('X-Amz-Algorithm',
self.params.get('X-Amz-Algorithm'))
try:
cred_param = self._parse_credential(
swob.wsgi_to_str(self.params['X-Amz-Credential']))
sig = swob.wsgi_to_str(self.params['X-Amz-Signature'])
if not sig:
raise AccessDenied(reason='invalid_query_auth')
except KeyError:
raise AccessDenied(reason='invalid_query_auth')
try:
signed_headers = swob.wsgi_to_str(
self.params['X-Amz-SignedHeaders'])
except KeyError:
            # TODO: confirm whether this is really a malformed request
raise AuthorizationHeaderMalformed()
self._signed_headers = set(signed_headers.split(';'))
invalid_messages = {
'date': 'Invalid credential date "%s". This date is not the same '
'as X-Amz-Date: "%s".',
'region': "Error parsing the X-Amz-Credential parameter; "
"the region '%s' is wrong; expecting '%s'",
'service': 'Error parsing the X-Amz-Credential parameter; '
'incorrect service "%s". This endpoint belongs to "%s".',
'terminal': 'Error parsing the X-Amz-Credential parameter; '
'incorrect terminal "%s". This endpoint uses "%s".',
}
for key in ('date', 'region', 'service', 'terminal'):
if cred_param[key] != self.scope[key]:
kwargs = {}
if key == 'region':
# Allow lowercase region name
# for AWS .NET SDK compatibility
if not self.scope[key].islower() and \
cred_param[key] == self.scope[key].lower():
self.location = self.location.lower()
continue
kwargs = {'region': self.scope['region']}
raise AuthorizationQueryParametersError(
invalid_messages[key] % (cred_param[key], self.scope[key]),
**kwargs)
return cred_param['access'], sig
def _parse_header_authentication(self):
"""
Parse v4 header authentication
        - version 4:
            'Credential', 'Signature' and 'SignedHeaders' should be in the
            Authorization header
:raises: AccessDenied
:raises: AuthorizationHeaderMalformed
"""
auth_str = swob.wsgi_to_str(self.headers['Authorization'])
cred_param = self._parse_credential(auth_str.partition(
"Credential=")[2].split(',')[0])
sig = auth_str.partition("Signature=")[2].split(',')[0]
if not sig:
raise AccessDenied(reason='invalid_header_auth')
signed_headers = auth_str.partition(
"SignedHeaders=")[2].split(',', 1)[0]
if not signed_headers:
            # TODO: confirm whether this is really malformed
raise AuthorizationHeaderMalformed()
invalid_messages = {
'date': 'Invalid credential date "%s". This date is not the same '
'as X-Amz-Date: "%s".',
'region': "The authorization header is malformed; the region '%s' "
"is wrong; expecting '%s'",
'service': 'The authorization header is malformed; incorrect '
'service "%s". This endpoint belongs to "%s".',
'terminal': 'The authorization header is malformed; incorrect '
'terminal "%s". This endpoint uses "%s".',
}
for key in ('date', 'region', 'service', 'terminal'):
if cred_param[key] != self.scope[key]:
kwargs = {}
if key == 'region':
# Allow lowercase region name
# for AWS .NET SDK compatibility
if not self.scope[key].islower() and \
cred_param[key] == self.scope[key].lower():
self.location = self.location.lower()
continue
kwargs = {'region': self.scope['region']}
raise AuthorizationHeaderMalformed(
invalid_messages[key] % (cred_param[key], self.scope[key]),
**kwargs)
self._signed_headers = set(signed_headers.split(';'))
return cred_param['access'], sig
def _canonical_query_string(self):
return '&'.join(
'%s=%s' % (swob.wsgi_quote(key, safe='-_.~'),
swob.wsgi_quote(value, safe='-_.~'))
for key, value in sorted(self.params.items())
if key not in ('Signature', 'X-Amz-Signature')).encode('ascii')
def _headers_to_sign(self):
"""
Select the headers from the request that need to be included
in the StringToSign.
        :return: dict of headers to sign; the keys are all lower case
"""
if 'headers_raw' in self.environ: # eventlet >= 0.19.0
# See https://github.com/eventlet/eventlet/commit/67ec999
headers_lower_dict = defaultdict(list)
for key, value in self.environ['headers_raw']:
headers_lower_dict[key.lower().strip()].append(
' '.join(_header_strip(value or '').split()))
headers_lower_dict = {k: ','.join(v)
for k, v in headers_lower_dict.items()}
else: # mostly-functional fallback
headers_lower_dict = dict(
(k.lower().strip(), ' '.join(_header_strip(v or '').split()))
for (k, v) in six.iteritems(self.headers))
if 'host' in headers_lower_dict and re.match(
'Boto/2.[0-9].[0-2]',
headers_lower_dict.get('user-agent', '')):
# Boto versions < 2.9.3 strip the port component of the host:port
# header, so detect the user-agent via the header and strip the
# port if we detect an old boto version.
headers_lower_dict['host'] = \
headers_lower_dict['host'].split(':')[0]
headers_to_sign = [
(key, value) for key, value in sorted(headers_lower_dict.items())
if swob.wsgi_to_str(key) in self._signed_headers]
if len(headers_to_sign) != len(self._signed_headers):
            # NOTE: if a header listed in signed_headers is missing from
            # the actual request headers, real S3 responds with
            # SignatureDoesNotMatch, so we can raise the error immediately
            # here and skip a redundant check.
raise SignatureDoesNotMatch()
return headers_to_sign
def _canonical_uri(self):
"""
        The bucket name is not required in canonical_uri for v4.
"""
return swob.wsgi_to_bytes(swob.wsgi_quote(
self.environ.get('PATH_INFO', self.path), safe='-_.~/'))
def _canonical_request(self):
# prepare 'canonical_request'
# Example requests are like following:
#
# GET
# /
# Action=ListUsers&Version=2010-05-08
# content-type:application/x-www-form-urlencoded; charset=utf-8
# host:iam.amazonaws.com
# x-amz-date:20150830T123600Z
#
# content-type;host;x-amz-date
# e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
#
# 1. Add verb like: GET
cr = [swob.wsgi_to_bytes(self.method.upper())]
# 2. Add path like: /
path = self._canonical_uri()
cr.append(path)
# 3. Add query like: Action=ListUsers&Version=2010-05-08
cr.append(self._canonical_query_string())
# 4. Add headers like:
# content-type:application/x-www-form-urlencoded; charset=utf-8
# host:iam.amazonaws.com
# x-amz-date:20150830T123600Z
headers_to_sign = self._headers_to_sign()
cr.append(b''.join(swob.wsgi_to_bytes('%s:%s\n' % (key, value))
for key, value in headers_to_sign))
# 5. Add signed headers into canonical request like
# content-type;host;x-amz-date
cr.append(b';'.join(swob.wsgi_to_bytes(k) for k, v in headers_to_sign))
# 6. Add payload string at the tail
if 'X-Amz-Credential' in self.params:
# V4 with query parameters only
hashed_payload = 'UNSIGNED-PAYLOAD'
elif 'X-Amz-Content-SHA256' not in self.headers:
msg = 'Missing required header for this request: ' \
'x-amz-content-sha256'
raise InvalidRequest(msg)
else:
hashed_payload = self.headers['X-Amz-Content-SHA256']
if hashed_payload != 'UNSIGNED-PAYLOAD':
if self.content_length == 0:
if hashed_payload.lower() != sha256().hexdigest():
raise BadDigest(
                            'The X-Amz-Content-SHA256 you specified did not '
'match what we received.')
elif self.content_length:
self.environ['wsgi.input'] = HashingInput(
self.environ['wsgi.input'],
self.content_length,
sha256,
hashed_payload.lower())
# else, length not provided -- Swift will kick out a
# 411 Length Required which will get translated back
# to a S3-style response in S3Request._swift_error_codes
cr.append(swob.wsgi_to_bytes(hashed_payload))
return b'\n'.join(cr)
@property
def scope(self):
return OrderedDict([
('date', self.timestamp.amz_date_format.split('T')[0]),
('region', self.location),
('service', SERVICE),
('terminal', 'aws4_request'),
])
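    # For example (illustrative, not part of the original module), a request
    # signed on 2015-08-30 for us-east-1 yields a scope whose values join to
    # '20150830/us-east-1/s3/aws4_request' in _string_to_sign() below.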
def _string_to_sign(self):
"""
Create 'StringToSign' value in Amazon terminology for v4.
"""
return b'\n'.join([
b'AWS4-HMAC-SHA256',
self.timestamp.amz_date_format.encode('ascii'),
'/'.join(self.scope.values()).encode('utf8'),
sha256(self._canonical_request()).hexdigest().encode('ascii')])
def signature_does_not_match_kwargs(self):
kwargs = super(SigV4Mixin, self).signature_does_not_match_kwargs()
cr = self._canonical_request()
kwargs.update({
'canonical_request': cr,
'canonical_request_bytes': ' '.join(
format(ord(c), '02x') for c in cr.decode('latin1')),
})
return kwargs
def get_request_class(env, s3_acl):
"""
Helper function to find a request class to use from Map
"""
if s3_acl:
request_classes = (S3AclRequest, SigV4S3AclRequest)
else:
request_classes = (S3Request, SigV4Request)
req = swob.Request(env)
if 'X-Amz-Credential' in req.params or \
req.headers.get('Authorization', '').startswith(
'AWS4-HMAC-SHA256 '):
# This is an Amazon SigV4 request
return request_classes[1]
else:
# The others using Amazon SigV2 class
return request_classes[0]
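# Illustrative usage (not part of the original module):
#   klass = get_request_class(env, s3_acl=True)
#   req = klass(env, app, conf)
# yields a SigV4S3AclRequest for SigV4-signed requests and an S3AclRequest
# otherwise.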
class S3Request(swob.Request):
"""
S3 request object.
"""
bucket_acl = _header_acl_property('container')
object_acl = _header_acl_property('object')
def __init__(self, env, app=None, conf=None):
# NOTE: app is not used by this class, need for compatibility of S3acl
swob.Request.__init__(self, env)
self.conf = conf or Config()
self.location = self.conf.location
self._timestamp = None
self.access_key, self.signature = self._parse_auth_info()
self.bucket_in_host = self._parse_host()
self.container_name, self.object_name = self._parse_uri()
self._validate_headers()
# Lock in string-to-sign now, before we start messing with query params
self.string_to_sign = self._string_to_sign()
self.environ['s3api.auth_details'] = {
'access_key': self.access_key,
'signature': self.signature,
'string_to_sign': self.string_to_sign,
'check_signature': self.check_signature,
}
self.account = None
self.user_id = None
        # Prevent swift.swob.Response from replacing the Location header
        # value with a full URL when an absolute path is given. See
        # swift.swob for more detail.
self.environ['swift.leave_relative_location'] = True
def check_signature(self, secret):
secret = utf8encode(secret)
user_signature = self.signature
valid_signature = base64.b64encode(hmac.new(
secret, self.string_to_sign, sha1).digest()).strip()
if not six.PY2:
valid_signature = valid_signature.decode('ascii')
return streq_const_time(user_signature, valid_signature)
@property
def timestamp(self):
"""
        S3Timestamp from the Date header. If the X-Amz-Date header is
        specified, it takes precedence over the Date header.
        :return: S3Timestamp instance
"""
if not self._timestamp:
try:
if self._is_query_auth and 'Timestamp' in self.params:
# If Timestamp specified in query, it should be prior
# to any Date header (is this right?)
timestamp = mktime(
self.params['Timestamp'], SIGV2_TIMESTAMP_FORMAT)
else:
timestamp = mktime(
self.headers.get('X-Amz-Date',
self.headers.get('Date')))
except ValueError:
raise AccessDenied('AWS authentication requires a valid Date '
'or x-amz-date header',
reason='invalid_date')
if timestamp < 0:
raise AccessDenied('AWS authentication requires a valid Date '
'or x-amz-date header',
reason='invalid_date')
try:
self._timestamp = S3Timestamp(timestamp)
except ValueError:
# Must be far-future; blame clock skew
raise RequestTimeTooSkewed()
return self._timestamp
@property
def _is_header_auth(self):
return 'Authorization' in self.headers
@property
def _is_query_auth(self):
return 'AWSAccessKeyId' in self.params
def _parse_host(self):
if not self.conf.storage_domains:
return None
if 'HTTP_HOST' in self.environ:
given_domain = self.environ['HTTP_HOST']
elif 'SERVER_NAME' in self.environ:
given_domain = self.environ['SERVER_NAME']
else:
return None
port = ''
if ':' in given_domain:
given_domain, port = given_domain.rsplit(':', 1)
for storage_domain in self.conf.storage_domains:
if not storage_domain.startswith('.'):
storage_domain = '.' + storage_domain
if given_domain.endswith(storage_domain):
return given_domain[:-len(storage_domain)]
return None
def _parse_uri(self):
# NB: returns WSGI strings
if not check_utf8(swob.wsgi_to_str(self.environ['PATH_INFO'])):
raise InvalidURI(self.path)
if self.bucket_in_host:
obj = self.environ['PATH_INFO'][1:] or None
return self.bucket_in_host, obj
bucket, obj = self.split_path(0, 2, True)
if bucket and not validate_bucket_name(
bucket, self.conf.dns_compliant_bucket_names):
# Ignore GET service case
raise InvalidBucketName(bucket)
return bucket, obj
def _parse_query_authentication(self):
"""
Parse v2 authentication query args
        TODO: confirm whether versions 0, 1 and 3 are supported
- version 0, 1, 2, 3:
'AWSAccessKeyId' and 'Signature' should be in param
:return: a tuple of access_key and signature
:raises: AccessDenied
"""
try:
access = swob.wsgi_to_str(self.params['AWSAccessKeyId'])
expires = swob.wsgi_to_str(self.params['Expires'])
sig = swob.wsgi_to_str(self.params['Signature'])
except KeyError:
raise AccessDenied(reason='invalid_query_auth')
if not all([access, sig, expires]):
raise AccessDenied(reason='invalid_query_auth')
return access, sig
def _parse_header_authentication(self):
"""
Parse v2 header authentication info
:returns: a tuple of access_key and signature
:raises: AccessDenied
"""
auth_str = swob.wsgi_to_str(self.headers['Authorization'])
if not auth_str.startswith('AWS ') or ':' not in auth_str:
raise AccessDenied(reason='invalid_header_auth')
# This means signature format V2
access, sig = auth_str.split(' ', 1)[1].rsplit(':', 1)
return access, sig
def _parse_auth_info(self):
"""Extract the access key identifier and signature.
:returns: a tuple of access_key and signature
:raises: NotS3Request
"""
if self._is_query_auth:
self._validate_expire_param()
return self._parse_query_authentication()
elif self._is_header_auth:
self._validate_dates()
return self._parse_header_authentication()
else:
# if this request is neither query auth nor header auth
# s3api regard this as not s3 request
raise NotS3Request()
def _validate_expire_param(self):
"""
Validate Expires in query parameters
:raises: AccessDenied
"""
        # Expires is seconds since the epoch, parsed as a float
try:
ex = S3Timestamp(float(self.params['Expires']))
except (KeyError, ValueError):
raise AccessDenied(reason='invalid_expires')
if S3Timestamp.now() > ex:
raise AccessDenied('Request has expired', reason='expired')
if ex >= 2 ** 31:
raise AccessDenied(
'Invalid date (should be seconds since epoch): %s' %
self.params['Expires'], reason='invalid_expires')
def _validate_dates(self):
"""
Validate Date/X-Amz-Date headers for signature v2
:raises: AccessDenied
:raises: RequestTimeTooSkewed
"""
date_header = self.headers.get('Date')
amz_date_header = self.headers.get('X-Amz-Date')
if not date_header and not amz_date_header:
raise AccessDenied('AWS authentication requires a valid Date '
'or x-amz-date header',
reason='invalid_date')
# Anyways, request timestamp should be validated
epoch = S3Timestamp(0)
if self.timestamp < epoch:
raise AccessDenied(reason='invalid_date')
# If the standard date is too far ahead or behind, it is an
# error
delta = abs(int(self.timestamp) - int(S3Timestamp.now()))
if delta > self.conf.allowable_clock_skew:
raise RequestTimeTooSkewed()
def _validate_headers(self):
if 'CONTENT_LENGTH' in self.environ:
try:
if self.content_length < 0:
raise InvalidArgument('Content-Length',
self.content_length)
except (ValueError, TypeError):
raise InvalidArgument('Content-Length',
self.environ['CONTENT_LENGTH'])
value = _header_strip(self.headers.get('Content-MD5'))
if value is not None:
if not re.match('^[A-Za-z0-9+/]+={0,2}$', value):
# Non-base64-alphabet characters in value.
raise InvalidDigest(content_md5=value)
try:
self.headers['ETag'] = binascii.b2a_hex(
binascii.a2b_base64(value))
except binascii.Error:
# incorrect padding, most likely
raise InvalidDigest(content_md5=value)
if len(self.headers['ETag']) != 32:
raise InvalidDigest(content_md5=value)
if self.method == 'PUT' and any(h in self.headers for h in (
'If-Match', 'If-None-Match',
'If-Modified-Since', 'If-Unmodified-Since')):
raise S3NotImplemented(
'Conditional object PUTs are not supported.')
if 'X-Amz-Copy-Source' in self.headers:
try:
check_path_header(self, 'X-Amz-Copy-Source', 2, '')
except swob.HTTPException:
msg = 'Copy Source must mention the source bucket and key: ' \
'sourcebucket/sourcekey'
raise InvalidArgument('x-amz-copy-source',
self.headers['X-Amz-Copy-Source'],
msg)
if 'x-amz-metadata-directive' in self.headers:
value = self.headers['x-amz-metadata-directive']
if value not in ('COPY', 'REPLACE'):
err_msg = 'Unknown metadata directive.'
raise InvalidArgument('x-amz-metadata-directive', value,
err_msg)
if 'x-amz-storage-class' in self.headers:
# Only STANDARD is supported now.
if self.headers['x-amz-storage-class'] != 'STANDARD':
raise InvalidStorageClass()
if 'x-amz-mfa' in self.headers:
raise S3NotImplemented('MFA Delete is not supported.')
sse_value = self.headers.get('x-amz-server-side-encryption')
if sse_value is not None:
if sse_value not in ('aws:kms', 'AES256'):
raise InvalidArgument(
'x-amz-server-side-encryption', sse_value,
'The encryption method specified is not supported')
encryption_enabled = get_swift_info(admin=True)['admin'].get(
'encryption', {}).get('enabled')
if not encryption_enabled or sse_value != 'AES256':
raise S3NotImplemented(
'Server-side encryption is not supported.')
if 'x-amz-website-redirect-location' in self.headers:
raise S3NotImplemented('Website redirection is not supported.')
# https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html
# describes some of what would be required to support this
if any(['aws-chunked' in self.headers.get('content-encoding', ''),
'STREAMING-AWS4-HMAC-SHA256-PAYLOAD' == self.headers.get(
'x-amz-content-sha256', ''),
'x-amz-decoded-content-length' in self.headers]):
            raise S3NotImplemented('Transferring payloads in multiple chunks '
'using aws-chunked is not supported.')
if 'x-amz-tagging' in self.headers:
raise S3NotImplemented('Object tagging is not supported.')
@property
def body(self):
"""
swob.Request.body is not secure against malicious input. It consumes
too much memory without any check when the request body is excessively
large. Use xml() instead.
"""
raise AttributeError("No attribute 'body'")
def xml(self, max_length):
"""
Similar to swob.Request.body, but it checks the content length before
creating a body string.
"""
te = self.headers.get('transfer-encoding', '')
te = [x.strip() for x in te.split(',') if x.strip()]
if te and (len(te) > 1 or te[-1] != 'chunked'):
raise S3NotImplemented('A header you provided implies '
'functionality that is not implemented',
header='Transfer-Encoding')
ml = self.message_length()
if ml and ml > max_length:
raise MalformedXML()
if te or ml:
# Limit the read similar to how SLO handles manifests
try:
body = self.body_file.read(max_length)
except swob.HTTPException as err:
if err.status_int == HTTP_UNPROCESSABLE_ENTITY:
# Special case for HashingInput check
raise BadDigest(
                        'The X-Amz-Content-SHA256 you specified did not '
'match what we received.')
raise
else:
# No (or zero) Content-Length provided, and not chunked transfer;
# no body. Assume zero-length, and enforce a required body below.
return None
return body
def check_md5(self, body):
if 'HTTP_CONTENT_MD5' not in self.environ:
raise InvalidRequest('Missing required header for this request: '
'Content-MD5')
digest = base64.b64encode(md5(
body, usedforsecurity=False).digest()).strip().decode('ascii')
if self.environ['HTTP_CONTENT_MD5'] != digest:
raise BadDigest(content_md5=self.environ['HTTP_CONTENT_MD5'])
def _copy_source_headers(self):
env = {}
for key, value in self.environ.items():
if key.startswith('HTTP_X_AMZ_COPY_SOURCE_'):
env[key.replace('X_AMZ_COPY_SOURCE_', '')] = value
return swob.HeaderEnvironProxy(env)
def check_copy_source(self, app):
"""
        check_copy_source checks that the copy source exists and, when
        copying an object to itself, checks for illegal request parameters
:returns: the source HEAD response
"""
try:
src_path = self.headers['X-Amz-Copy-Source']
except KeyError:
return None
src_path, qs = src_path.partition('?')[::2]
parsed = parse_qsl(qs, True)
if not parsed:
query = {}
elif len(parsed) == 1 and parsed[0][0] == 'versionId':
query = {'version-id': parsed[0][1]}
else:
raise InvalidArgument('X-Amz-Copy-Source',
self.headers['X-Amz-Copy-Source'],
'Unsupported copy source parameter.')
src_path = unquote(src_path)
src_path = src_path if src_path.startswith('/') else ('/' + src_path)
src_bucket, src_obj = split_path(src_path, 0, 2, True)
headers = swob.HeaderKeyDict()
headers.update(self._copy_source_headers())
src_resp = self.get_response(app, 'HEAD', src_bucket,
swob.str_to_wsgi(src_obj),
headers=headers, query=query)
# we can't let this HEAD req spoil our COPY
self.headers.pop('x-backend-storage-policy-index')
if src_resp.status_int == 304: # pylint: disable-msg=E1101
raise PreconditionFailed()
if (self.container_name == src_bucket and
self.object_name == src_obj and
self.headers.get('x-amz-metadata-directive',
'COPY') == 'COPY' and
not query):
raise InvalidRequest("This copy request is illegal "
"because it is trying to copy an "
"object to itself without "
"changing the object's metadata, "
"storage class, website redirect "
"location or encryption "
"attributes.")
# We've done some normalizing; write back so it's ready for
# to_swift_req
self.headers['X-Amz-Copy-Source'] = quote(src_path)
if query:
self.headers['X-Amz-Copy-Source'] += \
'?versionId=' + query['version-id']
return src_resp
def _canonical_uri(self):
"""
        The bucket name is required in canonical_uri for v2 when the
        virtual hosted-style is used.
"""
raw_path_info = self.environ.get('RAW_PATH_INFO', self.path)
if self.bucket_in_host:
raw_path_info = '/' + self.bucket_in_host + raw_path_info
return raw_path_info
def _string_to_sign(self):
"""
Create 'StringToSign' value in Amazon terminology for v2.
"""
amz_headers = {}
buf = [swob.wsgi_to_bytes(wsgi_str) for wsgi_str in [
self.method,
_header_strip(self.headers.get('Content-MD5')) or '',
_header_strip(self.headers.get('Content-Type')) or '']]
if 'headers_raw' in self.environ: # eventlet >= 0.19.0
# See https://github.com/eventlet/eventlet/commit/67ec999
amz_headers = defaultdict(list)
for key, value in self.environ['headers_raw']:
key = key.lower()
if not key.startswith('x-amz-'):
continue
amz_headers[key.strip()].append(value.strip())
amz_headers = dict((key, ','.join(value))
for key, value in amz_headers.items())
else: # mostly-functional fallback
amz_headers = dict((key.lower(), value)
for key, value in self.headers.items()
if key.lower().startswith('x-amz-'))
if self._is_header_auth:
if 'x-amz-date' in amz_headers:
buf.append(b'')
elif 'Date' in self.headers:
buf.append(swob.wsgi_to_bytes(self.headers['Date']))
elif self._is_query_auth:
buf.append(swob.wsgi_to_bytes(self.params['Expires']))
else:
# Should have already raised NotS3Request in _parse_auth_info,
# but as a sanity check...
raise AccessDenied(reason='not_s3')
for key, value in sorted(amz_headers.items()):
buf.append(swob.wsgi_to_bytes("%s:%s" % (key, value)))
path = self._canonical_uri()
if self.query_string:
path += '?' + self.query_string
params = []
if '?' in path:
path, args = path.split('?', 1)
for key, value in sorted(self.params.items()):
if key in ALLOWED_SUB_RESOURCES:
params.append('%s=%s' % (key, value) if value else key)
if params:
buf.append(swob.wsgi_to_bytes('%s?%s' % (path, '&'.join(params))))
else:
buf.append(swob.wsgi_to_bytes(path))
return b'\n'.join(buf)
def signature_does_not_match_kwargs(self):
return {
'a_w_s_access_key_id': self.access_key,
'string_to_sign': self.string_to_sign,
'signature_provided': self.signature,
'string_to_sign_bytes': ' '.join(
format(ord(c), '02x')
for c in self.string_to_sign.decode('latin1')),
}
@property
def controller_name(self):
return self.controller.__name__[:-len('Controller')]
@property
def controller(self):
if self.is_service_request:
return ServiceController
if not self.conf.allow_multipart_uploads:
multi_part = ['partNumber', 'uploadId', 'uploads']
if len([p for p in multi_part if p in self.params]):
raise S3NotImplemented("Multi-part feature isn't support")
if 'acl' in self.params:
return AclController
if 'delete' in self.params:
return MultiObjectDeleteController
if 'location' in self.params:
return LocationController
if 'logging' in self.params:
return LoggingStatusController
if 'partNumber' in self.params:
return PartController
if 'uploadId' in self.params:
return UploadController
if 'uploads' in self.params:
return UploadsController
if 'versioning' in self.params:
return VersioningController
if 'tagging' in self.params:
return TaggingController
unsupported = ('notification', 'policy', 'requestPayment', 'torrent',
'website', 'cors', 'restore')
if set(unsupported) & set(self.params):
return UnsupportedController
if self.is_object_request:
return ObjectController
return BucketController
@property
def is_service_request(self):
return not self.container_name
@property
def is_bucket_request(self):
return self.container_name and not self.object_name
@property
def is_object_request(self):
return self.container_name and self.object_name
@property
def is_authenticated(self):
return self.account is not None
def to_swift_req(self, method, container, obj, query=None,
body=None, headers=None):
"""
Create a Swift request based on this request's environment.
"""
if self.account is None:
account = self.access_key
else:
account = self.account
env = self.environ.copy()
env['swift.infocache'] = self.environ.setdefault('swift.infocache', {})
def sanitize(value):
if set(value).issubset(string.printable):
return value
value = Header(value, 'UTF-8').encode()
if value.startswith('=?utf-8?q?'):
return '=?UTF-8?Q?' + value[10:]
elif value.startswith('=?utf-8?b?'):
return '=?UTF-8?B?' + value[10:]
else:
return value
if 'headers_raw' in env: # eventlet >= 0.19.0
# See https://github.com/eventlet/eventlet/commit/67ec999
for key, value in env['headers_raw']:
if not key.lower().startswith('x-amz-meta-'):
continue
# AWS ignores user-defined headers with these characters
if any(c in key for c in ' "),/;<=>?@[\\]{}'):
# NB: apparently, '(' *is* allowed
continue
# Note that this may have already been deleted, e.g. if the
# client sent multiple headers with the same name, or both
# x-amz-meta-foo-bar and x-amz-meta-foo_bar
env.pop('HTTP_' + key.replace('-', '_').upper(), None)
# Need to preserve underscores. Since we know '=' can't be
# present, quoted-printable seems appropriate.
key = key.replace('_', '=5F').replace('-', '_').upper()
key = 'HTTP_X_OBJECT_META_' + key[11:]
if key in env:
env[key] += ',' + sanitize(value)
else:
env[key] = sanitize(value)
else: # mostly-functional fallback
for key in self.environ:
if not key.startswith('HTTP_X_AMZ_META_'):
continue
# AWS ignores user-defined headers with these characters
if any(c in key for c in ' "),/;<=>?@[\\]{}'):
# NB: apparently, '(' *is* allowed
continue
env['HTTP_X_OBJECT_META_' + key[16:]] = sanitize(env[key])
del env[key]
copy_from_version_id = ''
if 'HTTP_X_AMZ_COPY_SOURCE' in env and env['REQUEST_METHOD'] == 'PUT':
env['HTTP_X_COPY_FROM'], copy_from_version_id = env[
'HTTP_X_AMZ_COPY_SOURCE'].partition('?versionId=')[::2]
del env['HTTP_X_AMZ_COPY_SOURCE']
env['CONTENT_LENGTH'] = '0'
if env.pop('HTTP_X_AMZ_METADATA_DIRECTIVE', None) == 'REPLACE':
env['HTTP_X_FRESH_METADATA'] = 'True'
else:
copy_exclude_headers = ('HTTP_CONTENT_DISPOSITION',
'HTTP_CONTENT_ENCODING',
'HTTP_CONTENT_LANGUAGE',
'CONTENT_TYPE',
'HTTP_EXPIRES',
'HTTP_CACHE_CONTROL',
'HTTP_X_ROBOTS_TAG')
for key in copy_exclude_headers:
env.pop(key, None)
for key in list(env.keys()):
if key.startswith('HTTP_X_OBJECT_META_'):
del env[key]
if self.conf.force_swift_request_proxy_log:
env['swift.proxy_access_log_made'] = False
env['swift.source'] = 'S3'
if method is not None:
env['REQUEST_METHOD'] = method
if obj:
path = '/v1/%s/%s/%s' % (account, container, obj)
elif container:
path = '/v1/%s/%s' % (account, container)
else:
path = '/v1/%s' % (account)
env['PATH_INFO'] = path
params = []
if query is not None:
for key, value in sorted(query.items()):
if value is not None:
params.append('%s=%s' % (key, quote(str(value))))
else:
params.append(key)
if copy_from_version_id and not (query and query.get('version-id')):
params.append('version-id=' + copy_from_version_id)
env['QUERY_STRING'] = '&'.join(params)
return swob.Request.blank(quote(path), environ=env, body=body,
headers=headers)
def _swift_success_codes(self, method, container, obj):
"""
Returns a list of expected success codes from Swift.
"""
if not container:
# Swift account access.
code_map = {
'GET': [
HTTP_OK,
],
}
elif not obj:
# Swift container access.
code_map = {
'HEAD': [
HTTP_NO_CONTENT,
],
'GET': [
HTTP_OK,
HTTP_NO_CONTENT,
],
'PUT': [
HTTP_CREATED,
],
'POST': [
HTTP_NO_CONTENT,
],
'DELETE': [
HTTP_NO_CONTENT,
],
}
else:
# Swift object access.
code_map = {
'HEAD': [
HTTP_OK,
HTTP_PARTIAL_CONTENT,
HTTP_NOT_MODIFIED,
],
'GET': [
HTTP_OK,
HTTP_PARTIAL_CONTENT,
HTTP_NOT_MODIFIED,
],
'PUT': [
HTTP_CREATED,
HTTP_ACCEPTED, # For SLO with heartbeating
],
'POST': [
HTTP_ACCEPTED,
],
'DELETE': [
HTTP_OK,
HTTP_NO_CONTENT,
],
}
return code_map[method]
def _bucket_put_accepted_error(self, container, app):
sw_req = self.to_swift_req('HEAD', container, None)
info = get_container_info(sw_req.environ, app, swift_source='S3')
sysmeta = info.get('sysmeta', {})
try:
acl = json.loads(sysmeta.get('s3api-acl',
sysmeta.get('swift3-acl', '{}')))
owner = acl.get('Owner')
except (ValueError, TypeError, KeyError):
owner = None
if owner is None or owner == self.user_id:
raise BucketAlreadyOwnedByYou(container)
raise BucketAlreadyExists(container)
def _swift_error_codes(self, method, container, obj, env, app):
"""
Returns a dict from expected Swift error codes to the corresponding S3
error responses.
"""
if not container:
# Swift account access.
code_map = {
'GET': {
},
}
elif not obj:
# Swift container access.
code_map = {
'HEAD': {
HTTP_NOT_FOUND: (NoSuchBucket, container),
},
'GET': {
HTTP_NOT_FOUND: (NoSuchBucket, container),
},
'PUT': {
HTTP_ACCEPTED: (self._bucket_put_accepted_error, container,
app),
},
'POST': {
HTTP_NOT_FOUND: (NoSuchBucket, container),
},
'DELETE': {
HTTP_NOT_FOUND: (NoSuchBucket, container),
HTTP_CONFLICT: BucketNotEmpty,
},
}
else:
# Swift object access.
# 404s differ depending upon whether the bucket exists
# Note that base-container-existence checks happen elsewhere for
# multi-part uploads, and get_container_info should be pulling
# from the env cache
def not_found_handler():
if container.endswith(MULTIUPLOAD_SUFFIX) or \
is_success(get_container_info(
env, app, swift_source='S3').get('status')):
return NoSuchKey(obj)
return NoSuchBucket(container)
code_map = {
'HEAD': {
HTTP_NOT_FOUND: not_found_handler,
HTTP_PRECONDITION_FAILED: PreconditionFailed,
},
'GET': {
HTTP_NOT_FOUND: not_found_handler,
HTTP_PRECONDITION_FAILED: PreconditionFailed,
HTTP_REQUESTED_RANGE_NOT_SATISFIABLE: InvalidRange,
},
'PUT': {
HTTP_NOT_FOUND: (NoSuchBucket, container),
HTTP_UNPROCESSABLE_ENTITY: BadDigest,
HTTP_REQUEST_ENTITY_TOO_LARGE: EntityTooLarge,
HTTP_LENGTH_REQUIRED: MissingContentLength,
HTTP_REQUEST_TIMEOUT: RequestTimeout,
HTTP_PRECONDITION_FAILED: PreconditionFailed,
HTTP_CLIENT_CLOSED_REQUEST: RequestTimeout,
},
'POST': {
HTTP_NOT_FOUND: not_found_handler,
HTTP_PRECONDITION_FAILED: PreconditionFailed,
},
'DELETE': {
HTTP_NOT_FOUND: (NoSuchKey, obj),
},
}
return code_map[method]
def _get_response(self, app, method, container, obj,
headers=None, body=None, query=None):
"""
Calls the application with this request's environment. Returns a
S3Response object that wraps up the application's result.
"""
method = method or self.environ['REQUEST_METHOD']
if container is None:
container = self.container_name
if obj is None:
obj = self.object_name
sw_req = self.to_swift_req(method, container, obj, headers=headers,
body=body, query=query)
try:
sw_resp = sw_req.get_response(app)
except swob.HTTPException as err:
# Maybe a 422 from HashingInput? Put something in
# s3api.backend_path - hopefully by now any modifications to the
# path (e.g. tenant to account translation) will have been made by
# auth middleware
self.environ['s3api.backend_path'] = sw_req.environ['PATH_INFO']
sw_resp = err
else:
# reuse account
_, self.account, _ = split_path(sw_resp.environ['PATH_INFO'],
2, 3, True)
            # Update s3api.backend_path from the response environ
self.environ['s3api.backend_path'] = sw_resp.environ['PATH_INFO']
        # Propagate backend headers back into our req headers for logging
for k, v in sw_req.headers.items():
if k.lower().startswith('x-backend-'):
self.headers.setdefault(k, v)
resp = S3Response.from_swift_resp(sw_resp)
status = resp.status_int # pylint: disable-msg=E1101
if not self.user_id:
if 'HTTP_X_USER_NAME' in sw_resp.environ:
# keystone
self.user_id = "%s:%s" % (
sw_resp.environ['HTTP_X_TENANT_NAME'],
sw_resp.environ['HTTP_X_USER_NAME'])
if six.PY2 and not isinstance(self.user_id, bytes):
self.user_id = self.user_id.encode('utf8')
else:
# tempauth
self.user_id = self.access_key
success_codes = self._swift_success_codes(method, container, obj)
error_codes = self._swift_error_codes(method, container, obj,
sw_req.environ, app)
if status in success_codes:
return resp
err_msg = resp.body
if status in error_codes:
err_resp = \
error_codes[sw_resp.status_int] # pylint: disable-msg=E1101
if isinstance(err_resp, tuple):
raise err_resp[0](*err_resp[1:])
elif b'quota' in err_msg:
raise err_resp(err_msg)
else:
raise err_resp()
if status == HTTP_BAD_REQUEST:
err_str = err_msg.decode('utf8')
if 'X-Delete-At' in err_str:
raise InvalidArgument('X-Delete-At',
self.headers['X-Delete-At'],
err_str)
if 'X-Delete-After' in err_msg.decode('utf8'):
raise InvalidArgument('X-Delete-After',
self.headers['X-Delete-After'],
err_str)
else:
raise InvalidRequest(msg=err_str)
if status == HTTP_UNAUTHORIZED:
raise SignatureDoesNotMatch(
**self.signature_does_not_match_kwargs())
if status == HTTP_FORBIDDEN:
raise AccessDenied(reason='forbidden')
if status == HTTP_SERVICE_UNAVAILABLE:
raise ServiceUnavailable()
if status in (HTTP_RATE_LIMITED, HTTP_TOO_MANY_REQUESTS):
if self.conf.ratelimit_as_client_error:
raise SlowDown(status='429 Slow Down')
raise SlowDown()
if resp.status_int == HTTP_CONFLICT:
# TODO: validate that this actually came up out of SLO
raise BrokenMPU()
raise InternalError('unexpected status code %d' % status)
def get_response(self, app, method=None, container=None, obj=None,
headers=None, body=None, query=None):
"""
get_response is an entry point to be extended for child classes.
        If additional tasks are needed when getting the swift response,
        child classes can override this method.
        swift.common.middleware.s3api.s3request.S3Request only needs to call
        _get_response to get a pure swift response.
"""
if 'HTTP_X_AMZ_ACL' in self.environ:
handle_acl_header(self)
return self._get_response(app, method, container, obj,
headers, body, query)
def get_validated_param(self, param, default, limit=MAX_32BIT_INT):
value = default
if param in self.params:
try:
value = int(self.params[param])
if value < 0:
err_msg = 'Argument %s must be an integer between 0 and' \
' %d' % (param, MAX_32BIT_INT)
raise InvalidArgument(param, self.params[param], err_msg)
if value > MAX_32BIT_INT:
# check the value because int() could build either a long
# instance or a 64bit integer.
raise ValueError()
if limit < value:
value = limit
except ValueError:
err_msg = 'Provided %s not an integer or within ' \
'integer range' % param
raise InvalidArgument(param, self.params[param], err_msg)
return value
def get_container_info(self, app):
"""
get_container_info will return a result dict of get_container_info
from the backend Swift.
        :returns: a dictionary of container info from
                  swift.proxy.controllers.base.get_container_info
:raises: NoSuchBucket when the container doesn't exist
:raises: InternalError when the request failed without 404
"""
if not self.is_authenticated:
sw_req = self.to_swift_req('TEST', None, None, body='')
# don't show log message of this request
sw_req.environ['swift.proxy_access_log_made'] = True
sw_resp = sw_req.get_response(app)
if not sw_req.remote_user:
raise SignatureDoesNotMatch(
**self.signature_does_not_match_kwargs())
_, self.account, _ = split_path(sw_resp.environ['PATH_INFO'],
2, 3, True)
sw_req = self.to_swift_req('TEST', self.container_name, None)
info = get_container_info(sw_req.environ, app, swift_source='S3')
if is_success(info['status']):
return info
elif info['status'] == HTTP_NOT_FOUND:
raise NoSuchBucket(self.container_name)
elif info['status'] == HTTP_SERVICE_UNAVAILABLE:
raise ServiceUnavailable()
else:
raise InternalError(
'unexpected status code %d' % info['status'])
def gen_multipart_manifest_delete_query(self, app, obj=None, version=None):
if not self.conf.allow_multipart_uploads:
return {}
if not obj:
obj = self.object_name
query = {'symlink': 'get'}
if version is not None:
query['version-id'] = version
resp = self.get_response(app, 'HEAD', obj=obj, query=query)
if not resp.is_slo:
return {}
elif resp.sysmeta_headers.get(sysmeta_header('object', 'etag')):
# Even if allow_async_delete is turned off, SLO will just handle
# the delete synchronously, so we don't need to check before
# setting async=on
return {'multipart-manifest': 'delete', 'async': 'on'}
else:
return {'multipart-manifest': 'delete'}
def set_acl_handler(self, handler):
pass
class S3AclRequest(S3Request):
"""
S3Acl request object.
"""
def __init__(self, env, app=None, conf=None):
super(S3AclRequest, self).__init__(env, app, conf)
self.authenticate(app)
self.acl_handler = None
@property
def controller(self):
if 'acl' in self.params and not self.is_service_request:
return S3AclController
return super(S3AclRequest, self).controller
def authenticate(self, app):
"""
        The authenticate method runs a pre-authentication request and
        retrieves account information.
        Note that it currently supports only keystone and tempauth
        (no support for third-party authentication middleware).
"""
sw_req = self.to_swift_req('TEST', None, None, body='')
# don't show log message of this request
sw_req.environ['swift.proxy_access_log_made'] = True
sw_resp = sw_req.get_response(app)
if not sw_req.remote_user:
raise SignatureDoesNotMatch(
**self.signature_does_not_match_kwargs())
_, self.account, _ = split_path(sw_resp.environ['PATH_INFO'],
2, 3, True)
if 'HTTP_X_USER_NAME' in sw_resp.environ:
# keystone
self.user_id = "%s:%s" % (sw_resp.environ['HTTP_X_TENANT_NAME'],
sw_resp.environ['HTTP_X_USER_NAME'])
if six.PY2 and not isinstance(self.user_id, bytes):
self.user_id = self.user_id.encode('utf8')
else:
# tempauth
self.user_id = self.access_key
sw_req.environ.get('swift.authorize', lambda req: None)(sw_req)
self.environ['swift_owner'] = sw_req.environ.get('swift_owner', False)
if 'REMOTE_USER' in sw_req.environ:
self.environ['REMOTE_USER'] = sw_req.environ['REMOTE_USER']
# Need to skip S3 authorization on subsequent requests to prevent
# overwriting the account in PATH_INFO
del self.environ['s3api.auth_details']
def to_swift_req(self, method, container, obj, query=None,
body=None, headers=None):
sw_req = super(S3AclRequest, self).to_swift_req(
method, container, obj, query, body, headers)
if self.account:
sw_req.environ['swift_owner'] = True # needed to set ACL
sw_req.environ['swift.authorize_override'] = True
sw_req.environ['swift.authorize'] = lambda req: None
return sw_req
def get_acl_response(self, app, method=None, container=None, obj=None,
headers=None, body=None, query=None):
"""
Wrapper method of _get_response to add s3 acl information
from response sysmeta headers.
"""
resp = self._get_response(
app, method, container, obj, headers, body, query)
resp.bucket_acl = decode_acl(
'container', resp.sysmeta_headers, self.conf.allow_no_owner)
resp.object_acl = decode_acl(
'object', resp.sysmeta_headers, self.conf.allow_no_owner)
return resp
def get_response(self, app, method=None, container=None, obj=None,
headers=None, body=None, query=None):
"""
Wrap up get_response call to hook with acl handling method.
"""
if not self.acl_handler:
            # acl_handler must always be set before calling get_response
raise Exception('get_response called before set_acl_handler')
resp = self.acl_handler.handle_acl(
app, method, container, obj, headers)
        # possible to skip calling get_acl_response if resp is not
        # None (e.g. HEAD)
if resp:
return resp
return self.get_acl_response(app, method, container, obj,
headers, body, query)
def set_acl_handler(self, acl_handler):
self.acl_handler = acl_handler
class SigV4Request(SigV4Mixin, S3Request):
pass
class SigV4S3AclRequest(SigV4Mixin, S3AclRequest):
pass
| swift-master | swift/common/middleware/s3api/s3request.py |
# Copyright (c) 2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import base64
import calendar
import datetime
import email.utils
import re
import six
import time
import uuid
from swift.common import utils
MULTIUPLOAD_SUFFIX = '+segments'
def sysmeta_prefix(resource):
"""
Returns the system metadata prefix for given resource type.
"""
if resource.lower() == 'object':
return 'x-object-sysmeta-s3api-'
else:
return 'x-container-sysmeta-s3api-'
def sysmeta_header(resource, name):
"""
Returns the system metadata header for given resource type and name.
"""
return sysmeta_prefix(resource) + name
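# Illustrative examples (not part of the original module):
#   sysmeta_header('object', 'acl')    -> 'x-object-sysmeta-s3api-acl'
#   sysmeta_header('container', 'acl') -> 'x-container-sysmeta-s3api-acl'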
def camel_to_snake(camel):
return re.sub('(.)([A-Z])', r'\1_\2', camel).lower()
def snake_to_camel(snake):
return snake.title().replace('_', '')
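# Illustrative examples (not part of the original module):
#   camel_to_snake('BucketName')  -> 'bucket_name'
#   snake_to_camel('bucket_name') -> 'BucketName'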
def unique_id():
result = base64.urlsafe_b64encode(str(uuid.uuid4()).encode('ascii'))
if six.PY2:
return result
return result.decode('ascii')
def utf8encode(s):
if s is None or isinstance(s, bytes):
return s
return s.encode('utf8')
def utf8decode(s):
if isinstance(s, bytes):
s = s.decode('utf8')
return s
def validate_bucket_name(name, dns_compliant_bucket_names):
"""
    Validates the bucket name against the S3 criteria described at
    http://docs.amazonwebservices.com/AmazonS3/latest/BucketRestrictions.html
    Returns True if the name is valid, False otherwise.
"""
valid_chars = '-.a-z0-9'
if not dns_compliant_bucket_names:
valid_chars += 'A-Z_'
max_len = 63 if dns_compliant_bucket_names else 255
if len(name) < 3 or len(name) > max_len or not name[0].isalnum():
# Bucket names should be between 3 and 63 (or 255) characters long
# Bucket names must start with a letter or a number
return False
elif dns_compliant_bucket_names and (
'.-' in name or '-.' in name or '..' in name or
not name[-1].isalnum()):
# Bucket names cannot contain dashes next to periods
# Bucket names cannot contain two adjacent periods
# Bucket names must end with a letter or a number
return False
elif name.endswith('.'):
# Bucket names must not end with dot
return False
elif re.match(r"^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.)"
r"{3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$",
name):
# Bucket names cannot be formatted as an IP Address
return False
elif not re.match("^[%s]*$" % valid_chars, name):
        # Bucket names may only contain lowercase letters, numbers, periods
        # and hyphens (plus uppercase letters and underscores when DNS
        # compliance is not required).
return False
else:
return True
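# A few illustrative cases (names assumed) with dns_compliant_bucket_names
# set to True:
#   validate_bucket_name('my-bucket', True)    -> True
#   validate_bucket_name('MyBucket', True)     -> False  (uppercase letters)
#   validate_bucket_name('doc..example', True) -> False  (adjacent periods)
#   validate_bucket_name('192.168.5.4', True)  -> False  (formatted as an IP)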
class S3Timestamp(utils.Timestamp):
S3_XML_FORMAT = "%Y-%m-%dT%H:%M:%S.000Z"
@property
def s3xmlformat(self):
dt = datetime.datetime.utcfromtimestamp(self.ceil())
return dt.strftime(self.S3_XML_FORMAT)
@classmethod
def from_s3xmlformat(cls, date_string):
dt = datetime.datetime.strptime(date_string, cls.S3_XML_FORMAT)
dt = dt.replace(tzinfo=utils.UTC)
seconds = calendar.timegm(dt.timetuple())
return cls(seconds)
@property
def amz_date_format(self):
"""
        Return the timestamp in the AWS 'YYYYMMDDThhmmssZ' format.
"""
return self.isoformat.replace(
'-', '').replace(':', '')[:-7] + 'Z'
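# A rough sketch of the two formats above (value assumed): for
# S3Timestamp(0.5), s3xmlformat is '1970-01-01T00:00:01.000Z' (the timestamp
# is ceil()'d to whole seconds), while amz_date_format truncates to
# '19700101T000000Z'.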
def mktime(timestamp_str, time_format='%Y-%m-%dT%H:%M:%S'):
"""
    mktime creates a float epoch timestamp, much like time.mktime.
    The difference from time.mktime is that it accepts two string formats
    for the argument, for S3 testing usage.
TODO: support
:param timestamp_str: a string of timestamp formatted as
(a) RFC2822 (e.g. date header)
(b) %Y-%m-%dT%H:%M:%S (e.g. copy result)
:param time_format: a string of format to parse in (b) process
:returns: a float instance in epoch time
"""
# time_tuple is the *remote* local time
time_tuple = email.utils.parsedate_tz(timestamp_str)
if time_tuple is None:
time_tuple = time.strptime(timestamp_str, time_format)
# add timezone info as utc (no time difference)
time_tuple += (0, )
    # We prefer calendar.timegm and a manual adjustment over
    # email.utils.mktime_tz because older versions of Python (<2.7.4) may
    # double-adjust for timezone in some situations (such as when swift
    # changes os.environ['TZ'] without calling time.tzset()).
epoch_time = calendar.timegm(time_tuple) - time_tuple[9]
return epoch_time
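# Both accepted formats should yield the same epoch value, e.g. (values
# assumed):
#   mktime('Thu, 01 Jan 1970 00:00:10 GMT') -> 10
#   mktime('1970-01-01T00:00:10')           -> 10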
class Config(dict):
DEFAULTS = {
'storage_domains': [],
'location': 'us-east-1',
'force_swift_request_proxy_log': False,
'dns_compliant_bucket_names': True,
'allow_multipart_uploads': True,
'allow_no_owner': False,
'allowable_clock_skew': 900,
'ratelimit_as_client_error': False,
}
def __init__(self, base=None):
self.update(self.DEFAULTS)
if base is not None:
self.update(base)
def __getattr__(self, name):
if name not in self:
raise AttributeError("No attribute '%s'" % name)
return self[name]
def __setattr__(self, name, value):
self[name] = value
def __delattr__(self, name):
del self[name]
def update(self, other):
if hasattr(other, 'keys'):
for key in other.keys():
self[key] = other[key]
else:
for key, value in other:
self[key] = value
def __setitem__(self, key, value):
if isinstance(self.get(key), bool):
dict.__setitem__(self, key, utils.config_true_value(value))
elif isinstance(self.get(key), int):
try:
dict.__setitem__(self, key, int(value))
except ValueError:
if value: # No need to raise the error if value is ''
raise
else:
dict.__setitem__(self, key, value)
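# A minimal usage sketch (keys are the defaults defined above, values assumed):
#   conf = Config({'location': 'us-west-1'})
#   conf.allow_no_owner              -> False (default)
#   conf['allow_no_owner'] = 'yes'   # strings are coerced via
#                                    # config_true_value for bool defaults
#   conf.allow_no_owner              -> True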
| swift-master | swift/common/middleware/s3api/utils.py |
# Copyright (c) 2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import lxml.etree
from copy import deepcopy
try:
# importlib.resources was introduced in py37, but couldn't handle
# resources in subdirectories (which we use); files() added support
from importlib.resources import files
del files
except ImportError:
# python < 3.9
from pkg_resources import resource_stream # pylint: disable-msg=E0611
else:
import importlib.resources
resource_stream = None
import six
from swift.common.utils import get_logger
from swift.common.middleware.s3api.exception import S3Exception
from swift.common.middleware.s3api.utils import camel_to_snake, \
utf8encode, utf8decode
XMLNS_S3 = 'http://s3.amazonaws.com/doc/2006-03-01/'
XMLNS_XSI = 'http://www.w3.org/2001/XMLSchema-instance'
class XMLSyntaxError(S3Exception):
pass
class DocumentInvalid(S3Exception):
pass
def cleanup_namespaces(elem):
def remove_ns(tag, ns):
if tag.startswith('{%s}' % ns):
tag = tag[len('{%s}' % ns):]
return tag
if not isinstance(elem.tag, six.string_types):
# elem is a comment element.
return
# remove s3 namespace
elem.tag = remove_ns(elem.tag, XMLNS_S3)
# remove default namespace
if elem.nsmap and None in elem.nsmap:
elem.tag = remove_ns(elem.tag, elem.nsmap[None])
for e in elem.iterchildren():
cleanup_namespaces(e)
def fromstring(text, root_tag=None, logger=None):
try:
elem = lxml.etree.fromstring(text, parser)
except lxml.etree.XMLSyntaxError as e:
if logger:
logger.debug(e)
raise XMLSyntaxError(e)
cleanup_namespaces(elem)
if root_tag is not None:
# validate XML
try:
path = 'schema/%s.rng' % camel_to_snake(root_tag)
if resource_stream:
# python < 3.9
stream = resource_stream(__name__, path)
else:
stream = importlib.resources.files(
__name__.rsplit('.', 1)[0]).joinpath(path).open('rb')
with stream as rng:
lxml.etree.RelaxNG(file=rng).assertValid(elem)
except IOError as e:
# Probably, the schema file doesn't exist.
logger = logger or get_logger({}, log_route='s3api')
logger.error(e)
raise
except lxml.etree.DocumentInvalid as e:
if logger:
logger.debug(e)
raise DocumentInvalid(e)
return elem
def tostring(tree, use_s3ns=True, xml_declaration=True):
if use_s3ns:
nsmap = tree.nsmap.copy()
nsmap[None] = XMLNS_S3
root = Element(tree.tag, attrib=tree.attrib, nsmap=nsmap)
root.text = tree.text
root.extend(deepcopy(list(tree)))
tree = root
return lxml.etree.tostring(tree, xml_declaration=xml_declaration,
encoding='UTF-8')
class _Element(lxml.etree.ElementBase):
"""
    Wrapper Element class around lxml.etree.Element to support
    UTF-8 encoded non-ASCII strings as text.
    Why do we need this?
    The original lxml.etree.Element supports only unicode text.
    That hurts maintainability because we would have to call a lot of
    encode/decode methods to apply the account/container/object name
    (i.e. PATH_INFO) to each Element instance. With this class, we can
    remove such redundant code from the s3api middleware.
"""
def __init__(self, *args, **kwargs):
# pylint: disable-msg=E1002
super(_Element, self).__init__(*args, **kwargs)
@property
def text(self):
"""
utf-8 wrapper property of lxml.etree.Element.text
"""
if six.PY2:
return utf8encode(lxml.etree.ElementBase.text.__get__(self))
return lxml.etree.ElementBase.text.__get__(self)
@text.setter
def text(self, value):
lxml.etree.ElementBase.text.__set__(self, utf8decode(value))
parser_lookup = lxml.etree.ElementDefaultClassLookup(element=_Element)
parser = lxml.etree.XMLParser(resolve_entities=False, no_network=True)
parser.set_element_class_lookup(parser_lookup)
Element = parser.makeelement
SubElement = lxml.etree.SubElement
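# A small round-trip sketch of the helpers above (tag and text assumed):
#   elem = Element('LocationConstraint')
#   elem.text = 'us-east-1'
#   xml = tostring(elem)   # serialized with the default S3 namespace applied
#   fromstring(xml).text   # -> 'us-east-1'; cleanup_namespaces() strips the
#                          # namespace from the tag again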
| swift-master | swift/common/middleware/s3api/etree.py |
# Copyright (c) 2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
---------------------------
s3api's ACLs implementation
---------------------------
s3api uses a different implementation approach to achieve S3 ACLs.
First, we should understand what we have to design to achieve real S3 ACLs.
The ACL model of s3api (and of real S3) is as follows::
AccessControlPolicy:
Owner:
AccessControlList:
Grant[n]:
(Grantee, Permission)
Each bucket or object has its own ACL consisting of an Owner and an
AccessControlList. An AccessControlList can contain several Grants.
By default, the AccessControlList has only one Grant, allowing FULL_CONTROL
to the owner. Each Grant holds a single (Grantee, Permission) pair, where
the Grantee is the user (or user group) allowed the given permission.
This module defines the groups and the relation tree.
For more detail on the S3 ACL model, see the official documentation at
http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html
"""
from functools import partial
import six
from swift.common.utils import json
from swift.common.middleware.s3api.s3response import InvalidArgument, \
MalformedACLError, S3NotImplemented, InvalidRequest, AccessDenied
from swift.common.middleware.s3api.etree import Element, SubElement, tostring
from swift.common.middleware.s3api.utils import sysmeta_header
from swift.common.middleware.s3api.exception import InvalidSubresource
XMLNS_XSI = 'http://www.w3.org/2001/XMLSchema-instance'
PERMISSIONS = ['FULL_CONTROL', 'READ', 'WRITE', 'READ_ACP', 'WRITE_ACP']
LOG_DELIVERY_USER = '.log_delivery'
def encode_acl(resource, acl):
"""
Encode an ACL instance to Swift metadata.
Given a resource type and an ACL instance, this method returns HTTP
headers, which can be used for Swift metadata.
"""
header_value = {"Owner": acl.owner.id}
grants = []
for grant in acl.grants:
grant = {"Permission": grant.permission,
"Grantee": str(grant.grantee)}
grants.append(grant)
header_value.update({"Grant": grants})
headers = {}
key = sysmeta_header(resource, 'acl')
headers[key] = json.dumps(header_value, separators=(',', ':'))
return headers
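# For example (owner value assumed), a container ACL owned by 'test:tester'
# with a single FULL_CONTROL grant for that same user encodes to roughly:
#   {'x-container-sysmeta-s3api-acl':
#    '{"Owner":"test:tester","Grant":[{"Permission":"FULL_CONTROL",'
#    '"Grantee":"test:tester"}]}'}
# and decode_acl() below reverses the mapping.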
def decode_acl(resource, headers, allow_no_owner):
"""
Decode Swift metadata to an ACL instance.
Given a resource type and HTTP headers, this method returns an ACL
instance.
"""
value = ''
key = sysmeta_header(resource, 'acl')
if key in headers:
value = headers[key]
if value == '':
        # FIXME: when the value is empty or not a dict instance, we would
        # like to use an Owner of None. However, later code that references
        # the Owner's instance variables would then fail, so we return
        # Owner(None, None) instead.
return ACL(Owner(None, None), [], True, allow_no_owner)
try:
encode_value = json.loads(value)
if not isinstance(encode_value, dict):
return ACL(Owner(None, None), [], True, allow_no_owner)
id = None
name = None
grants = []
if 'Owner' in encode_value:
id = encode_value['Owner']
name = encode_value['Owner']
if 'Grant' in encode_value:
for grant in encode_value['Grant']:
grantee = None
# pylint: disable-msg=E1101
for group in Group.__subclasses__():
if group.__name__ == grant['Grantee']:
grantee = group()
if not grantee:
grantee = User(grant['Grantee'])
permission = grant['Permission']
grants.append(Grant(grantee, permission))
return ACL(Owner(id, name), grants, True, allow_no_owner)
except Exception as e:
raise InvalidSubresource((resource, 'acl', value), e)
class Grantee(object):
"""
Base class for grantee.
Methods:
* init: create a Grantee instance
* elem: create an ElementTree from itself
Static Methods:
* from_header: convert a grantee string in the HTTP header
                   to a Grantee instance.
    * from_elem: convert an ElementTree to a Grantee instance.
"""
# Needs confirmation whether we really need these methods or not.
# * encode (method): create a JSON which includes whole own elements
# * encode_from_elem (static method): convert from an ElementTree to a JSON
# * elem_from_json (static method): convert from a JSON to an ElementTree
    # * from_json (static method): convert a JSON string to a Grantee
# instance.
def __contains__(self, key):
"""
        The key argument is an S3 user id. This method checks that the user id
belongs to this class.
"""
raise S3NotImplemented()
def elem(self):
"""
Get an etree element of this instance.
"""
raise S3NotImplemented()
@staticmethod
def from_elem(elem):
type = elem.get('{%s}type' % XMLNS_XSI)
if type == 'CanonicalUser':
value = elem.find('./ID').text
return User(value)
elif type == 'Group':
value = elem.find('./URI').text
subclass = get_group_subclass_from_uri(value)
return subclass()
elif type == 'AmazonCustomerByEmail':
raise S3NotImplemented()
else:
raise MalformedACLError()
@staticmethod
def from_header(grantee):
"""
        Convert a grantee string in the HTTP header to a Grantee instance.
"""
grantee_type, value = grantee.split('=', 1)
grantee_type = grantee_type.lower()
value = value.strip('"\'')
if grantee_type == 'id':
return User(value)
elif grantee_type == 'emailaddress':
raise S3NotImplemented()
elif grantee_type == 'uri':
# return a subclass instance of Group class
subclass = get_group_subclass_from_uri(value)
return subclass()
else:
raise InvalidArgument(grantee_type, value,
'Argument format not recognized')
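    # For example (values assumed), x-amz-grant-* header values parse as:
    #   from_header('id="test:tester"') -> User('test:tester')
    #   from_header('uri="http://acs.amazonaws.com/groups/global/AllUsers"')
    #       -> an AllUsers() group instance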
class User(Grantee):
"""
Canonical user class for S3 accounts.
"""
type = 'CanonicalUser'
def __init__(self, name):
self.id = name
self.display_name = name
def __contains__(self, key):
return key == self.id
def elem(self):
elem = Element('Grantee', nsmap={'xsi': XMLNS_XSI})
elem.set('{%s}type' % XMLNS_XSI, self.type)
SubElement(elem, 'ID').text = self.id
SubElement(elem, 'DisplayName').text = self.display_name
return elem
def __str__(self):
return self.display_name
def __lt__(self, other):
if not isinstance(other, User):
return NotImplemented
return self.id < other.id
class Owner(object):
"""
Owner class for S3 accounts
"""
def __init__(self, id, name):
self.id = id
if not (name is None or isinstance(name, six.string_types)):
raise TypeError('name must be a string or None')
self.name = name
def get_group_subclass_from_uri(uri):
"""
Convert a URI to one of the predefined groups.
"""
for group in Group.__subclasses__(): # pylint: disable-msg=E1101
if group.uri == uri:
return group
raise InvalidArgument('uri', uri, 'Invalid group uri')
class Group(Grantee):
"""
Base class for Amazon S3 Predefined Groups
"""
type = 'Group'
uri = ''
def __init__(self):
        # Explicit no-op initializer to make clear there is nothing to set up
pass
def elem(self):
elem = Element('Grantee', nsmap={'xsi': XMLNS_XSI})
elem.set('{%s}type' % XMLNS_XSI, self.type)
SubElement(elem, 'URI').text = self.uri
return elem
def __str__(self):
return self.__class__.__name__
def canned_acl_grantees(bucket_owner, object_owner=None):
"""
    A mapping of canned ACL names to the predefined grants supported by
    AWS S3.
"""
owner = object_owner or bucket_owner
return {
'private': [
('FULL_CONTROL', User(owner.name)),
],
'public-read': [
('READ', AllUsers()),
('FULL_CONTROL', User(owner.name)),
],
'public-read-write': [
('READ', AllUsers()),
('WRITE', AllUsers()),
('FULL_CONTROL', User(owner.name)),
],
'authenticated-read': [
('READ', AuthenticatedUsers()),
('FULL_CONTROL', User(owner.name)),
],
'bucket-owner-read': [
('READ', User(bucket_owner.name)),
('FULL_CONTROL', User(owner.name)),
],
'bucket-owner-full-control': [
('FULL_CONTROL', User(owner.name)),
('FULL_CONTROL', User(bucket_owner.name)),
],
'log-delivery-write': [
('WRITE', LogDelivery()),
('READ_ACP', LogDelivery()),
('FULL_CONTROL', User(owner.name)),
],
}
class AuthenticatedUsers(Group):
"""
This group represents all AWS accounts. Access permission to this group
allows any AWS account to access the resource. However, all requests must
be signed (authenticated).
"""
uri = 'http://acs.amazonaws.com/groups/global/AuthenticatedUsers'
def __contains__(self, key):
# s3api handles only signed requests.
return True
class AllUsers(Group):
"""
Access permission to this group allows anyone to access the resource. The
requests can be signed (authenticated) or unsigned (anonymous). Unsigned
    requests omit the Authorization header in the request.
    Note: s3api regards unsigned requests as Swift API accesses, and passes
    them through to Swift. As a result, AllUsers behaves exactly the same as
    AuthenticatedUsers.
"""
uri = 'http://acs.amazonaws.com/groups/global/AllUsers'
def __contains__(self, key):
return True
class LogDelivery(Group):
"""
    WRITE and READ_ACP permissions on a bucket enable this group to write
server access logs to the bucket.
"""
uri = 'http://acs.amazonaws.com/groups/s3/LogDelivery'
def __contains__(self, key):
if ':' in key:
tenant, user = key.split(':', 1)
else:
user = key
return user == LOG_DELIVERY_USER
class Grant(object):
"""
Grant Class which includes both Grantee and Permission
"""
def __init__(self, grantee, permission):
"""
:param grantee: a grantee class or its subclass
:param permission: string
"""
if permission.upper() not in PERMISSIONS:
raise S3NotImplemented()
if not isinstance(grantee, Grantee):
raise ValueError()
self.grantee = grantee
self.permission = permission
@classmethod
def from_elem(cls, elem):
"""
Convert an ElementTree to an ACL instance
"""
grantee = Grantee.from_elem(elem.find('./Grantee'))
permission = elem.find('./Permission').text
return cls(grantee, permission)
def elem(self):
"""
Create an etree element.
"""
elem = Element('Grant')
elem.append(self.grantee.elem())
SubElement(elem, 'Permission').text = self.permission
return elem
def allow(self, grantee, permission):
return permission == self.permission and grantee in self.grantee
class ACL(object):
"""
S3 ACL class.
Refs (S3 API - acl-overview:
http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html):
The sample ACL includes an Owner element identifying the owner via the
AWS account's canonical user ID. The Grant element identifies the grantee
(either an AWS account or a predefined group), and the permission granted.
This default ACL has one Grant element for the owner. You grant permissions
by adding Grant elements, each grant identifying the grantee and the
permission.
"""
metadata_name = 'acl'
root_tag = 'AccessControlPolicy'
max_xml_length = 200 * 1024
def __init__(self, owner, grants=None, s3_acl=False, allow_no_owner=False):
"""
:param owner: Owner instance for ACL instance
:param grants: a list of Grant instances
:param s3_acl: boolean indicates whether this class is used under
s3_acl is True or False (from s3api middleware configuration)
:param allow_no_owner: boolean indicates this ACL instance can be
handled when no owner information found
"""
self.owner = owner
self.grants = grants or []
self.s3_acl = s3_acl
self.allow_no_owner = allow_no_owner
def __bytes__(self):
return tostring(self.elem())
def __repr__(self):
if six.PY2:
return self.__bytes__()
return self.__bytes__().decode('utf8')
@classmethod
def from_elem(cls, elem, s3_acl=False, allow_no_owner=False):
"""
Convert an ElementTree to an ACL instance
"""
id = elem.find('./Owner/ID').text
try:
name = elem.find('./Owner/DisplayName').text
except AttributeError:
name = id
grants = [Grant.from_elem(e)
for e in elem.findall('./AccessControlList/Grant')]
return cls(Owner(id, name), grants, s3_acl, allow_no_owner)
def elem(self):
"""
        Create an ElementTree element from this ACL instance.
"""
elem = Element(self.root_tag)
owner = SubElement(elem, 'Owner')
SubElement(owner, 'ID').text = self.owner.id
SubElement(owner, 'DisplayName').text = self.owner.name
SubElement(elem, 'AccessControlList').extend(
g.elem() for g in self.grants
)
return elem
def check_owner(self, user_id):
"""
Check that the user is an owner.
"""
if not self.s3_acl:
# Ignore S3api ACL.
return
if not self.owner.id:
if self.allow_no_owner:
# No owner means public.
return
raise AccessDenied()
if user_id != self.owner.id:
raise AccessDenied()
def check_permission(self, user_id, permission):
"""
Check that the user has a permission.
"""
if not self.s3_acl:
# Ignore S3api ACL.
return
try:
# owners have full control permission
self.check_owner(user_id)
return
except AccessDenied:
pass
if permission in PERMISSIONS:
for g in self.grants:
if g.allow(user_id, 'FULL_CONTROL') or \
g.allow(user_id, permission):
return
raise AccessDenied()
@classmethod
def from_headers(cls, headers, bucket_owner, object_owner=None,
as_private=True):
"""
Convert HTTP headers to an ACL instance.
"""
grants = []
try:
for key, value in headers.items():
if key.lower().startswith('x-amz-grant-'):
permission = key[len('x-amz-grant-'):]
permission = permission.upper().replace('-', '_')
if permission not in PERMISSIONS:
continue
for grantee in value.split(','):
grants.append(
Grant(Grantee.from_header(grantee), permission))
if 'x-amz-acl' in headers:
try:
acl = headers['x-amz-acl']
if len(grants) > 0:
err_msg = 'Specifying both Canned ACLs and Header ' \
'Grants is not allowed'
raise InvalidRequest(err_msg)
grantees = canned_acl_grantees(
bucket_owner, object_owner)[acl]
for permission, grantee in grantees:
grants.append(Grant(grantee, permission))
except KeyError:
                    # expects the canned_acl_grantees()[...] lookup above
                    # to raise the KeyError
raise InvalidArgument('x-amz-acl', headers['x-amz-acl'])
except (KeyError, ValueError):
            # TODO: reconsider whether we should really catch this
            # exception sequence
raise InvalidRequest()
if len(grants) == 0:
# No ACL headers
if as_private:
return ACLPrivate(bucket_owner, object_owner)
else:
return None
return cls(object_owner or bucket_owner, grants)
class CannedACL(object):
"""
A dict-like object that returns canned ACL.
"""
def __getitem__(self, key):
def acl(key, bucket_owner, object_owner=None,
s3_acl=False, allow_no_owner=False):
grants = []
grantees = canned_acl_grantees(bucket_owner, object_owner)[key]
for permission, grantee in grantees:
grants.append(Grant(grantee, permission))
return ACL(object_owner or bucket_owner,
grants, s3_acl, allow_no_owner)
return partial(acl, key)
canned_acl = CannedACL()
ACLPrivate = canned_acl['private']
ACLPublicRead = canned_acl['public-read']
ACLPublicReadWrite = canned_acl['public-read-write']
ACLAuthenticatedRead = canned_acl['authenticated-read']
ACLBucketOwnerRead = canned_acl['bucket-owner-read']
ACLBucketOwnerFullControl = canned_acl['bucket-owner-full-control']
ACLLogDeliveryWrite = canned_acl['log-delivery-write']
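# Usage sketch for the factories above (owner value assumed):
#   owner = Owner('test:tester', 'test:tester')
#   acl = ACLPublicRead(owner, s3_acl=True)
#   [(g.permission, str(g.grantee)) for g in acl.grants]
#   -> [('READ', 'AllUsers'), ('FULL_CONTROL', 'test:tester')]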
| swift-master | swift/common/middleware/s3api/subresource.py |
# Copyright (c) 2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
------------
Acl Handlers
------------
Why do we need this
^^^^^^^^^^^^^^^^^^^
These handlers keep the controller classes clean and make it easy to
customize the acl checking algorithm for each controller.
Basic Information
^^^^^^^^^^^^^^^^^
BaseAclHandler wraps basic Acl handling.
(i.e. it will check acl from ACL_MAP by using HEAD)
How to extend
^^^^^^^^^^^^^
Make a handler with the name of the controller.
(e.g. BucketAclHandler is for BucketController)
It consists of method(s) for the actual S3 methods of the controller, as
follows.
Example::
class BucketAclHandler(BaseAclHandler):
def PUT:
<< put acl handling algorithms here for PUT bucket >>
.. note::
  If the method DOESN'T need _get_response to be called again outside of
  the acl check, the method has to return the response it needs at
  the end of the method.
"""
from swift.common.middleware.s3api.subresource import ACL, Owner, encode_acl
from swift.common.middleware.s3api.s3response import MissingSecurityHeader, \
MalformedACLError, UnexpectedContent, AccessDenied
from swift.common.middleware.s3api.etree import fromstring, XMLSyntaxError, \
DocumentInvalid
from swift.common.middleware.s3api.utils import MULTIUPLOAD_SUFFIX, \
sysmeta_header
def get_acl_handler(controller_name):
for base_klass in [BaseAclHandler, MultiUploadAclHandler]:
# pylint: disable-msg=E1101
for handler in base_klass.__subclasses__():
handler_suffix_len = len('AclHandler') \
if not handler.__name__ == 'S3AclHandler' else len('Handler')
if handler.__name__[:-handler_suffix_len] == controller_name:
return handler
return BaseAclHandler
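# For example (controller names taken from the controllers package):
#   get_acl_handler('Bucket')     -> BucketAclHandler
#   get_acl_handler('S3Acl')      -> S3AclHandler (note the shorter suffix)
#   get_acl_handler('Versioning') -> BaseAclHandler (no specific handler)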
class BaseAclHandler(object):
"""
    BaseAclHandler: Handles ACLs for basic requests mapped in ACL_MAP
"""
def __init__(self, req, logger, container=None, obj=None, headers=None):
self.req = req
self.container = req.container_name if container is None else container
self.obj = req.object_name if obj is None else obj
self.method = req.environ['REQUEST_METHOD']
self.user_id = self.req.user_id
self.headers = req.headers if headers is None else headers
self.logger = logger
def request_with(self, container, obj, headers):
return type(self)(self.req, self.logger,
container=container, obj=obj, headers=headers)
def handle_acl(self, app, method, container=None, obj=None, headers=None):
method = method or self.method
ah = self.request_with(container, obj, headers)
if hasattr(ah, method):
return getattr(ah, method)(app)
else:
return ah._handle_acl(app, method)
def _handle_acl(self, app, sw_method, container=None, obj=None,
permission=None, headers=None):
"""
General acl handling method.
        This method expects Request._get_response() to be called outside of
        this method, so it returns a response only when sw_method
        is HEAD.
"""
container = self.container if container is None else container
obj = self.obj if obj is None else obj
sw_method = sw_method or self.req.environ['REQUEST_METHOD']
resource = 'object' if obj else 'container'
headers = self.headers if headers is None else headers
self.logger.debug(
'checking permission: %s %s %s %s' %
(container, obj, sw_method, dict(headers)))
if not container:
return
if not permission and (self.method, sw_method, resource) in ACL_MAP:
acl_check = ACL_MAP[(self.method, sw_method, resource)]
resource = acl_check.get('Resource') or resource
permission = acl_check['Permission']
if not permission:
self.logger.debug(
'%s %s %s %s' % (container, obj, sw_method, headers))
raise Exception('No permission to be checked exists')
if resource == 'object':
version_id = self.req.params.get('versionId')
if version_id is None:
query = {}
else:
query = {'version-id': version_id}
resp = self.req.get_acl_response(app, 'HEAD',
container, obj,
headers, query=query)
acl = resp.object_acl
elif resource == 'container':
resp = self.req.get_acl_response(app, 'HEAD',
container, '')
acl = resp.bucket_acl
try:
acl.check_permission(self.user_id, permission)
except Exception as e:
self.logger.debug(acl)
            self.logger.debug('permission denied: %s %s %s' %
(e, self.user_id, permission))
raise
if sw_method == 'HEAD':
return resp
def get_acl(self, headers, body, bucket_owner, object_owner=None):
"""
Get ACL instance from S3 (e.g. x-amz-grant) headers or S3 acl xml body.
"""
acl = ACL.from_headers(headers, bucket_owner, object_owner,
as_private=False)
if acl is None:
# Get acl from request body if possible.
if not body:
raise MissingSecurityHeader(missing_header_name='x-amz-acl')
try:
elem = fromstring(body, ACL.root_tag)
acl = ACL.from_elem(
elem, True, self.req.conf.allow_no_owner)
except(XMLSyntaxError, DocumentInvalid):
raise MalformedACLError()
except Exception as e:
self.logger.error(e)
raise
else:
if body:
# Specifying grant with both header and xml is not allowed.
raise UnexpectedContent()
return acl
class BucketAclHandler(BaseAclHandler):
"""
BucketAclHandler: Handler for BucketController
"""
def DELETE(self, app):
if self.container.endswith(MULTIUPLOAD_SUFFIX):
            # deleting the multiupload container doesn't need ACLs anyway,
            # because cleanup depends on the result of a GET on the segment
            # container
pass
else:
return self._handle_acl(app, 'DELETE')
def HEAD(self, app):
if self.method == 'DELETE':
return self._handle_acl(app, 'DELETE')
else:
return self._handle_acl(app, 'HEAD')
def GET(self, app):
if self.method == 'DELETE' and \
self.container.endswith(MULTIUPLOAD_SUFFIX):
pass
else:
return self._handle_acl(app, 'GET')
def PUT(self, app):
req_acl = ACL.from_headers(self.req.headers,
Owner(self.user_id, self.user_id))
if not self.req.environ.get('swift_owner'):
raise AccessDenied()
        # To avoid overwriting an existing bucket's ACL, we send the PUT
        # request first, before setting the ACL, to make sure that the target
        # container does not already exist.
self.req.get_acl_response(app, 'PUT', self.container)
# update metadata
self.req.bucket_acl = req_acl
        # FIXME: if this request fails, a bucket with no ACL may be left
        # behind.
return self.req.get_acl_response(app, 'POST')
class ObjectAclHandler(BaseAclHandler):
"""
ObjectAclHandler: Handler for ObjectController
"""
def HEAD(self, app):
        # No object permission check is needed for DELETE Object
if self.method != 'DELETE':
return self._handle_acl(app, 'HEAD')
def PUT(self, app):
b_resp = self._handle_acl(app, 'HEAD', obj='')
req_acl = ACL.from_headers(self.req.headers,
b_resp.bucket_acl.owner,
Owner(self.user_id, self.user_id))
self.req.object_acl = req_acl
class S3AclHandler(BaseAclHandler):
"""
S3AclHandler: Handler for S3AclController
"""
def HEAD(self, app):
self._handle_acl(app, 'HEAD', permission='READ_ACP')
def GET(self, app):
self._handle_acl(app, 'HEAD', permission='READ_ACP')
def PUT(self, app):
if self.req.is_object_request:
b_resp = self.req.get_acl_response(app, 'HEAD', obj='')
o_resp = self._handle_acl(app, 'HEAD', permission='WRITE_ACP')
req_acl = self.get_acl(self.req.headers,
self.req.xml(ACL.max_xml_length),
b_resp.bucket_acl.owner,
o_resp.object_acl.owner)
# Don't change the owner of the resource by PUT acl request.
o_resp.object_acl.check_owner(req_acl.owner.id)
for g in req_acl.grants:
self.logger.debug(
'Grant %s %s permission on the object /%s/%s' %
(g.grantee, g.permission, self.req.container_name,
self.req.object_name))
self.req.object_acl = req_acl
else:
self._handle_acl(app, self.method)
def POST(self, app):
if self.req.is_bucket_request:
resp = self._handle_acl(app, 'HEAD', permission='WRITE_ACP')
req_acl = self.get_acl(self.req.headers,
self.req.xml(ACL.max_xml_length),
resp.bucket_acl.owner)
# Don't change the owner of the resource by PUT acl request.
resp.bucket_acl.check_owner(req_acl.owner.id)
for g in req_acl.grants:
self.logger.debug(
'Grant %s %s permission on the bucket /%s' %
(g.grantee, g.permission, self.req.container_name))
self.req.bucket_acl = req_acl
else:
self._handle_acl(app, self.method)
class MultiObjectDeleteAclHandler(BaseAclHandler):
"""
MultiObjectDeleteAclHandler: Handler for MultiObjectDeleteController
"""
def HEAD(self, app):
# Only bucket write acl is required
if not self.obj:
return self._handle_acl(app, 'HEAD')
def DELETE(self, app):
# Only bucket write acl is required
pass
class MultiUploadAclHandler(BaseAclHandler):
"""
    Multipart upload operations need ACL checking just once, against the BASE
    container, so MultiUploadAclHandler extends BaseAclHandler to check the
    ACL only when the verb is defined. The verb should be defined in the first
    request made to backend Swift for an incoming request.
Basic Rules:
- BASE container name is always w/o 'MULTIUPLOAD_SUFFIX'
    - The check can happen at any time, but should happen as soon as possible.
========== ====== ============= ==========
Controller Verb CheckResource Permission
========== ====== ============= ==========
Part PUT Container WRITE
Uploads GET Container READ
Uploads POST Container WRITE
Upload GET Container READ
Upload DELETE Container WRITE
Upload POST Container WRITE
========== ====== ============= ==========
"""
def __init__(self, req, logger, **kwargs):
super(MultiUploadAclHandler, self).__init__(req, logger, **kwargs)
self.acl_checked = False
def handle_acl(self, app, method, container=None, obj=None, headers=None):
method = method or self.method
ah = self.request_with(container, obj, headers)
        # Multipart upload requests generally don't need an ACL check here.
if hasattr(ah, method):
return getattr(ah, method)(app)
def HEAD(self, app):
# For _check_upload_info
self._handle_acl(app, 'HEAD', self.container, '')
class PartAclHandler(MultiUploadAclHandler):
"""
PartAclHandler: Handler for PartController
"""
def __init__(self, req, logger, **kwargs):
# pylint: disable-msg=E1003
super(MultiUploadAclHandler, self).__init__(req, logger, **kwargs)
def HEAD(self, app):
if self.container.endswith(MULTIUPLOAD_SUFFIX):
# For _check_upload_info
container = self.container[:-len(MULTIUPLOAD_SUFFIX)]
self._handle_acl(app, 'HEAD', container, '')
else:
# For check_copy_source
return self._handle_acl(app, 'HEAD', self.container, self.obj)
class UploadsAclHandler(MultiUploadAclHandler):
"""
UploadsAclHandler: Handler for UploadsController
"""
def handle_acl(self, app, method, *args, **kwargs):
method = method or self.method
if hasattr(self, method):
return getattr(self, method)(app)
else:
pass
def GET(self, app):
# List Multipart Upload
self._handle_acl(app, 'GET', self.container, '')
def PUT(self, app):
if not self.acl_checked:
resp = self._handle_acl(app, 'HEAD', obj='')
req_acl = ACL.from_headers(self.req.headers,
resp.bucket_acl.owner,
Owner(self.user_id, self.user_id))
acl_headers = encode_acl('object', req_acl)
self.req.headers[sysmeta_header('object', 'tmpacl')] = \
acl_headers[sysmeta_header('object', 'acl')]
self.acl_checked = True
class UploadAclHandler(MultiUploadAclHandler):
"""
UploadAclHandler: Handler for UploadController
"""
def handle_acl(self, app, method, *args, **kwargs):
method = method or self.method
if hasattr(self, method):
return getattr(self, method)(app)
else:
pass
def HEAD(self, app):
# FIXME: GET HEAD case conflicts with GET service
method = 'GET' if self.method == 'GET' else 'HEAD'
self._handle_acl(app, method, self.container, '')
def PUT(self, app):
container = self.req.container_name + MULTIUPLOAD_SUFFIX
obj = '%s/%s' % (self.obj, self.req.params['uploadId'])
resp = self.req._get_response(app, 'HEAD', container, obj)
self.req.headers[sysmeta_header('object', 'acl')] = \
resp.sysmeta_headers.get(sysmeta_header('object', 'tmpacl'))
"""
ACL_MAP =
{
('<s3_method>', '<swift_method>', '<swift_resource>'):
{'Resource': '<check_resource>',
'Permission': '<check_permission>'},
...
}
s3_method: Method of S3 Request from user to s3api
swift_method: Method of Swift Request from s3api to swift
swift_resource: Resource of Swift Request from s3api to swift
check_resource: <container/object>
check_permission: <OWNER/READ/WRITE/READ_ACP/WRITE_ACP>
"""
ACL_MAP = {
# HEAD Bucket
('HEAD', 'HEAD', 'container'):
{'Permission': 'READ'},
# GET Service
('GET', 'HEAD', 'container'):
{'Permission': 'OWNER'},
# GET Bucket, List Parts, List Multipart Upload
('GET', 'GET', 'container'):
{'Permission': 'READ'},
# PUT Object, PUT Object Copy
('PUT', 'HEAD', 'container'):
{'Permission': 'WRITE'},
# DELETE Bucket
('DELETE', 'DELETE', 'container'):
{'Permission': 'OWNER'},
# HEAD Object
('HEAD', 'HEAD', 'object'):
{'Permission': 'READ'},
# GET Object
('GET', 'GET', 'object'):
{'Permission': 'READ'},
# PUT Object Copy, Upload Part Copy
('PUT', 'HEAD', 'object'):
{'Permission': 'READ'},
# Abort Multipart Upload
('DELETE', 'HEAD', 'container'):
{'Permission': 'WRITE'},
# Delete Object
('DELETE', 'DELETE', 'object'):
{'Resource': 'container',
'Permission': 'WRITE'},
# Complete Multipart Upload, DELETE Multiple Objects,
# Initiate Multipart Upload
('POST', 'HEAD', 'container'):
{'Permission': 'WRITE'},
# Versioning
('PUT', 'POST', 'container'):
{'Permission': 'WRITE'},
('DELETE', 'GET', 'container'):
{'Permission': 'WRITE'},
}
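# Example lookup (grounded in the table above): a PUT Object request, which
# s3api maps to a HEAD on the container, checks the WRITE permission:
#   ACL_MAP[('PUT', 'HEAD', 'container')] -> {'Permission': 'WRITE'}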
| swift-master | swift/common/middleware/s3api/acl_handlers.py |
# Copyright (c) 2010-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from swift.common.utils import public
from swift.common.middleware.s3api.controllers.base import Controller, \
bucket_operation
from swift.common.middleware.s3api.etree import Element, tostring
from swift.common.middleware.s3api.s3response import HTTPOk, S3NotImplemented,\
NoLoggingStatusForKey
class LoggingStatusController(Controller):
"""
Handles the following APIs:
* GET Bucket logging
* PUT Bucket logging
Those APIs are logged as LOGGING_STATUS operations in the S3 server log.
"""
@public
@bucket_operation(err_resp=NoLoggingStatusForKey)
def GET(self, req):
"""
Handles GET Bucket logging.
"""
req.get_response(self.app, method='HEAD')
# logging disabled
elem = Element('BucketLoggingStatus')
body = tostring(elem)
return HTTPOk(body=body, content_type='application/xml')
@public
@bucket_operation(err_resp=NoLoggingStatusForKey)
def PUT(self, req):
"""
Handles PUT Bucket logging.
"""
raise S3NotImplemented()
| swift-master | swift/common/middleware/s3api/controllers/logging.py |
# Copyright (c) 2010-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from swift.common.swob import bytes_to_wsgi
from swift.common.utils import json, public
from swift.common.middleware.s3api.controllers.base import Controller
from swift.common.middleware.s3api.etree import Element, SubElement, tostring
from swift.common.middleware.s3api.s3response import HTTPOk, AccessDenied, \
NoSuchBucket
from swift.common.middleware.s3api.utils import validate_bucket_name
class ServiceController(Controller):
"""
Handles account level requests.
"""
@public
def GET(self, req):
"""
Handle GET Service request
"""
resp = req.get_response(self.app, query={'format': 'json'})
containers = json.loads(resp.body)
containers = filter(
lambda item: validate_bucket_name(
item['name'], self.conf.dns_compliant_bucket_names),
containers)
# we don't keep the creation time of a bucket (s3cmd doesn't
# work without that) so we use something bogus.
elem = Element('ListAllMyBucketsResult')
owner = SubElement(elem, 'Owner')
SubElement(owner, 'ID').text = req.user_id
SubElement(owner, 'DisplayName').text = req.user_id
buckets = SubElement(elem, 'Buckets')
for c in containers:
if self.conf.s3_acl and self.conf.check_bucket_owner:
container = bytes_to_wsgi(c['name'].encode('utf8'))
try:
req.get_response(self.app, 'HEAD', container)
except AccessDenied:
continue
except NoSuchBucket:
continue
bucket = SubElement(buckets, 'Bucket')
SubElement(bucket, 'Name').text = c['name']
SubElement(bucket, 'CreationDate').text = \
'2009-02-03T16:45:09.000Z'
body = tostring(elem)
return HTTPOk(content_type='application/xml', body=body)
| swift-master | swift/common/middleware/s3api/controllers/service.py |
# Copyright (c) 2010-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from base64 import standard_b64encode as b64encode
from base64 import standard_b64decode as b64decode
import six
from six.moves.urllib.parse import quote
from swift.common import swob
from swift.common.http import HTTP_OK
from swift.common.middleware.versioned_writes.object_versioning import \
DELETE_MARKER_CONTENT_TYPE
from swift.common.utils import json, public, config_true_value, Timestamp, \
cap_length
from swift.common.registry import get_swift_info
from swift.common.middleware.s3api.controllers.base import Controller
from swift.common.middleware.s3api.etree import Element, SubElement, \
tostring, fromstring, XMLSyntaxError, DocumentInvalid
from swift.common.middleware.s3api.s3response import \
HTTPOk, S3NotImplemented, InvalidArgument, \
MalformedXML, InvalidLocationConstraint, NoSuchBucket, \
BucketNotEmpty, VersionedBucketNotEmpty, InternalError, \
ServiceUnavailable, NoSuchKey
from swift.common.middleware.s3api.utils import MULTIUPLOAD_SUFFIX, S3Timestamp
MAX_PUT_BUCKET_BODY_SIZE = 10240
class BucketController(Controller):
"""
    Handles bucket requests.
"""
def _delete_segments_bucket(self, req):
"""
        Before deleting a bucket, delete its segments bucket if it exists.
"""
container = req.container_name + MULTIUPLOAD_SUFFIX
marker = ''
seg = ''
try:
resp = req.get_response(self.app, 'HEAD')
if int(resp.sw_headers['X-Container-Object-Count']) > 0:
if resp.sw_headers.get('X-Container-Sysmeta-Versions-Enabled'):
raise VersionedBucketNotEmpty()
else:
raise BucketNotEmpty()
            # FIXME: This extra HEAD prevents unexpected segment deletion,
            # but if a multipart upload completes while the segment
            # container is being cleaned up below, the completed object may
            # unfortunately end up missing its segments. To be safer, it
            # might be better to check per object whether its segments can
            # be deleted.
except NoSuchBucket:
pass
try:
while True:
# delete all segments
resp = req.get_response(self.app, 'GET', container,
query={'format': 'json',
'marker': marker})
segments = json.loads(resp.body)
for seg in segments:
try:
req.get_response(
self.app, 'DELETE', container,
swob.bytes_to_wsgi(seg['name'].encode('utf8')))
except NoSuchKey:
pass
except InternalError:
raise ServiceUnavailable()
if segments:
marker = seg['name']
else:
break
req.get_response(self.app, 'DELETE', container)
except NoSuchBucket:
return
except (BucketNotEmpty, InternalError):
raise ServiceUnavailable()
@public
def HEAD(self, req):
"""
Handle HEAD Bucket (Get Metadata) request
"""
resp = req.get_response(self.app)
return HTTPOk(headers=resp.headers)
def _parse_request_options(self, req, max_keys):
encoding_type = req.params.get('encoding-type')
if encoding_type is not None and encoding_type != 'url':
err_msg = 'Invalid Encoding Method specified in Request'
raise InvalidArgument('encoding-type', encoding_type, err_msg)
        # In order to judge whether the listing is truncated, check whether
        # the (max_keys + 1)-th element exists in swift.
query = {
'limit': max_keys + 1,
}
if 'prefix' in req.params:
query['prefix'] = swob.wsgi_to_str(req.params['prefix'])
if 'delimiter' in req.params:
query['delimiter'] = swob.wsgi_to_str(req.params['delimiter'])
fetch_owner = False
if 'versions' in req.params:
query['versions'] = swob.wsgi_to_str(req.params['versions'])
listing_type = 'object-versions'
version_marker = swob.wsgi_to_str(req.params.get(
'version-id-marker'))
if 'key-marker' in req.params:
query['marker'] = swob.wsgi_to_str(req.params['key-marker'])
if version_marker is not None:
if version_marker != 'null':
try:
Timestamp(version_marker)
except ValueError:
raise InvalidArgument(
'version-id-marker', version_marker,
'Invalid version id specified')
query['version_marker'] = version_marker
elif version_marker is not None:
err_msg = ('A version-id marker cannot be specified without '
'a key marker.')
raise InvalidArgument('version-id-marker',
version_marker, err_msg)
elif int(req.params.get('list-type', '1')) == 2:
listing_type = 'version-2'
if 'start-after' in req.params:
query['marker'] = swob.wsgi_to_str(req.params['start-after'])
# continuation-token overrides start-after
if 'continuation-token' in req.params:
decoded = b64decode(req.params['continuation-token'])
if not six.PY2:
decoded = decoded.decode('utf8')
query['marker'] = decoded
if 'fetch-owner' in req.params:
fetch_owner = config_true_value(req.params['fetch-owner'])
else:
listing_type = 'version-1'
if 'marker' in req.params:
query['marker'] = swob.wsgi_to_str(req.params['marker'])
return encoding_type, query, listing_type, fetch_owner
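    # For example (values assumed; max_keys=100 passed in from GET below,
    # with max_bucket_listing >= 100), a v2 listing request such as
    #   GET /bucket?list-type=2&prefix=photos/&delimiter=/&max-keys=100
    # parses to roughly:
    #   encoding_type=None, listing_type='version-2', fetch_owner=False,
    #   query={'limit': 101, 'prefix': 'photos/', 'delimiter': '/'}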
def _build_versions_result(self, req, objects, encoding_type,
tag_max_keys, is_truncated):
elem = Element('ListVersionsResult')
SubElement(elem, 'Name').text = req.container_name
prefix = swob.wsgi_to_str(req.params.get('prefix'))
if prefix and encoding_type == 'url':
prefix = quote(prefix)
SubElement(elem, 'Prefix').text = prefix
key_marker = swob.wsgi_to_str(req.params.get('key-marker'))
if key_marker and encoding_type == 'url':
key_marker = quote(key_marker)
SubElement(elem, 'KeyMarker').text = key_marker
SubElement(elem, 'VersionIdMarker').text = swob.wsgi_to_str(
req.params.get('version-id-marker'))
if is_truncated:
if 'name' in objects[-1]:
SubElement(elem, 'NextKeyMarker').text = \
objects[-1]['name']
SubElement(elem, 'NextVersionIdMarker').text = \
objects[-1].get('version') or 'null'
if 'subdir' in objects[-1]:
SubElement(elem, 'NextKeyMarker').text = \
objects[-1]['subdir']
SubElement(elem, 'NextVersionIdMarker').text = 'null'
SubElement(elem, 'MaxKeys').text = str(tag_max_keys)
delimiter = swob.wsgi_to_str(req.params.get('delimiter'))
if delimiter is not None:
if encoding_type == 'url':
delimiter = quote(delimiter)
SubElement(elem, 'Delimiter').text = delimiter
if encoding_type == 'url':
SubElement(elem, 'EncodingType').text = encoding_type
SubElement(elem, 'IsTruncated').text = \
'true' if is_truncated else 'false'
return elem
def _build_base_listing_element(self, req, encoding_type):
elem = Element('ListBucketResult')
SubElement(elem, 'Name').text = req.container_name
prefix = swob.wsgi_to_str(req.params.get('prefix'))
if prefix and encoding_type == 'url':
prefix = quote(prefix)
SubElement(elem, 'Prefix').text = prefix
return elem
def _build_list_bucket_result_type_one(self, req, objects, encoding_type,
tag_max_keys, is_truncated):
elem = self._build_base_listing_element(req, encoding_type)
marker = swob.wsgi_to_str(req.params.get('marker'))
if marker and encoding_type == 'url':
marker = quote(marker)
SubElement(elem, 'Marker').text = marker
if is_truncated and 'delimiter' in req.params:
if 'name' in objects[-1]:
name = objects[-1]['name']
else:
name = objects[-1]['subdir']
if encoding_type == 'url':
name = quote(name.encode('utf-8'))
SubElement(elem, 'NextMarker').text = name
# XXX: really? no NextMarker when no delimiter??
SubElement(elem, 'MaxKeys').text = str(tag_max_keys)
delimiter = swob.wsgi_to_str(req.params.get('delimiter'))
if delimiter:
if encoding_type == 'url':
delimiter = quote(delimiter)
SubElement(elem, 'Delimiter').text = delimiter
if encoding_type == 'url':
SubElement(elem, 'EncodingType').text = encoding_type
SubElement(elem, 'IsTruncated').text = \
'true' if is_truncated else 'false'
return elem
def _build_list_bucket_result_type_two(self, req, objects, encoding_type,
tag_max_keys, is_truncated):
elem = self._build_base_listing_element(req, encoding_type)
if is_truncated:
if 'name' in objects[-1]:
SubElement(elem, 'NextContinuationToken').text = \
b64encode(objects[-1]['name'].encode('utf8'))
if 'subdir' in objects[-1]:
SubElement(elem, 'NextContinuationToken').text = \
b64encode(objects[-1]['subdir'].encode('utf8'))
if 'continuation-token' in req.params:
SubElement(elem, 'ContinuationToken').text = \
swob.wsgi_to_str(req.params['continuation-token'])
start_after = swob.wsgi_to_str(req.params.get('start-after'))
if start_after is not None:
if encoding_type == 'url':
start_after = quote(start_after)
SubElement(elem, 'StartAfter').text = start_after
SubElement(elem, 'KeyCount').text = str(len(objects))
SubElement(elem, 'MaxKeys').text = str(tag_max_keys)
delimiter = swob.wsgi_to_str(req.params.get('delimiter'))
if delimiter:
if encoding_type == 'url':
delimiter = quote(delimiter)
SubElement(elem, 'Delimiter').text = delimiter
if encoding_type == 'url':
SubElement(elem, 'EncodingType').text = encoding_type
SubElement(elem, 'IsTruncated').text = \
'true' if is_truncated else 'false'
return elem
def _add_subdir(self, elem, o, encoding_type):
common_prefixes = SubElement(elem, 'CommonPrefixes')
name = o['subdir']
if encoding_type == 'url':
name = quote(name.encode('utf-8'))
SubElement(common_prefixes, 'Prefix').text = name
def _add_object(self, req, elem, o, encoding_type, listing_type,
fetch_owner):
name = o['name']
if encoding_type == 'url':
name = quote(name.encode('utf-8'))
if listing_type == 'object-versions':
if o['content_type'] == DELETE_MARKER_CONTENT_TYPE:
contents = SubElement(elem, 'DeleteMarker')
else:
contents = SubElement(elem, 'Version')
SubElement(contents, 'Key').text = name
SubElement(contents, 'VersionId').text = o.get(
'version_id') or 'null'
if 'object_versioning' in get_swift_info():
SubElement(contents, 'IsLatest').text = (
'true' if o['is_latest'] else 'false')
else:
SubElement(contents, 'IsLatest').text = 'true'
else:
contents = SubElement(elem, 'Contents')
SubElement(contents, 'Key').text = name
SubElement(contents, 'LastModified').text = \
S3Timestamp.from_isoformat(o['last_modified']).s3xmlformat
if contents.tag != 'DeleteMarker':
if 's3_etag' in o:
# New-enough MUs are already in the right format
etag = o['s3_etag']
elif 'slo_etag' in o:
# SLOs may be in something *close* to the MU format
etag = '"%s-N"' % o['slo_etag'].strip('"')
else:
# Normal objects just use the MD5
etag = o['hash']
if len(etag) < 2 or etag[::len(etag) - 1] != '""':
                    # The bare MD5 needs quote-wrapping
etag = '"%s"' % o['hash']
# This also catches sufficiently-old SLOs, but we have
# no way to identify those from container listings
# Otherwise, somebody somewhere (proxyfs, maybe?) made this
# look like an RFC-compliant ETag; we don't need to
# quote-wrap.
SubElement(contents, 'ETag').text = etag
SubElement(contents, 'Size').text = str(o['bytes'])
if fetch_owner or listing_type != 'version-2':
owner = SubElement(contents, 'Owner')
SubElement(owner, 'ID').text = req.user_id
SubElement(owner, 'DisplayName').text = req.user_id
if contents.tag != 'DeleteMarker':
SubElement(contents, 'StorageClass').text = 'STANDARD'
def _add_objects_to_result(self, req, elem, objects, encoding_type,
listing_type, fetch_owner):
for o in objects:
if 'subdir' in o:
self._add_subdir(elem, o, encoding_type)
else:
self._add_object(req, elem, o, encoding_type, listing_type,
fetch_owner)
@public
def GET(self, req):
"""
Handle GET Bucket (List Objects) request
"""
tag_max_keys = req.get_validated_param(
'max-keys', self.conf.max_bucket_listing)
# TODO: Separate max_bucket_listing and default_bucket_listing
max_keys = min(tag_max_keys, self.conf.max_bucket_listing)
encoding_type, query, listing_type, fetch_owner = \
self._parse_request_options(req, max_keys)
resp = req.get_response(self.app, query=query)
try:
objects = json.loads(resp.body)
except (TypeError, ValueError):
self.logger.error('Got non-JSON response trying to list %s: %r',
req.path, cap_length(resp.body, 60))
raise
is_truncated = max_keys > 0 and len(objects) > max_keys
objects = objects[:max_keys]
if listing_type == 'object-versions':
func = self._build_versions_result
elif listing_type == 'version-2':
func = self._build_list_bucket_result_type_two
else:
func = self._build_list_bucket_result_type_one
elem = func(req, objects, encoding_type, tag_max_keys, is_truncated)
self._add_objects_to_result(
req, elem, objects, encoding_type, listing_type, fetch_owner)
body = tostring(elem)
return HTTPOk(body=body, content_type='application/xml')
@public
def PUT(self, req):
"""
Handle PUT Bucket request
"""
xml = req.xml(MAX_PUT_BUCKET_BODY_SIZE)
if xml:
# check location
try:
elem = fromstring(
xml, 'CreateBucketConfiguration', self.logger)
location = elem.find('./LocationConstraint').text
except (XMLSyntaxError, DocumentInvalid):
raise MalformedXML()
except Exception as e:
self.logger.error(e)
raise
if location not in (self.conf.location,
self.conf.location.lower()):
# s3api cannot support multiple regions currently.
raise InvalidLocationConstraint()
resp = req.get_response(self.app)
resp.status = HTTP_OK
resp.location = '/' + req.container_name
return resp
@public
def DELETE(self, req):
"""
Handle DELETE Bucket request
"""
# NB: object_versioning is responsible for cleaning up its container
if self.conf.allow_multipart_uploads:
self._delete_segments_bucket(req)
resp = req.get_response(self.app)
return resp
@public
def POST(self, req):
"""
Handle POST Bucket request
"""
raise S3NotImplemented()
| swift-master | swift/common/middleware/s3api/controllers/bucket.py |
# Copyright (c) 2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from swift.common.middleware.s3api.controllers.base import Controller, \
UnsupportedController
from swift.common.middleware.s3api.controllers.service import ServiceController
from swift.common.middleware.s3api.controllers.bucket import BucketController
from swift.common.middleware.s3api.controllers.obj import ObjectController
from swift.common.middleware.s3api.controllers.acl import AclController
from swift.common.middleware.s3api.controllers.s3_acl import S3AclController
from swift.common.middleware.s3api.controllers.multi_delete import \
MultiObjectDeleteController
from swift.common.middleware.s3api.controllers.multi_upload import \
UploadController, PartController, UploadsController
from swift.common.middleware.s3api.controllers.location import \
LocationController
from swift.common.middleware.s3api.controllers.logging import \
LoggingStatusController
from swift.common.middleware.s3api.controllers.versioning import \
VersioningController
from swift.common.middleware.s3api.controllers.tagging import \
TaggingController
__all__ = [
'Controller',
'ServiceController',
'BucketController',
'ObjectController',
'AclController',
'S3AclController',
'MultiObjectDeleteController',
'PartController',
'UploadsController',
'UploadController',
'LocationController',
'LoggingStatusController',
'VersioningController',
'TaggingController',
'UnsupportedController',
]
| swift-master | swift/common/middleware/s3api/controllers/__init__.py |
# Copyright (c) 2010-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
from swift.common import constraints
from swift.common.http import HTTP_OK, HTTP_PARTIAL_CONTENT, HTTP_NO_CONTENT
from swift.common.request_helpers import update_etag_is_at_header
from swift.common.swob import Range, content_range_header_value, \
normalize_etag
from swift.common.utils import public, list_from_csv
from swift.common.registry import get_swift_info
from swift.common.middleware.versioned_writes.object_versioning import \
DELETE_MARKER_CONTENT_TYPE
from swift.common.middleware.s3api.utils import S3Timestamp, sysmeta_header
from swift.common.middleware.s3api.controllers.base import Controller
from swift.common.middleware.s3api.s3response import S3NotImplemented, \
InvalidRange, NoSuchKey, NoSuchVersion, InvalidArgument, HTTPNoContent, \
PreconditionFailed, KeyTooLongError
class ObjectController(Controller):
"""
Handles requests on objects
"""
def _gen_head_range_resp(self, req_range, resp):
"""
        Swift doesn't handle the Range header for HEAD requests,
        so this method generates a ranged HEAD response from the plain HEAD
        response. S3 returns a ranged HEAD response if the Range value
        satisfies the conditions described in the following document:
        - http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35
"""
length = int(resp.headers.get('Content-Length'))
try:
content_range = Range(req_range)
except ValueError:
return resp
ranges = content_range.ranges_for_length(length)
if ranges == []:
raise InvalidRange()
elif ranges:
if len(ranges) == 1:
start, end = ranges[0]
resp.headers['Content-Range'] = \
content_range_header_value(start, end, length)
resp.headers['Content-Length'] = (end - start)
resp.status = HTTP_PARTIAL_CONTENT
return resp
else:
# TODO: It is necessary to confirm whether need to respond to
# multi-part response.(e.g. bytes=0-10,20-30)
pass
return resp
def GETorHEAD(self, req):
had_match = False
for match_header in ('if-match', 'if-none-match'):
if match_header not in req.headers:
continue
had_match = True
for value in list_from_csv(req.headers[match_header]):
value = normalize_etag(value)
if value.endswith('-N'):
# Deal with fake S3-like etags for SLOs uploaded via Swift
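                    # (e.g. a hypothetical client value such as
                    # 'd41d8cd98f00b204e9800998ecf8427e-N' also allows a
                    # match on the bare MD5 without the '-N' suffix)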
req.headers[match_header] += ', ' + value[:-2]
if had_match:
# Update where to look
update_etag_is_at_header(req, sysmeta_header('object', 'etag'))
object_name = req.object_name
version_id = req.params.get('versionId')
if version_id not in ('null', None) and \
'object_versioning' not in get_swift_info():
raise S3NotImplemented()
query = {} if version_id is None else {'version-id': version_id}
if version_id not in ('null', None):
container_info = req.get_container_info(self.app)
if not container_info.get(
'sysmeta', {}).get('versions-container', ''):
# Versioning has never been enabled
raise NoSuchVersion(object_name, version_id)
resp = req.get_response(self.app, query=query)
if req.method == 'HEAD':
resp.app_iter = None
if 'x-amz-meta-deleted' in resp.headers:
raise NoSuchKey(object_name)
for key in ('content-type', 'content-language', 'expires',
'cache-control', 'content-disposition',
'content-encoding'):
if 'response-' + key in req.params:
resp.headers[key] = req.params['response-' + key]
return resp
@public
def HEAD(self, req):
"""
Handle HEAD Object request
"""
resp = self.GETorHEAD(req)
if 'range' in req.headers:
req_range = req.headers['range']
resp = self._gen_head_range_resp(req_range, resp)
return resp
@public
def GET(self, req):
"""
Handle GET Object request
"""
return self.GETorHEAD(req)
@public
def PUT(self, req):
"""
Handle PUT Object and PUT Object (Copy) request
"""
if len(req.object_name) > constraints.MAX_OBJECT_NAME_LENGTH:
raise KeyTooLongError()
# set X-Timestamp by s3api to use at copy resp body
req_timestamp = S3Timestamp.now()
req.headers['X-Timestamp'] = req_timestamp.internal
if all(h in req.headers
for h in ('X-Amz-Copy-Source', 'X-Amz-Copy-Source-Range')):
raise InvalidArgument('x-amz-copy-source-range',
req.headers['X-Amz-Copy-Source-Range'],
'Illegal copy header')
req.check_copy_source(self.app)
if not req.headers.get('Content-Type'):
# can't setdefault because it can be None for some reason
req.headers['Content-Type'] = 'binary/octet-stream'
resp = req.get_response(self.app)
if 'X-Amz-Copy-Source' in req.headers:
resp.append_copy_resp_body(req.controller_name,
req_timestamp.s3xmlformat)
# delete object metadata from response
for key in list(resp.headers.keys()):
if key.lower().startswith('x-amz-meta-'):
del resp.headers[key]
resp.status = HTTP_OK
return resp
@public
def POST(self, req):
raise S3NotImplemented()
def _restore_on_delete(self, req):
resp = req.get_response(self.app, 'GET', req.container_name, '',
query={'prefix': req.object_name,
'versions': True})
if resp.status_int != HTTP_OK:
return resp
old_versions = json.loads(resp.body)
resp = None
for item in old_versions:
if item['content_type'] == DELETE_MARKER_CONTENT_TYPE:
resp = None
break
try:
resp = req.get_response(self.app, 'PUT', query={
'version-id': item['version_id']})
except PreconditionFailed:
self.logger.debug('skipping failed PUT?version-id=%s' %
item['version_id'])
continue
# if that worked, we'll go ahead and fix up the status code
resp.status_int = HTTP_NO_CONTENT
break
return resp
@public
def DELETE(self, req):
"""
Handle DELETE Object request
"""
if 'versionId' in req.params and \
req.params['versionId'] != 'null' and \
'object_versioning' not in get_swift_info():
raise S3NotImplemented()
version_id = req.params.get('versionId')
if version_id not in ('null', None):
container_info = req.get_container_info(self.app)
if not container_info.get(
'sysmeta', {}).get('versions-container', ''):
# Versioning has never been enabled
return HTTPNoContent(headers={'x-amz-version-id': version_id})
try:
try:
query = req.gen_multipart_manifest_delete_query(
self.app, version=version_id)
except NoSuchKey:
query = {}
req.headers['Content-Type'] = None # Ignore client content-type
if version_id is not None:
query['version-id'] = version_id
query['symlink'] = 'get'
resp = req.get_response(self.app, query=query)
if query.get('multipart-manifest') and resp.status_int == HTTP_OK:
for chunk in resp.app_iter:
pass # drain the bulk-deleter response
resp.status = HTTP_NO_CONTENT
resp.body = b''
if resp.sw_headers.get('X-Object-Current-Version-Id') == 'null':
new_resp = self._restore_on_delete(req)
if new_resp:
resp = new_resp
except NoSuchKey:
# expect to raise NoSuchBucket when the bucket doesn't exist
req.get_container_info(self.app)
# else -- it's gone! Success.
return HTTPNoContent()
return resp
| swift-master | swift/common/middleware/s3api/controllers/obj.py |
# Copyright (c) 2010-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from swift.common.utils import public
from swift.common.middleware.s3api.controllers.base import Controller, \
bucket_operation
from swift.common.middleware.s3api.etree import Element, tostring
from swift.common.middleware.s3api.s3response import HTTPOk
class LocationController(Controller):
"""
Handles GET Bucket location, which is logged as a LOCATION operation in the
S3 server log.
"""
@public
@bucket_operation
def GET(self, req):
"""
Handles GET Bucket location.
"""
req.get_response(self.app, method='HEAD')
elem = Element('LocationConstraint')
if self.conf.location != 'us-east-1':
elem.text = self.conf.location
body = tostring(elem)
return HTTPOk(body=body, content_type='application/xml')
| swift-master | swift/common/middleware/s3api/controllers/location.py |
# Copyright (c) 2010-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from swift.common.http import HTTP_OK
from swift.common.middleware.acl import parse_acl, referrer_allowed
from swift.common.utils import public
from swift.common.middleware.s3api.exception import ACLError
from swift.common.middleware.s3api.controllers.base import Controller
from swift.common.middleware.s3api.s3response import HTTPOk, S3NotImplemented,\
MalformedACLError, UnexpectedContent, MissingSecurityHeader
from swift.common.middleware.s3api.etree import Element, SubElement, tostring
from swift.common.middleware.s3api.acl_utils import swift_acl_translate, \
XMLNS_XSI
MAX_ACL_BODY_SIZE = 200 * 1024
def get_acl(account_name, headers):
"""
Attempts to construct an S3 ACL based on what is found in the swift headers
"""
elem = Element('AccessControlPolicy')
owner = SubElement(elem, 'Owner')
SubElement(owner, 'ID').text = account_name
SubElement(owner, 'DisplayName').text = account_name
access_control_list = SubElement(elem, 'AccessControlList')
# grant FULL_CONTROL to myself by default
grant = SubElement(access_control_list, 'Grant')
grantee = SubElement(grant, 'Grantee', nsmap={'xsi': XMLNS_XSI})
grantee.set('{%s}type' % XMLNS_XSI, 'CanonicalUser')
SubElement(grantee, 'ID').text = account_name
SubElement(grantee, 'DisplayName').text = account_name
SubElement(grant, 'Permission').text = 'FULL_CONTROL'
referrers, _ = parse_acl(headers.get('x-container-read'))
if referrer_allowed('unknown', referrers):
# grant public-read access
grant = SubElement(access_control_list, 'Grant')
grantee = SubElement(grant, 'Grantee', nsmap={'xsi': XMLNS_XSI})
grantee.set('{%s}type' % XMLNS_XSI, 'Group')
SubElement(grantee, 'URI').text = \
'http://acs.amazonaws.com/groups/global/AllUsers'
SubElement(grant, 'Permission').text = 'READ'
referrers, _ = parse_acl(headers.get('x-container-write'))
if referrer_allowed('unknown', referrers):
# grant public-write access
grant = SubElement(access_control_list, 'Grant')
grantee = SubElement(grant, 'Grantee', nsmap={'xsi': XMLNS_XSI})
grantee.set('{%s}type' % XMLNS_XSI, 'Group')
SubElement(grantee, 'URI').text = \
'http://acs.amazonaws.com/groups/global/AllUsers'
SubElement(grant, 'Permission').text = 'WRITE'
body = tostring(elem)
return HTTPOk(body=body, content_type="text/plain")
class AclController(Controller):
"""
Handles the following APIs:
* GET Bucket acl
* PUT Bucket acl
* GET Object acl
* PUT Object acl
Those APIs are logged as ACL operations in the S3 server log.
"""
@public
def GET(self, req):
"""
Handles GET Bucket acl and GET Object acl.
"""
resp = req.get_response(self.app, method='HEAD')
return get_acl(req.user_id, resp.headers)
@public
def PUT(self, req):
"""
Handles PUT Bucket acl and PUT Object acl.
"""
if req.is_object_request:
# Handle Object ACL
raise S3NotImplemented()
else:
# Handle Bucket ACL
xml = req.xml(MAX_ACL_BODY_SIZE)
if all(['HTTP_X_AMZ_ACL' in req.environ, xml]):
                # S3 doesn't allow an ACL in both the header and the body.
raise UnexpectedContent()
elif not any(['HTTP_X_AMZ_ACL' in req.environ, xml]):
# Both canned ACL header and xml body are missing
raise MissingSecurityHeader(missing_header_name='x-amz-acl')
else:
# correct ACL exists in the request
if xml:
# We very likely have an XML-based ACL request.
# let's try to translate to the request header
try:
translated_acl = swift_acl_translate(xml, xml=True)
except ACLError:
raise MalformedACLError()
for header, acl in translated_acl:
req.headers[header] = acl
resp = req.get_response(self.app, 'POST')
resp.status = HTTP_OK
resp.headers.update({'Location': req.container_name})
return resp
| swift-master | swift/common/middleware/s3api/controllers/acl.py |
# Copyright (c) 2010-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from swift.common.utils import public, config_true_value
from swift.common.registry import get_swift_info
from swift.common.middleware.s3api.controllers.base import Controller, \
bucket_operation
from swift.common.middleware.s3api.etree import Element, tostring, \
fromstring, XMLSyntaxError, DocumentInvalid, SubElement
from swift.common.middleware.s3api.s3response import HTTPOk, \
S3NotImplemented, MalformedXML
MAX_PUT_VERSIONING_BODY_SIZE = 10240
class VersioningController(Controller):
"""
Handles the following APIs:
* GET Bucket versioning
* PUT Bucket versioning
Those APIs are logged as VERSIONING operations in the S3 server log.
"""
@public
@bucket_operation
def GET(self, req):
"""
Handles GET Bucket versioning.
"""
sysmeta = req.get_container_info(self.app).get('sysmeta', {})
elem = Element('VersioningConfiguration')
if sysmeta.get('versions-enabled'):
SubElement(elem, 'Status').text = (
'Enabled' if config_true_value(sysmeta['versions-enabled'])
else 'Suspended')
body = tostring(elem)
return HTTPOk(body=body, content_type=None)
@public
@bucket_operation
def PUT(self, req):
"""
Handles PUT Bucket versioning.
"""
if 'object_versioning' not in get_swift_info():
raise S3NotImplemented()
xml = req.xml(MAX_PUT_VERSIONING_BODY_SIZE)
try:
elem = fromstring(xml, 'VersioningConfiguration')
status = elem.find('./Status').text
except (XMLSyntaxError, DocumentInvalid):
raise MalformedXML()
except Exception as e:
self.logger.error(e)
raise
if status not in ['Enabled', 'Suspended']:
raise MalformedXML()
# Set up versioning
# NB: object_versioning responsible for ensuring its container exists
req.headers['X-Versions-Enabled'] = str(status == 'Enabled').lower()
req.get_response(self.app, 'POST')
return HTTPOk()
| swift-master | swift/common/middleware/s3api/controllers/versioning.py |
# Copyright (c) 2010-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Implementation of S3 Multipart Upload.
This module implements S3 Multipart Upload APIs with the Swift SLO feature.
The following explains how S3api uses swift container and objects to store S3
upload information:
-----------------
[bucket]+segments
-----------------
A container to store upload information. [bucket] is the original bucket
where multipart upload is initiated.
-----------------------------
[bucket]+segments/[upload_id]
-----------------------------
An object for the ongoing upload id. The object is empty and is used to
check the status of the target upload. If the object exists, the upload has
been initiated but not yet completed or aborted.
-------------------------------------------
[bucket]+segments/[upload_id]/[part_number]
-------------------------------------------
The last path component is the part number under the upload id. As the
client uploads parts, they are stored in the namespace
[bucket]+segments/[upload_id]/[part_number].
Example listing result in the [bucket]+segments container::
[bucket]+segments/[upload_id1] # upload id object for upload_id1
[bucket]+segments/[upload_id1]/1 # part object for upload_id1
[bucket]+segments/[upload_id1]/2 # part object for upload_id1
[bucket]+segments/[upload_id1]/3 # part object for upload_id1
[bucket]+segments/[upload_id2] # upload id object for upload_id2
[bucket]+segments/[upload_id2]/1 # part object for upload_id2
[bucket]+segments/[upload_id2]/2 # part object for upload_id2
.
.
Those part objects are directly used as segments of a Swift
Static Large Object when the multipart upload is completed.
"""
import binascii
import copy
import os
import re
import time
import six
from swift.common import constraints
from swift.common.swob import Range, bytes_to_wsgi, normalize_etag, wsgi_to_str
from swift.common.utils import json, public, reiterate, md5
from swift.common.db import utf8encode
from swift.common.request_helpers import get_container_update_override_key, \
get_param
from six.moves.urllib.parse import quote, urlparse
from swift.common.middleware.s3api.controllers.base import Controller, \
bucket_operation, object_operation, check_container_existence
from swift.common.middleware.s3api.s3response import InvalidArgument, \
ErrorResponse, MalformedXML, BadDigest, KeyTooLongError, \
InvalidPart, BucketAlreadyExists, EntityTooSmall, InvalidPartOrder, \
InvalidRequest, HTTPOk, HTTPNoContent, NoSuchKey, NoSuchUpload, \
NoSuchBucket, BucketAlreadyOwnedByYou
from swift.common.middleware.s3api.utils import unique_id, \
MULTIUPLOAD_SUFFIX, S3Timestamp, sysmeta_header
from swift.common.middleware.s3api.etree import Element, SubElement, \
fromstring, tostring, XMLSyntaxError, DocumentInvalid
from swift.common.storage_policy import POLICIES
DEFAULT_MAX_PARTS_LISTING = 1000
DEFAULT_MAX_UPLOADS = 1000
MAX_COMPLETE_UPLOAD_BODY_SIZE = 2048 * 1024
def _get_upload_info(req, app, upload_id):
container = req.container_name + MULTIUPLOAD_SUFFIX
obj = '%s/%s' % (req.object_name, upload_id)
# XXX: if we leave the copy-source header, somewhere later we might
# drop in a ?version-id=... query string that's utterly inappropriate
# for the upload marker. Until we get around to fixing that, just pop
# it off for now...
copy_source = req.headers.pop('X-Amz-Copy-Source', None)
try:
return req.get_response(app, 'HEAD', container=container, obj=obj)
except NoSuchKey:
upload_marker_path = req.environ.get('s3api.backend_path')
try:
resp = req.get_response(app, 'HEAD')
if resp.sysmeta_headers.get(sysmeta_header(
'object', 'upload-id')) == upload_id:
return resp
except NoSuchKey:
pass
finally:
# Ops often find it more useful for us to log the upload marker
# path, so put it back
if upload_marker_path is not None:
req.environ['s3api.backend_path'] = upload_marker_path
raise NoSuchUpload(upload_id=upload_id)
finally:
# ...making sure to restore any copy-source before returning
if copy_source is not None:
req.headers['X-Amz-Copy-Source'] = copy_source
def _make_complete_body(req, s3_etag, yielded_anything):
result_elem = Element('CompleteMultipartUploadResult')
# NOTE: boto with sig v4 appends port to HTTP_HOST value at
# the request header when the port is non default value and it
# makes req.host_url like as http://localhost:8080:8080/path
# that obviously invalid. Probably it should be resolved at
# swift.common.swob though, tentatively we are parsing and
# reconstructing the correct host_url info here.
# in detail, https://github.com/boto/boto/pull/3513
parsed_url = urlparse(req.host_url)
host_url = '%s://%s' % (parsed_url.scheme, parsed_url.hostname)
# Why are we doing our own port parsing? Because py3 decided
# to start raising ValueErrors on access after parsing such
# an invalid port
netloc = parsed_url.netloc.split('@')[-1].split(']')[-1]
if ':' in netloc:
port = netloc.split(':', 2)[1]
host_url += ':%s' % port
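    # e.g. a (hypothetical) boto-mangled host_url of
    # 'http://localhost:8080:8080/path' gives netloc 'localhost:8080:8080';
    # the split above recovers port '8080' and host_url ends up as
    # 'http://localhost:8080'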
SubElement(result_elem, 'Location').text = host_url + req.path
SubElement(result_elem, 'Bucket').text = req.container_name
SubElement(result_elem, 'Key').text = wsgi_to_str(req.object_name)
SubElement(result_elem, 'ETag').text = '"%s"' % s3_etag
body = tostring(result_elem, xml_declaration=not yielded_anything)
if yielded_anything:
return b'\n' + body
return body
class PartController(Controller):
"""
Handles the following APIs:
* Upload Part
* Upload Part - Copy
Those APIs are logged as PART operations in the S3 server log.
"""
@public
@object_operation
@check_container_existence
def PUT(self, req):
"""
Handles Upload Part and Upload Part Copy.
"""
if 'uploadId' not in req.params:
raise InvalidArgument('ResourceType', 'partNumber',
'Unexpected query string parameter')
try:
part_number = int(get_param(req, 'partNumber'))
if part_number < 1 or self.conf.max_upload_part_num < part_number:
raise Exception()
except Exception:
err_msg = 'Part number must be an integer between 1 and %d,' \
' inclusive' % self.conf.max_upload_part_num
raise InvalidArgument('partNumber', get_param(req, 'partNumber'),
err_msg)
upload_id = get_param(req, 'uploadId')
_get_upload_info(req, self.app, upload_id)
req.container_name += MULTIUPLOAD_SUFFIX
req.object_name = '%s/%s/%d' % (req.object_name, upload_id,
part_number)
req_timestamp = S3Timestamp.now()
req.headers['X-Timestamp'] = req_timestamp.internal
source_resp = req.check_copy_source(self.app)
if 'X-Amz-Copy-Source' in req.headers and \
'X-Amz-Copy-Source-Range' in req.headers:
rng = req.headers['X-Amz-Copy-Source-Range']
header_valid = True
try:
rng_obj = Range(rng)
if len(rng_obj.ranges) != 1:
header_valid = False
except ValueError:
header_valid = False
if not header_valid:
err_msg = ('The x-amz-copy-source-range value must be of the '
'form bytes=first-last where first and last are '
'the zero-based offsets of the first and last '
'bytes to copy')
raise InvalidArgument('x-amz-source-range', rng, err_msg)
source_size = int(source_resp.headers['Content-Length'])
if not rng_obj.ranges_for_length(source_size):
err_msg = ('Range specified is not valid for source object '
'of size: %s' % source_size)
raise InvalidArgument('x-amz-source-range', rng, err_msg)
req.headers['Range'] = rng
del req.headers['X-Amz-Copy-Source-Range']
if 'X-Amz-Copy-Source' in req.headers:
# Clear some problematic headers that might be on the source
req.headers.update({
sysmeta_header('object', 'etag'): '',
'X-Object-Sysmeta-Swift3-Etag': '', # for legacy data
'X-Object-Sysmeta-Slo-Etag': '',
'X-Object-Sysmeta-Slo-Size': '',
get_container_update_override_key('etag'): '',
})
resp = req.get_response(self.app)
if 'X-Amz-Copy-Source' in req.headers:
resp.append_copy_resp_body(req.controller_name,
req_timestamp.s3xmlformat)
resp.status = 200
return resp
class UploadsController(Controller):
"""
Handles the following APIs:
* List Multipart Uploads
* Initiate Multipart Upload
Those APIs are logged as UPLOADS operations in the S3 server log.
"""
@public
@bucket_operation(err_resp=InvalidRequest,
err_msg="Key is not expected for the GET method "
"?uploads subresource")
@check_container_existence
def GET(self, req):
"""
Handles List Multipart Uploads
"""
def separate_uploads(uploads, prefix, delimiter):
"""
separate_uploads will separate uploads into non_delimited_uploads
(a subset of uploads) and common_prefixes according to the
specified delimiter. non_delimited_uploads is a list of uploads
which exclude the delimiter. common_prefixes is a set of prefixes
prior to the specified delimiter. Note that the prefix in the
common_prefixes includes the delimiter itself.
            e.g. if the '/' delimiter is specified and the uploads consist of
            ['foo', 'foo/bar'], this function will return (['foo'], ['foo/']).
:param uploads: A list of uploads dictionary
:param prefix: A string of prefix reserved on the upload path.
(i.e. the delimiter must be searched behind the
prefix)
:param delimiter: A string of delimiter to split the path in each
upload
:return (non_delimited_uploads, common_prefixes)
"""
if six.PY2:
(prefix, delimiter) = utf8encode(prefix, delimiter)
non_delimited_uploads = []
common_prefixes = set()
for upload in uploads:
key = upload['key']
end = key.find(delimiter, len(prefix))
if end >= 0:
common_prefix = key[:end + len(delimiter)]
common_prefixes.add(common_prefix)
else:
non_delimited_uploads.append(upload)
return non_delimited_uploads, sorted(common_prefixes)
encoding_type = get_param(req, 'encoding-type')
if encoding_type is not None and encoding_type != 'url':
err_msg = 'Invalid Encoding Method specified in Request'
raise InvalidArgument('encoding-type', encoding_type, err_msg)
keymarker = get_param(req, 'key-marker', '')
uploadid = get_param(req, 'upload-id-marker', '')
maxuploads = req.get_validated_param(
'max-uploads', DEFAULT_MAX_UPLOADS, DEFAULT_MAX_UPLOADS)
query = {
'format': 'json',
'marker': '',
}
if uploadid and keymarker:
query.update({'marker': '%s/%s' % (keymarker, uploadid)})
elif keymarker:
query.update({'marker': '%s/~' % (keymarker)})
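            # '~' sorts after the base64url characters used in upload ids,
            # so this marker skips every remaining upload for keymarker itself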
if 'prefix' in req.params:
query.update({'prefix': get_param(req, 'prefix')})
container = req.container_name + MULTIUPLOAD_SUFFIX
uploads = []
prefixes = []
def object_to_upload(object_info):
obj, upid = object_info['name'].rsplit('/', 1)
obj_dict = {'key': obj,
'upload_id': upid,
'last_modified': object_info['last_modified']}
return obj_dict
is_segment = re.compile('.*/[0-9]+$')
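        # names ending in '/<digits>' are part objects (e.g. a hypothetical
        # 'photo.jpg/X5hyu.../3'); everything else is treated as an upload
        # marker below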
while len(uploads) < maxuploads:
try:
resp = req.get_response(self.app, container=container,
query=query)
objects = json.loads(resp.body)
except NoSuchBucket:
# Assume NoSuchBucket as no uploads
objects = []
if not objects:
break
new_uploads = [object_to_upload(obj) for obj in objects
if not is_segment.match(obj.get('name', ''))]
new_prefixes = []
if 'delimiter' in req.params:
prefix = get_param(req, 'prefix', '')
delimiter = get_param(req, 'delimiter')
new_uploads, new_prefixes = separate_uploads(
new_uploads, prefix, delimiter)
uploads.extend(new_uploads)
prefixes.extend(new_prefixes)
if six.PY2:
query['marker'] = objects[-1]['name'].encode('utf-8')
else:
query['marker'] = objects[-1]['name']
truncated = len(uploads) >= maxuploads
if len(uploads) > maxuploads:
uploads = uploads[:maxuploads]
nextkeymarker = ''
nextuploadmarker = ''
if len(uploads) > 1:
nextuploadmarker = uploads[-1]['upload_id']
nextkeymarker = uploads[-1]['key']
result_elem = Element('ListMultipartUploadsResult')
SubElement(result_elem, 'Bucket').text = req.container_name
SubElement(result_elem, 'KeyMarker').text = keymarker
SubElement(result_elem, 'UploadIdMarker').text = uploadid
SubElement(result_elem, 'NextKeyMarker').text = nextkeymarker
SubElement(result_elem, 'NextUploadIdMarker').text = nextuploadmarker
if 'delimiter' in req.params:
SubElement(result_elem, 'Delimiter').text = \
get_param(req, 'delimiter')
if 'prefix' in req.params:
SubElement(result_elem, 'Prefix').text = get_param(req, 'prefix')
SubElement(result_elem, 'MaxUploads').text = str(maxuploads)
if encoding_type is not None:
SubElement(result_elem, 'EncodingType').text = encoding_type
SubElement(result_elem, 'IsTruncated').text = \
'true' if truncated else 'false'
# TODO: don't show uploads which are initiated before this bucket is
# created.
for u in uploads:
upload_elem = SubElement(result_elem, 'Upload')
name = u['key']
if encoding_type == 'url':
name = quote(name)
SubElement(upload_elem, 'Key').text = name
SubElement(upload_elem, 'UploadId').text = u['upload_id']
initiator_elem = SubElement(upload_elem, 'Initiator')
SubElement(initiator_elem, 'ID').text = req.user_id
SubElement(initiator_elem, 'DisplayName').text = req.user_id
owner_elem = SubElement(upload_elem, 'Owner')
SubElement(owner_elem, 'ID').text = req.user_id
SubElement(owner_elem, 'DisplayName').text = req.user_id
SubElement(upload_elem, 'StorageClass').text = 'STANDARD'
SubElement(upload_elem, 'Initiated').text = \
S3Timestamp.from_isoformat(u['last_modified']).s3xmlformat
for p in prefixes:
elem = SubElement(result_elem, 'CommonPrefixes')
SubElement(elem, 'Prefix').text = p
body = tostring(result_elem)
return HTTPOk(body=body, content_type='application/xml')
@public
@object_operation
@check_container_existence
def POST(self, req):
"""
Handles Initiate Multipart Upload.
"""
if len(req.object_name) > constraints.MAX_OBJECT_NAME_LENGTH:
# Note that we can still run into trouble where the MPU is just
# within the limit, which means the segment names will go over
raise KeyTooLongError()
# Create a unique S3 upload id from UUID to avoid duplicates.
upload_id = unique_id()
seg_container = req.container_name + MULTIUPLOAD_SUFFIX
content_type = req.headers.get('Content-Type')
if content_type:
req.headers[sysmeta_header('object', 'has-content-type')] = 'yes'
req.headers[
sysmeta_header('object', 'content-type')] = content_type
else:
req.headers[sysmeta_header('object', 'has-content-type')] = 'no'
req.headers['Content-Type'] = 'application/directory'
try:
seg_req = copy.copy(req)
seg_req.environ = copy.copy(req.environ)
seg_req.container_name = seg_container
seg_req.get_container_info(self.app)
except NoSuchBucket:
try:
# multi-upload bucket doesn't exist, create one with
# same storage policy and acls as the primary bucket
info = req.get_container_info(self.app)
policy_name = POLICIES[info['storage_policy']].name
hdrs = {'X-Storage-Policy': policy_name}
if info.get('read_acl'):
hdrs['X-Container-Read'] = info['read_acl']
if info.get('write_acl'):
hdrs['X-Container-Write'] = info['write_acl']
seg_req.get_response(self.app, 'PUT', seg_container, '',
headers=hdrs)
except (BucketAlreadyExists, BucketAlreadyOwnedByYou):
pass
obj = '%s/%s' % (req.object_name, upload_id)
req.headers.pop('Etag', None)
req.headers.pop('Content-Md5', None)
req.get_response(self.app, 'PUT', seg_container, obj, body='')
result_elem = Element('InitiateMultipartUploadResult')
SubElement(result_elem, 'Bucket').text = req.container_name
SubElement(result_elem, 'Key').text = wsgi_to_str(req.object_name)
SubElement(result_elem, 'UploadId').text = upload_id
body = tostring(result_elem)
return HTTPOk(body=body, content_type='application/xml')
class UploadController(Controller):
"""
Handles the following APIs:
* List Parts
* Abort Multipart Upload
* Complete Multipart Upload
Those APIs are logged as UPLOAD operations in the S3 server log.
"""
@public
@object_operation
@check_container_existence
def GET(self, req):
"""
Handles List Parts.
"""
def filter_part_num_marker(o):
try:
num = int(os.path.basename(o['name']))
return num > part_num_marker
except ValueError:
return False
encoding_type = get_param(req, 'encoding-type')
if encoding_type is not None and encoding_type != 'url':
err_msg = 'Invalid Encoding Method specified in Request'
raise InvalidArgument('encoding-type', encoding_type, err_msg)
upload_id = get_param(req, 'uploadId')
_get_upload_info(req, self.app, upload_id)
maxparts = req.get_validated_param(
'max-parts', DEFAULT_MAX_PARTS_LISTING,
self.conf.max_parts_listing)
part_num_marker = req.get_validated_param(
'part-number-marker', 0)
object_name = wsgi_to_str(req.object_name)
query = {
'format': 'json',
'prefix': '%s/%s/' % (object_name, upload_id),
'delimiter': '/',
'marker': '',
}
container = req.container_name + MULTIUPLOAD_SUFFIX
# Because the parts are out of order in Swift, we list up to the
# maximum number of parts and then apply the marker and limit options.
objects = []
while True:
resp = req.get_response(self.app, container=container, obj='',
query=query)
new_objects = json.loads(resp.body)
if not new_objects:
break
objects.extend(new_objects)
if six.PY2:
query['marker'] = new_objects[-1]['name'].encode('utf-8')
else:
query['marker'] = new_objects[-1]['name']
last_part = 0
# If the caller requested a list starting at a specific part number,
# construct a sub-set of the object list.
objList = [obj for obj in objects if filter_part_num_marker(obj)]
# pylint: disable-msg=E1103
objList.sort(key=lambda o: int(o['name'].split('/')[-1]))
if len(objList) > maxparts:
objList = objList[:maxparts]
truncated = True
else:
truncated = False
        # TODO: We have to retrieve the object list again when truncated is
        # True and some objects were filtered out by invalid name, because
        # there might not be enough objects to satisfy the maxparts limit.
if objList:
o = objList[-1]
last_part = os.path.basename(o['name'])
result_elem = Element('ListPartsResult')
SubElement(result_elem, 'Bucket').text = req.container_name
if encoding_type == 'url':
object_name = quote(object_name)
SubElement(result_elem, 'Key').text = object_name
SubElement(result_elem, 'UploadId').text = upload_id
initiator_elem = SubElement(result_elem, 'Initiator')
SubElement(initiator_elem, 'ID').text = req.user_id
SubElement(initiator_elem, 'DisplayName').text = req.user_id
owner_elem = SubElement(result_elem, 'Owner')
SubElement(owner_elem, 'ID').text = req.user_id
SubElement(owner_elem, 'DisplayName').text = req.user_id
SubElement(result_elem, 'StorageClass').text = 'STANDARD'
SubElement(result_elem, 'PartNumberMarker').text = str(part_num_marker)
SubElement(result_elem, 'NextPartNumberMarker').text = str(last_part)
SubElement(result_elem, 'MaxParts').text = str(maxparts)
if 'encoding-type' in req.params:
SubElement(result_elem, 'EncodingType').text = \
get_param(req, 'encoding-type')
SubElement(result_elem, 'IsTruncated').text = \
'true' if truncated else 'false'
for i in objList:
part_elem = SubElement(result_elem, 'Part')
SubElement(part_elem, 'PartNumber').text = i['name'].split('/')[-1]
SubElement(part_elem, 'LastModified').text = \
S3Timestamp.from_isoformat(i['last_modified']).s3xmlformat
SubElement(part_elem, 'ETag').text = '"%s"' % i['hash']
SubElement(part_elem, 'Size').text = str(i['bytes'])
body = tostring(result_elem)
return HTTPOk(body=body, content_type='application/xml')
@public
@object_operation
@check_container_existence
def DELETE(self, req):
"""
Handles Abort Multipart Upload.
"""
upload_id = get_param(req, 'uploadId')
_get_upload_info(req, self.app, upload_id)
        # First check to see if this multi-part upload was already
        # completed. Look in the primary container; if the object exists,
        # then it was completed and we return an error here.
container = req.container_name + MULTIUPLOAD_SUFFIX
obj = '%s/%s' % (req.object_name, upload_id)
req.get_response(self.app, container=container, obj=obj)
# The completed object was not found so this
# must be a multipart upload abort.
# We must delete any uploaded segments for this UploadID and then
# delete the object in the main container as well
object_name = wsgi_to_str(req.object_name)
query = {
'format': 'json',
'prefix': '%s/%s/' % (object_name, upload_id),
'delimiter': '/',
}
resp = req.get_response(self.app, 'GET', container, '', query=query)
# Iterate over the segment objects and delete them individually
objects = json.loads(resp.body)
while objects:
for o in objects:
container = req.container_name + MULTIUPLOAD_SUFFIX
obj = bytes_to_wsgi(o['name'].encode('utf-8'))
req.get_response(self.app, container=container, obj=obj)
if six.PY2:
query['marker'] = objects[-1]['name'].encode('utf-8')
else:
query['marker'] = objects[-1]['name']
resp = req.get_response(self.app, 'GET', container, '',
query=query)
objects = json.loads(resp.body)
return HTTPNoContent()
@public
@object_operation
@check_container_existence
def POST(self, req):
"""
Handles Complete Multipart Upload.
"""
upload_id = get_param(req, 'uploadId')
resp = _get_upload_info(req, self.app, upload_id)
headers = {'Accept': 'application/json',
sysmeta_header('object', 'upload-id'): upload_id}
for key, val in resp.headers.items():
_key = key.lower()
if _key.startswith('x-amz-meta-'):
headers['x-object-meta-' + _key[11:]] = val
elif _key in ('content-encoding', 'content-language',
'content-disposition', 'expires', 'cache-control'):
headers[key] = val
hct_header = sysmeta_header('object', 'has-content-type')
if resp.sysmeta_headers.get(hct_header) == 'yes':
content_type = resp.sysmeta_headers.get(
sysmeta_header('object', 'content-type'))
elif hct_header in resp.sysmeta_headers:
# has-content-type is present but false, so no content type was
# set on initial upload. In that case, we won't set one on our
# PUT request. Swift will end up guessing one based on the
# object name.
content_type = None
else:
content_type = resp.headers.get('Content-Type')
if content_type:
headers['Content-Type'] = content_type
container = req.container_name + MULTIUPLOAD_SUFFIX
s3_etag_hasher = md5(usedforsecurity=False)
manifest = []
previous_number = 0
try:
xml = req.xml(MAX_COMPLETE_UPLOAD_BODY_SIZE)
if not xml:
raise InvalidRequest(msg='You must specify at least one part')
if 'content-md5' in req.headers:
# If an MD5 was provided, we need to verify it.
# Note that S3Request already took care of translating to ETag
if req.headers['etag'] != md5(
xml, usedforsecurity=False).hexdigest():
raise BadDigest(content_md5=req.headers['content-md5'])
# We're only interested in the body here, in the
# multipart-upload controller -- *don't* let it get
# plumbed down to the object-server
del req.headers['etag']
complete_elem = fromstring(
xml, 'CompleteMultipartUpload', self.logger)
for part_elem in complete_elem.iterchildren('Part'):
part_number = int(part_elem.find('./PartNumber').text)
if part_number <= previous_number:
raise InvalidPartOrder(upload_id=upload_id)
previous_number = part_number
etag = normalize_etag(part_elem.find('./ETag').text)
if etag is None:
raise InvalidPart(upload_id=upload_id,
part_number=part_number,
e_tag=etag)
if len(etag) != 32 or any(c not in '0123456789abcdef'
for c in etag):
raise InvalidPart(upload_id=upload_id,
part_number=part_number,
e_tag=etag)
manifest.append({
'path': '/%s/%s/%s/%d' % (
wsgi_to_str(container), wsgi_to_str(req.object_name),
upload_id, part_number),
'etag': etag})
s3_etag_hasher.update(binascii.a2b_hex(etag))
except (XMLSyntaxError, DocumentInvalid):
# NB: our schema definitions catch uploads with no parts here
raise MalformedXML()
except ErrorResponse:
raise
except Exception as e:
self.logger.error(e)
raise
s3_etag = '%s-%d' % (s3_etag_hasher.hexdigest(), len(manifest))
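        # The resulting value has the familiar S3 multipart ETag shape: the
        # hex MD5 of the concatenated binary part MD5s, a '-', then the part
        # count, e.g. (hypothetical) '9b2cf535f27731c974343645a3985328-3'
        # for a three-part upload.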
s3_etag_header = sysmeta_header('object', 'etag')
if resp.sysmeta_headers.get(s3_etag_header) == s3_etag:
# This header should only already be present if the upload marker
# has been cleaned up and the current target uses the same
# upload-id; assuming the segments to use haven't changed, the work
# is already done
return HTTPOk(body=_make_complete_body(req, s3_etag, False),
content_type='application/xml')
headers[s3_etag_header] = s3_etag
# Leave base header value blank; SLO will populate
c_etag = '; s3_etag=%s' % s3_etag
headers[get_container_update_override_key('etag')] = c_etag
too_small_message = ('s3api requires that each segment be at least '
'%d bytes' % self.conf.min_segment_size)
def size_checker(manifest):
# Check the size of each segment except the last and make sure
# they are all more than the minimum upload chunk size.
# Note that we need to use the *internal* keys, since we're
# looking at the manifest that's about to be written.
return [
(item['name'], too_small_message)
for item in manifest[:-1]
if item and item['bytes'] < self.conf.min_segment_size]
req.environ['swift.callback.slo_manifest_hook'] = size_checker
start_time = time.time()
def response_iter():
# NB: XML requires that the XML declaration, if present, be at the
# very start of the document. Clients *will* call us out on not
# being valid XML if we pass through whitespace before it.
# Track whether we've sent anything yet so we can yield out that
# declaration *first*
yielded_anything = False
try:
try:
# TODO: add support for versioning
put_resp = req.get_response(
self.app, 'PUT', body=json.dumps(manifest),
query={'multipart-manifest': 'put',
'heartbeat': 'on'},
headers=headers)
if put_resp.status_int == 202:
body = []
put_resp.fix_conditional_response()
for chunk in put_resp.response_iter:
if not chunk.strip():
if time.time() - start_time < 10:
# Include some grace period to keep
# ceph-s3tests happy
continue
if not yielded_anything:
yield (b'<?xml version="1.0" '
b'encoding="UTF-8"?>\n')
yielded_anything = True
yield chunk
continue
body.append(chunk)
body = json.loads(b''.join(body))
if body['Response Status'] != '201 Created':
for seg, err in body['Errors']:
if err == too_small_message:
raise EntityTooSmall()
elif err in ('Etag Mismatch', '404 Not Found'):
raise InvalidPart(upload_id=upload_id)
raise InvalidRequest(
status=body['Response Status'],
msg='\n'.join(': '.join(err)
for err in body['Errors']))
except InvalidRequest as err_resp:
msg = err_resp._msg
if too_small_message in msg:
raise EntityTooSmall(msg)
elif ', Etag Mismatch' in msg:
raise InvalidPart(upload_id=upload_id)
elif ', 404 Not Found' in msg:
raise InvalidPart(upload_id=upload_id)
else:
raise
# clean up the multipart-upload record
obj = '%s/%s' % (req.object_name, upload_id)
try:
req.get_response(self.app, 'DELETE', container, obj)
except NoSuchKey:
# The important thing is that we wrote out a tombstone to
# make sure the marker got cleaned up. If it's already
# gone (e.g., because of concurrent completes or a retried
# complete), so much the better.
pass
yield _make_complete_body(req, s3_etag, yielded_anything)
except ErrorResponse as err_resp:
if yielded_anything:
err_resp.xml_declaration = False
yield b'\n'
else:
# Oh good, we can still change HTTP status code, too!
resp.status = err_resp.status
for chunk in err_resp({}, lambda *a: None):
yield chunk
resp = HTTPOk() # assume we're good for now... but see above!
resp.app_iter = reiterate(response_iter())
resp.content_type = "application/xml"
return resp
| swift-master | swift/common/middleware/s3api/controllers/multi_upload.py |
# Copyright (c) 2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from swift.common.utils import public
from swift.common.middleware.s3api.controllers.base import Controller, \
S3NotImplemented
from swift.common.middleware.s3api.s3response import HTTPOk
from swift.common.middleware.s3api.etree import Element, tostring, \
SubElement
class TaggingController(Controller):
"""
Handles the following APIs:
* GET Bucket and Object tagging
* PUT Bucket and Object tagging
* DELETE Bucket and Object tagging
"""
@public
def GET(self, req):
"""
Handles GET Bucket and Object tagging.
"""
elem = Element('Tagging')
SubElement(elem, 'TagSet')
body = tostring(elem)
return HTTPOk(body=body, content_type=None)
@public
def PUT(self, req):
"""
Handles PUT Bucket and Object tagging.
"""
raise S3NotImplemented('The requested resource is not implemented')
@public
def DELETE(self, req):
"""
Handles DELETE Bucket and Object tagging.
"""
raise S3NotImplemented('The requested resource is not implemented')
| swift-master | swift/common/middleware/s3api/controllers/tagging.py |
# Copyright (c) 2010-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import functools
from swift.common.middleware.s3api.s3response import S3NotImplemented, \
InvalidRequest
from swift.common.middleware.s3api.utils import camel_to_snake
def bucket_operation(func=None, err_resp=None, err_msg=None):
"""
A decorator to ensure that the request is a bucket operation. If the
target resource is an object, this decorator updates the request by default
so that the controller handles it as a bucket operation. If 'err_resp' is
specified, this raises it on error instead.
"""
def _bucket_operation(func):
@functools.wraps(func)
def wrapped(self, req):
if not req.is_bucket_request:
if err_resp:
raise err_resp(msg=err_msg)
self.logger.debug('A key is specified for bucket API.')
req.object_name = None
return func(self, req)
return wrapped
if func:
return _bucket_operation(func)
else:
return _bucket_operation
def object_operation(func):
"""
A decorator to ensure that the request is an object operation. If the
target resource is not an object, this raises an error response.
"""
@functools.wraps(func)
def wrapped(self, req):
if not req.is_object_request:
raise InvalidRequest('A key must be specified')
return func(self, req)
return wrapped
def check_container_existence(func):
"""
A decorator to ensure the container existence.
"""
@functools.wraps(func)
def check_container(self, req):
req.get_container_info(self.app)
return func(self, req)
return check_container
class Controller(object):
"""
Base WSGI controller class for the middleware
"""
def __init__(self, app, conf, logger, **kwargs):
self.app = app
self.conf = conf
self.logger = logger
@classmethod
def resource_type(cls):
"""
Returns the target resource type of this controller.
"""
name = cls.__name__[:-len('Controller')]
return camel_to_snake(name).upper()
class UnsupportedController(Controller):
"""
Handles unsupported requests.
"""
def __init__(self, app, conf, logger, **kwargs):
raise S3NotImplemented('The requested resource is not implemented')
| swift-master | swift/common/middleware/s3api/controllers/base.py |
# Copyright (c) 2010-2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import json
from swift.common.constraints import MAX_OBJECT_NAME_LENGTH
from swift.common.http import HTTP_NO_CONTENT
from swift.common.swob import str_to_wsgi
from swift.common.utils import public, StreamingPile
from swift.common.registry import get_swift_info
from swift.common.middleware.s3api.controllers.base import Controller, \
bucket_operation
from swift.common.middleware.s3api.etree import Element, SubElement, \
fromstring, tostring, XMLSyntaxError, DocumentInvalid
from swift.common.middleware.s3api.s3response import HTTPOk, \
S3NotImplemented, NoSuchKey, ErrorResponse, MalformedXML, \
UserKeyMustBeSpecified, AccessDenied, MissingRequestBodyError
class MultiObjectDeleteController(Controller):
"""
Handles Delete Multiple Objects, which is logged as a MULTI_OBJECT_DELETE
operation in the S3 server log.
"""
def _gen_error_body(self, error, elem, delete_list):
for key, version in delete_list:
error_elem = SubElement(elem, 'Error')
SubElement(error_elem, 'Key').text = key
if version is not None:
SubElement(error_elem, 'VersionId').text = version
SubElement(error_elem, 'Code').text = error.__class__.__name__
SubElement(error_elem, 'Message').text = error._msg
return tostring(elem)
@public
@bucket_operation
def POST(self, req):
"""
Handles Delete Multiple Objects.
"""
def object_key_iter(elem):
for obj in elem.iterchildren('Object'):
key = obj.find('./Key').text
if not key:
raise UserKeyMustBeSpecified()
version = obj.find('./VersionId')
if version is not None:
version = version.text
yield key, version
max_body_size = min(
# FWIW, AWS limits multideletes to 1000 keys, and swift limits
# object names to 1024 bytes (by default). Add a factor of two to
# allow some slop.
2 * self.conf.max_multi_delete_objects * MAX_OBJECT_NAME_LENGTH,
# But, don't let operators shoot themselves in the foot
10 * 1024 * 1024)
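        # For illustration, with the defaults mentioned above (1000 keys,
        # 1024-byte names) the first term is 2 * 1000 * 1024 = 2,048,000
        # bytes, comfortably below the 10 MiB cap.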
try:
xml = req.xml(max_body_size)
if not xml:
raise MissingRequestBodyError()
req.check_md5(xml)
elem = fromstring(xml, 'Delete', self.logger)
quiet = elem.find('./Quiet')
self.quiet = quiet is not None and quiet.text.lower() == 'true'
delete_list = list(object_key_iter(elem))
if len(delete_list) > self.conf.max_multi_delete_objects:
raise MalformedXML()
except (XMLSyntaxError, DocumentInvalid):
raise MalformedXML()
except ErrorResponse:
raise
except Exception as e:
self.logger.error(e)
raise
elem = Element('DeleteResult')
# check bucket existence
try:
req.get_response(self.app, 'HEAD')
except AccessDenied as error:
body = self._gen_error_body(error, elem, delete_list)
return HTTPOk(body=body)
if 'object_versioning' not in get_swift_info() and any(
version not in ('null', None)
for _key, version in delete_list):
raise S3NotImplemented()
def do_delete(base_req, key, version):
req = copy.copy(base_req)
req.environ = copy.copy(base_req.environ)
req.object_name = str_to_wsgi(key)
if version:
req.params = {'version-id': version, 'symlink': 'get'}
try:
try:
query = req.gen_multipart_manifest_delete_query(
self.app, version=version)
except NoSuchKey:
query = {}
if version:
query['version-id'] = version
query['symlink'] = 'get'
resp = req.get_response(self.app, method='DELETE', query=query,
headers={'Accept': 'application/json'})
# If async segment cleanup is available, we expect to get
# back a 204; otherwise, the delete is synchronous and we
# have to read the response to actually do the SLO delete
if query.get('multipart-manifest') and \
resp.status_int != HTTP_NO_CONTENT:
try:
delete_result = json.loads(resp.body)
if delete_result['Errors']:
# NB: bulk includes 404s in "Number Not Found",
# not "Errors"
msg_parts = [delete_result['Response Status']]
msg_parts.extend(
'%s: %s' % (obj, status)
for obj, status in delete_result['Errors'])
return key, {'code': 'SLODeleteError',
'message': '\n'.join(msg_parts)}
# else, all good
except (ValueError, TypeError, KeyError):
# Logs get all the gory details
self.logger.exception(
'Could not parse SLO delete response (%s): %s',
resp.status, resp.body)
# Client gets something more generic
return key, {'code': 'SLODeleteError',
'message': 'Unexpected swift response'}
except NoSuchKey:
pass
except ErrorResponse as e:
return key, {'code': e.__class__.__name__, 'message': e._msg}
except Exception:
self.logger.exception(
'Unexpected Error handling DELETE of %r %r' % (
req.container_name, key))
return key, {'code': 'Server Error', 'message': 'Server Error'}
return key, None
with StreamingPile(self.conf.multi_delete_concurrency) as pile:
for key, err in pile.asyncstarmap(do_delete, (
(req, key, version) for key, version in delete_list)):
if err:
error = SubElement(elem, 'Error')
SubElement(error, 'Key').text = key
SubElement(error, 'Code').text = err['code']
SubElement(error, 'Message').text = err['message']
elif not self.quiet:
deleted = SubElement(elem, 'Deleted')
SubElement(deleted, 'Key').text = key
body = tostring(elem)
return HTTPOk(body=body)
| swift-master | swift/common/middleware/s3api/controllers/multi_delete.py |
# Copyright (c) 2014 OpenStack Foundation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from six.moves.urllib.parse import quote
from swift.common.utils import public
from swift.common.middleware.s3api.controllers.base import Controller
from swift.common.middleware.s3api.s3response import HTTPOk
from swift.common.middleware.s3api.etree import tostring
class S3AclController(Controller):
"""
Handles the following APIs:
* GET Bucket acl
* PUT Bucket acl
* GET Object acl
* PUT Object acl
Those APIs are logged as ACL operations in the S3 server log.
"""
@public
def GET(self, req):
"""
Handles GET Bucket acl and GET Object acl.
"""
resp = req.get_response(self.app, method='HEAD')
acl = resp.object_acl if req.is_object_request else resp.bucket_acl
resp = HTTPOk()
resp.body = tostring(acl.elem())
return resp
@public
def PUT(self, req):
"""
Handles PUT Bucket acl and PUT Object acl.
"""
if req.is_object_request:
headers = {}
src_path = '/%s/%s' % (req.container_name, req.object_name)
            # 'object-sysmeta' can be updated by the 'Copy' method but not
            # by the 'POST' method, so headers['X-Copy-From'] is added here
            # to make this a copy request.
headers['X-Copy-From'] = quote(src_path)
headers['Content-Length'] = 0
req.get_response(self.app, 'PUT', headers=headers)
else:
req.get_response(self.app, 'POST')
return HTTPOk()
| swift-master | swift/common/middleware/s3api/controllers/s3_acl.py |
# Copyright (c) 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from castellan import key_manager, options
from castellan.common.credentials import keystone_password
from oslo_config import cfg
from swift.common.middleware.crypto.keymaster import BaseKeyMaster
class KmsKeyMaster(BaseKeyMaster):
"""Middleware for retrieving a encryption root secret from an external KMS.
The middleware accesses the encryption root secret from an external key
management system (KMS), e.g., a Barbican service, using Castellan. To be
    able to do so, the appropriate configuration options must be set in the
    proxy-server.conf file, or in the configuration pointed to by the
    keymaster_config_path value in the proxy-server.conf file.
"""
log_route = 'kms_keymaster'
keymaster_opts = ('username', 'password', 'project_name',
'user_domain_name', 'project_domain_name',
'user_id', 'user_domain_id', 'trust_id',
'domain_id', 'domain_name', 'project_id',
'project_domain_id', 'reauthenticate',
'auth_endpoint', 'api_class', 'key_id*',
'active_root_secret_id')
keymaster_conf_section = 'kms_keymaster'
def _get_root_secret(self, conf):
"""
Retrieve the root encryption secret from an external key management
system using Castellan.
:param conf: the keymaster config section from proxy-server.conf
:type conf: dict
:return: the encryption root secret binary bytes
:rtype: bytearray
"""
ctxt = keystone_password.KeystonePassword(
auth_url=conf.get('auth_endpoint'),
username=conf.get('username'),
password=conf.get('password'),
project_name=conf.get('project_name'),
user_domain_name=conf.get('user_domain_name'),
project_domain_name=conf.get(
'project_domain_name'),
user_id=conf.get('user_id'),
user_domain_id=conf.get('user_domain_id'),
trust_id=conf.get('trust_id'),
domain_id=conf.get('domain_id'),
domain_name=conf.get('domain_name'),
project_id=conf.get('project_id'),
project_domain_id=conf.get('project_domain_id'),
reauthenticate=conf.get('reauthenticate'))
oslo_conf = cfg.ConfigOpts()
options.set_defaults(
oslo_conf, auth_endpoint=conf.get('auth_endpoint'),
api_class=conf.get('api_class')
)
options.enable_logging()
manager = key_manager.API(oslo_conf)
root_secrets = {}
for opt, secret_id, key_id in self._load_multikey_opts(
conf, 'key_id'):
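            # Each configured key_id* option yields one (opt, secret_id,
            # key_id) tuple here; e.g. a (hypothetical) 'key_id_foo = 5678'
            # option maps secret id 'foo' to the key fetched below.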
key = manager.get(ctxt, key_id)
if key is None:
raise ValueError("Retrieval of encryption root secret with "
"key_id '%s' returned None."
% (key_id, ))
try:
if (key.bit_length < 256) or (key.algorithm.lower() != "aes"):
raise ValueError('encryption root secret stored in the '
'external KMS must be an AES key of at '
'least 256 bits (provided key '
'length: %d, provided key algorithm: %s)'
% (key.bit_length, key.algorithm))
if (key.format != 'RAW'):
raise ValueError('encryption root secret stored in the '
'external KMS must be in RAW format and '
'not e.g., as a base64 encoded string '
'(format of key with uuid %s: %s)' %
(key_id, key.format))
except Exception:
raise ValueError("Secret with key_id '%s' is not a symmetric "
"key (type: %s)" % (key_id, str(type(key))))
secret = key.get_encoded()
if not isinstance(secret, bytes):
secret = secret.encode('utf-8')
root_secrets[secret_id] = secret
return root_secrets
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
def kms_keymaster_filter(app):
return KmsKeyMaster(app, conf)
return kms_keymaster_filter
| swift-master | swift/common/middleware/crypto/kms_keymaster.py |
# -*- coding: utf-8 -*-
# Copyright (c) 2018 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import os
from swift.common.middleware.crypto import keymaster
from swift.common.utils import LogLevelFilter
from kmip.pie.client import ProxyKmipClient
"""
This middleware enables Swift to fetch a root secret from a KMIP service.
The root secret is expected to have been previously created in the KMIP service
and is referenced by its unique identifier. The secret should be an AES-256
symmetric key.
To use this middleware, edit the swift proxy-server.conf to insert the
middleware in the wsgi pipeline, replacing any other keymaster middleware::
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging \
<other middleware> kmip_keymaster encryption proxy-logging proxy-server
and add a new filter section::
[filter:kmip_keymaster]
use = egg:swift#kmip_keymaster
key_id = <unique id of secret to be fetched from the KMIP service>
key_id_<secret_id> = <unique id of additional secret to be fetched>
active_root_secret_id = <secret_id to be used for new encryptions>
host = <KMIP server host>
port = <KMIP server port>
certfile = /path/to/client/cert.pem
keyfile = /path/to/client/key.pem
ca_certs = /path/to/server/cert.pem
username = <KMIP username>
password = <KMIP password>
Apart from ``use``, ``key_id*`` and ``active_root_secret_id``, the options are
as defined for a PyKMIP client. The authoritative definition of these options
can be found at https://pykmip.readthedocs.io/en/latest/client.html
The value of each ``key_id*`` option should be a unique identifier for a secret
to be retrieved from the KMIP service. Any of these secrets may be used for
*decryption*.
The value of the ``active_root_secret_id`` option should be the ``secret_id``
for the secret that should be used for all new *encryption*. If not specified,
the ``key_id`` secret will be used.
.. note::
To ensure there is no loss of data availability, deploying a new key to
your cluster requires a two-stage config change. First, add the new key
to the ``key_id_<secret_id>`` option and restart the proxy-server. Do this
for all proxies. Next, set the ``active_root_secret_id`` option to the
new secret id and restart the proxy. Again, do this for all proxies. This
process ensures that all proxies will have the new key available for
*decryption* before any proxy uses it for *encryption*.
The keymaster configuration can alternatively be defined in a separate config
file by using the ``keymaster_config_path`` option::
[filter:kmip_keymaster]
use = egg:swift#kmip_keymaster
keymaster_config_path=/etc/swift/kmip_keymaster.conf
In this case, the ``filter:kmip_keymaster`` section should contain no other
options than ``use`` and ``keymaster_config_path``. All other options should be
defined in the separate config file in a section named ``kmip_keymaster``. For
example::
[kmip_keymaster]
key_id = 1234567890
key_id_foo = 2468024680
key_id_bar = 1357913579
active_root_secret_id = foo
host = 127.0.0.1
port = 5696
certfile = /etc/swift/kmip_client.crt
keyfile = /etc/swift/kmip_client.key
ca_certs = /etc/swift/kmip_server.crt
username = swift
password = swift_password
"""
class KmipKeyMaster(keymaster.BaseKeyMaster):
log_route = 'kmip_keymaster'
keymaster_opts = ('host', 'port', 'certfile', 'keyfile',
'ca_certs', 'username', 'password',
'active_root_secret_id', 'key_id*')
keymaster_conf_section = 'kmip_keymaster'
def _load_keymaster_config_file(self, conf):
conf = super(KmipKeyMaster, self)._load_keymaster_config_file(conf)
if self.keymaster_config_path:
section = self.keymaster_conf_section
else:
# __name__ is just the filter name, not the whole section name.
# Luckily, PasteDeploy only uses the one prefix for filters.
section = 'filter:' + conf['__name__']
if os.path.isdir(conf['__file__']):
            raise ValueError(
                'KmipKeyMaster config cannot be read from conf dir %s. Use '
                'keymaster_config_path option in the proxy server config to '
                'specify a config file.' % conf['__file__'])
# Make sure we've got the kmip log handler set up before
# we instantiate a client
kmip_logger = logging.getLogger('kmip')
for handler in self.logger.logger.handlers:
kmip_logger.addHandler(handler)
debug_filter = LogLevelFilter(logging.DEBUG)
for name in (
# The kmip_protocol logger includes hex-encoded data off the
# wire, which may include key material!! We *NEVER* want that
# enabled.
'kmip.services.server.kmip_protocol',
# The config_helper logger includes any password that may be
# provided, which doesn't seem great either.
'kmip.core.config_helper',
):
logging.getLogger(name).addFilter(debug_filter)
self.proxy_kmip_client = ProxyKmipClient(
config=section,
config_file=conf['__file__']
)
return conf
def _get_root_secret(self, conf):
multikey_opts = self._load_multikey_opts(conf, 'key_id')
kmip_to_secret = {}
root_secrets = {}
with self.proxy_kmip_client as client:
for opt, secret_id, kmip_id in multikey_opts:
if kmip_id in kmip_to_secret:
# Save some round trips if there are multiple
# secret_ids for a single kmip_id
root_secrets[secret_id] = root_secrets[
kmip_to_secret[kmip_id]]
continue
secret = client.get(kmip_id)
algo = secret.cryptographic_algorithm.name
length = secret.cryptographic_length
if (algo, length) != ('AES', 256):
raise ValueError(
'Expected key %s to be an AES-256 key, not %s-%d' % (
kmip_id, algo, length))
root_secrets[secret_id] = secret.value
kmip_to_secret.setdefault(kmip_id, secret_id)
return root_secrets
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
def keymaster_filter(app):
return KmipKeyMaster(app, conf)
return keymaster_filter
| swift-master | swift/common/middleware/crypto/kmip_keymaster.py |
# Copyright (c) 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import hashlib
import hmac
import six
from swift.common.exceptions import UnknownSecretIdError
from swift.common.middleware.crypto.crypto_utils import CRYPTO_KEY_CALLBACK
from swift.common.swob import Request, HTTPException, wsgi_to_str, str_to_wsgi
from swift.common.utils import readconf, strict_b64decode, get_logger, \
split_path
from swift.common.wsgi import WSGIContext
class KeyMasterContext(WSGIContext):
"""
The simple scheme for key derivation is as follows: every path is
associated with a key, where the key is derived from the path itself in a
deterministic fashion such that the key does not need to be stored.
Specifically, the key for any path is an HMAC of a root key and the path
itself, calculated using an SHA256 hash function::
<path_key> = HMAC_SHA256(<root_secret>, <path>)
"""
def __init__(self, keymaster, account, container, obj,
meta_version_to_write='2'):
"""
:param keymaster: a Keymaster instance
:param account: account name
:param container: container name
:param obj: object name
"""
super(KeyMasterContext, self).__init__(keymaster.app)
self.keymaster = keymaster
self.account = account
self.container = container
self.obj = obj
self._keys = {}
self.alternate_fetch_keys = None
self.meta_version_to_write = meta_version_to_write
def _make_key_id(self, path, secret_id, version):
if version in ('1', '2'):
path = str_to_wsgi(path)
key_id = {'v': version, 'path': path}
if secret_id:
# stash secret_id so that decrypter can pass it back to get the
# same keys
key_id['secret_id'] = secret_id
return key_id
def fetch_crypto_keys(self, key_id=None, *args, **kwargs):
"""
Setup container and object keys based on the request path.
Keys are derived from request path. The 'id' entry in the results dict
includes the part of the path used to derive keys. Other keymaster
implementations may use a different strategy to generate keys and may
include a different type of 'id', so callers should treat the 'id' as
opaque keymaster-specific data.
:param key_id: if given this should be a dict with the items included
under the ``id`` key of a dict returned by this method.
:returns: A dict containing encryption keys for 'object' and
'container', and entries 'id' and 'all_ids'. The 'all_ids' entry is a
list of key id dicts for all root secret ids including the one used
to generate the returned keys.
"""
if key_id:
secret_id = key_id.get('secret_id')
version = key_id['v']
if version not in ('1', '2', '3'):
raise ValueError('Unknown key_id version: %s' % version)
if version == '1' and not key_id['path'].startswith(
'/' + self.account + '/'):
# Well shoot. This was the bug that made us notice we needed
# a v2! Hope the current account/container was the original!
key_acct, key_cont, key_obj = (
self.account, self.container, key_id['path'])
else:
key_acct, key_cont, key_obj = split_path(
key_id['path'], 1, 3, True)
check_path = (
self.account, self.container or key_cont, self.obj or key_obj)
if version in ('1', '2') and (
key_acct, key_cont, key_obj) != check_path:
# Older py3 proxies may have written down crypto meta as WSGI
# strings; we still need to be able to read that
try:
if six.PY2:
alt_path = tuple(
part.decode('utf-8').encode('latin1')
for part in (key_acct, key_cont, key_obj))
else:
alt_path = tuple(
part.encode('latin1').decode('utf-8')
for part in (key_acct, key_cont, key_obj))
except UnicodeError:
# Well, it was worth a shot
pass
else:
if check_path == alt_path or (
check_path[:2] == alt_path[:2] and not self.obj):
# This object is affected by bug #1888037
key_acct, key_cont, key_obj = alt_path
if (key_acct, key_cont, key_obj) != check_path:
                # The pipeline may have been misconfigured, with the copy
                # middleware to the right of encryption. In that case, the
                # path in meta may not be the request path.
self.keymaster.logger.info(
"Path stored in meta (%r) does not match path from "
"request (%r)! Using path from meta.",
key_id['path'],
'/' + '/'.join(x for x in [
self.account, self.container, self.obj] if x))
else:
secret_id = self.keymaster.active_secret_id
# v1 had a bug where we would claim the path was just the object
# name if the object started with a slash.
# v1 and v2 had a bug on py3 where we'd write the path in meta as
# a WSGI string (ie, as Latin-1 chars decoded from UTF-8 bytes).
# Bump versions to establish that we can trust the path.
version = self.meta_version_to_write
key_acct, key_cont, key_obj = (
self.account, self.container, self.obj)
if (secret_id, version) in self._keys:
return self._keys[(secret_id, version)]
keys = {}
account_path = '/' + key_acct
try:
# self.account/container/obj reflect the level of the *request*,
# which may be different from the level of the key_id-path. Only
# fetch the keys that the request needs.
if self.container:
path = account_path + '/' + key_cont
keys['container'] = self.keymaster.create_key(
path, secret_id=secret_id)
if self.obj:
if key_obj.startswith('/') and version == '1':
path = key_obj
else:
path = path + '/' + key_obj
keys['object'] = self.keymaster.create_key(
path, secret_id=secret_id)
# For future-proofing include a keymaster version number and
# the path used to derive keys in the 'id' entry of the
# results. The encrypter will persist this as part of the
# crypto-meta for encrypted data and metadata. If we ever
# change the way keys are generated then the decrypter could
# pass the persisted 'id' value when it calls fetch_crypto_keys
# to inform the keymaster as to how that particular data or
# metadata had its keys generated. Currently we have no need to
# do that, so we are simply persisting this information for
# future use.
keys['id'] = self._make_key_id(path, secret_id, version)
# pass back a list of key id dicts for all other secret ids in
# case the caller is interested, in which case the caller can
# call this method again for different secret ids; this avoided
# changing the return type of the callback or adding another
# callback. Note that the caller should assume no knowledge of
# the content of these key id dicts.
keys['all_ids'] = [self._make_key_id(path, id_, version)
for id_ in self.keymaster.root_secret_ids]
if self.alternate_fetch_keys:
alternate_keys = self.alternate_fetch_keys(
key_id=None, *args, **kwargs)
keys['all_ids'].extend(alternate_keys.get('all_ids', []))
self._keys[(secret_id, version)] = keys
return keys
except UnknownSecretIdError:
if self.alternate_fetch_keys:
return self.alternate_fetch_keys(key_id, *args, **kwargs)
raise
def handle_request(self, req, start_response):
self.alternate_fetch_keys = req.environ.get(CRYPTO_KEY_CALLBACK)
req.environ[CRYPTO_KEY_CALLBACK] = self.fetch_crypto_keys
resp = self._app_call(req.environ)
start_response(self._response_status, self._response_headers,
self._response_exc_info)
return resp
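# For reference, a successful call to fetch_crypto_keys() for an object
# request returns a dict shaped roughly as follows (keys and paths below are
# placeholders):
#
#   {'container': b'<32-byte key derived for /AUTH_acct/cont>',
#    'object': b'<32-byte key derived for /AUTH_acct/cont/obj>',
#    'id': {'v': '2', 'path': '/AUTH_acct/cont/obj'},
#    'all_ids': [{'v': '2', 'path': '/AUTH_acct/cont/obj'},
#                {'v': '2', 'path': '/AUTH_acct/cont/obj',
#                 'secret_id': 'secret_2'}]}
#
# As noted in the docstring, callers must treat 'id' and the members of
# 'all_ids' as opaque keymaster-specific data.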
class BaseKeyMaster(object):
"""Base middleware for providing encryption keys.
This provides some basic helpers for:
- loading from a separate config path,
- deriving keys based on path, and
- installing a ``swift.callback.fetch_crypto_keys`` hook
in the request environment.
Subclasses should define ``log_route``, ``keymaster_opts``, and
``keymaster_conf_section`` attributes, and implement the
``_get_root_secret`` function.
"""
@property
def log_route(self):
raise NotImplementedError
@property
def keymaster_opts(self):
raise NotImplementedError
@property
def keymaster_conf_section(self):
raise NotImplementedError
def _get_root_secret(self, conf):
raise NotImplementedError
def __init__(self, app, conf):
self.app = app
self.logger = get_logger(conf, log_route=self.log_route)
self.keymaster_config_path = conf.get('keymaster_config_path')
conf = self._load_keymaster_config_file(conf)
# The _get_root_secret() function is overridden by other keymasters
# which may historically only return a single value
self._root_secrets = self._get_root_secret(conf)
if not isinstance(self._root_secrets, dict):
self._root_secrets = {None: self._root_secrets}
self.active_secret_id = conf.get('active_root_secret_id') or None
if self.active_secret_id not in self._root_secrets:
raise ValueError('No secret loaded for active_root_secret_id %s' %
self.active_secret_id)
for secret_id, secret in self._root_secrets.items():
if not isinstance(secret, bytes):
raise ValueError('Secret with id %s is %s, not bytes' % (
secret_id, type(secret)))
self.meta_version_to_write = conf.get('meta_version_to_write') or '2'
if self.meta_version_to_write not in ('1', '2', '3'):
raise ValueError('Unknown/unsupported metadata version: %r' %
self.meta_version_to_write)
@property
def root_secret(self):
# Returns the default root secret; this is here for historical reasons
# to support tests and any third party code that might have used it
return self._root_secrets.get(self.active_secret_id)
@property
def root_secret_ids(self):
# Only sorted to simplify testing
return sorted(self._root_secrets.keys(), key=lambda x: x or '')
def _load_keymaster_config_file(self, conf):
if not self.keymaster_config_path:
return conf
        # Keymaster options specified in the filter section would be ignored
        # if a separate keymaster config file is specified. To avoid
        # confusion, prohibit them from existing in the filter section.
bad_opts = []
for opt in conf:
for km_opt in self.keymaster_opts:
if ((km_opt.endswith('*') and opt.startswith(km_opt[:-1])) or
opt == km_opt):
bad_opts.append(opt)
if bad_opts:
raise ValueError('keymaster_config_path is set, but there '
'are other config options specified: %s' %
", ".join(bad_opts))
return readconf(self.keymaster_config_path,
self.keymaster_conf_section)
def _load_multikey_opts(self, conf, prefix):
result = []
for k, v in conf.items():
if not k.startswith(prefix):
continue
suffix = k[len(prefix):]
if suffix and (suffix[0] != '_' or len(suffix) < 2):
raise ValueError('Malformed root secret option name %s' % k)
result.append((k, suffix[1:] or None, v))
return sorted(result)
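    # For example, conf items {'encryption_root_secret': 'A...',
    # 'encryption_root_secret_secret_2': 'B...'} passed through
    # _load_multikey_opts(conf, 'encryption_root_secret') yield
    # [('encryption_root_secret', None, 'A...'),
    #  ('encryption_root_secret_secret_2', 'secret_2', 'B...')],
    # i.e. one (option name, secret id, value) tuple per configured secret.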
def __call__(self, env, start_response):
req = Request(env)
try:
parts = [wsgi_to_str(part) for part in req.split_path(2, 4, True)]
except ValueError:
return self.app(env, start_response)
if req.method in ('PUT', 'POST', 'GET', 'HEAD'):
# handle only those request methods that may require keys
km_context = KeyMasterContext(
self, *parts[1:],
meta_version_to_write=self.meta_version_to_write)
try:
return km_context.handle_request(req, start_response)
except HTTPException as err_resp:
return err_resp(env, start_response)
# anything else
return self.app(env, start_response)
def create_key(self, path, secret_id=None):
"""
Creates an encryption key that is unique for the given path.
:param path: the (WSGI string) path of the resource being encrypted.
:param secret_id: the id of the root secret from which the key should
be derived.
:return: an encryption key.
:raises UnknownSecretIdError: if the secret_id is not recognised.
"""
try:
key = self._root_secrets[secret_id]
except KeyError:
self.logger.warning('Unrecognised secret id: %s' % secret_id)
raise UnknownSecretIdError(secret_id)
else:
if not six.PY2:
path = path.encode('utf-8')
return hmac.new(key, path, digestmod=hashlib.sha256).digest()
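# Minimal standalone sketch (not part of the middleware) of the derivation
# performed by create_key() above: each resource key is simply
# HMAC-SHA256(root_secret, path). The root secret and path used here are made
# up for illustration.
def _example_derive_path_key():
    import os

    root_secret = os.urandom(32)            # stand-in for a configured secret
    path = u'/AUTH_test/container/object'   # the resource path being keyed
    path_key = hmac.new(
        root_secret, path.encode('utf-8'),
        digestmod=hashlib.sha256).digest()
    assert len(path_key) == 32              # always a 256-bit key
    return path_key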
class KeyMaster(BaseKeyMaster):
"""Middleware for providing encryption keys.
The middleware requires its encryption root secret to be set. This is the
root secret from which encryption keys are derived. This must be set before
first use to a value that is at least 256 bits. The security of all
encrypted data critically depends on this key, therefore it should be set
to a high-entropy value. For example, a suitable value may be obtained by
generating a 32 byte (or longer) value using a cryptographically secure
random number generator. Changing the root secret is likely to result in
data loss.
"""
log_route = 'keymaster'
keymaster_opts = ('encryption_root_secret*', 'active_root_secret_id')
keymaster_conf_section = 'keymaster'
def _get_root_secret(self, conf):
"""
This keymaster requires ``encryption_root_secret[_id]`` options to be
set. At least one must be set before first use to a value that is a
base64 encoding of at least 32 bytes. The encryption root secrets are
specified in either proxy-server.conf, or in an external file
referenced from proxy-server.conf using ``keymaster_config_path``.
:param conf: the keymaster config section from proxy-server.conf
:type conf: dict
:return: a dict mapping secret ids to encryption root secret binary
bytes
:rtype: dict
"""
root_secrets = {}
for opt, secret_id, value in self._load_multikey_opts(
conf, 'encryption_root_secret'):
try:
secret = self._decode_root_secret(value)
except ValueError:
raise ValueError(
'%s option in %s must be a base64 encoding of at '
'least 32 raw bytes' %
(opt, self.keymaster_config_path or 'proxy-server.conf'))
root_secrets[secret_id] = secret
return root_secrets
def _decode_root_secret(self, b64_root_secret):
binary_root_secret = strict_b64decode(b64_root_secret,
allow_line_breaks=True)
if len(binary_root_secret) < 32:
raise ValueError
return binary_root_secret
def filter_factory(global_conf, **local_conf):
conf = global_conf.copy()
conf.update(local_conf)
def keymaster_filter(app):
return KeyMaster(app, conf)
return keymaster_filter
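# Illustrative helper (not part of the middleware): generate a value suitable
# for an encryption_root_secret option, i.e. the base64 encoding of at least
# 32 bytes drawn from a cryptographically secure random source, matching the
# check in _decode_root_secret() above.
def _example_generate_root_secret(nbytes=32):
    import base64
    import os

    if nbytes < 32:
        raise ValueError('root secrets must be at least 32 bytes')
    return base64.b64encode(os.urandom(nbytes)).decode('ascii')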
| swift-master | swift/common/middleware/crypto/keymaster.py |
# Copyright (c) 2015-2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import base64
import hashlib
import hmac
from contextlib import contextmanager
from swift.common.constraints import check_metadata
from swift.common.http import is_success
from swift.common.middleware.crypto.crypto_utils import CryptoWSGIContext, \
dump_crypto_meta, append_crypto_meta, Crypto
from swift.common.request_helpers import get_object_transient_sysmeta, \
strip_user_meta_prefix, is_user_meta, update_etag_is_at_header, \
get_container_update_override_key
from swift.common.swob import Request, Match, HTTPException, \
HTTPUnprocessableEntity, wsgi_to_bytes, bytes_to_wsgi, normalize_etag
from swift.common.utils import get_logger, config_true_value, \
MD5_OF_EMPTY_STRING, md5
def encrypt_header_val(crypto, value, key):
"""
Encrypt a header value using the supplied key.
:param crypto: a Crypto instance
:param value: value to encrypt
:param key: crypto key to use
:returns: a tuple of (encrypted value, crypto_meta) where crypto_meta is a
dict of form returned by
:py:func:`~swift.common.middleware.crypto.Crypto.get_crypto_meta`
:raises ValueError: if value is empty
"""
if not value:
raise ValueError('empty value is not acceptable')
crypto_meta = crypto.create_crypto_meta()
crypto_ctxt = crypto.create_encryption_ctxt(key, crypto_meta['iv'])
enc_val = bytes_to_wsgi(base64.b64encode(
crypto_ctxt.update(wsgi_to_bytes(value))))
return enc_val, crypto_meta
def _hmac_etag(key, etag):
"""
Compute an HMAC-SHA256 using given key and etag.
:param key: The starting key for the hash.
:param etag: The etag to hash.
:returns: a Base64-encoded representation of the HMAC
"""
if not isinstance(etag, bytes):
etag = wsgi_to_bytes(etag)
result = hmac.new(key, etag, digestmod=hashlib.sha256).digest()
return base64.b64encode(result).decode()
class EncInputWrapper(object):
"""File-like object to be swapped in for wsgi.input."""
def __init__(self, crypto, keys, req, logger):
self.env = req.environ
self.wsgi_input = req.environ['wsgi.input']
self.path = req.path
self.crypto = crypto
self.body_crypto_ctxt = None
self.keys = keys
self.plaintext_md5 = None
self.ciphertext_md5 = None
self.logger = logger
self.install_footers_callback(req)
def _init_encryption_context(self):
# do this once when body is first read
if self.body_crypto_ctxt is None:
self.body_crypto_meta = self.crypto.create_crypto_meta()
body_key = self.crypto.create_random_key()
# wrap the body key with object key
self.body_crypto_meta['body_key'] = self.crypto.wrap_key(
self.keys['object'], body_key)
self.body_crypto_meta['key_id'] = self.keys['id']
self.body_crypto_ctxt = self.crypto.create_encryption_ctxt(
body_key, self.body_crypto_meta.get('iv'))
self.plaintext_md5 = md5(usedforsecurity=False)
self.ciphertext_md5 = md5(usedforsecurity=False)
def install_footers_callback(self, req):
# the proxy controller will call back for footer metadata after
# body has been sent
inner_callback = req.environ.get('swift.callback.update_footers')
# remove any Etag from headers, it won't be valid for ciphertext and
# we'll send the ciphertext Etag later in footer metadata
client_etag = req.headers.pop('etag', None)
override_header = get_container_update_override_key('etag')
container_listing_etag_header = req.headers.get(override_header)
def footers_callback(footers):
if inner_callback:
# pass on footers dict to any other callback that was
# registered before this one. It may override any footers that
# were set.
inner_callback(footers)
plaintext_etag = None
if self.body_crypto_ctxt:
plaintext_etag = self.plaintext_md5.hexdigest()
# If client (or other middleware) supplied etag, then validate
# against plaintext etag
etag_to_check = footers.get('Etag') or client_etag
if (etag_to_check is not None and
plaintext_etag != etag_to_check):
raise HTTPUnprocessableEntity(request=Request(self.env))
# override any previous notion of etag with the ciphertext etag
footers['Etag'] = self.ciphertext_md5.hexdigest()
# Encrypt the plaintext etag using the object key and persist
# as sysmeta along with the crypto parameters that were used.
encrypted_etag, etag_crypto_meta = encrypt_header_val(
self.crypto, plaintext_etag, self.keys['object'])
footers['X-Object-Sysmeta-Crypto-Etag'] = \
append_crypto_meta(encrypted_etag, etag_crypto_meta)
footers['X-Object-Sysmeta-Crypto-Body-Meta'] = \
dump_crypto_meta(self.body_crypto_meta)
# Also add an HMAC of the etag for use when evaluating
# conditional requests
footers['X-Object-Sysmeta-Crypto-Etag-Mac'] = _hmac_etag(
self.keys['object'], plaintext_etag)
else:
# No data was read from body, nothing was encrypted, so don't
# set any crypto sysmeta for the body, but do re-instate any
# etag provided in inbound request if other middleware has not
# already set a value.
if client_etag is not None:
footers.setdefault('Etag', client_etag)
# When deciding on the etag that should appear in container
# listings, look for:
# * override in the footer, otherwise
# * override in the header, and finally
# * MD5 of the plaintext received
# This may be None if no override was set and no data was read. An
# override value of '' will be passed on.
container_listing_etag = footers.get(
override_header, container_listing_etag_header)
if container_listing_etag is None:
container_listing_etag = plaintext_etag
if (container_listing_etag and
(container_listing_etag != MD5_OF_EMPTY_STRING or
plaintext_etag)):
# Encrypt the container-listing etag using the container key
# and a random IV, and use it to override the container update
# value, with the crypto parameters appended. We use the
# container key here so that only that key is required to
# decrypt all etag values in a container listing when handling
# a container GET request. Don't encrypt an EMPTY_ETAG
# unless there actually was some body content, in which case
# the container-listing etag is possibly conveying some
# non-obvious information.
val, crypto_meta = encrypt_header_val(
self.crypto, container_listing_etag,
self.keys['container'])
crypto_meta['key_id'] = self.keys['id']
footers[override_header] = \
append_crypto_meta(val, crypto_meta)
# else: no override was set and no data was read
req.environ['swift.callback.update_footers'] = footers_callback
def read(self, *args, **kwargs):
return self.readChunk(self.wsgi_input.read, *args, **kwargs)
def readline(self, *args, **kwargs):
return self.readChunk(self.wsgi_input.readline, *args, **kwargs)
def readChunk(self, read_method, *args, **kwargs):
chunk = read_method(*args, **kwargs)
if chunk:
self._init_encryption_context()
self.plaintext_md5.update(chunk)
# Encrypt one chunk at a time
ciphertext = self.body_crypto_ctxt.update(chunk)
self.ciphertext_md5.update(ciphertext)
return ciphertext
return chunk
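# For reference, the footers callback installed above results in an encrypted
# object PUT recording metadata along these lines (values are placeholders):
#
#   Etag: <MD5 of the ciphertext>
#   X-Object-Sysmeta-Crypto-Etag: <encrypted plaintext MD5>; swift_meta=<meta>
#   X-Object-Sysmeta-Crypto-Etag-Mac: <base64 HMAC-SHA256 of the plaintext MD5>
#   X-Object-Sysmeta-Crypto-Body-Meta: <serialized body crypto meta, including
#       the wrapped body key and the keymaster key 'id'>
#   X-Object-Sysmeta-Container-Update-Override-Etag: <encrypted container
#       listing etag>; swift_meta=<meta>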
class EncrypterObjContext(CryptoWSGIContext):
def __init__(self, encrypter, logger):
super(EncrypterObjContext, self).__init__(
encrypter, 'object', logger)
def _check_headers(self, req):
# Check the user-metadata length before encrypting and encoding
error_response = check_metadata(req, self.server_type)
if error_response:
raise error_response
def encrypt_user_metadata(self, req, keys):
"""
Encrypt user-metadata header values. Replace each x-object-meta-<key>
user metadata header with a corresponding
x-object-transient-sysmeta-crypto-meta-<key> header which has the
crypto metadata required to decrypt appended to the encrypted value.
:param req: a swob Request
:param keys: a dict of encryption keys
"""
prefix = get_object_transient_sysmeta('crypto-meta-')
user_meta_headers = [h for h in req.headers.items() if
is_user_meta(self.server_type, h[0]) and h[1]]
crypto_meta = None
for name, val in user_meta_headers:
short_name = strip_user_meta_prefix(self.server_type, name)
new_name = prefix + short_name
enc_val, crypto_meta = encrypt_header_val(
self.crypto, val, keys[self.server_type])
req.headers[new_name] = append_crypto_meta(enc_val, crypto_meta)
req.headers.pop(name)
# store a single copy of the crypto meta items that are common to all
# encrypted user metadata independently of any such meta that is stored
# with the object body because it might change on a POST. This is done
# for future-proofing - the meta stored here is not currently used
# during decryption.
if crypto_meta:
meta = dump_crypto_meta({'cipher': crypto_meta['cipher'],
'key_id': keys['id']})
req.headers[get_object_transient_sysmeta('crypto-meta')] = meta
def handle_put(self, req, start_response):
self._check_headers(req)
keys = self.get_keys(req.environ, required=['object', 'container'])
self.encrypt_user_metadata(req, keys)
enc_input_proxy = EncInputWrapper(self.crypto, keys, req, self.logger)
req.environ['wsgi.input'] = enc_input_proxy
resp = self._app_call(req.environ)
# If an etag is in the response headers and a plaintext etag was
# calculated, then overwrite the response value with the plaintext etag
# provided it matches the ciphertext etag. If it does not match then do
# not overwrite and allow the response value to return to client.
mod_resp_headers = self._response_headers
if (is_success(self._get_status_int()) and
enc_input_proxy.plaintext_md5):
plaintext_etag = enc_input_proxy.plaintext_md5.hexdigest()
ciphertext_etag = enc_input_proxy.ciphertext_md5.hexdigest()
mod_resp_headers = [
(h, v if (h.lower() != 'etag' or
normalize_etag(v) != ciphertext_etag)
else plaintext_etag)
for h, v in mod_resp_headers]
start_response(self._response_status, mod_resp_headers,
self._response_exc_info)
return resp
def handle_post(self, req, start_response):
"""
Encrypt the new object headers with a new iv and the current crypto.
Note that an object may have encrypted headers while the body may
remain unencrypted.
"""
self._check_headers(req)
keys = self.get_keys(req.environ)
self.encrypt_user_metadata(req, keys)
resp = self._app_call(req.environ)
start_response(self._response_status, self._response_headers,
self._response_exc_info)
return resp
@contextmanager
def _mask_conditional_etags(self, req, header_name):
"""
Calculate HMACs of etags in header value and append to existing list.
The HMACs are calculated in the same way as was done for the object
plaintext etag to generate the value of
X-Object-Sysmeta-Crypto-Etag-Mac when the object was PUT. The object
server can therefore use these HMACs to evaluate conditional requests.
HMACs of the etags are appended for the current root secrets and
historic root secrets because it is not known which of them may have
been used to generate the on-disk etag HMAC.
The existing etag values are left in the list of values to match in
case the object was not encrypted when it was PUT. It is unlikely that
a masked etag value would collide with an unmasked value.
:param req: an instance of swob.Request
:param header_name: name of header that has etags to mask
:return: True if any etags were masked, False otherwise
"""
masked = False
old_etags = req.headers.get(header_name)
if old_etags:
all_keys = self.get_multiple_keys(req.environ)
new_etags = []
for etag in Match(old_etags).tags:
if etag == '*':
new_etags.append(etag)
continue
new_etags.append('"%s"' % etag)
for keys in all_keys:
masked_etag = _hmac_etag(keys['object'], etag)
new_etags.append('"%s"' % masked_etag)
masked = True
req.headers[header_name] = ', '.join(new_etags)
try:
yield masked
finally:
if old_etags:
req.headers[header_name] = old_etags
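    # For example, a request carrying
    #     If-Match: "d41d8cd98f00b204e9800998ecf8427e"
    # is forwarded, within the context manager above, with the original etag
    # plus one HMAC-SHA256 of it per root secret, e.g.
    #     If-Match: "d41d8cd98f00b204e9800998ecf8427e", "<HMAC under the
    #     active key>", "<HMAC under each historic key>"
    # so the object server can evaluate the condition against the stored
    # X-Object-Sysmeta-Crypto-Etag-Mac value.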
def handle_get_or_head(self, req, start_response):
with self._mask_conditional_etags(req, 'If-Match') as masked1:
with self._mask_conditional_etags(req, 'If-None-Match') as masked2:
if masked1 or masked2:
update_etag_is_at_header(
req, 'X-Object-Sysmeta-Crypto-Etag-Mac')
resp = self._app_call(req.environ)
start_response(self._response_status, self._response_headers,
self._response_exc_info)
return resp
class Encrypter(object):
"""Middleware for encrypting data and user metadata.
By default all PUT or POST'ed object data and/or metadata will be
encrypted. Encryption of new data and/or metadata may be disabled by
setting the ``disable_encryption`` option to True. However, this middleware
should remain in the pipeline in order for existing encrypted data to be
read.
"""
def __init__(self, app, conf):
self.app = app
self.logger = get_logger(conf, log_route="encrypter")
self.crypto = Crypto(conf)
self.disable_encryption = config_true_value(
conf.get('disable_encryption', 'false'))
def __call__(self, env, start_response):
# If override is set in env, then just pass along
if config_true_value(env.get('swift.crypto.override')):
return self.app(env, start_response)
req = Request(env)
if self.disable_encryption and req.method in ('PUT', 'POST'):
return self.app(env, start_response)
try:
req.split_path(4, 4, True)
is_object_request = True
except ValueError:
is_object_request = False
if not is_object_request:
return self.app(env, start_response)
if req.method in ('GET', 'HEAD'):
handler = EncrypterObjContext(self, self.logger).handle_get_or_head
elif req.method == 'PUT':
handler = EncrypterObjContext(self, self.logger).handle_put
elif req.method == 'POST':
handler = EncrypterObjContext(self, self.logger).handle_post
else:
# anything else
return self.app(env, start_response)
try:
return handler(req, start_response)
except HTTPException as err_resp:
return err_resp(env, start_response)
| swift-master | swift/common/middleware/crypto/encrypter.py |
# Copyright (c) 2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Implements middleware for object encryption which comprises an instance of a
:class:`~swift.common.middleware.crypto.decrypter.Decrypter` combined with an
instance of an :class:`~swift.common.middleware.crypto.encrypter.Encrypter`.
"""
from swift.common.middleware.crypto.decrypter import Decrypter
from swift.common.middleware.crypto.encrypter import Encrypter
from swift.common.utils import config_true_value
from swift.common.registry import register_swift_info
def filter_factory(global_conf, **local_conf):
"""Provides a factory function for loading encryption middleware."""
conf = global_conf.copy()
conf.update(local_conf)
enabled = not config_true_value(conf.get('disable_encryption', 'false'))
register_swift_info('encryption', admin=True, enabled=enabled)
def encryption_filter(app):
return Decrypter(Encrypter(app, conf), conf)
return encryption_filter
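# Illustrative proxy-server.conf wiring (placeholders, not from the source
# tree): a keymaster filter must sit to the left of the encryption filter in
# the pipeline so that keys are available when objects are encrypted and
# decrypted, e.g.
#
#   [pipeline:main]
#   pipeline = catch_errors ... keymaster encryption proxy-logging proxy-server
#
#   [filter:encryption]
#   use = egg:swift#encryption
#   # disable_encryption = False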
| swift-master | swift/common/middleware/crypto/__init__.py |
# Copyright (c) 2015-2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import base64
import json
from swift import gettext_ as _
from swift.common.header_key_dict import HeaderKeyDict
from swift.common.http import is_success
from swift.common.middleware.crypto.crypto_utils import CryptoWSGIContext, \
load_crypto_meta, extract_crypto_meta, Crypto
from swift.common.exceptions import EncryptionException, UnknownSecretIdError
from swift.common.request_helpers import get_object_transient_sysmeta, \
get_sys_meta_prefix, get_user_meta_prefix, \
get_container_update_override_key
from swift.common.swob import Request, HTTPException, \
HTTPInternalServerError, wsgi_to_bytes, bytes_to_wsgi
from swift.common.utils import get_logger, config_true_value, \
parse_content_range, closing_if_possible, parse_content_type, \
FileLikeIter, multipart_byteranges_to_document_iters
DECRYPT_CHUNK_SIZE = 65536
def purge_crypto_sysmeta_headers(headers):
return [h for h in headers if not
h[0].lower().startswith(
(get_object_transient_sysmeta('crypto-'),
get_sys_meta_prefix('object') + 'crypto-'))]
class BaseDecrypterContext(CryptoWSGIContext):
def get_crypto_meta(self, header_name, check=True):
"""
Extract a crypto_meta dict from a header.
:param header_name: name of header that may have crypto_meta
:param check: if True validate the crypto meta
:return: A dict containing crypto_meta items
:raises EncryptionException: if an error occurs while parsing the
crypto meta
"""
crypto_meta_json = self._response_header_value(header_name)
if crypto_meta_json is None:
return None
crypto_meta = load_crypto_meta(crypto_meta_json)
if check:
self.crypto.check_crypto_meta(crypto_meta)
return crypto_meta
def get_unwrapped_key(self, crypto_meta, wrapping_key):
"""
Get a wrapped key from crypto-meta and unwrap it using the provided
wrapping key.
:param crypto_meta: a dict of crypto-meta
:param wrapping_key: key to be used to decrypt the wrapped key
:return: an unwrapped key
:raises HTTPInternalServerError: if the crypto-meta has no wrapped key
or the unwrapped key is invalid
"""
try:
return self.crypto.unwrap_key(wrapping_key,
crypto_meta['body_key'])
except KeyError as err:
self.logger.error(
_('Error decrypting %(resp_type)s: Missing %(key)s'),
{'resp_type': self.server_type, 'key': err})
except ValueError as err:
self.logger.error(_('Error decrypting %(resp_type)s: %(reason)s'),
{'resp_type': self.server_type, 'reason': err})
raise HTTPInternalServerError(
body='Error decrypting %s' % self.server_type,
content_type='text/plain')
def decrypt_value_with_meta(self, value, key, required, decoder):
"""
Base64-decode and decrypt a value if crypto meta can be extracted from
the value itself, otherwise return the value unmodified.
A value should either be a string that does not contain the ';'
character or should be of the form::
<base64-encoded ciphertext>;swift_meta=<crypto meta>
:param value: value to decrypt
:param key: crypto key to use
:param required: if True then the value is required to be decrypted
and an EncryptionException will be raised if the
header cannot be decrypted due to missing crypto meta.
:param decoder: function to turn the decrypted bytes into useful data
:returns: decrypted value if crypto meta is found, otherwise the
unmodified value
:raises EncryptionException: if an error occurs while parsing crypto
meta or if the header value was required
to be decrypted but crypto meta was not
found.
"""
extracted_value, crypto_meta = extract_crypto_meta(value)
if crypto_meta:
self.crypto.check_crypto_meta(crypto_meta)
value = self.decrypt_value(
extracted_value, key, crypto_meta, decoder)
elif required:
raise EncryptionException(
"Missing crypto meta in value %s" % value)
return value
def decrypt_value(self, value, key, crypto_meta, decoder):
"""
Base64-decode and decrypt a value using the crypto_meta provided.
:param value: a base64-encoded value to decrypt
:param key: crypto key to use
:param crypto_meta: a crypto-meta dict of form returned by
:py:func:`~swift.common.middleware.crypto.Crypto.get_crypto_meta`
:param decoder: function to turn the decrypted bytes into useful data
:returns: decrypted value
"""
if not value:
return decoder(b'')
crypto_ctxt = self.crypto.create_decryption_ctxt(
key, crypto_meta['iv'], 0)
return decoder(crypto_ctxt.update(base64.b64decode(value)))
def get_decryption_keys(self, req, crypto_meta=None):
"""
Determine if a response should be decrypted, and if so then fetch keys.
:param req: a Request object
:param crypto_meta: a dict of crypto metadata
:returns: a dict of decryption keys
"""
if config_true_value(req.environ.get('swift.crypto.override')):
self.logger.debug('No decryption is necessary because of override')
return None
key_id = crypto_meta.get('key_id') if crypto_meta else None
return self.get_keys(req.environ, key_id=key_id)
class DecrypterObjContext(BaseDecrypterContext):
def __init__(self, decrypter, logger):
super(DecrypterObjContext, self).__init__(decrypter, 'object', logger)
def _decrypt_header(self, header, value, key, required=False):
"""
Attempt to decrypt a header value that may be encrypted.
:param header: the header name
:param value: the header value
:param key: decryption key
:param required: if True then the header is required to be decrypted
and an HTTPInternalServerError will be raised if the
header cannot be decrypted due to missing crypto meta.
:return: decrypted value or the original value if it was not encrypted.
:raises HTTPInternalServerError: if an error occurred during decryption
or if the header value was required to
be decrypted but crypto meta was not
found.
"""
try:
return self.decrypt_value_with_meta(
value, key, required, bytes_to_wsgi)
except EncryptionException as err:
self.logger.error(
_("Error decrypting header %(header)s: %(error)s"),
{'header': header, 'error': err})
raise HTTPInternalServerError(
body='Error decrypting header',
content_type='text/plain')
def decrypt_user_metadata(self, keys):
prefix = get_object_transient_sysmeta('crypto-meta-')
prefix_len = len(prefix)
new_prefix = get_user_meta_prefix(self.server_type).title()
result = []
for name, val in self._response_headers:
if name.lower().startswith(prefix) and val:
short_name = name[prefix_len:]
decrypted_value = self._decrypt_header(
name, val, keys[self.server_type], required=True)
result.append((new_prefix + short_name, decrypted_value))
return result
def decrypt_resp_headers(self, put_keys, post_keys, update_cors_exposed):
"""
Find encrypted headers and replace with the decrypted versions.
:param put_keys: a dict of decryption keys used for object PUT.
:param post_keys: a dict of decryption keys used for object POST.
:return: A list of headers with any encrypted headers replaced by their
decrypted values.
:raises HTTPInternalServerError: if any error occurs while decrypting
headers
"""
mod_hdr_pairs = []
if put_keys:
# Decrypt plaintext etag and place in Etag header for client
# response
etag_header = 'X-Object-Sysmeta-Crypto-Etag'
encrypted_etag = self._response_header_value(etag_header)
if encrypted_etag:
decrypted_etag = self._decrypt_header(
etag_header, encrypted_etag, put_keys['object'],
required=True)
mod_hdr_pairs.append(('Etag', decrypted_etag))
etag_header = get_container_update_override_key('etag')
encrypted_etag = self._response_header_value(etag_header)
if encrypted_etag:
decrypted_etag = self._decrypt_header(
etag_header, encrypted_etag, put_keys['container'])
mod_hdr_pairs.append((etag_header, decrypted_etag))
# Decrypt all user metadata. Encrypted user metadata values are stored
# in the x-object-transient-sysmeta-crypto-meta- namespace. Those are
# decrypted and moved back to the x-object-meta- namespace. Prior to
# decryption, the response should have no x-object-meta- headers, but
# if it does then they will be overwritten by any decrypted headers
# that map to the same x-object-meta- header names i.e. decrypted
# headers win over unexpected, unencrypted headers.
if post_keys:
decrypted_meta = self.decrypt_user_metadata(post_keys)
mod_hdr_pairs.extend(decrypted_meta)
else:
decrypted_meta = []
mod_hdr_names = {h.lower() for h, v in mod_hdr_pairs}
found_aceh = False
for header, value in self._response_headers:
lheader = header.lower()
if lheader in mod_hdr_names:
continue
if lheader == 'access-control-expose-headers':
found_aceh = True
mod_hdr_pairs.append((header, value + ', ' + ', '.join(
meta.lower() for meta, _data in decrypted_meta)))
else:
mod_hdr_pairs.append((header, value))
if update_cors_exposed and not found_aceh:
mod_hdr_pairs.append(('Access-Control-Expose-Headers', ', '.join(
meta.lower() for meta, _data in decrypted_meta)))
return mod_hdr_pairs
def multipart_response_iter(self, resp, boundary, body_key, crypto_meta):
"""
Decrypts a multipart mime doc response body.
:param resp: application response
:param boundary: multipart boundary string
:param body_key: decryption key for the response body
:param crypto_meta: crypto_meta for the response body
:return: generator for decrypted response body
"""
with closing_if_possible(resp):
parts_iter = multipart_byteranges_to_document_iters(
FileLikeIter(resp), boundary)
for first_byte, last_byte, length, headers, body in parts_iter:
yield b"--" + boundary + b"\r\n"
for header, value in headers:
yield b"%s: %s\r\n" % (wsgi_to_bytes(header),
wsgi_to_bytes(value))
yield b"\r\n"
decrypt_ctxt = self.crypto.create_decryption_ctxt(
body_key, crypto_meta['iv'], first_byte)
for chunk in iter(lambda: body.read(DECRYPT_CHUNK_SIZE), b''):
yield decrypt_ctxt.update(chunk)
yield b"\r\n"
yield b"--" + boundary + b"--"
def response_iter(self, resp, body_key, crypto_meta, offset):
"""
Decrypts a response body.
:param resp: application response
:param body_key: decryption key for the response body
:param crypto_meta: crypto_meta for the response body
:param offset: offset into object content at which response body starts
:return: generator for decrypted response body
"""
decrypt_ctxt = self.crypto.create_decryption_ctxt(
body_key, crypto_meta['iv'], offset)
with closing_if_possible(resp):
for chunk in resp:
yield decrypt_ctxt.update(chunk)
def _read_crypto_meta(self, header, check):
crypto_meta = None
if (is_success(self._get_status_int()) or
self._get_status_int() in (304, 412)):
try:
crypto_meta = self.get_crypto_meta(header, check)
except EncryptionException as err:
self.logger.error(_('Error decrypting object: %s'), err)
raise HTTPInternalServerError(
body='Error decrypting object', content_type='text/plain')
return crypto_meta
def handle(self, req, start_response):
app_resp = self._app_call(req.environ)
try:
put_crypto_meta = self._read_crypto_meta(
'X-Object-Sysmeta-Crypto-Body-Meta', True)
put_keys = self.get_decryption_keys(req, put_crypto_meta)
post_crypto_meta = self._read_crypto_meta(
'X-Object-Transient-Sysmeta-Crypto-Meta', False)
post_keys = self.get_decryption_keys(req, post_crypto_meta)
except EncryptionException as err:
self.logger.error(
"Error decrypting object: %s",
err)
raise HTTPInternalServerError(
body='Error decrypting object',
content_type='text/plain')
if put_keys is None and post_keys is None:
# skip decryption
start_response(self._response_status, self._response_headers,
self._response_exc_info)
return app_resp
mod_resp_headers = self.decrypt_resp_headers(
put_keys, post_keys,
update_cors_exposed=bool(req.headers.get('origin')))
if put_crypto_meta and req.method == 'GET' and \
is_success(self._get_status_int()):
# 2xx response and encrypted body
body_key = self.get_unwrapped_key(
put_crypto_meta, put_keys['object'])
content_type, content_type_attrs = parse_content_type(
self._response_header_value('Content-Type'))
if (self._get_status_int() == 206 and
content_type == 'multipart/byteranges'):
boundary = wsgi_to_bytes(dict(content_type_attrs)["boundary"])
resp_iter = self.multipart_response_iter(
app_resp, boundary, body_key, put_crypto_meta)
else:
offset = 0
content_range = self._response_header_value('Content-Range')
if content_range:
# Determine offset within the whole object if ranged GET
offset, end, total = parse_content_range(content_range)
resp_iter = self.response_iter(
app_resp, body_key, put_crypto_meta, offset)
else:
# don't decrypt body of unencrypted or non-2xx responses
resp_iter = app_resp
mod_resp_headers = purge_crypto_sysmeta_headers(mod_resp_headers)
start_response(self._response_status, mod_resp_headers,
self._response_exc_info)
return resp_iter
class DecrypterContContext(BaseDecrypterContext):
def __init__(self, decrypter, logger):
super(DecrypterContContext, self).__init__(
decrypter, 'container', logger)
def handle(self, req, start_response):
app_resp = self._app_call(req.environ)
if is_success(self._get_status_int()):
# only decrypt body of 2xx responses
headers = HeaderKeyDict(self._response_headers)
content_type = headers.get('content-type', '').split(';', 1)[0]
if content_type == 'application/json':
app_resp = self.process_json_resp(req, app_resp)
start_response(self._response_status,
self._response_headers,
self._response_exc_info)
return app_resp
def process_json_resp(self, req, resp_iter):
"""
Parses json body listing and decrypt encrypted entries. Updates
Content-Length header with new body length and return a body iter.
"""
with closing_if_possible(resp_iter):
resp_body = b''.join(resp_iter)
body_json = json.loads(resp_body)
new_body = json.dumps([self.decrypt_obj_dict(req, obj_dict)
for obj_dict in body_json]).encode('ascii')
self.update_content_length(len(new_body))
return [new_body]
def decrypt_obj_dict(self, req, obj_dict):
if 'hash' in obj_dict:
# each object's etag may have been encrypted with a different key
# so fetch keys based on its crypto meta
ciphertext, crypto_meta = extract_crypto_meta(obj_dict['hash'])
bad_keys = set()
if crypto_meta:
try:
self.crypto.check_crypto_meta(crypto_meta)
keys = self.get_decryption_keys(req, crypto_meta)
# Note that symlinks (for example) may put swift paths in
# the listing ETag, so we can't just use ASCII.
obj_dict['hash'] = self.decrypt_value(
ciphertext, keys['container'], crypto_meta,
decoder=lambda x: x.decode('utf-8'))
except EncryptionException as err:
if not isinstance(err, UnknownSecretIdError) or \
err.args[0] not in bad_keys:
# Only warn about an unknown key once per listing
self.logger.error(
"Error decrypting container listing: %s",
err)
if isinstance(err, UnknownSecretIdError):
bad_keys.add(err.args[0])
obj_dict['hash'] = '<unknown>'
return obj_dict
class Decrypter(object):
"""Middleware for decrypting data and user metadata."""
def __init__(self, app, conf):
self.app = app
self.logger = get_logger(conf, log_route="decrypter")
self.crypto = Crypto(conf)
def __call__(self, env, start_response):
req = Request(env)
try:
parts = req.split_path(3, 4, True)
is_cont_or_obj_req = True
except ValueError:
is_cont_or_obj_req = False
if not is_cont_or_obj_req:
return self.app(env, start_response)
if parts[3] and req.method in ('GET', 'HEAD'):
handler = DecrypterObjContext(self, self.logger).handle
elif parts[2] and req.method == 'GET':
handler = DecrypterContContext(self, self.logger).handle
else:
# url and/or request verb is not handled by decrypter
return self.app(env, start_response)
try:
return handler(req, start_response)
except HTTPException as err_resp:
return err_resp(env, start_response)
| swift-master | swift/common/middleware/crypto/decrypter.py |
# Copyright (c) 2015-2016 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import base64
import binascii
import json
import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
import six
from six.moves.urllib import parse as urlparse
from swift import gettext_ as _
from swift.common.exceptions import EncryptionException, UnknownSecretIdError
from swift.common.swob import HTTPInternalServerError
from swift.common.utils import get_logger
from swift.common.wsgi import WSGIContext
from cgi import parse_header
CRYPTO_KEY_CALLBACK = 'swift.callback.fetch_crypto_keys'
class Crypto(object):
"""
Used by middleware: Calls cryptography library
"""
cipher = 'AES_CTR_256'
# AES will accept several key sizes - we are using 256 bits i.e. 32 bytes
key_length = 32
iv_length = algorithms.AES.block_size // 8
def __init__(self, conf=None):
self.logger = get_logger(conf, log_route="crypto")
# memoize backend to avoid repeated iteration over entry points
self.backend = default_backend()
def create_encryption_ctxt(self, key, iv):
"""
Creates a crypto context for encrypting
:param key: 256-bit key
:param iv: 128-bit iv or nonce used for encryption
:raises ValueError: on invalid key or iv
:returns: an instance of an encryptor
"""
self.check_key(key)
engine = Cipher(algorithms.AES(key), modes.CTR(iv),
backend=self.backend)
return engine.encryptor()
def create_decryption_ctxt(self, key, iv, offset):
"""
Creates a crypto context for decrypting
:param key: 256-bit key
:param iv: 128-bit iv or nonce used for decryption
:param offset: offset into the message; used for range reads
:returns: an instance of a decryptor
"""
self.check_key(key)
if offset < 0:
raise ValueError('Offset must not be negative')
if offset:
# Adjust IV so that it is correct for decryption at offset.
# The CTR mode offset is incremented for every AES block and taken
# modulo 2^128.
offset_blocks, offset_in_block = divmod(offset, self.iv_length)
ivl = int(binascii.hexlify(iv), 16) + offset_blocks
ivl %= 1 << algorithms.AES.block_size
iv = bytes(bytearray.fromhex(format(
ivl, '0%dx' % (2 * self.iv_length))))
else:
offset_in_block = 0
engine = Cipher(algorithms.AES(key), modes.CTR(iv),
backend=self.backend)
dec = engine.decryptor()
# Adjust decryption boundary within current AES block
dec.update(b'*' * offset_in_block)
return dec
def create_iv(self):
return os.urandom(self.iv_length)
def create_crypto_meta(self):
# create a set of parameters
return {'iv': self.create_iv(), 'cipher': self.cipher}
def check_crypto_meta(self, meta):
"""
Check that crypto meta dict has valid items.
:param meta: a dict
:raises EncryptionException: if an error is found in the crypto meta
"""
try:
if meta['cipher'] != self.cipher:
raise EncryptionException('Bad crypto meta: Cipher must be %s'
% self.cipher)
if len(meta['iv']) != self.iv_length:
raise EncryptionException(
'Bad crypto meta: IV must be length %s bytes'
% self.iv_length)
except KeyError as err:
raise EncryptionException(
'Bad crypto meta: Missing %s' % err)
def create_random_key(self):
# helper method to create random key of correct length
return os.urandom(self.key_length)
def wrap_key(self, wrapping_key, key_to_wrap):
# we don't use an RFC 3394 key wrap algorithm such as cryptography's
# aes_wrap_key because it's slower and we have iv material readily
# available so don't need a deterministic algorithm
iv = self.create_iv()
encryptor = Cipher(algorithms.AES(wrapping_key), modes.CTR(iv),
backend=self.backend).encryptor()
return {'key': encryptor.update(key_to_wrap), 'iv': iv}
def unwrap_key(self, wrapping_key, context):
# unwrap a key from dict of form returned by wrap_key
# check the key length early - unwrapping won't change the length
self.check_key(context['key'])
decryptor = Cipher(algorithms.AES(wrapping_key),
modes.CTR(context['iv']),
backend=self.backend).decryptor()
return decryptor.update(context['key'])
def check_key(self, key):
if len(key) != self.key_length:
raise ValueError("Key must be length %s bytes" % self.key_length)
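# Minimal standalone sketch (not part of the middleware) of the AES-CTR round
# trip implemented by the Crypto class above, including decryption from a
# non-zero offset as used when serving ranged GETs. Like the rest of this
# module it assumes the 'cryptography' package is installed.
def _example_crypto_round_trip():
    crypto = Crypto({})
    key = crypto.create_random_key()
    meta = crypto.create_crypto_meta()  # {'iv': ..., 'cipher': 'AES_CTR_256'}

    plaintext = b'0123456789abcdef' * 4
    enc = crypto.create_encryption_ctxt(key, meta['iv'])
    ciphertext = enc.update(plaintext)

    # decrypt the whole body
    dec = crypto.create_decryption_ctxt(key, meta['iv'], 0)
    assert dec.update(ciphertext) == plaintext

    # decrypt starting part-way through the body, as for a ranged GET
    offset = 21
    dec = crypto.create_decryption_ctxt(key, meta['iv'], offset)
    assert dec.update(ciphertext[offset:]) == plaintext[offset:]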
class CryptoWSGIContext(WSGIContext):
"""
Base class for contexts used by crypto middlewares.
"""
def __init__(self, crypto_app, server_type, logger):
super(CryptoWSGIContext, self).__init__(crypto_app.app)
self.crypto = crypto_app.crypto
self.logger = logger
self.server_type = server_type
def get_keys(self, env, required=None, key_id=None):
# Get the key(s) from the keymaster
required = required if required is not None else [self.server_type]
try:
fetch_crypto_keys = env[CRYPTO_KEY_CALLBACK]
except KeyError:
self.logger.exception(_('ERROR get_keys() missing callback'))
raise HTTPInternalServerError(
"Unable to retrieve encryption keys.")
err = None
try:
keys = fetch_crypto_keys(key_id=key_id)
except UnknownSecretIdError as err:
self.logger.error('get_keys(): unknown key id: %s', err)
raise
except Exception as err: # noqa
self.logger.exception('get_keys(): from callback: %s', err)
raise HTTPInternalServerError(
"Unable to retrieve encryption keys.")
for name in required:
try:
key = keys[name]
self.crypto.check_key(key)
continue
except KeyError:
self.logger.exception(_("Missing key for %r") % name)
except TypeError:
self.logger.exception(_("Did not get a keys dict"))
except ValueError as e:
# don't include the key in any messages!
self.logger.exception(_("Bad key for %(name)r: %(err)s") %
{'name': name, 'err': e})
raise HTTPInternalServerError(
"Unable to retrieve encryption keys.")
return keys
def get_multiple_keys(self, env):
# get a list of keys from the keymaster containing one dict of keys for
# each of the keymaster root secret ids
keys = [self.get_keys(env)]
active_key_id = keys[0]['id']
for other_key_id in keys[0].get('all_ids', []):
if other_key_id == active_key_id:
continue
keys.append(self.get_keys(env, key_id=other_key_id))
return keys
def dump_crypto_meta(crypto_meta):
"""
Serialize crypto meta to a form suitable for including in a header value.
The crypto-meta is serialized as a json object. The iv and key values are
random bytes and as a result need to be base64 encoded before sending over
    the wire. Base64 encoding returns a bytes object in py3; to future-proof
    the code, decode this data to produce a string, which is what the
    json.dumps function expects.
:param crypto_meta: a dict containing crypto meta items
:returns: a string serialization of a crypto meta dict
"""
def b64_encode_meta(crypto_meta):
return {
name: (base64.b64encode(value).decode() if name in ('iv', 'key')
else b64_encode_meta(value) if isinstance(value, dict)
else value)
for name, value in crypto_meta.items()}
# use sort_keys=True to make serialized form predictable for testing
return urlparse.quote_plus(
json.dumps(b64_encode_meta(crypto_meta), sort_keys=True))
def load_crypto_meta(value, b64decode=True):
"""
Build the crypto_meta from the json object.
Note that json.loads always produces unicode strings; to ensure the
resultant crypto_meta matches the original object:
* cast all keys to str (effectively a no-op on py3),
* base64 decode 'key' and 'iv' values to bytes, and
* encode remaining string values as UTF-8 on py2 (while leaving them
as native unicode strings on py3).
:param value: a string serialization of a crypto meta dict
:param b64decode: decode the 'key' and 'iv' values to bytes, default True
:returns: a dict containing crypto meta items
:raises EncryptionException: if an error occurs while parsing the
crypto meta
"""
def b64_decode_meta(crypto_meta):
return {
str(name): (
base64.b64decode(val) if name in ('iv', 'key') and b64decode
else b64_decode_meta(val) if isinstance(val, dict)
else val.encode('utf8') if six.PY2 else val)
for name, val in crypto_meta.items()}
try:
if not isinstance(value, six.string_types):
raise ValueError('crypto meta not a string')
val = json.loads(urlparse.unquote_plus(value))
if not isinstance(val, dict):
raise ValueError('crypto meta not a Mapping')
return b64_decode_meta(val)
except (KeyError, ValueError, TypeError) as err:
msg = 'Bad crypto meta %r: %s' % (value, err)
raise EncryptionException(msg)
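# Illustrative sketch (comments only): dump_crypto_meta() and
# load_crypto_meta() round-trip a crypto meta dict through a header-safe,
# url-quoted JSON string. The values below are made up for illustration.
#
#   meta = {'iv': b'0123456789abcdef', 'cipher': 'AES_CTR_256'}
#   serialized = dump_crypto_meta(meta)   # quoted JSON, iv base64-encoded
#   assert load_crypto_meta(serialized) == meta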
def append_crypto_meta(value, crypto_meta):
"""
Serialize and append crypto metadata to an encrypted value.
:param value: value to which serialized crypto meta will be appended.
:param crypto_meta: a dict of crypto meta
:return: a string of the form <value>; swift_meta=<serialized crypto meta>
"""
if not isinstance(value, str):
raise ValueError
return '%s; swift_meta=%s' % (value, dump_crypto_meta(crypto_meta))
def extract_crypto_meta(value):
"""
Extract and deserialize any crypto meta from the end of a value.
:param value: string that may have crypto meta at end
:return: a tuple of the form:
(<value without crypto meta>, <deserialized crypto meta> or None)
"""
swift_meta = None
value, meta = parse_header(value)
if 'swift_meta' in meta:
swift_meta = load_crypto_meta(meta['swift_meta'])
return value, swift_meta
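# Illustrative sketch (comments only): append_crypto_meta() and
# extract_crypto_meta() are inverses for header-style values. The value and
# meta below are made up for illustration.
#
#   meta = {'iv': b'0123456789abcdef', 'cipher': 'AES_CTR_256'}
#   value = append_crypto_meta('plain-etag', meta)
#   # value looks like 'plain-etag; swift_meta=%7B...%7D'
#   stripped, found_meta = extract_crypto_meta(value)
#   # stripped == 'plain-etag' and found_meta == meta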
| swift-master | swift/common/middleware/crypto/crypto_utils.py |
| swift-master | swift/common/middleware/x_profile/__init__.py |
# Copyright (c) 2010-2012 OpenStack, LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import random
import re
import string
import tempfile
from swift import gettext_ as _
from swift.common.middleware.x_profile.exceptions import PLOTLIBNotInstalled
from swift.common.middleware.x_profile.exceptions import ODFLIBNotInstalled
from swift.common.middleware.x_profile.exceptions import NotFoundException
from swift.common.middleware.x_profile.exceptions import MethodNotAllowed
from swift.common.middleware.x_profile.exceptions import DataLoadFailure
from swift.common.middleware.x_profile.exceptions import ProfileException
from swift.common.middleware.x_profile.profile_model import Stats2
from swift.common.request_helpers import html_escape
PLOTLIB_INSTALLED = True
try:
import matplotlib
    # use agg backend for writing to file, not for rendering in a window.
    # otherwise some platforms will complain "no display name and no
    # $DISPLAY environment variable"
matplotlib.use('agg')
import matplotlib.pyplot as plt
except ImportError:
PLOTLIB_INSTALLED = False
empty_description = """
The default profile of the current process or the profile you requested is
empty. <input type="submit" name="refresh" value="Refresh"/>
"""
profile_tmpl = """
<select name="profile">
<option value="current">current</option>
<option value="all">all</option>
${profile_list}
</select>
"""
sort_tmpl = """
<select name="sort">
<option value="time">time</option>
<option value="cumulative">cumulative</option>
<option value="calls">calls</option>
<option value="pcalls">pcalls</option>
<option value="name">name</option>
<option value="file">file</option>
<option value="module">module</option>
<option value="line">line</option>
<option value="nfl">nfl</option>
<option value="stdname">stdname</option>
</select>
"""
limit_tmpl = """
<select name="limit">
<option value="-1">all</option>
<option value="0.1">10%</option>
<option value="0.2">20%</option>
<option value="0.3">30%</option>
<option value="10">10</option>
<option value="20">20</option>
<option value="30">30</option>
<option value="50">50</option>
<option value="100">100</option>
<option value="200">200</option>
<option value="300">300</option>
<option value="400">400</option>
<option value="500">500</option>
</select>
"""
fulldirs_tmpl = """
<input type="checkbox" name="fulldirs" value="1"
${fulldir_checked}/>
"""
mode_tmpl = """
<select name="mode">
<option value="stats">stats</option>
<option value="callees">callees</option>
<option value="callers">callers</option>
</select>
"""
nfl_filter_tmpl = """
<input type="text" name="nfl_filter" value="${nfl_filter}"
placeholder="filename part" />
"""
formelements_tmpl = """
<div>
<table>
<tr>
<td>
<strong>Profile</strong>
<td>
<strong>Sort</strong>
</td>
<td>
<strong>Limit</strong>
</td>
<td>
<strong>Full Path</strong>
</td>
<td>
<strong>Filter</strong>
</td>
<td>
</td>
<td>
<strong>Plot Metric</strong>
</td>
<td>
<strong>Plot Type</strong>
<td>
</td>
<td>
<strong>Format</strong>
</td>
<td>
<td>
</td>
<td>
</td>
</tr>
<tr>
<td>
${profile}
<td>
${sort}
</td>
<td>
${limit}
</td>
<td>
${fulldirs}
</td>
<td>
${nfl_filter}
</td>
<td>
<input type="submit" name="query" value="query"/>
</td>
<td>
<select name='metric'>
<option value='nc'>call count</option>
<option value='cc'>primitive call count</option>
<option value='tt'>total time</option>
<option value='ct'>cumulative time</option>
</select>
</td>
<td>
<select name='plottype'>
<option value='bar'>bar</option>
<option value='pie'>pie</option>
</select>
<td>
<input type="submit" name="plot" value="plot"/>
</td>
<td>
<select name='format'>
<option value='default'>binary</option>
<option value='json'>json</option>
<option value='csv'>csv</option>
<option value='ods'>ODF.ods</option>
</select>
</td>
<td>
<input type="submit" name="download" value="download"/>
</td>
<td>
<input type="submit" name="clear" value="clear"/>
</td>
</tr>
</table>
</div>
"""
index_tmpl = """
<html>
<head>
<title>profile results</title>
<style>
<!--
tr.normal { background-color: #ffffff }
tr.hover { background-color: #88eeee }
//-->
</style>
</head>
<body>
<form action="${action}" method="POST">
<div class="form-text">
${description}
</div>
<hr />
${formelements}
</form>
<pre>
${profilehtml}
</pre>
</body>
</html>
"""
class HTMLViewer(object):
format_dict = {'default': 'application/octet-stream',
'json': 'application/json',
'csv': 'text/csv',
'ods': 'application/vnd.oasis.opendocument.spreadsheet',
'python': 'text/html'}
def __init__(self, app_path, profile_module, profile_log):
self.app_path = app_path
self.profile_module = profile_module
self.profile_log = profile_log
def _get_param(self, query_dict, key, default=None, multiple=False):
value = query_dict.get(key, default)
if value is None or value == '':
return default
if multiple:
return value
if isinstance(value, list):
return eval(value[0]) if isinstance(default, int) else value[0]
else:
return value
def render(self, url, method, path_entry, query_dict, clear_callback):
plot = self._get_param(query_dict, 'plot', None)
download = self._get_param(query_dict, 'download', None)
clear = self._get_param(query_dict, 'clear', None)
action = plot or download or clear
profile_id = self._get_param(query_dict, 'profile', 'current')
sort = self._get_param(query_dict, 'sort', 'time')
limit = self._get_param(query_dict, 'limit', -1)
fulldirs = self._get_param(query_dict, 'fulldirs', 0)
nfl_filter = self._get_param(query_dict, 'nfl_filter', '').strip()
metric_selected = self._get_param(query_dict, 'metric', 'cc')
plot_type = self._get_param(query_dict, 'plottype', 'bar')
download_format = self._get_param(query_dict, 'format', 'default')
content = ''
# GET /__profile, POST /__profile
if len(path_entry) == 2 and method in ['GET', 'POST']:
log_files = self.profile_log.get_logfiles(profile_id)
if action == 'plot':
content, headers = self.plot(log_files, sort, limit,
nfl_filter, metric_selected,
plot_type)
elif action == 'download':
content, headers = self.download(log_files, sort, limit,
nfl_filter, download_format)
else:
if action == 'clear':
self.profile_log.clear(profile_id)
clear_callback and clear_callback()
content, headers = self.index_page(log_files, sort, limit,
fulldirs, nfl_filter,
profile_id, url)
# GET /__profile__/all
# GET /__profile__/current
# GET /__profile__/profile_id
# GET /__profile__/profile_id/
# GET /__profile__/profile_id/account.py:50(GETorHEAD)
# GET /__profile__/profile_id/swift/proxy/controllers
# /account.py:50(GETorHEAD)
# with QUERY_STRING: ?format=[default|json|csv|ods]
elif len(path_entry) > 2 and method == 'GET':
profile_id = path_entry[2]
log_files = self.profile_log.get_logfiles(profile_id)
pids = self.profile_log.get_all_pids()
# return all profiles in a json format by default.
# GET /__profile__/
if profile_id == '':
content = '{"profile_ids": ["' + '","'.join(pids) + '"]}'
headers = [('content-type', self.format_dict['json'])]
else:
if len(path_entry) > 3 and path_entry[3] != '':
nfl_filter = '/'.join(path_entry[3:])
if path_entry[-1].find(':0') == -1:
nfl_filter = '/' + nfl_filter
content, headers = self.download(log_files, sort, -1,
nfl_filter, download_format)
headers.append(('Access-Control-Allow-Origin', '*'))
else:
raise MethodNotAllowed(_('method %s is not allowed.') % method)
return content, headers
def index_page(self, log_files=None, sort='time', limit=-1,
fulldirs=0, nfl_filter='', profile_id='current', url='#'):
headers = [('content-type', 'text/html')]
if len(log_files) == 0:
return empty_description, headers
try:
stats = Stats2(*log_files)
except (IOError, ValueError):
raise DataLoadFailure(_('Can not load profile data from %s.')
% log_files)
if not fulldirs:
stats.strip_dirs()
stats.sort_stats(sort)
nfl_filter_esc = nfl_filter.replace(r'(', r'\(').replace(r')', r'\)')
amount = [nfl_filter_esc, limit] if nfl_filter_esc else [limit]
profile_html = self.generate_stats_html(stats, self.app_path,
profile_id, *amount)
description = "Profiling information is generated by using\
'%s' profiler." % self.profile_module
sort_repl = '<option value="%s">' % sort
sort_selected = '<option value="%s" selected>' % sort
sort = sort_tmpl.replace(sort_repl, sort_selected)
plist = ''.join(['<option value="%s">%s</option>' % (p, p)
for p in self.profile_log.get_all_pids()])
profile_element = string.Template(profile_tmpl).substitute(
{'profile_list': plist})
profile_repl = '<option value="%s">' % profile_id
profile_selected = '<option value="%s" selected>' % profile_id
profile_element = profile_element.replace(profile_repl,
profile_selected)
limit_repl = '<option value="%s">' % limit
limit_selected = '<option value="%s" selected>' % limit
limit = limit_tmpl.replace(limit_repl, limit_selected)
fulldirs_checked = 'checked' if fulldirs else ''
fulldirs_element = string.Template(fulldirs_tmpl).substitute(
{'fulldir_checked': fulldirs_checked})
nfl_filter_element = string.Template(nfl_filter_tmpl).\
substitute({'nfl_filter': nfl_filter})
form_elements = string.Template(formelements_tmpl).substitute(
{'description': description,
'action': url,
'profile': profile_element,
'sort': sort,
'limit': limit,
'fulldirs': fulldirs_element,
'nfl_filter': nfl_filter_element,
}
)
content = string.Template(index_tmpl).substitute(
{'formelements': form_elements,
'action': url,
'description': description,
'profilehtml': profile_html,
})
return content, headers
def download(self, log_files, sort='time', limit=-1, nfl_filter='',
output_format='default'):
if len(log_files) == 0:
raise NotFoundException(_('no log file found'))
try:
nfl_esc = nfl_filter.replace(r'(', r'\(').replace(r')', r'\)')
            # remove the leading slash that is intentionally added in the URL
            # to avoid failures when filtering stats data.
if nfl_esc.startswith('/'):
nfl_esc = nfl_esc[1:]
stats = Stats2(*log_files)
stats.sort_stats(sort)
if output_format == 'python':
data = self.format_source_code(nfl_filter)
elif output_format == 'json':
data = stats.to_json(nfl_esc, limit)
elif output_format == 'csv':
data = stats.to_csv(nfl_esc, limit)
elif output_format == 'ods':
data = stats.to_ods(nfl_esc, limit)
else:
data = stats.print_stats()
return data, [('content-type', self.format_dict[output_format])]
except ODFLIBNotInstalled:
raise
except Exception as ex:
raise ProfileException(_('Data download error: %s') % ex)
def plot(self, log_files, sort='time', limit=10, nfl_filter='',
metric_selected='cc', plot_type='bar'):
if not PLOTLIB_INSTALLED:
raise PLOTLIBNotInstalled(_('python-matplotlib not installed.'))
if len(log_files) == 0:
raise NotFoundException(_('no log file found'))
try:
stats = Stats2(*log_files)
stats.sort_stats(sort)
stats_dict = stats.stats
__, func_list = stats.get_print_list([nfl_filter, limit])
nfls = []
performance = []
names = {'nc': 'Total Call Count', 'cc': 'Primitive Call Count',
'tt': 'Total Time', 'ct': 'Cumulative Time'}
for func in func_list:
cc, nc, tt, ct, __ = stats_dict[func]
metric = {'cc': cc, 'nc': nc, 'tt': tt, 'ct': ct}
nfls.append(func[2])
performance.append(metric[metric_selected])
y_pos = range(len(nfls))
error = [random.random() for _unused in y_pos]
plt.clf()
if plot_type == 'pie':
plt.pie(x=performance, explode=None, labels=nfls,
autopct='%1.1f%%')
else:
plt.barh(y_pos, performance, xerr=error, align='center',
alpha=0.4)
plt.yticks(y_pos, nfls)
plt.xlabel(names[metric_selected])
plt.title('Profile Statistics (by %s)' % names[metric_selected])
# plt.gcf().tight_layout(pad=1.2)
with tempfile.TemporaryFile() as profile_img:
plt.savefig(profile_img, format='png', dpi=300)
profile_img.seek(0)
data = profile_img.read()
            return data, [('content-type', 'image/png')]
except Exception as ex:
raise ProfileException(_('plotting results failed due to %s') % ex)
def format_source_code(self, nfl):
nfls = re.split('[:()]', nfl)
file_path = nfls[0]
try:
lineno = int(nfls[1])
except (TypeError, ValueError, IndexError):
lineno = 0
        # for security reasons, this needs to be fixed.
        if not file_path.endswith('.py'):
            return _('This file type is forbidden to access!')
try:
data = []
i = 0
with open(file_path) as f:
lines = f.readlines()
max_width = str(len(str(len(lines))))
fmt = '<span id="L%d" rel="#L%d">%' + max_width\
+ 'd|<code>%s</code></span>'
for line in lines:
el = html_escape(line)
i = i + 1
if i == lineno:
fmt2 = '<span id="L%d" style="background-color: \
rgb(127,255,127)">%' + max_width +\
'd|<code>%s</code></span>'
data.append(fmt2 % (i, i, el))
else:
data.append(fmt % (i, i, i, el))
data = ''.join(data)
except Exception:
return _('Can not access the file %s.') % file_path
return '<pre>%s</pre>' % data
def generate_stats_html(self, stats, app_path, profile_id, *selection):
html = []
for filename in stats.files:
html.append('<p>%s</p>' % filename)
try:
for func in stats.top_level:
html.append('<p>%s</p>' % func[2])
html.append('%s function calls' % stats.total_calls)
if stats.total_calls != stats.prim_calls:
html.append("(%d primitive calls)" % stats.prim_calls)
html.append('in %.3f seconds' % stats.total_tt)
if stats.fcn_list:
stat_list = stats.fcn_list[:]
msg = "<p>Ordered by: %s</p>" % stats.sort_type
else:
stat_list = stats.stats.keys()
msg = '<p>Random listing order was used</p>'
for sel in selection:
stat_list, msg = stats.eval_print_amount(sel, stat_list, msg)
html.append(msg)
html.append('<table style="border-width: 1px">')
if stat_list:
html.append('<tr><th>#</th><th>Call Count</th>\
<th>Total Time</th><th>Time/Call</th>\
<th>Cumulative Time</th>\
<th>Cumulative Time/Call</th>\
<th>Filename:Lineno(Function)</th>\
<th>JSON</th>\
</tr>')
count = 0
for func in stat_list:
count = count + 1
html.append('<tr onMouseOver="this.className=\'hover\'"\
onMouseOut="this.className=\'normal\'">\
<td>%d)</td>' % count)
cc, nc, tt, ct, __ = stats.stats[func]
c = str(nc)
if nc != cc:
c = c + '/' + str(cc)
html.append('<td>%s</td>' % c)
html.append('<td>%f</td>' % tt)
if nc == 0:
html.append('<td>-</td>')
else:
html.append('<td>%f</td>' % (float(tt) / nc))
html.append('<td>%f</td>' % ct)
if cc == 0:
html.append('<td>-</td>')
else:
html.append('<td>%f</td>' % (float(ct) / cc))
nfls = html_escape(stats.func_std_string(func))
if nfls.split(':')[0] not in ['', 'profile'] and\
os.path.isfile(nfls.split(':')[0]):
html.append('<td><a href="%s/%s%s?format=python#L%d">\
%s</a></td>' % (app_path, profile_id,
nfls, func[1], nfls))
else:
html.append('<td>%s</td>' % nfls)
if not nfls.startswith('/'):
nfls = '/' + nfls
html.append('<td><a href="%s/%s%s?format=json">\
--></a></td></tr>' % (app_path,
profile_id, nfls))
except Exception as ex:
html.append("Exception:" + str(ex))
return ''.join(html)
| swift-master | swift/common/middleware/x_profile/html_viewer.py |
# Copyright (c) 2010-2012 OpenStack, LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import glob
import json
import os
import pstats
import tempfile
import time
from swift import gettext_ as _
from swift.common.middleware.x_profile.exceptions import ODFLIBNotInstalled
ODFLIB_INSTALLED = True
try:
from odf.opendocument import OpenDocumentSpreadsheet
from odf.table import Table, TableRow, TableCell
from odf.text import P
except ImportError:
ODFLIB_INSTALLED = False
class Stats2(pstats.Stats):
def __init__(self, *args, **kwds):
pstats.Stats.__init__(self, *args, **kwds)
def func_to_dict(self, func):
return {'module': func[0], 'line': func[1], 'function': func[2]}
def func_std_string(self, func):
return pstats.func_std_string(func)
def to_json(self, *selection):
d = dict()
d['files'] = [f for f in self.files]
d['prim_calls'] = (self.prim_calls)
d['total_calls'] = (self.total_calls)
if hasattr(self, 'sort_type'):
d['sort_type'] = self.sort_type
else:
d['sort_type'] = 'random'
d['total_tt'] = (self.total_tt)
if self.fcn_list:
stat_list = self.fcn_list[:]
else:
stat_list = self.stats.keys()
for s in selection:
stat_list, __ = self.eval_print_amount(s, stat_list, '')
self.calc_callees()
function_calls = []
for func in stat_list:
cc, nc, tt, ct, callers = self.stats[func]
fdict = dict()
fdict.update(self.func_to_dict(func))
fdict.update({'cc': (cc), 'nc': (nc), 'tt': (tt),
'ct': (ct)})
if self.all_callees:
fdict.update({'callees': []})
for key in self.all_callees[func]:
cee = self.func_to_dict(key)
metric = self.all_callees[func][key]
                    # FIXME: the eventlet profiler doesn't provide the full
                    # set of metrics
if type(metric) is tuple:
cc1, nc1, tt1, ct1 = metric
cee.update({'cc': cc1, 'nc': nc1, 'tt': tt1,
'ct': ct1})
else:
cee['nc'] = metric
fdict['callees'].append(cee)
cer = []
for caller in callers:
fd = self.func_to_dict(caller)
metric2 = callers[caller]
if isinstance(metric2, tuple):
cc2, nc2, tt2, ct2 = metric2
fd.update({'cc': cc2, 'nc': nc2, 'tt': tt2, 'ct': ct2})
else:
fd.update({'nc': metric2})
cer.append(fd)
fdict.update({'callers': cer})
function_calls.append(fdict)
d['stats'] = function_calls
return json.dumps(d, indent=2)
def to_csv(self, *selection):
if self.fcn_list:
stat_list = self.fcn_list[:]
order_text = "Ordered by: " + self.sort_type + '\r\n'
else:
stat_list = self.stats.keys()
order_text = "Random listing order was used\r\n"
for s in selection:
stat_list, __ = self.eval_print_amount(s, stat_list, '')
csv = '%d function calls (%d primitive calls) in %.6f seconds.' % (
self.total_calls, self.prim_calls, self.total_tt)
csv = csv + order_text + 'call count(nc), primitive call count(cc), \
total time(tt), time per call, \
cumulative time(ct), time per call, \
function\r\n'
for func in stat_list:
cc, nc, tt, ct, __ = self.stats[func]
tpc = '' if nc == 0 else '%3f' % (tt / nc)
cpc = '' if cc == 0 else '%3f' % (ct / cc)
fn = '%s:%d(%s)' % (func[0], func[1], func[2])
csv = csv + '%d,%d,%3f,%s,%3f,%s,%s\r\n' % (
nc, cc, tt, tpc, ct, cpc, fn)
return csv
def to_ods(self, *selection):
if not ODFLIB_INSTALLED:
raise ODFLIBNotInstalled(_('odfpy not installed.'))
if self.fcn_list:
stat_list = self.fcn_list[:]
order_text = " Ordered by: " + self.sort_type + '\n'
else:
stat_list = self.stats.keys()
order_text = " Random listing order was used\n"
for s in selection:
stat_list, __ = self.eval_print_amount(s, stat_list, '')
spreadsheet = OpenDocumentSpreadsheet()
table = Table(name="Profile")
for fn in self.files:
tcf = TableCell()
tcf.addElement(P(text=fn))
trf = TableRow()
trf.addElement(tcf)
table.addElement(trf)
tc_summary = TableCell()
summary_text = '%d function calls (%d primitive calls) in %.6f \
seconds' % (self.total_calls, self.prim_calls,
self.total_tt)
tc_summary.addElement(P(text=summary_text))
tr_summary = TableRow()
tr_summary.addElement(tc_summary)
table.addElement(tr_summary)
tc_order = TableCell()
tc_order.addElement(P(text=order_text))
tr_order = TableRow()
tr_order.addElement(tc_order)
table.addElement(tr_order)
tr_header = TableRow()
tc_cc = TableCell()
tc_cc.addElement(P(text='Total Call Count'))
tr_header.addElement(tc_cc)
tc_pc = TableCell()
tc_pc.addElement(P(text='Primitive Call Count'))
tr_header.addElement(tc_pc)
tc_tt = TableCell()
tc_tt.addElement(P(text='Total Time(seconds)'))
tr_header.addElement(tc_tt)
tc_pc = TableCell()
tc_pc.addElement(P(text='Time Per call(seconds)'))
tr_header.addElement(tc_pc)
tc_ct = TableCell()
tc_ct.addElement(P(text='Cumulative Time(seconds)'))
tr_header.addElement(tc_ct)
tc_pt = TableCell()
tc_pt.addElement(P(text='Cumulative Time per call(seconds)'))
tr_header.addElement(tc_pt)
tc_nfl = TableCell()
tc_nfl.addElement(P(text='filename:lineno(function)'))
tr_header.addElement(tc_nfl)
table.addElement(tr_header)
for func in stat_list:
cc, nc, tt, ct, __ = self.stats[func]
tr_header = TableRow()
tc_nc = TableCell()
tc_nc.addElement(P(text=nc))
tr_header.addElement(tc_nc)
tc_pc = TableCell()
tc_pc.addElement(P(text=cc))
tr_header.addElement(tc_pc)
tc_tt = TableCell()
tc_tt.addElement(P(text=tt))
tr_header.addElement(tc_tt)
tc_tpc = TableCell()
tc_tpc.addElement(P(text=(None if nc == 0 else float(tt) / nc)))
tr_header.addElement(tc_tpc)
tc_ct = TableCell()
tc_ct.addElement(P(text=ct))
tr_header.addElement(tc_ct)
tc_tpt = TableCell()
tc_tpt.addElement(P(text=(None if cc == 0 else float(ct) / cc)))
tr_header.addElement(tc_tpt)
tc_nfl = TableCell()
tc_nfl.addElement(P(text=func))
tr_header.addElement(tc_nfl)
table.addElement(tr_header)
spreadsheet.spreadsheet.addElement(table)
with tempfile.TemporaryFile() as tmp_ods:
spreadsheet.write(tmp_ods)
tmp_ods.seek(0)
data = tmp_ods.read()
return data
class ProfileLog(object):
def __init__(self, log_filename_prefix, dump_timestamp):
self.log_filename_prefix = log_filename_prefix
self.dump_timestamp = dump_timestamp
def get_all_pids(self):
profile_ids = [l.replace(self.log_filename_prefix, '') for l
in glob.glob(self.log_filename_prefix + '*')
if not l.endswith('.tmp')]
return sorted(profile_ids, reverse=True)
def get_logfiles(self, id_or_name):
# The first file with timestamp in the sorted log_files
# (PREFIX)(PROCESS_ID)-(TIMESTAMP)
if id_or_name in ['all']:
if self.dump_timestamp:
latest_dict = {}
for pid in self.get_all_pids():
[process_id, __] = pid.split('-')
if process_id not in latest_dict.keys():
latest_dict[process_id] = self.log_filename_prefix +\
pid
log_files = latest_dict.values()
else:
log_files = [l for l in glob.glob(self.log_filename_prefix
+ '*') if not l.endswith('.tmp')]
else:
pid = str(os.getpid()) if id_or_name in [None, '', 'current']\
else id_or_name
log_files = [l for l in glob.glob(self.log_filename_prefix +
pid + '*') if not l.endswith('.tmp')]
if len(log_files) > 0:
log_files = sorted(log_files, reverse=True)[0:1]
return log_files
def dump_profile(self, profiler, pid):
if self.log_filename_prefix:
pfn = self.log_filename_prefix + str(pid)
if self.dump_timestamp:
pfn = pfn + "-" + str(time.time())
tmpfn = pfn + ".tmp"
profiler.dump_stats(tmpfn)
os.rename(tmpfn, pfn)
return pfn
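    # Illustrative sketch: with a hypothetical log_filename_prefix of
    # '/tmp/log/swift/profile/default.profile-' and dump_timestamp enabled,
    # dump_profile(profiler, 1234) would write the stats to something like
    # '/tmp/log/swift/profile/default.profile-1234-1440619048.0' (first to a
    # '.tmp' file that is then renamed), and get_logfiles('1234') would find
    # it again by prefix.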
def clear(self, id_or_name):
log_files = self.get_logfiles(id_or_name)
for l in log_files:
os.path.exists(l) and os.remove(l)
| swift-master | swift/common/middleware/x_profile/profile_model.py |
# Copyright (c) 2010-2012 OpenStack, LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from swift import gettext_ as _
class ProfileException(Exception):
def __init__(self, msg):
self.msg = msg
def __str__(self):
return _('Profiling Error: %s') % self.msg
class NotFoundException(ProfileException):
pass
class MethodNotAllowed(ProfileException):
pass
class ODFLIBNotInstalled(ProfileException):
pass
class PLOTLIBNotInstalled(ProfileException):
pass
class DataLoadFailure(ProfileException):
pass
| swift-master | swift/common/middleware/x_profile/exceptions.py |
# Copyright (c) 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
.. note::
    This middleware supports two legacy modes of object versioning that are
now replaced by a new mode. It is recommended to use the new
:ref:`Object Versioning <object_versioning>` mode for new containers.
Object versioning in swift is implemented by setting a flag on the container
to tell swift to version all objects in the container. The value of the flag is
the URL-encoded container name where the versions are stored (commonly referred
to as the "archive container"). The flag itself is one of two headers, which
determines how object ``DELETE`` requests are handled:
* ``X-History-Location``
On ``DELETE``, copy the current version of the object to the archive
container, write a zero-byte "delete marker" object that notes when the
delete took place, and delete the object from the versioned container. The
object will no longer appear in container listings for the versioned
container and future requests there will return ``404 Not Found``. However,
the content will still be recoverable from the archive container.
* ``X-Versions-Location``
On ``DELETE``, only remove the current version of the object. If any
previous versions exist in the archive container, the most recent one is
copied over the current version, and the copy in the archive container is
deleted. As a result, if you have 5 total versions of the object, you must
delete the object 5 times for that object name to start responding with
``404 Not Found``.
Either header may be used for the various containers within an account, but
only one may be set for any given container. Attempting to set both
simultaneously will result in a ``400 Bad Request`` response.
.. note::
It is recommended to use a different archive container for
each container that is being versioned.
.. note::
Enabling versioning on an archive container is not recommended.
When data is ``PUT`` into a versioned container (a container with the
versioning flag turned on), the existing version of the object is copied to a
new object in the archive container, and the data in the ``PUT`` request is
saved as the data for the versioned object. The new object name (for the
previous version) is ``<archive_container>/<length><object_name>/<timestamp>``,
where ``length`` is the 3-character zero-padded hexadecimal length of the
``<object_name>`` and ``<timestamp>`` is the timestamp of when the previous
version was created.
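For example, if an object named ``myobject`` (8 characters, hence a length
prefix of ``008``) is overwritten, and the previous version carried a
timestamp of ``1440619048.00000``, that previous version is archived as
``<archive_container>/008myobject/1440619048.00000``.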
A ``GET`` to a versioned object will return the current version of the object
without having to do any request redirects or metadata lookups.
A ``POST`` to a versioned object will update the object metadata as normal,
but will not create a new version of the object. In other words, new versions
are only created when the content of the object changes.
A ``DELETE`` to a versioned object will be handled in one of two ways,
as described above.
To restore a previous version of an object, find the desired version in the
archive container then issue a ``COPY`` with a ``Destination`` header
indicating the original location. This will archive the current version similar
to a ``PUT`` over the versioned object. If the client additionally wishes to
permanently delete what was the current version, it must find the newly-created
archive in the archive container and issue a separate ``DELETE`` to it.
--------------------------------------------------
How to Enable Object Versioning in a Swift Cluster
--------------------------------------------------
This middleware was written as an effort to refactor parts of the proxy server,
so this functionality was already available in previous releases and every
attempt was made to maintain backwards compatibility. To allow operators to
perform a seamless upgrade, it is not required to add the middleware to the
proxy pipeline, and the ``allow_versions`` flag in the container server
configuration files is still valid, but only when using
``X-Versions-Location``. In future releases, ``allow_versions`` will be
deprecated in favor of adding this middleware to the pipeline to enable or
disable the feature.
In case the middleware is added to the proxy pipeline, you must also
set ``allow_versioned_writes`` to ``True`` in the middleware options
to enable the information about this middleware to be returned in a /info
request.
.. note::
You need to add the middleware to the proxy pipeline and set
``allow_versioned_writes = True`` to use ``X-History-Location``. Setting
``allow_versions = True`` in the container server is not sufficient to
enable the use of ``X-History-Location``.
Upgrade considerations
++++++++++++++++++++++
If ``allow_versioned_writes`` is set in the filter configuration, you can leave
the ``allow_versions`` flag in the container server configuration files
untouched. If you decide to disable or remove the ``allow_versions`` flag, you
must re-set any existing containers that had the ``X-Versions-Location`` flag
configured so that it can now be tracked by the versioned_writes middleware.
Clients should not use the ``X-History-Location`` header until all proxies in
the cluster have been upgraded to a version of Swift that supports it.
Attempting to use ``X-History-Location`` during a rolling upgrade may result
in some requests being served by proxies running old code, leading to data
loss.
----------------------------------------------------
Examples Using ``curl`` with ``X-Versions-Location``
----------------------------------------------------
First, create a container with the ``X-Versions-Location`` header or add the
header to an existing container. Also make sure the container referenced by
the ``X-Versions-Location`` exists. In this example, the name of that
container is "versions"::
curl -i -XPUT -H "X-Auth-Token: <token>" \
-H "X-Versions-Location: versions" http://<storage_url>/container
curl -i -XPUT -H "X-Auth-Token: <token>" http://<storage_url>/versions
Create an object (the first version)::
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" \
http://<storage_url>/container/myobject
Now create a new version of that object::
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" \
http://<storage_url>/container/myobject
See a listing of the older versions of the object::
curl -i -H "X-Auth-Token: <token>" \
http://<storage_url>/versions?prefix=008myobject/
Now delete the current version of the object and see that the older version is
gone from 'versions' container and back in 'container' container::
curl -i -XDELETE -H "X-Auth-Token: <token>" \
http://<storage_url>/container/myobject
curl -i -H "X-Auth-Token: <token>" \
http://<storage_url>/versions?prefix=008myobject/
curl -i -XGET -H "X-Auth-Token: <token>" \
http://<storage_url>/container/myobject
---------------------------------------------------
Examples Using ``curl`` with ``X-History-Location``
---------------------------------------------------
As above, create a container with the ``X-History-Location`` header and ensure
that the container referenced by the ``X-History-Location`` exists. In this
example, the name of that container is "versions"::
curl -i -XPUT -H "X-Auth-Token: <token>" \
-H "X-History-Location: versions" http://<storage_url>/container
curl -i -XPUT -H "X-Auth-Token: <token>" http://<storage_url>/versions
Create an object (the first version)::
curl -i -XPUT --data-binary 1 -H "X-Auth-Token: <token>" \
http://<storage_url>/container/myobject
Now create a new version of that object::
curl -i -XPUT --data-binary 2 -H "X-Auth-Token: <token>" \
http://<storage_url>/container/myobject
Now delete the current version of the object. Subsequent requests will 404::
curl -i -XDELETE -H "X-Auth-Token: <token>" \
http://<storage_url>/container/myobject
curl -i -H "X-Auth-Token: <token>" \
http://<storage_url>/container/myobject
A listing of the older versions of the object will include both the first and
second versions of the object, as well as a "delete marker" object::
curl -i -H "X-Auth-Token: <token>" \
http://<storage_url>/versions?prefix=008myobject/
To restore a previous version, simply ``COPY`` it from the archive container::
curl -i -XCOPY -H "X-Auth-Token: <token>" \
http://<storage_url>/versions/008myobject/<timestamp> \
-H "Destination: container/myobject"
Note that the archive container still has all previous versions of the object,
including the source for the restore::
curl -i -H "X-Auth-Token: <token>" \
http://<storage_url>/versions?prefix=008myobject/
To permanently delete a previous version, ``DELETE`` it from the archive
container::
curl -i -XDELETE -H "X-Auth-Token: <token>" \
http://<storage_url>/versions/008myobject/<timestamp>
---------------------------------------------------
How to Disable Object Versioning in a Swift Cluster
---------------------------------------------------
If you want to disable all functionality, set ``allow_versioned_writes`` to
``False`` in the middleware options.
Disable versioning from a container (x is any value except empty)::
curl -i -XPOST -H "X-Auth-Token: <token>" \
-H "X-Remove-Versions-Location: x" http://<storage_url>/container
"""
import calendar
import json
import time
from swift.common.utils import get_logger, Timestamp, \
config_true_value, close_if_possible, FileLikeIter, drain_and_close
from swift.common.request_helpers import get_sys_meta_prefix, \
copy_header_subset
from swift.common.wsgi import WSGIContext, make_pre_authed_request
from swift.common.swob import (
Request, HTTPException, HTTPRequestEntityTooLarge)
from swift.common.constraints import check_container_format, MAX_FILE_SIZE
from swift.proxy.controllers.base import get_container_info
from swift.common.http import (
is_success, is_client_error, HTTP_NOT_FOUND)
from swift.common.swob import HTTPPreconditionFailed, HTTPServiceUnavailable, \
HTTPServerError, HTTPBadRequest, str_to_wsgi, bytes_to_wsgi, wsgi_quote, \
wsgi_unquote
from swift.common.exceptions import (
ListingIterNotFound, ListingIterError)
DELETE_MARKER_CONTENT_TYPE = 'application/x-deleted;swift_versions_deleted=1'
CLIENT_VERSIONS_LOC = 'x-versions-location'
CLIENT_HISTORY_LOC = 'x-history-location'
SYSMETA_VERSIONS_LOC = get_sys_meta_prefix('container') + 'versions-location'
SYSMETA_VERSIONS_MODE = get_sys_meta_prefix('container') + 'versions-mode'
class VersionedWritesContext(WSGIContext):
def __init__(self, wsgi_app, logger):
WSGIContext.__init__(self, wsgi_app)
self.logger = logger
def _listing_iter(self, account_name, lcontainer, lprefix, req):
try:
for page in self._listing_pages_iter(account_name, lcontainer,
lprefix, req):
for item in page:
yield item
except ListingIterNotFound:
pass
except ListingIterError:
raise HTTPServerError(request=req)
def _in_proxy_reverse_listing(self, account_name, lcontainer, lprefix,
req, failed_marker, failed_listing):
'''Get the complete prefix listing and reverse it on the proxy.
This is only necessary if we encounter a response from a
container-server that does not respect the ``reverse`` param
included by default in ``_listing_pages_iter``. This may happen
during rolling upgrades from pre-2.6.0 swift.
:param failed_marker: the marker that was used when we encountered
the non-reversed listing
:param failed_listing: the non-reversed listing that was encountered.
If ``failed_marker`` is blank, we can use this
to save ourselves a request
:returns: an iterator over all objects starting with ``lprefix`` (up
to but not including the failed marker) in reverse order
'''
complete_listing = []
if not failed_marker:
# We've never gotten a reversed listing. So save a request and
# use the failed listing.
complete_listing.extend(failed_listing)
marker = bytes_to_wsgi(complete_listing[-1]['name'].encode('utf8'))
else:
# We've gotten at least one reversed listing. Have to start at
# the beginning.
marker = ''
# First, take the *entire* prefix listing into memory
try:
for page in self._listing_pages_iter(
account_name, lcontainer, lprefix,
req, marker, end_marker=failed_marker, reverse=False):
complete_listing.extend(page)
except ListingIterNotFound:
pass
# Now that we've got everything, return the whole listing as one giant
# reversed page
return reversed(complete_listing)
def _listing_pages_iter(self, account_name, lcontainer, lprefix,
req, marker='', end_marker='', reverse=True):
'''Get "pages" worth of objects that start with a prefix.
The optional keyword arguments ``marker``, ``end_marker``, and
``reverse`` are used similar to how they are for containers. We're
either coming:
- directly from ``_listing_iter``, in which case none of the
optional args are specified, or
- from ``_in_proxy_reverse_listing``, in which case ``reverse``
is ``False`` and both ``marker`` and ``end_marker`` are specified
(although they may still be blank).
'''
while True:
lreq = make_pre_authed_request(
req.environ, method='GET', swift_source='VW',
path=wsgi_quote('/v1/%s/%s' % (account_name, lcontainer)))
lreq.environ['QUERY_STRING'] = \
'prefix=%s&marker=%s' % (wsgi_quote(lprefix),
wsgi_quote(marker))
if end_marker:
lreq.environ['QUERY_STRING'] += '&end_marker=%s' % (
wsgi_quote(end_marker))
if reverse:
lreq.environ['QUERY_STRING'] += '&reverse=on'
lresp = lreq.get_response(self.app)
if not is_success(lresp.status_int):
# errors should be short
drain_and_close(lresp)
if lresp.status_int == HTTP_NOT_FOUND:
raise ListingIterNotFound()
elif is_client_error(lresp.status_int):
raise HTTPPreconditionFailed(request=req)
else:
raise ListingIterError()
if not lresp.body:
break
sublisting = json.loads(lresp.body)
if not sublisting:
break
# When using the ``reverse`` param, check that the listing is
# actually reversed
first_item = bytes_to_wsgi(sublisting[0]['name'].encode('utf-8'))
last_item = bytes_to_wsgi(sublisting[-1]['name'].encode('utf-8'))
page_is_after_marker = marker and first_item > marker
if reverse and (first_item < last_item or page_is_after_marker):
# Apparently there's at least one pre-2.6.0 container server
yield self._in_proxy_reverse_listing(
account_name, lcontainer, lprefix,
req, marker, sublisting)
return
marker = last_item
yield sublisting
def _get_source_object(self, req, path_info):
        # make a pre_auth request in case the user has write access
        # to the container, but not READ access. This was allowed in previous
        # versions (i.e., before this middleware), so we keep the same
        # behavior here.
get_req = make_pre_authed_request(
req.environ, path=wsgi_quote(path_info) + '?symlink=get',
headers={'X-Newest': 'True'}, method='GET', swift_source='VW')
source_resp = get_req.get_response(self.app)
if source_resp.content_length is None or \
source_resp.content_length > MAX_FILE_SIZE:
# Consciously *don't* drain the response before closing;
# any logged 499 is actually rather appropriate here
close_if_possible(source_resp.app_iter)
return HTTPRequestEntityTooLarge(request=req)
return source_resp
def _put_versioned_obj(self, req, put_path_info, source_resp):
# Create a new Request object to PUT to the container, copying
# all headers from the source object apart from x-timestamp.
put_req = make_pre_authed_request(
req.environ, path=wsgi_quote(put_path_info), method='PUT',
swift_source='VW')
copy_header_subset(source_resp, put_req,
lambda k: k.lower() != 'x-timestamp')
slo_size = put_req.headers.get('X-Object-Sysmeta-Slo-Size')
if slo_size:
put_req.headers['Content-Type'] += '; swift_bytes=' + slo_size
put_req.environ['swift.content_type_overridden'] = True
put_req.environ['wsgi.input'] = FileLikeIter(source_resp.app_iter)
put_resp = put_req.get_response(self.app)
# the PUT was responsible for draining
close_if_possible(source_resp.app_iter)
return put_resp
def _check_response_error(self, req, resp):
"""
Raise Error Response in case of error
"""
if is_success(resp.status_int):
return
# any error should be short
drain_and_close(resp)
if is_client_error(resp.status_int):
# missing container or bad permissions
raise HTTPPreconditionFailed(request=req)
# could not version the data, bail
raise HTTPServiceUnavailable(request=req)
def _build_versions_object_prefix(self, object_name):
return '%03x%s/' % (
len(object_name),
object_name)
def _build_versions_object_name(self, object_name, ts):
return ''.join((
self._build_versions_object_prefix(object_name),
Timestamp(ts).internal))
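    # Illustrative sketch: for an object named 'myobject' the helpers above
    # produce the listing prefix '008myobject/' ('%03x' of the 8-character
    # name) and archive object names such as '008myobject/1440619048.00000'
    # (the prefix followed by Timestamp(ts).internal), matching the naming
    # scheme described in the module docstring.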
def _copy_current(self, req, versions_cont, api_version, account_name,
object_name):
# validate the write access to the versioned container before
# making any backend requests
if 'swift.authorize' in req.environ:
container_info = get_container_info(
req.environ, self.app, swift_source='VW')
req.acl = container_info.get('write_acl')
aresp = req.environ['swift.authorize'](req)
if aresp:
raise aresp
get_resp = self._get_source_object(req, req.path_info)
if get_resp.status_int == HTTP_NOT_FOUND:
# nothing to version, proceed with original request
drain_and_close(get_resp)
return
# check for any other errors
self._check_response_error(req, get_resp)
# if there's an existing object, then copy it to
# X-Versions-Location
ts_source = get_resp.headers.get(
'x-timestamp',
calendar.timegm(time.strptime(
get_resp.headers['last-modified'],
'%a, %d %b %Y %H:%M:%S GMT')))
vers_obj_name = self._build_versions_object_name(
object_name, ts_source)
put_path_info = "/%s/%s/%s/%s" % (
api_version, account_name, versions_cont, vers_obj_name)
req.environ['QUERY_STRING'] = ''
put_resp = self._put_versioned_obj(req, put_path_info, get_resp)
self._check_response_error(req, put_resp)
# successful PUT response should be short
drain_and_close(put_resp)
def handle_obj_versions_put(self, req, versions_cont, api_version,
account_name, object_name):
"""
Copy current version of object to versions_container before proceeding
with original request.
:param req: original request.
:param versions_cont: container where previous versions of the object
are stored.
:param api_version: api version.
:param account_name: account name.
:param object_name: name of object of original request
"""
self._copy_current(req, versions_cont, api_version, account_name,
object_name)
return self.app
def handle_obj_versions_delete_push(self, req, versions_cont, api_version,
account_name, container_name,
object_name):
"""
Handle DELETE requests when in history mode.
Copy current version of object to versions_container and write a
delete marker before proceeding with original request.
:param req: original request.
:param versions_cont: container where previous versions of the object
are stored.
:param api_version: api version.
        :param account_name: account name.
        :param container_name: container name.
        :param object_name: name of object of original request
"""
self._copy_current(req, versions_cont, api_version, account_name,
object_name)
marker_path = "/%s/%s/%s/%s" % (
api_version, account_name, versions_cont,
self._build_versions_object_name(object_name, time.time()))
marker_headers = {
# Definitive source of truth is Content-Type, and since we add
# a swift_* param, we know users haven't set it themselves.
# This is still open to users POSTing to update the content-type
# but they're just shooting themselves in the foot then.
'content-type': DELETE_MARKER_CONTENT_TYPE,
'content-length': '0',
'x-auth-token': req.headers.get('x-auth-token')}
marker_req = make_pre_authed_request(
req.environ, path=wsgi_quote(marker_path),
headers=marker_headers, method='PUT', swift_source='VW')
marker_req.environ['swift.content_type_overridden'] = True
marker_resp = marker_req.get_response(self.app)
self._check_response_error(req, marker_resp)
drain_and_close(marker_resp)
# successfully copied and created delete marker; safe to delete
return self.app
def _restore_data(self, req, versions_cont, api_version, account_name,
container_name, object_name, prev_obj_name):
get_path = "/%s/%s/%s/%s" % (
api_version, account_name, versions_cont, prev_obj_name)
get_resp = self._get_source_object(req, get_path)
# if the version isn't there, keep trying with previous version
if get_resp.status_int == HTTP_NOT_FOUND:
drain_and_close(get_resp)
return False
self._check_response_error(req, get_resp)
put_path_info = "/%s/%s/%s/%s" % (
api_version, account_name, container_name, object_name)
put_resp = self._put_versioned_obj(req, put_path_info, get_resp)
self._check_response_error(req, put_resp)
drain_and_close(put_resp)
return get_path
def handle_obj_versions_delete_pop(self, req, versions_cont, api_version,
account_name, container_name,
object_name):
"""
Handle DELETE requests when in stack mode.
Delete current version of object and pop previous version in its place.
:param req: original request.
:param versions_cont: container where previous versions of the object
are stored.
:param api_version: api version.
:param account_name: account name.
:param container_name: container name.
:param object_name: object name.
"""
listing_prefix = self._build_versions_object_prefix(object_name)
item_iter = self._listing_iter(account_name, versions_cont,
listing_prefix, req)
auth_token_header = {'X-Auth-Token': req.headers.get('X-Auth-Token')}
authed = False
for previous_version in item_iter:
if not authed:
# validate the write access to the versioned container before
# making any backend requests
if 'swift.authorize' in req.environ:
container_info = get_container_info(
req.environ, self.app, swift_source='VW')
req.acl = container_info.get('write_acl')
aresp = req.environ['swift.authorize'](req)
if aresp:
return aresp
authed = True
if previous_version['content_type'] == DELETE_MARKER_CONTENT_TYPE:
# check whether we have data in the versioned container
obj_head_headers = {'X-Newest': 'True'}
obj_head_headers.update(auth_token_header)
head_req = make_pre_authed_request(
req.environ, path=wsgi_quote(req.path_info), method='HEAD',
headers=obj_head_headers, swift_source='VW')
hresp = head_req.get_response(self.app)
drain_and_close(hresp)
if hresp.status_int != HTTP_NOT_FOUND:
self._check_response_error(req, hresp)
# if there's an existing object, then just let the delete
# through (i.e., restore to the delete-marker state):
break
# no data currently in the container (delete marker is current)
for version_to_restore in item_iter:
if version_to_restore['content_type'] == \
DELETE_MARKER_CONTENT_TYPE:
# Nothing to restore
break
obj_to_restore = bytes_to_wsgi(
version_to_restore['name'].encode('utf-8'))
req.environ['QUERY_STRING'] = ''
restored_path = self._restore_data(
req, versions_cont, api_version, account_name,
container_name, object_name, obj_to_restore)
if not restored_path:
continue
old_del_req = make_pre_authed_request(
req.environ, path=wsgi_quote(restored_path),
method='DELETE', headers=auth_token_header,
swift_source='VW')
del_resp = old_del_req.get_response(self.app)
drain_and_close(del_resp)
if del_resp.status_int != HTTP_NOT_FOUND:
self._check_response_error(req, del_resp)
# else, well, it existed long enough to do the
# copy; we won't worry too much
break
prev_obj_name = bytes_to_wsgi(
previous_version['name'].encode('utf-8'))
marker_path = "/%s/%s/%s/%s" % (
api_version, account_name, versions_cont,
prev_obj_name)
# done restoring, redirect the delete to the marker
req = make_pre_authed_request(
req.environ, path=wsgi_quote(marker_path), method='DELETE',
headers=auth_token_header, swift_source='VW')
else:
# there are older versions so copy the previous version to the
# current object and delete the previous version
prev_obj_name = bytes_to_wsgi(
previous_version['name'].encode('utf-8'))
req.environ['QUERY_STRING'] = ''
restored_path = self._restore_data(
req, versions_cont, api_version, account_name,
container_name, object_name, prev_obj_name)
if not restored_path:
continue
# redirect the original DELETE to the source of the reinstated
# version object - we already auth'd original req so make a
# pre-authed request
req = make_pre_authed_request(
req.environ, path=wsgi_quote(restored_path),
method='DELETE', headers=auth_token_header,
swift_source='VW')
# remove 'X-If-Delete-At', since it is not for the older copy
if 'X-If-Delete-At' in req.headers:
del req.headers['X-If-Delete-At']
break
# handle DELETE request here in case it was modified
return req.get_response(self.app)
def handle_container_request(self, env, start_response):
app_resp = self._app_call(env)
if self._response_headers is None:
self._response_headers = []
mode = location = ''
for key, val in self._response_headers:
if key.lower() == SYSMETA_VERSIONS_LOC:
location = val
elif key.lower() == SYSMETA_VERSIONS_MODE:
mode = val
if location:
if mode == 'history':
self._response_headers.extend([
(CLIENT_HISTORY_LOC.title(), location)])
else:
self._response_headers.extend([
(CLIENT_VERSIONS_LOC.title(), location)])
start_response(self._response_status,
self._response_headers,
self._response_exc_info)
return app_resp
class VersionedWritesMiddleware(object):
def __init__(self, app, conf):
self.app = app
self.conf = conf
self.logger = get_logger(conf, log_route='versioned_writes')
def container_request(self, req, start_response, enabled):
if CLIENT_VERSIONS_LOC in req.headers and \
CLIENT_HISTORY_LOC in req.headers:
if not req.headers[CLIENT_HISTORY_LOC]:
# defer to versions location entirely
del req.headers[CLIENT_HISTORY_LOC]
elif req.headers[CLIENT_VERSIONS_LOC]:
raise HTTPBadRequest(
request=req, content_type='text/plain',
body='Only one of %s or %s may be specified' % (
CLIENT_VERSIONS_LOC, CLIENT_HISTORY_LOC))
else:
# history location is present and versions location is
# present but empty -- clean it up
del req.headers[CLIENT_VERSIONS_LOC]
if CLIENT_VERSIONS_LOC in req.headers or \
CLIENT_HISTORY_LOC in req.headers:
if CLIENT_VERSIONS_LOC in req.headers:
val = req.headers[CLIENT_VERSIONS_LOC]
mode = 'stack'
else:
val = req.headers[CLIENT_HISTORY_LOC]
mode = 'history'
if not val:
# empty value is the same as X-Remove-Versions-Location
req.headers['X-Remove-Versions-Location'] = 'x'
elif not config_true_value(enabled) and \
req.method in ('PUT', 'POST'):
                # unlike in previous versions, we now return an error if the
                # user tries to set a versions location while the feature is
                # explicitly disabled.
raise HTTPPreconditionFailed(
request=req, content_type='text/plain',
body='Versioned Writes is disabled')
else:
# OK, we received a value, have versioning enabled, and aren't
# trying to set two modes at once. Validate the value and
# translate to sysmeta.
location = check_container_format(req, val)
req.headers[SYSMETA_VERSIONS_LOC] = location
req.headers[SYSMETA_VERSIONS_MODE] = mode
# reset original header on container server to maintain sanity
# now only sysmeta is source of Versions Location
req.headers[CLIENT_VERSIONS_LOC] = ''
# if both add and remove headers are in the same request
# adding location takes precedence over removing
for header in ['X-Remove-Versions-Location',
'X-Remove-History-Location']:
if header in req.headers:
del req.headers[header]
if any(req.headers.get(header) for header in [
'X-Remove-Versions-Location',
'X-Remove-History-Location']):
req.headers.update({CLIENT_VERSIONS_LOC: '',
SYSMETA_VERSIONS_LOC: '',
SYSMETA_VERSIONS_MODE: ''})
for header in ['X-Remove-Versions-Location',
'X-Remove-History-Location']:
if header in req.headers:
del req.headers[header]
# send request and translate sysmeta headers from response
vw_ctx = VersionedWritesContext(self.app, self.logger)
return vw_ctx.handle_container_request(req.environ, start_response)
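    # Illustrative summary of the header translation performed by
    # container_request() when allow_versioned_writes is enabled:
    #
    #   X-Versions-Location: versions  ->  sysmeta versions-location=versions,
    #                                      versions-mode=stack
    #   X-History-Location: versions   ->  sysmeta versions-location=versions,
    #                                      versions-mode=history
    #   X-Remove-Versions-Location: x  ->  both sysmeta values cleared
    #   (an empty location value is treated like a remove)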
def object_request(self, req, api_version, account, container, obj,
allow_versioned_writes):
"""
Handle request for object resource.
        Note that account, container and obj should be unquoted by the caller
        if the URL path was URL-encoded (e.g. %FF).
:param req: swift.common.swob.Request instance
:param api_version: should be v1 unless swift bumps api version
:param account: account name string
:param container: container name string
:param object: object name string
"""
resp = None
is_enabled = config_true_value(allow_versioned_writes)
container_info = get_container_info(
req.environ, self.app, swift_source='VW')
# To maintain backwards compatibility, container version
# location could be stored as sysmeta or not, need to check both.
# If stored as sysmeta, check if middleware is enabled. If sysmeta
# is not set, but versions property is set in container_info, then
# for backwards compatibility feature is enabled.
versions_cont = container_info.get(
'sysmeta', {}).get('versions-location')
versioning_mode = container_info.get(
'sysmeta', {}).get('versions-mode', 'stack')
if not versions_cont:
versions_cont = container_info.get('versions')
# if allow_versioned_writes is not set in the configuration files
# but 'versions' is configured, enable feature to maintain
# backwards compatibility
if not allow_versioned_writes and versions_cont:
is_enabled = True
if is_enabled and versions_cont:
versions_cont = wsgi_unquote(str_to_wsgi(
versions_cont)).split('/')[0]
vw_ctx = VersionedWritesContext(self.app, self.logger)
if req.method == 'PUT':
resp = vw_ctx.handle_obj_versions_put(
req, versions_cont, api_version, account,
obj)
# handle DELETE
elif versioning_mode == 'history':
resp = vw_ctx.handle_obj_versions_delete_push(
req, versions_cont, api_version, account,
container, obj)
else:
resp = vw_ctx.handle_obj_versions_delete_pop(
req, versions_cont, api_version, account,
container, obj)
if resp:
return resp
else:
return self.app
def __call__(self, env, start_response):
req = Request(env)
try:
(api_version, account, container, obj) = req.split_path(3, 4, True)
is_cont_or_obj_req = True
except ValueError:
is_cont_or_obj_req = False
if not is_cont_or_obj_req:
return self.app(env, start_response)
# In case allow_versioned_writes is set in the filter configuration,
# the middleware becomes the authority on whether object
# versioning is enabled or not. In case it is not set, then
# the option in the container configuration is still checked
# for backwards compatibility
# For a container request, first just check if option is set,
# can be either true or false.
# If set, check if enabled when actually trying to set container
# header. If not set, let request be handled by container server
# for backwards compatibility.
# For an object request, also check if option is set (either T or F).
# If set, check if enabled when checking versions container in
# sysmeta property. If it is not set check 'versions' property in
# container_info
allow_versioned_writes = self.conf.get('allow_versioned_writes')
if allow_versioned_writes and container and not obj:
try:
return self.container_request(req, start_response,
allow_versioned_writes)
except HTTPException as error_response:
return error_response(env, start_response)
elif (obj and req.method in ('PUT', 'DELETE')):
try:
return self.object_request(
req, api_version, account, container, obj,
allow_versioned_writes)(env, start_response)
except HTTPException as error_response:
return error_response(env, start_response)
else:
return self.app(env, start_response)
| swift-master | swift/common/middleware/versioned_writes/legacy.py |
# Copyright (c) 2020 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Object versioning in Swift has 3 different modes. There are two
:ref:`legacy modes <versioned_writes>` that have a similar API but differ
slightly in behavior, and this middleware introduces a new mode with a
completely redesigned API and implementation.
In terms of the implementation, this middleware relies heavily on the use of
static links to reduce the amount of backend data movement that was part of the
two legacy modes. It also introduces a new API for enabling the feature and to
interact with older versions of an object.
Compatibility between modes
===========================
This new mode is not backwards compatible or interchangeable with the
two legacy modes. This means that existing containers that are being versioned
by the two legacy modes cannot enable the new mode. The new mode can only be
enabled on a new container or a container without either
``X-Versions-Location`` or ``X-History-Location`` header set. Attempting to
enable the new mode on a container with either header will result in a
``400 Bad Request`` response.
Enable Object Versioning in a Container
=======================================
After the introduction of this feature, containers in a Swift cluster will be
in one of 3 possible states: 1. Object versioning never enabled,
2. Object Versioning Enabled or 3. Object Versioning Disabled. Once versioning
has been enabled on a container, it will always have a flag stating whether it
is either enabled or disabled.
Clients enable object versioning on a container by performing either a PUT or
POST request with the header ``X-Versions-Enabled: true``. Upon enabling the
versioning for the first time, the middleware will create a hidden container
where object versions are stored. This hidden container will inherit the same
Storage Policy as its parent container.
To disable, clients send a POST request with the header
``X-Versions-Enabled: false``. When versioning is disabled, the old versions
remain unchanged.
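For example, versioning might be enabled with a request like the following
(an illustrative sketch; the account and container names are placeholders)::
    POST /v1/AUTH_test/container HTTP/1.1
    X-Versions-Enabled: true
and disabled again by sending ``X-Versions-Enabled: false`` in the same way.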
To delete a versioned container, versioning must be disabled and all versions
of all objects must be deleted before the container can be deleted. At such
time, the hidden container will also be deleted.
Object CRUD Operations to a Versioned Container
===============================================
When data is ``PUT`` into a versioned container (a container with the
versioning flag enabled), the actual object is written to a hidden container
and a symlink object is written to the parent container. Every object is
assigned a version id. This id can be retrieved from the
``X-Object-Version-Id`` header in the PUT response.
.. note::
When object versioning is disabled on a container, new data will no longer
be versioned, but older versions remain untouched. Any new data ``PUT``
    will result in an object with a ``null`` version-id. The versioning API can
be used to both list and operate on previous versions even while versioning
is disabled.
    If versioning is re-enabled and an overwrite occurs on a `null` id object,
    the object will be versioned off with a regular version-id.
A ``GET`` to a versioned object will return the current version of the object.
The ``X-Object-Version-Id`` header is also returned in the response.
A ``POST`` to a versioned object will update the most current object metadata
as normal, but will not create a new version of the object. In other words,
new versions are only created when the content of the object changes.
On ``DELETE``, the middleware will write a zero-byte "delete marker" object
version that notes **when** the delete took place. The symlink object will also
be deleted from the versioned container. The object will no longer appear in
container listings for the versioned container and future requests there will
return ``404 Not Found``. However, the previous versions' content will still be
recoverable.
Object Versioning API
=====================
Clients can now operate on previous versions of an object using this new
versioning API.
First, to list previous versions, issue a ``GET`` request to the versioned
container with query parameter::
?versions
To list a container with a large number of object versions, clients can
also use the ``version_marker`` parameter together with the ``marker``
parameter. While the ``marker`` parameter is used to specify an object name,
the ``version_marker`` will be used to specify the version id.
All other pagination parameters can be used in conjunction with the
``versions`` parameter.
During container listings, delete markers can be identified with the
content-type ``application/x-deleted;swift_versions_deleted=1``. The most
current version of an object can be identified by the field ``is_latest``.
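An individual entry in such a listing might look like the following (a
made-up example; the name, hash and timestamps are not real values)::
    {"name": "my-object", "version_id": "1590000000.00000",
     "is_latest": true, "bytes": 0,
     "content_type": "application/x-deleted;swift_versions_deleted=1",
     "hash": "d41d8cd98f00b204e9800998ecf8427e",
     "last_modified": "2020-05-20T17:20:00.000000"}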
To operate on previous versions, clients can use the query parameter::
?version-id=<id>
where the ``<id>`` is the value from the ``X-Object-Version-Id`` header.
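For example, a request for a specific (made-up) version might look like::
    GET /v1/AUTH_test/container/my-object?version-id=1590000000.00000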
Only COPY, HEAD, GET and DELETE operations can be performed on previous
versions. Either a PUT or POST request with a ``version-id`` parameter will
result in a ``400 Bad Request`` response.
A HEAD/GET request to a delete-marker will result in a ``404 Not Found``
response.
When issuing DELETE requests with a ``version-id`` parameter, delete markers
are not written down. A DELETE request with a ``version-id`` parameter to
the current object will result in both the symlink and the backing data
being deleted. A DELETE to any other version will result in only that version
being deleted, with no changes made to the symlink pointing to the current
version.
How to Enable Object Versioning in a Swift Cluster
==================================================
To enable this new mode in a Swift cluster, the ``versioned_writes`` and
``symlink`` middlewares must be added to the proxy pipeline; you must also set
the option ``allow_object_versioning`` to ``True``.
"""
import calendar
import itertools
import json
import six
import time
from cgi import parse_header
from six.moves.urllib.parse import unquote
from swift.common.constraints import MAX_FILE_SIZE, valid_api_version, \
ACCOUNT_LISTING_LIMIT, CONTAINER_LISTING_LIMIT
from swift.common.http import is_success, is_client_error, HTTP_NOT_FOUND, \
HTTP_CONFLICT
from swift.common.request_helpers import get_sys_meta_prefix, \
copy_header_subset, get_reserved_name, split_reserved_name, \
constrain_req_limit
from swift.common.middleware import app_property
from swift.common.middleware.symlink import TGT_OBJ_SYMLINK_HDR, \
TGT_ETAG_SYSMETA_SYMLINK_HDR, SYMLOOP_EXTEND, ALLOW_RESERVED_NAMES, \
TGT_BYTES_SYSMETA_SYMLINK_HDR, TGT_ACCT_SYMLINK_HDR
from swift.common.swob import HTTPPreconditionFailed, HTTPServiceUnavailable, \
HTTPBadRequest, str_to_wsgi, bytes_to_wsgi, wsgi_quote, \
wsgi_to_str, wsgi_unquote, Request, HTTPNotFound, HTTPException, \
HTTPRequestEntityTooLarge, HTTPInternalServerError, HTTPNotAcceptable, \
HTTPConflict
from swift.common.storage_policy import POLICIES
from swift.common.utils import get_logger, Timestamp, drain_and_close, \
config_true_value, close_if_possible, closing_if_possible, \
FileLikeIter, split_path, parse_content_type, RESERVED_STR
from swift.common.wsgi import WSGIContext, make_pre_authed_request
from swift.proxy.controllers.base import get_container_info
DELETE_MARKER_CONTENT_TYPE = 'application/x-deleted;swift_versions_deleted=1'
CLIENT_VERSIONS_ENABLED = 'x-versions-enabled'
SYSMETA_VERSIONS_ENABLED = \
get_sys_meta_prefix('container') + 'versions-enabled'
SYSMETA_VERSIONS_CONT = get_sys_meta_prefix('container') + 'versions-container'
SYSMETA_PARENT_CONT = get_sys_meta_prefix('container') + 'parent-container'
SYSMETA_VERSIONS_SYMLINK = get_sys_meta_prefix('object') + 'versions-symlink'
def build_listing(*to_splice, **kwargs):
reverse = kwargs.pop('reverse')
limit = kwargs.pop('limit')
if kwargs:
raise TypeError('Invalid keyword arguments received: %r' % kwargs)
def merge_key(item):
if 'subdir' in item:
return item['subdir']
return item['name']
return json.dumps(sorted(
itertools.chain(*to_splice),
key=merge_key,
reverse=reverse,
)[:limit]).encode('ascii')
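# A minimal sketch of how build_listing() behaves (made-up listings):
#   build_listing([{'name': 'a'}], [{'name': 'c'}, {'name': 'b'}],
#                 reverse=False, limit=2)
# splices and sorts the listings by name and returns the JSON bytes
#   b'[{"name": "a"}, {"name": "b"}]'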
def non_expiry_header(header):
return header.lower() not in ('x-delete-at', 'x-delete-after')
class ByteCountingReader(object):
"""
Counts bytes read from file_like so we know how big the object is that
the client just PUT.
This is particularly important when the client sends a chunk-encoded body,
so we don't have a Content-Length header available.
"""
def __init__(self, file_like):
self.file_like = file_like
self.bytes_read = 0
def read(self, amt=-1):
chunk = self.file_like.read(amt)
self.bytes_read += len(chunk)
return chunk
class ObjectVersioningContext(WSGIContext):
def __init__(self, wsgi_app, logger):
super(ObjectVersioningContext, self).__init__(wsgi_app)
self.logger = logger
def _build_versions_object_prefix(self, object_name):
return get_reserved_name(object_name, '')
def _build_versions_container_name(self, container_name):
return get_reserved_name('versions', container_name)
def _build_versions_object_name(self, object_name, ts):
inv = ~Timestamp(ts)
return get_reserved_name(object_name, inv.internal)
def _split_version_from_name(self, versioned_name):
try:
name, inv = split_reserved_name(versioned_name)
ts = ~Timestamp(inv)
except ValueError:
return versioned_name, None
return name, ts
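    # A rough sketch of the naming scheme implemented above (values made up;
    # actual names are additionally bookended with reserved characters by
    # get_reserved_name/split_reserved_name): a version of object 'o' written
    # at X-Timestamp 1600000000.00000 is stored under a name built from 'o'
    # and ~Timestamp('1600000000.00000'), i.e. the *inverted* timestamp, so
    # that newer versions sort ahead of older ones in the hidden container's
    # listing; _split_version_from_name() undoes the mapping.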
def _split_versions_container_name(self, versions_container):
try:
versions, container_name = split_reserved_name(versions_container)
except ValueError:
return versions_container
if versions != 'versions':
return versions_container
return container_name
class ObjectContext(ObjectVersioningContext):
def _get_source_object(self, req, path_info):
# make a pre_auth request in case the user has write access
# to container, but not READ. This was allowed in previous version
# (i.e., before middleware) so keeping the same behavior here
get_req = make_pre_authed_request(
req.environ, path=wsgi_quote(path_info) + '?symlink=get',
headers={'X-Newest': 'True'}, method='GET', swift_source='OV')
source_resp = get_req.get_response(self.app)
if source_resp.content_length is None or \
source_resp.content_length > MAX_FILE_SIZE:
close_if_possible(source_resp.app_iter)
return HTTPRequestEntityTooLarge(request=req)
return source_resp
def _put_versioned_obj(self, req, put_path_info, source_resp):
# Create a new Request object to PUT to the versions container, copying
# all headers from the source object apart from x-timestamp.
put_req = make_pre_authed_request(
req.environ, path=wsgi_quote(put_path_info), method='PUT',
headers={'X-Backend-Allow-Reserved-Names': 'true'},
swift_source='OV')
copy_header_subset(source_resp, put_req,
lambda k: k.lower() != 'x-timestamp')
put_req.environ['wsgi.input'] = FileLikeIter(source_resp.app_iter)
slo_size = put_req.headers.get('X-Object-Sysmeta-Slo-Size')
if slo_size:
put_req.headers['Content-Type'] += '; swift_bytes=%s' % slo_size
put_req.environ['swift.content_type_overridden'] = True
put_resp = put_req.get_response(self.app)
drain_and_close(put_resp)
# the PUT should have already drained source_resp
close_if_possible(source_resp.app_iter)
return put_resp
def _put_versioned_obj_from_client(self, req, versions_cont, api_version,
account_name, object_name):
vers_obj_name = self._build_versions_object_name(
object_name, req.timestamp.internal)
put_path_info = "/%s/%s/%s/%s" % (
api_version, account_name, versions_cont, vers_obj_name)
# Consciously *do not* set swift_source here -- this req is in charge
# of reading bytes from the client, don't let it look like that data
# movement is due to some internal-to-swift thing
put_req = make_pre_authed_request(
req.environ, path=wsgi_quote(put_path_info), method='PUT',
headers={'X-Backend-Allow-Reserved-Names': 'true'},
swift_source='OV')
# move the client request body over
# note that the WSGI environ may be *further* manipulated; hold on to
# a reference to the byte counter so we can get the bytes_read
if req.message_length() is None:
put_req.headers['transfer-encoding'] = \
req.headers.get('transfer-encoding')
else:
put_req.content_length = req.content_length
byte_counter = ByteCountingReader(req.environ['wsgi.input'])
put_req.environ['wsgi.input'] = byte_counter
req.body = b''
# move metadata over, including sysmeta
copy_header_subset(req, put_req, non_expiry_header)
if 'swift.content_type_overridden' in req.environ:
put_req.environ['swift.content_type_overridden'] = \
req.environ.pop('swift.content_type_overridden')
# do the write
put_resp = put_req.get_response(self.app)
close_if_possible(put_req.environ['wsgi.input'])
if put_resp.status_int == HTTP_NOT_FOUND:
drain_and_close(put_resp)
raise HTTPInternalServerError(
request=req, content_type='text/plain',
body=b'The versions container does not exist. You may '
b'want to re-enable object versioning.')
self._check_response_error(req, put_resp)
drain_and_close(put_resp)
put_bytes = byte_counter.bytes_read
# N.B. this is essentially the same hack that symlink does in
# _validate_etag_and_update_sysmeta to deal with SLO
slo_size = put_req.headers.get('X-Object-Sysmeta-Slo-Size')
if slo_size:
put_bytes = slo_size
put_content_type = parse_content_type(
put_req.headers['Content-Type'])[0]
return (put_resp, vers_obj_name, put_bytes, put_content_type)
def _put_symlink_to_version(self, req, versions_cont, put_vers_obj_name,
api_version, account_name, object_name,
put_etag, put_bytes, put_content_type):
req.method = 'PUT'
# inch x-timestamp forward, just in case
req.ensure_x_timestamp()
req.headers['X-Timestamp'] = Timestamp(
req.timestamp, offset=1).internal
req.headers[TGT_ETAG_SYSMETA_SYMLINK_HDR] = put_etag
req.headers[TGT_BYTES_SYSMETA_SYMLINK_HDR] = put_bytes
# N.B. in stack mode DELETE we use content_type from listing
req.headers['Content-Type'] = put_content_type
req.headers[TGT_OBJ_SYMLINK_HDR] = wsgi_quote('%s/%s' % (
versions_cont, put_vers_obj_name))
req.headers[SYSMETA_VERSIONS_SYMLINK] = 'true'
req.headers[SYMLOOP_EXTEND] = 'true'
req.headers[ALLOW_RESERVED_NAMES] = 'true'
req.headers['X-Backend-Allow-Reserved-Names'] = 'true'
not_for_symlink_headers = (
'ETag', 'X-If-Delete-At', TGT_ACCT_SYMLINK_HDR,
'X-Object-Manifest', 'X-Static-Large-Object',
'X-Object-Sysmeta-Slo-Etag', 'X-Object-Sysmeta-Slo-Size',
)
for header in not_for_symlink_headers:
req.headers.pop(header, None)
# *do* set swift_source here; this PUT is an implementation detail
req.environ['swift.source'] = 'OV'
req.body = b''
resp = req.get_response(self.app)
resp.headers['ETag'] = put_etag
resp.headers['X-Object-Version-Id'] = self._split_version_from_name(
put_vers_obj_name)[1].internal
return resp
def _check_response_error(self, req, resp):
"""
        Raise an appropriate error response if the given response indicates an
        error
"""
if is_success(resp.status_int):
return
body = resp.body
drain_and_close(resp)
if is_client_error(resp.status_int):
# missing container or bad permissions
if resp.status_int == 404:
raise HTTPPreconditionFailed(request=req)
raise HTTPException(body=body, status=resp.status,
headers=resp.headers)
# could not version the data, bail
raise HTTPServiceUnavailable(request=req)
def _copy_current(self, req, versions_cont, api_version, account_name,
object_name):
'''
        Check if the current version of the object is a versions-symlink.
        If not, it's because this object was added to the container when
        versioning was not enabled. We'll need to copy it into the versions
        container now.
:param req: original request.
:param versions_cont: container where previous versions of the object
are stored.
:param api_version: api version.
:param account_name: account name.
:param object_name: name of object of original request
'''
# validate the write access to the versioned container before
# making any backend requests
if 'swift.authorize' in req.environ:
container_info = get_container_info(
req.environ, self.app, swift_source='OV')
req.acl = container_info.get('write_acl')
aresp = req.environ['swift.authorize'](req)
if aresp:
raise aresp
get_resp = self._get_source_object(req, req.path_info)
if get_resp.status_int == HTTP_NOT_FOUND:
# nothing to version, proceed with original request
drain_and_close(get_resp)
return get_resp
# check for any other errors
self._check_response_error(req, get_resp)
if get_resp.headers.get(SYSMETA_VERSIONS_SYMLINK) == 'true':
# existing object is a VW symlink; no action required
drain_and_close(get_resp)
return get_resp
# if there's an existing object, then copy it to
# X-Versions-Location
ts_source = get_resp.headers.get(
'x-timestamp',
calendar.timegm(time.strptime(
get_resp.headers['last-modified'],
'%a, %d %b %Y %H:%M:%S GMT')))
vers_obj_name = self._build_versions_object_name(
object_name, ts_source)
put_path_info = "/%s/%s/%s/%s" % (
api_version, account_name, versions_cont, vers_obj_name)
put_resp = self._put_versioned_obj(req, put_path_info, get_resp)
if put_resp.status_int == HTTP_NOT_FOUND:
raise HTTPInternalServerError(
request=req, content_type='text/plain',
body=b'The versions container does not exist. You may '
b'want to re-enable object versioning.')
self._check_response_error(req, put_resp)
def handle_put(self, req, versions_cont, api_version,
account_name, object_name, is_enabled):
"""
        Check if the current version of the object is a versions-symlink.
        If not, it's because this object was added to the container when
        versioning was not enabled. We'll need to copy it into the versions
        container now that versioning is enabled.
Also, put the new data from the client into the versions container
and add a static symlink in the versioned container.
:param req: original request.
:param versions_cont: container where previous versions of the object
are stored.
:param api_version: api version.
:param account_name: account name.
:param object_name: name of object of original request
"""
# handle object request for a disabled versioned container.
if not is_enabled:
return req.get_response(self.app)
# attempt to copy current object to versions container
self._copy_current(req, versions_cont, api_version, account_name,
object_name)
# write client's put directly to versioned container
req.ensure_x_timestamp()
put_resp, put_vers_obj_name, put_bytes, put_content_type = \
self._put_versioned_obj_from_client(req, versions_cont,
api_version, account_name,
object_name)
        # and add a static symlink to the original container
target_etag = put_resp.headers['Etag']
return self._put_symlink_to_version(req, versions_cont,
put_vers_obj_name, api_version,
account_name, object_name,
target_etag, put_bytes,
put_content_type)
def handle_delete(self, req, versions_cont, api_version,
account_name, container_name,
object_name, is_enabled):
"""
Handle DELETE requests.
Copy current version of object to versions_container and write a
delete marker before proceeding with original request.
:param req: original request.
:param versions_cont: container where previous versions of the object
are stored.
:param api_version: api version.
:param account_name: account name.
:param object_name: name of object of original request
"""
# handle object request for a disabled versioned container.
if not is_enabled:
return req.get_response(self.app)
self._copy_current(req, versions_cont, api_version,
account_name, object_name)
req.ensure_x_timestamp()
marker_name = self._build_versions_object_name(
object_name, req.timestamp.internal)
marker_path = "/%s/%s/%s/%s" % (
api_version, account_name, versions_cont, marker_name)
marker_headers = {
# Definitive source of truth is Content-Type, and since we add
# a swift_* param, we know users haven't set it themselves.
# This is still open to users POSTing to update the content-type
# but they're just shooting themselves in the foot then.
'content-type': DELETE_MARKER_CONTENT_TYPE,
'content-length': '0',
'x-auth-token': req.headers.get('x-auth-token'),
'X-Backend-Allow-Reserved-Names': 'true',
}
marker_req = make_pre_authed_request(
req.environ, path=wsgi_quote(marker_path),
headers=marker_headers, method='PUT', swift_source='OV')
marker_req.environ['swift.content_type_overridden'] = True
marker_resp = marker_req.get_response(self.app)
self._check_response_error(req, marker_resp)
drain_and_close(marker_resp)
# successfully copied and created delete marker; safe to delete
resp = req.get_response(self.app)
if resp.is_success or resp.status_int == 404:
resp.headers['X-Object-Version-Id'] = \
self._split_version_from_name(marker_name)[1].internal
resp.headers['X-Backend-Content-Type'] = DELETE_MARKER_CONTENT_TYPE
drain_and_close(resp)
return resp
def handle_post(self, req, versions_cont, account):
'''
Handle a POST request to an object in a versioned container.
If the response is a 307 because the POST went to a symlink,
follow the symlink and send the request to the versioned object
:param req: original request.
:param versions_cont: container where previous versions of the object
are stored.
:param account: account name.
'''
# create eventual post request before
# encryption middleware changes the request headers
post_req = make_pre_authed_request(
req.environ, path=wsgi_quote(req.path_info), method='POST',
headers={'X-Backend-Allow-Reserved-Names': 'true'},
swift_source='OV')
copy_header_subset(req, post_req, non_expiry_header)
# send original request
resp = req.get_response(self.app)
# if it's a versioning symlink, send post to versioned object
if resp.status_int == 307 and config_true_value(
resp.headers.get(SYSMETA_VERSIONS_SYMLINK, 'false')):
loc = wsgi_unquote(resp.headers['Location'])
# Only follow if the version container matches
if split_path(loc, 4, 4, True)[1:3] == [
account, versions_cont]:
drain_and_close(resp)
post_req.path_info = loc
resp = post_req.get_response(self.app)
return resp
def _check_head(self, req, auth_token_header):
obj_head_headers = {
'X-Newest': 'True',
}
obj_head_headers.update(auth_token_header)
head_req = make_pre_authed_request(
req.environ, path=wsgi_quote(req.path_info) + '?symlink=get',
method='HEAD', headers=obj_head_headers, swift_source='OV')
hresp = head_req.get_response(self.app)
head_is_tombstone = False
symlink_target = None
if hresp.status_int == HTTP_NOT_FOUND:
head_is_tombstone = True
else:
head_is_tombstone = False
# if there's any other kind of error with a broken link...
# I guess give up?
self._check_response_error(req, hresp)
if hresp.headers.get(SYSMETA_VERSIONS_SYMLINK) == 'true':
symlink_target = hresp.headers.get(TGT_OBJ_SYMLINK_HDR)
drain_and_close(hresp)
return head_is_tombstone, symlink_target
def handle_delete_version(self, req, versions_cont, api_version,
account_name, container_name,
object_name, is_enabled, version):
if version == 'null':
# let the request go directly through to the is_latest link
return
auth_token_header = {'X-Auth-Token': req.headers.get('X-Auth-Token')}
head_is_tombstone, symlink_target = self._check_head(
req, auth_token_header)
versions_obj = self._build_versions_object_name(
object_name, version)
req_obj_path = '%s/%s' % (versions_cont, versions_obj)
if head_is_tombstone or not symlink_target or (
wsgi_unquote(symlink_target) != wsgi_unquote(req_obj_path)):
# If there's no current version (i.e., tombstone or unversioned
# object) or if current version links to another version, then
# just delete the version requested to be deleted
req.path_info = "/%s/%s/%s/%s" % (
api_version, account_name, versions_cont, versions_obj)
req.headers['X-Backend-Allow-Reserved-Names'] = 'true'
if head_is_tombstone or not symlink_target:
resp_version_id = 'null'
else:
_, vers_obj_name = wsgi_unquote(symlink_target).split('/', 1)
resp_version_id = self._split_version_from_name(
vers_obj_name)[1].internal
else:
# if version-id is the latest version, delete the link too
# First, kill the link...
req.environ['QUERY_STRING'] = ''
link_resp = req.get_response(self.app)
self._check_response_error(req, link_resp)
drain_and_close(link_resp)
# *then* the backing data
req.path_info = "/%s/%s/%s/%s" % (
api_version, account_name, versions_cont, versions_obj)
req.headers['X-Backend-Allow-Reserved-Names'] = 'true'
resp_version_id = 'null'
resp = req.get_response(self.app)
resp.headers['X-Object-Version-Id'] = version
resp.headers['X-Object-Current-Version-Id'] = resp_version_id
return resp
def handle_put_version(self, req, versions_cont, api_version, account_name,
container, object_name, is_enabled, version):
"""
Handle a PUT?version-id request and create/update the is_latest link to
point to the specific version. Expects a valid 'version' id.
"""
if req.content_length is None:
has_body = (req.body_file.read(1) != b'')
else:
has_body = (req.content_length != 0)
if has_body:
raise HTTPBadRequest(
body='PUT version-id requests require a zero byte body',
request=req,
content_type='text/plain')
versions_obj_name = self._build_versions_object_name(
object_name, version)
versioned_obj_path = "/%s/%s/%s/%s" % (
api_version, account_name, versions_cont, versions_obj_name)
obj_head_headers = {'X-Backend-Allow-Reserved-Names': 'true'}
head_req = make_pre_authed_request(
req.environ, path=wsgi_quote(versioned_obj_path) + '?symlink=get',
method='HEAD', headers=obj_head_headers, swift_source='OV')
head_resp = head_req.get_response(self.app)
if head_resp.status_int == HTTP_NOT_FOUND:
drain_and_close(head_resp)
if is_success(get_container_info(
head_req.environ, self.app, swift_source='OV')['status']):
raise HTTPNotFound(
request=req, content_type='text/plain',
body=b'The specified version does not exist')
else:
raise HTTPInternalServerError(
request=req, content_type='text/plain',
body=b'The versions container does not exist. You may '
b'want to re-enable object versioning.')
self._check_response_error(req, head_resp)
drain_and_close(head_resp)
put_etag = head_resp.headers['ETag']
put_bytes = head_resp.content_length
put_content_type = head_resp.headers['Content-Type']
resp = self._put_symlink_to_version(
req, versions_cont, versions_obj_name, api_version, account_name,
object_name, put_etag, put_bytes, put_content_type)
return resp
def handle_versioned_request(self, req, versions_cont, api_version,
account, container, obj, is_enabled, version):
"""
        Handle a 'version-id' request for an object resource. When a request
        contains a ``version-id=<id>`` parameter, the request acts upon
        the specific version of that object. Version-aware operations
require that the container is versioned, but do not require that
the versioning is currently enabled. Users should be able to
operate on older versions of an object even if versioning is
currently suspended.
PUT and POST requests are not allowed as that would overwrite
the contents of the versioned object.
:param req: The original request
:param versions_cont: container holding versions of the requested obj
:param api_version: should be v1 unless swift bumps api version
:param account: account name string
:param container: container name string
        :param obj: object name string
:param is_enabled: is versioning currently enabled
:param version: version of the object to act on
"""
# ?version-id requests are allowed for GET, HEAD, DELETE reqs
if req.method == 'POST':
raise HTTPBadRequest(
'%s to a specific version is not allowed' % req.method,
request=req)
elif not versions_cont and version != 'null':
raise HTTPBadRequest(
'version-aware operations require that the container is '
'versioned', request=req)
if version != 'null':
try:
Timestamp(version)
except ValueError:
raise HTTPBadRequest('Invalid version parameter', request=req)
if req.method == 'DELETE':
return self.handle_delete_version(
req, versions_cont, api_version, account,
container, obj, is_enabled, version)
elif req.method == 'PUT':
return self.handle_put_version(
req, versions_cont, api_version, account,
container, obj, is_enabled, version)
if version == 'null':
resp = req.get_response(self.app)
if resp.is_success:
if get_reserved_name('versions', '') in wsgi_unquote(
resp.headers.get('Content-Location', '')):
# Have a latest version, but it's got a real version-id.
# Since the user specifically asked for null, return 404
close_if_possible(resp.app_iter)
raise HTTPNotFound(request=req)
resp.headers['X-Object-Version-Id'] = 'null'
if req.method == 'HEAD':
drain_and_close(resp)
return resp
else:
# Re-write the path; most everything else goes through normally
req.path_info = "/%s/%s/%s/%s" % (
api_version, account, versions_cont,
self._build_versions_object_name(obj, version))
req.headers['X-Backend-Allow-Reserved-Names'] = 'true'
resp = req.get_response(self.app)
if resp.is_success:
resp.headers['X-Object-Version-Id'] = version
# Well, except for some delete marker business...
is_del_marker = DELETE_MARKER_CONTENT_TYPE == resp.headers.get(
'X-Backend-Content-Type', resp.headers['Content-Type'])
if req.method == 'HEAD':
drain_and_close(resp)
if is_del_marker:
hdrs = {'X-Object-Version-Id': version,
'Content-Type': DELETE_MARKER_CONTENT_TYPE}
raise HTTPNotFound(request=req, headers=hdrs)
return resp
def handle_request(self, req, versions_cont, api_version, account,
container, obj, is_enabled):
if req.method == 'PUT':
return self.handle_put(
req, versions_cont, api_version, account, obj,
is_enabled)
elif req.method == 'POST':
return self.handle_post(req, versions_cont, account)
elif req.method == 'DELETE':
return self.handle_delete(
req, versions_cont, api_version, account,
container, obj, is_enabled)
# GET/HEAD/OPTIONS
resp = req.get_response(self.app)
resp.headers['X-Object-Version-Id'] = 'null'
# Check for a "real" version
loc = wsgi_unquote(resp.headers.get('Content-Location', ''))
if loc:
_, acct, cont, version_obj = split_path(loc, 4, 4, True)
if acct == account and cont == versions_cont:
_, version = self._split_version_from_name(version_obj)
if version is not None:
resp.headers['X-Object-Version-Id'] = version.internal
content_loc = wsgi_quote('/%s/%s/%s/%s' % (
api_version, account, container, obj,
)) + '?version-id=%s' % (version.internal,)
resp.headers['Content-Location'] = content_loc
symlink_target = wsgi_unquote(resp.headers.get('X-Symlink-Target', ''))
if symlink_target:
cont, version_obj = split_path('/%s' % symlink_target, 2, 2, True)
if cont == versions_cont:
_, version = self._split_version_from_name(version_obj)
if version is not None:
resp.headers['X-Object-Version-Id'] = version.internal
symlink_target = wsgi_quote('%s/%s' % (container, obj)) + \
'?version-id=%s' % (version.internal,)
resp.headers['X-Symlink-Target'] = symlink_target
return resp
class ContainerContext(ObjectVersioningContext):
def handle_request(self, req, start_response):
"""
Handle request for container resource.
        On PUT or POST, set the version location and enabled flag sysmeta.
For container listings of a versioned container, update the object's
bytes and etag to use the target's instead of using the symlink info.
"""
app_resp = self._app_call(req.environ)
_, account, container, _ = req.split_path(3, 4, True)
location = ''
curr_bytes = 0
bytes_idx = -1
for i, (header, value) in enumerate(self._response_headers):
if header == 'X-Container-Bytes-Used':
curr_bytes = value
bytes_idx = i
if header.lower() == SYSMETA_VERSIONS_CONT:
location = value
if header.lower() == SYSMETA_VERSIONS_ENABLED:
self._response_headers.extend([
(CLIENT_VERSIONS_ENABLED.title(), value)])
if location:
location = wsgi_unquote(location)
# update bytes header
if bytes_idx > -1:
head_req = make_pre_authed_request(
req.environ, method='HEAD', swift_source='OV',
path=wsgi_quote('/v1/%s/%s' % (account, location)),
headers={'X-Backend-Allow-Reserved-Names': 'true'})
vresp = head_req.get_response(self.app)
if vresp.is_success:
ver_bytes = vresp.headers.get('X-Container-Bytes-Used', 0)
self._response_headers[bytes_idx] = (
'X-Container-Bytes-Used',
str(int(curr_bytes) + int(ver_bytes)))
drain_and_close(vresp)
elif is_success(self._get_status_int()):
# If client is doing a version-aware listing for a container that
# (as best we could tell) has never had versioning enabled,
# err on the side of there being data anyway -- the metadata we
# found may not be the most up-to-date.
# Note that any extra listing request we make will likely 404.
try:
location = self._build_versions_container_name(container)
except ValueError:
# may be internal listing to a reserved namespace container
pass
# else, we won't need location anyway
if is_success(self._get_status_int()) and req.method == 'GET':
with closing_if_possible(app_resp):
body = b''.join(app_resp)
try:
listing = json.loads(body)
except ValueError:
app_resp = [body]
else:
for item in listing:
if not all(x in item for x in (
'symlink_path',
'symlink_etag',
'symlink_bytes')):
continue
path = wsgi_unquote(bytes_to_wsgi(
item['symlink_path'].encode('utf-8')))
_, tgt_acct, tgt_container, tgt_obj = split_path(
path, 4, 4, True)
if tgt_container != location:
# if the archive container changed, leave the extra
# info unmodified
continue
_, meta = parse_header(item['hash'])
tgt_bytes = int(item.pop('symlink_bytes'))
item['bytes'] = tgt_bytes
item['version_symlink'] = True
item['hash'] = item.pop('symlink_etag') + ''.join(
'; %s=%s' % (k, v) for k, v in meta.items())
tgt_obj, version = self._split_version_from_name(tgt_obj)
if version is not None and 'versions' not in req.params:
sp = wsgi_quote('/v1/%s/%s/%s' % (
tgt_acct, container, tgt_obj,
)) + '?version-id=' + version.internal
item['symlink_path'] = sp
if 'versions' in req.params:
return self._list_versions(
req, start_response, location,
listing)
body = json.dumps(listing).encode('ascii')
self.update_content_length(len(body))
app_resp = [body]
start_response(self._response_status,
self._response_headers,
self._response_exc_info)
return app_resp
def handle_delete(self, req, start_response):
"""
Handle request to delete a user's container.
As part of deleting a container, this middleware will also delete
the hidden container holding object versions.
Before a user's container can be deleted, swift must check
if there are still old object versions from that container.
Only after disabling versioning and deleting *all* object versions
can a container be deleted.
"""
container_info = get_container_info(req.environ, self.app,
swift_source='OV')
versions_cont = unquote(container_info.get(
'sysmeta', {}).get('versions-container', ''))
if versions_cont:
account = req.split_path(3, 3, True)[1]
# using a HEAD request here as opposed to get_container_info
# to make sure we get an up-to-date value
versions_req = make_pre_authed_request(
req.environ, method='HEAD', swift_source='OV',
path=wsgi_quote('/v1/%s/%s' % (
account, str_to_wsgi(versions_cont))),
headers={'X-Backend-Allow-Reserved-Names': 'true'})
vresp = versions_req.get_response(self.app)
drain_and_close(vresp)
if vresp.is_success and int(vresp.headers.get(
'X-Container-Object-Count', 0)) > 0:
raise HTTPConflict(
'Delete all versions before deleting container.',
request=req)
elif not vresp.is_success and vresp.status_int != 404:
raise HTTPInternalServerError(
'Error deleting versioned container')
else:
versions_req.method = 'DELETE'
resp = versions_req.get_response(self.app)
drain_and_close(resp)
if not is_success(resp.status_int) and resp.status_int != 404:
raise HTTPInternalServerError(
'Error deleting versioned container')
app_resp = self._app_call(req.environ)
start_response(self._response_status,
self._response_headers,
self._response_exc_info)
return app_resp
def enable_versioning(self, req, start_response):
container_info = get_container_info(req.environ, self.app,
swift_source='OV')
# if container is already configured to use old style versioning,
# we don't allow user to enable object versioning here. They must
        # choose which middleware to use; only one style of versioning
# is supported for a given container
versions_cont = container_info.get(
'sysmeta', {}).get('versions-location')
legacy_versions_cont = container_info.get('versions')
if versions_cont or legacy_versions_cont:
raise HTTPBadRequest(
'Cannot enable object versioning on a container '
'that is already using the legacy versioned writes '
'feature.',
request=req)
# versioning and container-sync do not yet work well together
# container-sync needs to be enhanced to sync previous versions
sync_to = container_info.get('sync_to')
if sync_to:
raise HTTPBadRequest(
'Cannot enable object versioning on a container '
'configured as source of container syncing.',
request=req)
versions_cont = container_info.get(
'sysmeta', {}).get('versions-container')
is_enabled = config_true_value(
req.headers[CLIENT_VERSIONS_ENABLED])
req.headers[SYSMETA_VERSIONS_ENABLED] = is_enabled
# TODO: a POST request to a primary container that doesn't exist
# will fail, so we will create and delete the versions container
# for no reason
if config_true_value(is_enabled):
(version, account, container, _) = req.split_path(3, 4, True)
# Attempt to use same policy as primary container, otherwise
# use default policy
if is_success(container_info['status']):
primary_policy_idx = container_info['storage_policy']
if POLICIES[primary_policy_idx].is_deprecated:
# Do an auth check now, so we don't leak information
# about the container
aresp = req.environ['swift.authorize'](req)
if aresp:
raise aresp
# Proxy controller would catch the deprecated policy, too,
# but waiting until then would mean the error message
# would be a generic "Error enabling object versioning".
raise HTTPBadRequest(
'Cannot enable object versioning on a container '
'that uses a deprecated storage policy.',
request=req)
hdrs = {'X-Storage-Policy': POLICIES[primary_policy_idx].name}
else:
if req.method == 'PUT' and \
'X-Storage-Policy' in req.headers:
hdrs = {'X-Storage-Policy':
req.headers['X-Storage-Policy']}
else:
hdrs = {}
hdrs['X-Backend-Allow-Reserved-Names'] = 'true'
versions_cont = self._build_versions_container_name(container)
versions_cont_path = "/%s/%s/%s" % (
version, account, versions_cont)
ver_cont_req = make_pre_authed_request(
req.environ, path=wsgi_quote(versions_cont_path),
method='PUT', headers=hdrs, swift_source='OV')
resp = ver_cont_req.get_response(self.app)
# Should always be short; consume the body
drain_and_close(resp)
if is_success(resp.status_int) or resp.status_int == HTTP_CONFLICT:
req.headers[SYSMETA_VERSIONS_CONT] = wsgi_quote(versions_cont)
else:
raise HTTPInternalServerError(
'Error enabling object versioning')
# make original request
app_resp = self._app_call(req.environ)
# if we just created a versions container but the original
# request failed, delete the versions container
# and let user retry later
if not is_success(self._get_status_int()) and \
SYSMETA_VERSIONS_CONT in req.headers:
versions_cont_path = "/%s/%s/%s" % (
version, account, versions_cont)
ver_cont_req = make_pre_authed_request(
req.environ, path=wsgi_quote(versions_cont_path),
method='DELETE', headers=hdrs, swift_source='OV')
# TODO: what if this one fails??
resp = ver_cont_req.get_response(self.app)
drain_and_close(resp)
if self._response_headers is None:
self._response_headers = []
for key, val in self._response_headers:
if key.lower() == SYSMETA_VERSIONS_ENABLED:
self._response_headers.extend([
(CLIENT_VERSIONS_ENABLED.title(), val)])
start_response(self._response_status,
self._response_headers,
self._response_exc_info)
return app_resp
def _list_versions(self, req, start_response, location, primary_listing):
# Only supports JSON listings
req.environ['swift.format_listing'] = False
if not req.accept.best_match(['application/json']):
raise HTTPNotAcceptable(request=req)
params = req.params
if 'version_marker' in params:
if 'marker' not in params:
raise HTTPBadRequest('version_marker param requires marker')
if params['version_marker'] != 'null':
try:
ts = Timestamp(params.pop('version_marker'))
except ValueError:
raise HTTPBadRequest('invalid version_marker param')
params['marker'] = self._build_versions_object_name(
params['marker'], ts)
elif 'marker' in params:
params['marker'] = self._build_versions_object_prefix(
params['marker']) + ':' # just past all numbers
delim = params.get('delimiter', '')
# Exclude the set of chars used in version_id from user delimiters
if set(delim).intersection('0123456789.%s' % RESERVED_STR):
raise HTTPBadRequest('invalid delimiter param')
null_listing = []
subdir_set = set()
current_versions = {}
is_latest_set = set()
for item in primary_listing:
if 'name' not in item:
subdir_set.add(item['subdir'])
else:
if item.get('version_symlink'):
path = wsgi_to_str(wsgi_unquote(bytes_to_wsgi(
item['symlink_path'].encode('utf-8'))))
current_versions[path] = item
else:
null_listing.append(dict(
item, version_id='null', is_latest=True))
is_latest_set.add(item['name'])
account = req.split_path(3, 3, True)[1]
versions_req = make_pre_authed_request(
req.environ, method='GET', swift_source='OV',
path=wsgi_quote('/v1/%s/%s' % (account, location)),
headers={'X-Backend-Allow-Reserved-Names': 'true'},
)
# NB: Not using self._build_versions_object_name here because
# we don't want to bookend the prefix with RESERVED_NAME as user
# could be using just part of object name as the prefix.
if 'prefix' in params:
params['prefix'] = get_reserved_name(params['prefix'])
# NB: no end_marker support (yet)
if get_container_info(versions_req.environ, self.app,
swift_source='OV')['status'] == 404:
# we don't usually like to LBYL like this, but 404s tend to be
# expensive (since we check all primaries and a bunch of handoffs)
            # and we expect this to be a reasonably common way to list
# objects since it's more complete from the user's perspective
# (see also: s3api and that client ecosystem)
versions_resp = None
else:
versions_req.params = {
k: params.get(k, '') for k in (
'prefix', 'marker', 'limit', 'delimiter', 'reverse')}
versions_resp = versions_req.get_response(self.app)
if versions_resp is None \
or versions_resp.status_int == HTTP_NOT_FOUND:
subdir_listing = [{'subdir': s} for s in subdir_set]
broken_listing = []
for item in current_versions.values():
linked_name = wsgi_to_str(wsgi_unquote(bytes_to_wsgi(
item['symlink_path'].encode('utf8')))).split('/', 4)[-1]
name, ts = self._split_version_from_name(linked_name)
if ts is None:
continue
name = name.decode('utf8') if six.PY2 else name
is_latest = False
if name not in is_latest_set:
is_latest_set.add(name)
is_latest = True
broken_listing.append({
'name': name,
'is_latest': is_latest,
'version_id': ts.internal,
'content_type': item['content_type'],
'bytes': item['bytes'],
'hash': item['hash'],
'last_modified': item['last_modified'],
})
limit = constrain_req_limit(req, CONTAINER_LISTING_LIMIT)
body = build_listing(
null_listing, subdir_listing, broken_listing,
reverse=config_true_value(params.get('reverse', 'no')),
limit=limit)
self.update_content_length(len(body))
app_resp = [body]
drain_and_close(versions_resp)
elif is_success(versions_resp.status_int):
try:
listing = json.loads(versions_resp.body)
except ValueError:
app_resp = [body]
else:
versions_listing = []
for item in listing:
if 'name' not in item:
# remove reserved chars from subdir
subdir = split_reserved_name(item['subdir'])[0]
subdir_set.add(subdir)
else:
name, ts = self._split_version_from_name(item['name'])
if ts is None:
continue
path = '/v1/%s/%s/%s' % (
wsgi_to_str(account),
wsgi_to_str(location),
item['name'].encode('utf8')
if six.PY2 else item['name'])
if path in current_versions:
item['is_latest'] = True
is_latest_set.add(name)
del current_versions[path]
elif (item['content_type'] ==
DELETE_MARKER_CONTENT_TYPE
and name not in is_latest_set):
item['is_latest'] = True
is_latest_set.add(name)
else:
item['is_latest'] = False
item['name'] = name
item['version_id'] = ts.internal
versions_listing.append(item)
subdir_listing = [{'subdir': s} for s in subdir_set]
broken_listing = []
for item in current_versions.values():
link_path = wsgi_to_str(wsgi_unquote(bytes_to_wsgi(
item['symlink_path'].encode('utf-8'))))
name, ts = self._split_version_from_name(
link_path.split('/', 1)[1])
if ts is None:
continue
broken_listing.append({
'name': name.decode('utf8') if six.PY2 else name,
'is_latest': True,
'version_id': ts.internal,
'content_type': item['content_type'],
'bytes': item['bytes'],
'hash': item['hash'],
'last_modified': item['last_modified'],
})
limit = constrain_req_limit(req, CONTAINER_LISTING_LIMIT)
body = build_listing(
null_listing, versions_listing,
subdir_listing, broken_listing,
reverse=config_true_value(params.get('reverse', 'no')),
limit=limit,
)
self.update_content_length(len(body))
app_resp = [body]
else:
return versions_resp(versions_req.environ, start_response)
start_response(self._response_status,
self._response_headers,
self._response_exc_info)
return app_resp
class AccountContext(ObjectVersioningContext):
def list_containers(self, req, api_version, account, start_response):
app_resp = self._app_call(req.environ)
if is_success(self._get_status_int()):
with closing_if_possible(app_resp):
body = b''.join(app_resp)
try:
listing = json.loads(body)
except ValueError:
app_resp = [body]
else:
# list hidden versions containers
# It might be necessary to issue multiple listing requests
# because of paging limitations, hence the while loop.
params = req.params
versions_dict = {}
versions_req = make_pre_authed_request(
req.environ, method='GET', swift_source='OV',
path=wsgi_quote('/v1/%s' % account),
headers={'X-Backend-Allow-Reserved-Names': 'true'},
)
if 'prefix' in params:
try:
params['prefix'] = \
self._build_versions_container_name(
params['prefix'])
except ValueError:
# don't touch params['prefix'],
# RESERVED_STR probably came from looping around
pass
else:
params['prefix'] = get_reserved_name('versions')
for p in ('marker', 'end_marker'):
if p in params:
try:
params[p] = \
self._build_versions_container_name(
params[p])
except ValueError:
# don't touch params[p]
pass
versions_req.params = params
versions_resp = versions_req.get_response(self.app)
try:
versions_listing = json.loads(versions_resp.body)
except ValueError:
versions_listing = []
finally:
close_if_possible(versions_resp.app_iter)
# create a dict from versions listing to facilitate
# look-up by name. Ignore 'subdir' items
for item in [item for item in versions_listing
if 'name' in item]:
container_name = self._split_versions_container_name(
item['name'])
versions_dict[container_name] = item
# update bytes from original listing with bytes from
# versions cont
if len(versions_dict) > 0:
# ignore 'subdir' items
for item in [item for item in listing if 'name' in item]:
if item['name'] in versions_dict:
v_info = versions_dict.pop(item['name'])
item['bytes'] = item['bytes'] + v_info['bytes']
# if there are items left in versions_dict, it indicates an
# error scenario where there are orphan hidden containers
# (possibly storing data) that should have been deleted
# along with the primary container. In this case, let's add
# those containers to listing so users can be aware and
# clean them up
for key, item in versions_dict.items():
item['name'] = key
item['count'] = 0 # None of these are current
listing.append(item)
limit = constrain_req_limit(req, ACCOUNT_LISTING_LIMIT)
body = build_listing(
listing,
reverse=config_true_value(params.get('reverse', 'no')),
limit=limit,
)
self.update_content_length(len(body))
app_resp = [body]
start_response(self._response_status,
self._response_headers,
self._response_exc_info)
return app_resp
class ObjectVersioningMiddleware(object):
def __init__(self, app, conf):
self.app = app
self.conf = conf
self.logger = get_logger(conf, log_route='object_versioning')
# Pass these along so get_container_info will have the configured
# odds to skip cache
_pipeline_final_app = app_property('_pipeline_final_app')
_pipeline_request_logging_app = app_property(
'_pipeline_request_logging_app')
def account_request(self, req, api_version, account, start_response):
account_ctx = AccountContext(self.app, self.logger)
if req.method == 'GET':
return account_ctx.list_containers(
req, api_version, account, start_response)
else:
return self.app(req.environ, start_response)
def container_request(self, req, start_response):
container_ctx = ContainerContext(self.app, self.logger)
if req.method in ('PUT', 'POST') and \
CLIENT_VERSIONS_ENABLED in req.headers:
return container_ctx.enable_versioning(req, start_response)
elif req.method == 'DELETE':
return container_ctx.handle_delete(req, start_response)
# send request and translate sysmeta headers from response
return container_ctx.handle_request(req, start_response)
def object_request(self, req, api_version, account, container, obj):
"""
Handle request for object resource.
        Note that account, container and obj should be unquoted by the caller
        if the URL path is URL-encoded (e.g. %FF)
:param req: swift.common.swob.Request instance
:param api_version: should be v1 unless swift bumps api version
:param account: account name string
:param container: container name string
        :param obj: object name string
"""
resp = None
container_info = get_container_info(
req.environ, self.app, swift_source='OV')
versions_cont = container_info.get(
'sysmeta', {}).get('versions-container', '')
is_enabled = config_true_value(container_info.get(
'sysmeta', {}).get('versions-enabled'))
if versions_cont:
versions_cont = wsgi_unquote(str_to_wsgi(
versions_cont)).split('/')[0]
if req.params.get('version-id'):
vw_ctx = ObjectContext(self.app, self.logger)
resp = vw_ctx.handle_versioned_request(
req, versions_cont, api_version, account, container, obj,
is_enabled, req.params['version-id'])
elif versions_cont:
            # handle object request for an enabled versioned container
vw_ctx = ObjectContext(self.app, self.logger)
resp = vw_ctx.handle_request(
req, versions_cont, api_version, account, container, obj,
is_enabled)
if resp:
return resp
else:
return self.app
def __call__(self, env, start_response):
req = Request(env)
try:
(api_version, account, container, obj) = req.split_path(2, 4, True)
bad_path = False
except ValueError:
bad_path = True
# use of bad_path bool is to avoid recursive tracebacks
if bad_path or not valid_api_version(api_version):
return self.app(env, start_response)
try:
if not container:
return self.account_request(req, api_version, account,
start_response)
if container and not obj:
return self.container_request(req, start_response)
else:
return self.object_request(
req, api_version, account, container,
obj)(env, start_response)
except HTTPException as error_response:
return error_response(env, start_response)
| swift-master | swift/common/middleware/versioned_writes/object_versioning.py |
# Copyright (c) 2019 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Implements middleware for object versioning which comprises an instance of a
:class:`~swift.common.middleware.versioned_writes.legacy.
VersionedWritesMiddleware` combined with an instance of an
:class:`~swift.common.middleware.versioned_writes.object_versioning.
ObjectVersioningMiddleware`.
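A minimal proxy-server.conf filter section might look like the following
(a sketch only; both options default to off when unset)::
    [filter:versioned_writes]
    use = egg:swift#versioned_writes
    allow_versioned_writes = true
    allow_object_versioning = true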
"""
from swift.common.middleware.versioned_writes. \
legacy import CLIENT_VERSIONS_LOC, CLIENT_HISTORY_LOC, \
VersionedWritesMiddleware
from swift.common.middleware.versioned_writes. \
object_versioning import ObjectVersioningMiddleware
from swift.common.utils import config_true_value
from swift.common.registry import register_swift_info, get_swift_info
def filter_factory(global_conf, **local_conf):
"""Provides a factory function for loading versioning middleware."""
conf = global_conf.copy()
conf.update(local_conf)
if config_true_value(conf.get('allow_versioned_writes')):
register_swift_info('versioned_writes', allowed_flags=(
CLIENT_VERSIONS_LOC, CLIENT_HISTORY_LOC))
allow_object_versioning = config_true_value(conf.get(
'allow_object_versioning'))
if allow_object_versioning:
register_swift_info('object_versioning')
def versioning_filter(app):
if allow_object_versioning:
if 'symlink' not in get_swift_info():
raise ValueError('object versioning requires symlinks')
app = ObjectVersioningMiddleware(app, conf)
return VersionedWritesMiddleware(app, conf)
return versioning_filter
| swift-master | swift/common/middleware/versioned_writes/__init__.py |
# Copyright (c) 2010-2023 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Functions Swift uses to interact with libc and other low-level APIs."""
import ctypes
import ctypes.util
import fcntl
import logging
import os
import platform
import socket
# These are lazily pulled from libc elsewhere
_posix_fadvise = None
_libc_socket = None
_libc_bind = None
_libc_accept = None
# see man -s 2 setpriority
_libc_setpriority = None
# see man -s 2 syscall
_posix_syscall = None
# from /usr/src/linux-headers-*/include/uapi/linux/resource.h
PRIO_PROCESS = 0
# /usr/include/x86_64-linux-gnu/asm/unistd_64.h defines syscalls. There
# are many like it, but this one is mine; see man -s 2 ioprio_set
def NR_ioprio_set():
"""Give __NR_ioprio_set value for your system."""
architecture = os.uname()[4]
arch_bits = platform.architecture()[0]
    # check if this is a supported system; x86_64 and AArch64 are supported
if architecture == 'x86_64' and arch_bits == '64bit':
return 251
elif architecture == 'aarch64' and arch_bits == '64bit':
return 30
raise OSError("Swift doesn't support ionice priority for %s %s" %
(architecture, arch_bits))
# this syscall integer probably only works on x86_64 linux systems, you
# can check if it's correct on yours with something like this:
"""
#include <stdio.h>
#include <sys/syscall.h>
int main(int argc, const char* argv[]) {
printf("%d\n", __NR_ioprio_set);
return 0;
}
"""
# this is the value for "which" that says our who value will be a pid
# pulled out of /usr/src/linux-headers-*/include/linux/ioprio.h
IOPRIO_WHO_PROCESS = 1
IO_CLASS_ENUM = {
'IOPRIO_CLASS_RT': 1,
'IOPRIO_CLASS_BE': 2,
'IOPRIO_CLASS_IDLE': 3,
}
# the IOPRIO_PRIO_VALUE "macro" is also pulled from
# /usr/src/linux-headers-*/include/linux/ioprio.h
IOPRIO_CLASS_SHIFT = 13
def IOPRIO_PRIO_VALUE(class_, data):
return (((class_) << IOPRIO_CLASS_SHIFT) | data)
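# For example, with the values defined above, a best-effort class with
# priority data 4 would be encoded as
#   IOPRIO_PRIO_VALUE(IO_CLASS_ENUM['IOPRIO_CLASS_BE'], 4)
#   == (2 << 13) | 4 == 16388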
# These constants are Linux-specific, and Python doesn't seem to know
# about them. We ask anyway just in case that ever gets fixed.
#
# The values were copied from the Linux 3.x kernel headers.
AF_ALG = getattr(socket, 'AF_ALG', 38)
F_SETPIPE_SZ = getattr(fcntl, 'F_SETPIPE_SZ', 1031)
def noop_libc_function(*args):
return 0
def load_libc_function(func_name, log_error=True,
fail_if_missing=False, errcheck=False):
"""
Attempt to find the function in libc, otherwise return a no-op func.
:param func_name: name of the function to pull from libc.
:param log_error: log an error when a function can't be found
:param fail_if_missing: raise an exception when a function can't be found.
Default behavior is to return a no-op function.
    :param errcheck: boolean, if true install a wrapper on the function
                     to check for a return value of -1 and, if found, call
                     ctypes.get_errno() and raise an OSError
"""
try:
libc = ctypes.CDLL(ctypes.util.find_library('c'), use_errno=True)
func = getattr(libc, func_name)
except AttributeError:
if fail_if_missing:
raise
if log_error:
logging.warning("Unable to locate %s in libc. Leaving as a "
"no-op.", func_name)
return noop_libc_function
if errcheck:
def _errcheck(result, f, args):
if result == -1:
errcode = ctypes.get_errno()
raise OSError(errcode, os.strerror(errcode))
return result
func.errcheck = _errcheck
return func
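# Illustrative sketch (not part of the original module): pulling a common
# symbol out of libc. getpid exists in any libc, so this should not fall
# back to the no-op function.
def _example_load_getpid():
    libc_getpid = load_libc_function('getpid', log_error=False)
    return libc_getpid()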
class _LibcWrapper(object):
"""
A callable object that forwards its calls to a C function from libc.
These objects are lazy. libc will not be checked until someone tries to
either call the function or check its availability.
_LibcWrapper objects have an "available" property; if true, then libc
has the function of that name. If false, then calls will fail with a
NotImplementedError.
"""
def __init__(self, func_name):
self._func_name = func_name
self._func_handle = None
self._loaded = False
def _ensure_loaded(self):
if not self._loaded:
func_name = self._func_name
try:
# Keep everything in this try-block in local variables so
# that a typo in self.some_attribute_name doesn't raise a
# spurious AttributeError.
func_handle = load_libc_function(
func_name, fail_if_missing=True)
self._func_handle = func_handle
except AttributeError:
# We pass fail_if_missing=True to load_libc_function and
# then ignore the error. It's weird, but otherwise we have
# to check if self._func_handle is noop_libc_function, and
# that's even weirder.
pass
self._loaded = True
@property
def available(self):
self._ensure_loaded()
return bool(self._func_handle)
def __call__(self, *args):
if self.available:
return self._func_handle(*args)
else:
raise NotImplementedError(
"No function %r found in libc" % self._func_name)
def drop_buffer_cache(fd, offset, length):
"""
Drop 'buffer' cache for the given range of the given file.
:param fd: file descriptor
:param offset: start offset
:param length: length
"""
global _posix_fadvise
if _posix_fadvise is None:
_posix_fadvise = load_libc_function('posix_fadvise64')
# 4 means "POSIX_FADV_DONTNEED"
ret = _posix_fadvise(fd, ctypes.c_uint64(offset),
ctypes.c_uint64(length), 4)
if ret != 0:
logging.warning("posix_fadvise64(%(fd)s, %(offset)s, %(length)s, 4) "
"-> %(ret)s", {'fd': fd, 'offset': offset,
'length': length, 'ret': ret})
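# Illustrative sketch (not part of the original module): dropping cached
# pages after sequentially reading a large file that will not be re-read
# soon. The path is hypothetical.
def _example_read_then_drop_cache(path='/srv/node/sda/objects/example.data'):
    fd = os.open(path, os.O_RDONLY)
    try:
        total = 0
        while True:
            chunk = os.read(fd, 65536)
            if not chunk:
                break
            total += len(chunk)
        drop_buffer_cache(fd, 0, total)
        return total
    finally:
        os.close(fd)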
class sockaddr_alg(ctypes.Structure):
_fields_ = [("salg_family", ctypes.c_ushort),
("salg_type", ctypes.c_ubyte * 14),
("salg_feat", ctypes.c_uint),
("salg_mask", ctypes.c_uint),
("salg_name", ctypes.c_ubyte * 64)]
_bound_md5_sockfd = None
def get_md5_socket():
"""
Get an MD5 socket file descriptor. One can MD5 data with it by writing it
to the socket with os.write, then os.read the 16 bytes of the checksum out
later.
NOTE: It is the caller's responsibility to ensure that os.close() is
called on the returned file descriptor. This is a bare file descriptor,
not a Python object. It doesn't close itself.
"""
# Linux's AF_ALG sockets work like this:
#
# First, initialize a socket with socket() and bind(). This tells the
# socket what algorithm to use, as well as setting up any necessary bits
# like crypto keys. Of course, MD5 doesn't need any keys, so it's just the
# algorithm name.
#
# Second, to hash some data, get a second socket by calling accept() on
# the first socket. Write data to the socket, then when finished, read the
# checksum from the socket and close it. This lets you checksum multiple
# things without repeating all the setup code each time.
#
# Since we only need to bind() one socket, we do that here and save it for
# future re-use. That way, we only use one file descriptor to get an MD5
# socket instead of two, and we also get to save some syscalls.
global _bound_md5_sockfd
global _libc_socket
global _libc_bind
global _libc_accept
if _libc_accept is None:
_libc_accept = load_libc_function('accept', fail_if_missing=True)
if _libc_socket is None:
_libc_socket = load_libc_function('socket', fail_if_missing=True)
if _libc_bind is None:
_libc_bind = load_libc_function('bind', fail_if_missing=True)
# Do this at first call rather than at import time so that we don't use a
# file descriptor on systems that aren't using any MD5 sockets.
if _bound_md5_sockfd is None:
sockaddr_setup = sockaddr_alg(
AF_ALG,
(ord('h'), ord('a'), ord('s'), ord('h'), 0),
0, 0,
(ord('m'), ord('d'), ord('5'), 0))
hash_sockfd = _libc_socket(ctypes.c_int(AF_ALG),
ctypes.c_int(socket.SOCK_SEQPACKET),
ctypes.c_int(0))
if hash_sockfd < 0:
raise IOError(ctypes.get_errno(),
"Failed to initialize MD5 socket")
bind_result = _libc_bind(ctypes.c_int(hash_sockfd),
ctypes.pointer(sockaddr_setup),
ctypes.c_int(ctypes.sizeof(sockaddr_alg)))
if bind_result < 0:
os.close(hash_sockfd)
raise IOError(ctypes.get_errno(), "Failed to bind MD5 socket")
_bound_md5_sockfd = hash_sockfd
md5_sockfd = _libc_accept(ctypes.c_int(_bound_md5_sockfd), None, 0)
if md5_sockfd < 0:
raise IOError(ctypes.get_errno(), "Failed to accept MD5 socket")
return md5_sockfd
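# Illustrative sketch (not part of the original module): hashing a payload
# through an AF_ALG MD5 socket. This only works on Linux kernels with AF_ALG
# support, and the caller owns (and must close) the returned descriptor.
def _example_md5_via_socket(payload=b'some object data'):
    import binascii
    md5_sockfd = get_md5_socket()
    try:
        os.write(md5_sockfd, payload)
        digest = os.read(md5_sockfd, 16)
        return binascii.hexlify(digest)
    finally:
        os.close(md5_sockfd)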
def modify_priority(conf, logger):
"""
    Modify the scheduling priority of the current process using nice and
    ionice, based on the 'nice_priority', 'ionice_class' and
    'ionice_priority' config options.
"""
global _libc_setpriority
if _libc_setpriority is None:
_libc_setpriority = load_libc_function('setpriority',
errcheck=True)
def _setpriority(nice_priority):
"""
setpriority for this pid
        :param nice_priority: valid values are -20 to 19; lower values mean
                              higher scheduling priority
"""
try:
_libc_setpriority(PRIO_PROCESS, os.getpid(),
int(nice_priority))
except (ValueError, OSError):
print("WARNING: Unable to modify scheduling priority of process."
" Keeping unchanged! Check logs for more info. ")
logger.exception('Unable to modify nice priority')
else:
logger.debug('set nice priority to %s' % nice_priority)
nice_priority = conf.get('nice_priority')
if nice_priority is not None:
_setpriority(nice_priority)
global _posix_syscall
if _posix_syscall is None:
_posix_syscall = load_libc_function('syscall', errcheck=True)
def _ioprio_set(io_class, io_priority):
"""
ioprio_set for this process
:param io_class: the I/O class component, can be
IOPRIO_CLASS_RT, IOPRIO_CLASS_BE,
or IOPRIO_CLASS_IDLE
:param io_priority: priority value in the I/O class
"""
try:
io_class = IO_CLASS_ENUM[io_class]
io_priority = int(io_priority)
_posix_syscall(NR_ioprio_set(),
IOPRIO_WHO_PROCESS,
os.getpid(),
IOPRIO_PRIO_VALUE(io_class, io_priority))
except (KeyError, ValueError, OSError):
print("WARNING: Unable to modify I/O scheduling class "
"and priority of process. Keeping unchanged! "
"Check logs for more info.")
logger.exception("Unable to modify ionice priority")
else:
logger.debug('set ionice class %s priority %s',
io_class, io_priority)
io_class = conf.get("ionice_class")
if io_class is None:
return
io_priority = conf.get("ionice_priority", 0)
_ioprio_set(io_class, io_priority)
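# Illustrative sketch (not part of the original module): applying scheduling
# settings the way a daemon would read them from its config section. The
# option values are hypothetical.
def _example_modify_priority():
    conf = {'nice_priority': '10',
            'ionice_class': 'IOPRIO_CLASS_BE',
            'ionice_priority': '5'}
    modify_priority(conf, logging.getLogger('example'))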
| swift-master | swift/common/utils/libc.py |
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Miscellaneous utility functions for use with Swift."""
from __future__ import print_function
import base64
import binascii
import bisect
import collections
import errno
import fcntl
import grp
import hashlib
import json
import operator
import os
import pwd
import re
import string
import struct
import sys
import time
import uuid
import functools
import email.parser
from random import random, shuffle
from contextlib import contextmanager, closing
import ctypes
import ctypes.util
from optparse import OptionParser
import traceback
import warnings
from tempfile import gettempdir, mkstemp, NamedTemporaryFile
import glob
import itertools
import stat
import datetime
import eventlet
import eventlet.debug
import eventlet.greenthread
import eventlet.patcher
import eventlet.semaphore
try:
import importlib.metadata
pkg_resources = None
except ImportError:
# python < 3.8
import pkg_resources
from eventlet import GreenPool, sleep, Timeout
from eventlet.event import Event
from eventlet.green import socket, threading
import eventlet.hubs
import eventlet.queue
import codecs
utf8_decoder = codecs.getdecoder('utf-8')
utf8_encoder = codecs.getencoder('utf-8')
import six
if six.PY2:
from eventlet.green import httplib as green_http_client
else:
from eventlet.green.http import client as green_http_client
utf16_decoder = codecs.getdecoder('utf-16')
utf16_encoder = codecs.getencoder('utf-16')
from six.moves import cPickle as pickle
from six.moves import configparser
from six.moves.configparser import (ConfigParser, NoSectionError,
NoOptionError, RawConfigParser)
from six.moves import range, http_client
from six.moves.urllib.parse import quote as _quote, unquote
from six.moves.urllib.parse import urlparse
from six.moves import UserList
import swift.common.exceptions
from swift.common.http import is_server_error
from swift.common.header_key_dict import HeaderKeyDict
from swift.common.linkat import linkat
# For backwards compatibility with 3rd party middlewares
from swift.common.registry import register_swift_info, get_swift_info # noqa
from swift.common.utils.libc import ( # noqa
F_SETPIPE_SZ,
load_libc_function,
drop_buffer_cache,
get_md5_socket,
modify_priority,
_LibcWrapper,
)
from swift.common.utils.timestamp import ( # noqa
NORMAL_FORMAT,
INTERNAL_FORMAT,
SHORT_FORMAT,
MAX_OFFSET,
PRECISION,
Timestamp,
encode_timestamps,
decode_timestamps,
normalize_timestamp,
EPOCH,
last_modified_date_to_timestamp,
normalize_delete_at_timestamp,
)
from swift.common.utils.ipaddrs import ( # noqa
is_valid_ip,
is_valid_ipv4,
is_valid_ipv6,
expand_ipv6,
parse_socket_string,
whataremyips,
)
from logging.handlers import SysLogHandler
import logging
NOTICE = 25
# These are lazily pulled from libc elsewhere
_sys_fallocate = None
# If set to non-zero, fallocate routines will fail based on free space
# available being at or below this amount, in bytes.
FALLOCATE_RESERVE = 0
# Indicates if FALLOCATE_RESERVE is the percentage of free space (True) or
# the number of bytes (False).
FALLOCATE_IS_PERCENT = False
# from /usr/include/linux/falloc.h
FALLOC_FL_KEEP_SIZE = 1
FALLOC_FL_PUNCH_HOLE = 2
# Used by hash_path to offer a bit more security when generating hashes for
# paths. It simply appends this value to all paths; guessing the hash a path
# will end up with would also require knowing this suffix.
HASH_PATH_SUFFIX = b''
HASH_PATH_PREFIX = b''
SWIFT_CONF_FILE = '/etc/swift/swift.conf'
# These constants are Linux-specific, and Python doesn't seem to know
# about them. We ask anyway just in case that ever gets fixed.
#
# The values were copied from the Linux 3.x kernel headers.
O_TMPFILE = getattr(os, 'O_TMPFILE', 0o20000000 | os.O_DIRECTORY)
MD5_OF_EMPTY_STRING = 'd41d8cd98f00b204e9800998ecf8427e'
RESERVED_BYTE = b'\x00'
RESERVED_STR = u'\x00'
RESERVED = '\x00'
LOG_LINE_DEFAULT_FORMAT = '{remote_addr} - - [{time.d}/{time.b}/{time.Y}' \
':{time.H}:{time.M}:{time.S} +0000] ' \
'"{method} {path}" {status} {content_length} ' \
'"{referer}" "{txn_id}" "{user_agent}" ' \
'{trans_time:.4f} "{additional_info}" {pid} ' \
'{policy_index}'
DEFAULT_LOCK_TIMEOUT = 10
class InvalidHashPathConfigError(ValueError):
def __str__(self):
return "[swift-hash]: both swift_hash_path_suffix and " \
"swift_hash_path_prefix are missing from %s" % SWIFT_CONF_FILE
def set_swift_dir(swift_dir):
"""
Sets the directory from which swift config files will be read. If the given
directory differs from that already set then the swift.conf file in the new
directory will be validated and storage policies will be reloaded from the
new swift.conf file.
    :param swift_dir: non-default directory to read swift.conf from
    :returns: True if the swift.conf file to be used has changed, False
              otherwise
"""
global HASH_PATH_SUFFIX
global HASH_PATH_PREFIX
global SWIFT_CONF_FILE
if (swift_dir is not None and
swift_dir != os.path.dirname(SWIFT_CONF_FILE)):
SWIFT_CONF_FILE = os.path.join(
swift_dir, os.path.basename(SWIFT_CONF_FILE))
HASH_PATH_PREFIX = b''
HASH_PATH_SUFFIX = b''
validate_configuration()
return True
return False
def validate_hash_conf():
global HASH_PATH_SUFFIX
global HASH_PATH_PREFIX
if not HASH_PATH_SUFFIX and not HASH_PATH_PREFIX:
hash_conf = ConfigParser()
if six.PY3:
# Use Latin1 to accept arbitrary bytes in the hash prefix/suffix
with open(SWIFT_CONF_FILE, encoding='latin1') as swift_conf_file:
hash_conf.read_file(swift_conf_file)
else:
with open(SWIFT_CONF_FILE) as swift_conf_file:
hash_conf.readfp(swift_conf_file)
try:
HASH_PATH_SUFFIX = hash_conf.get('swift-hash',
'swift_hash_path_suffix')
if six.PY3:
HASH_PATH_SUFFIX = HASH_PATH_SUFFIX.encode('latin1')
except (NoSectionError, NoOptionError):
pass
try:
HASH_PATH_PREFIX = hash_conf.get('swift-hash',
'swift_hash_path_prefix')
if six.PY3:
HASH_PATH_PREFIX = HASH_PATH_PREFIX.encode('latin1')
except (NoSectionError, NoOptionError):
pass
if not HASH_PATH_SUFFIX and not HASH_PATH_PREFIX:
raise InvalidHashPathConfigError()
try:
validate_hash_conf()
except (InvalidHashPathConfigError, IOError):
# could get monkey patched or lazy loaded
pass
def backward(f, blocksize=4096):
"""
A generator returning lines from a file starting with the last line,
then the second last line, etc. i.e., it reads lines backwards.
Stops when the first line (if any) is read.
This is useful when searching for recent activity in very
large files.
:param f: file object to read
    :param blocksize: number of bytes to read backwards at each block
"""
f.seek(0, os.SEEK_END)
if f.tell() == 0:
return
last_row = b''
while f.tell() != 0:
try:
f.seek(-blocksize, os.SEEK_CUR)
except IOError:
blocksize = f.tell()
f.seek(-blocksize, os.SEEK_CUR)
block = f.read(blocksize)
f.seek(-blocksize, os.SEEK_CUR)
rows = block.split(b'\n')
rows[-1] = rows[-1] + last_row
while rows:
last_row = rows.pop(-1)
if rows and last_row:
yield last_row
yield last_row
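# Illustrative sketch (not part of the original module): finding the most
# recent line containing a marker without reading the whole file forwards.
# The path is hypothetical.
def _example_find_last_error(path='/var/log/swift/storage.log'):
    with open(path, 'rb') as f:
        for line in backward(f):
            if b'ERROR' in line:
                return line
    return None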
# Used when reading config values
TRUE_VALUES = set(('true', '1', 'yes', 'on', 't', 'y'))
def non_negative_float(value):
"""
Check that the value casts to a float and is non-negative.
:param value: value to check
:raises ValueError: if the value cannot be cast to a float or is negative.
:return: a float
"""
try:
value = float(value)
if value < 0:
raise ValueError
except (TypeError, ValueError):
raise ValueError('Value must be a non-negative float number, not "%s".'
% value)
return value
def non_negative_int(value):
"""
Check that the value casts to an int and is a whole number.
:param value: value to check
:raises ValueError: if the value cannot be cast to an int or does not
represent a whole number.
:return: an int
"""
int_value = int(value)
if int_value != non_negative_float(value):
raise ValueError
return int_value
def config_true_value(value):
"""
Returns True if the value is either True or a string in TRUE_VALUES.
Returns False otherwise.
"""
return value is True or \
(isinstance(value, six.string_types) and value.lower() in TRUE_VALUES)
def config_positive_int_value(value):
"""
    Returns the value as an int if it can be cast by int() and is strictly
    greater than zero. Raises ValueError otherwise.
"""
try:
result = int(value)
if result < 1:
raise ValueError()
except (TypeError, ValueError):
raise ValueError(
            'Config option must be a positive int number, not "%s".' % value)
return result
def config_float_value(value, minimum=None, maximum=None):
try:
val = float(value)
if minimum is not None and val < minimum:
raise ValueError()
if maximum is not None and val > maximum:
raise ValueError()
return val
except (TypeError, ValueError):
min_ = ', greater than %s' % minimum if minimum is not None else ''
max_ = ', less than %s' % maximum if maximum is not None else ''
raise ValueError('Config option must be a number%s%s, not "%s".' %
(min_, max_, value))
def config_auto_int_value(value, default):
"""
Returns default if value is None or 'auto'.
Returns value as an int or raises ValueError otherwise.
"""
if value is None or \
(isinstance(value, six.string_types) and value.lower() == 'auto'):
return default
try:
value = int(value)
except (TypeError, ValueError):
raise ValueError('Config option must be an integer or the '
'string "auto", not "%s".' % value)
return value
def config_percent_value(value):
try:
return config_float_value(value, 0, 100) / 100.0
except ValueError as err:
raise ValueError("%s: %s" % (str(err), value))
def config_request_node_count_value(value):
try:
value_parts = value.lower().split()
rnc_value = int(value_parts[0])
except (ValueError, AttributeError):
pass
else:
if len(value_parts) == 1:
return lambda replicas: rnc_value
elif (len(value_parts) == 3 and
value_parts[1] == '*' and
value_parts[2] == 'replicas'):
return lambda replicas: rnc_value * replicas
raise ValueError(
'Invalid request_node_count value: %r' % value)
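# Illustrative examples (not part of the original module) of the config
# coercion helpers above, exercised with hypothetical option values.
def _example_config_coercion():
    assert config_true_value('yes')
    assert config_positive_int_value('3') == 3
    assert config_auto_int_value('auto', 1024) == 1024
    assert config_percent_value('25') == 0.25
    node_count = config_request_node_count_value('2 * replicas')
    assert node_count(3) == 6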
def append_underscore(prefix):
if prefix and not prefix.endswith('_'):
prefix += '_'
return prefix
def config_read_reseller_options(conf, defaults):
"""
Read reseller_prefix option and associated options from configuration
Reads the reseller_prefix option, then reads options that may be
associated with a specific reseller prefix. Reads options such that an
option without a prefix applies to all reseller prefixes unless an option
has an explicit prefix.
:param conf: the configuration
:param defaults: a dict of default values. The key is the option
name. The value is either an array of strings or a string
:return: tuple of an array of reseller prefixes and a dict of option values
"""
reseller_prefix_opt = conf.get('reseller_prefix', 'AUTH').split(',')
reseller_prefixes = []
for prefix in [pre.strip() for pre in reseller_prefix_opt if pre.strip()]:
if prefix == "''":
prefix = ''
prefix = append_underscore(prefix)
if prefix not in reseller_prefixes:
reseller_prefixes.append(prefix)
if len(reseller_prefixes) == 0:
reseller_prefixes.append('')
# Get prefix-using config options
associated_options = {}
for prefix in reseller_prefixes:
associated_options[prefix] = dict(defaults)
associated_options[prefix].update(
config_read_prefixed_options(conf, '', defaults))
prefix_name = prefix if prefix != '' else "''"
associated_options[prefix].update(
config_read_prefixed_options(conf, prefix_name, defaults))
return reseller_prefixes, associated_options
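# Illustrative sketch (not part of the original module): reading per-reseller
# options the way an auth middleware such as keystoneauth does. The conf
# values are hypothetical.
def _example_reseller_options():
    conf = {'reseller_prefix': 'AUTH, SERVICE',
            'operator_roles': 'admin, swiftoperator',
            'SERVICE_operator_roles': 'service'}
    prefixes, options = config_read_reseller_options(
        conf, {'operator_roles': []})
    # prefixes == ['AUTH_', 'SERVICE_']
    # options['SERVICE_']['operator_roles'] == ['service']
    return prefixes, options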
def config_read_prefixed_options(conf, prefix_name, defaults):
"""
Read prefixed options from configuration
:param conf: the configuration
:param prefix_name: the prefix (including, if needed, an underscore)
:param defaults: a dict of default values. The dict supplies the
option name and type (string or comma separated string)
:return: a dict containing the options
"""
params = {}
for option_name in defaults.keys():
value = conf.get('%s%s' % (prefix_name, option_name))
if value:
if isinstance(defaults.get(option_name), list):
params[option_name] = []
for role in value.lower().split(','):
params[option_name].append(role.strip())
else:
params[option_name] = value.strip()
return params
def logging_monkey_patch():
# explicitly patch the logging lock
logging._lock = logging.threading.RLock()
# setup notice level logging
logging.addLevelName(NOTICE, 'NOTICE')
SysLogHandler.priority_map['NOTICE'] = 'notice'
# Trying to log threads while monkey-patched can lead to deadlocks; see
# https://bugs.launchpad.net/swift/+bug/1895739
logging.logThreads = 0
def eventlet_monkey_patch():
"""
Install the appropriate Eventlet monkey patches.
"""
# NOTE(sileht):
# monkey-patching thread is required by python-keystoneclient;
# monkey-patching select is required by oslo.messaging pika driver
# if thread is monkey-patched.
eventlet.patcher.monkey_patch(all=False, socket=True, select=True,
thread=True)
def monkey_patch():
"""
Apply all swift monkey patching consistently in one place.
"""
eventlet_monkey_patch()
logging_monkey_patch()
def validate_configuration():
try:
validate_hash_conf()
except InvalidHashPathConfigError as e:
sys.exit("Error: %s" % e)
def generate_trans_id(trans_id_suffix):
return 'tx%s-%010x%s' % (
uuid.uuid4().hex[:21], int(time.time()), quote(trans_id_suffix))
def get_policy_index(req_headers, res_headers):
"""
Returns the appropriate index of the storage policy for the request from
a proxy server
:param req_headers: dict of the request headers.
:param res_headers: dict of the response headers.
:returns: string index of storage policy, or None
"""
header = 'X-Backend-Storage-Policy-Index'
policy_index = res_headers.get(header, req_headers.get(header))
if isinstance(policy_index, six.binary_type) and not six.PY2:
policy_index = policy_index.decode('ascii')
return str(policy_index) if policy_index is not None else None
class _UTC(datetime.tzinfo):
"""
A tzinfo class for datetime objects that returns a 0 timedelta (UTC time)
"""
def dst(self, dt):
return datetime.timedelta(0)
utcoffset = dst
def tzname(self, dt):
return 'UTC'
UTC = _UTC()
class LogStringFormatter(string.Formatter):
def __init__(self, default='', quote=False):
super(LogStringFormatter, self).__init__()
self.default = default
self.quote = quote
def format_field(self, value, spec):
if not value:
return self.default
else:
log = super(LogStringFormatter, self).format_field(value, spec)
if self.quote:
return quote(log, ':/{}')
else:
return log
class StrAnonymizer(str):
"""
    Class that allows a string to be obtained either anonymized or simply
    quoted.
"""
def __new__(cls, data, method, salt):
method = method.lower()
if method not in (hashlib.algorithms if six.PY2 else
hashlib.algorithms_guaranteed):
raise ValueError('Unsupported hashing method: %r' % method)
s = str.__new__(cls, data or '')
s.method = method
s.salt = salt
return s
@property
def anonymized(self):
if not self:
return self
else:
if self.method == 'md5':
h = md5(usedforsecurity=False)
else:
h = getattr(hashlib, self.method)()
if self.salt:
h.update(six.b(self.salt))
h.update(six.b(self))
return '{%s%s}%s' % ('S' if self.salt else '', self.method.upper(),
h.hexdigest())
class StrFormatTime(object):
"""
    Class that allows formatted representations or individual parts of a
    time to be obtained.
"""
def __init__(self, ts):
self.time = ts
self.time_struct = time.gmtime(ts)
def __str__(self):
return "%.9f" % self.time
def __getattr__(self, attr):
if attr not in ['a', 'A', 'b', 'B', 'c', 'd', 'H',
'I', 'j', 'm', 'M', 'p', 'S', 'U',
'w', 'W', 'x', 'X', 'y', 'Y', 'Z']:
raise ValueError(("The attribute %s is not a correct directive "
"for time.strftime formater.") % attr)
return datetime.datetime(*self.time_struct[:-2],
tzinfo=UTC).strftime('%' + attr)
@property
def asctime(self):
return time.asctime(self.time_struct)
@property
def datetime(self):
return time.strftime('%d/%b/%Y/%H/%M/%S', self.time_struct)
@property
def iso8601(self):
return time.strftime('%Y-%m-%dT%H:%M:%S', self.time_struct)
@property
def ms(self):
return self.__str__().split('.')[1][:3]
@property
def us(self):
return self.__str__().split('.')[1][:6]
@property
def ns(self):
return self.__str__().split('.')[1]
@property
def s(self):
return self.__str__().split('.')[0]
def get_log_line(req, res, trans_time, additional_info, fmt,
anonymization_method, anonymization_salt):
"""
Make a line for logging that matches the documented log line format
for backend servers.
:param req: the request.
:param res: the response.
:param trans_time: the time the request took to complete, a float.
    :param additional_info: a string to log at the end of the line
    :param fmt: a log line format string
    :param anonymization_method: hashlib algorithm name used to anonymize
                                 sensitive log fields
    :param anonymization_salt: salt mixed into the anonymizing hash
:returns: a properly formatted line for logging.
"""
policy_index = get_policy_index(req.headers, res.headers)
if req.path.startswith('/'):
disk, partition, account, container, obj = split_path(req.path, 0, 5,
True)
else:
disk, partition, account, container, obj = (None, ) * 5
replacements = {
'remote_addr': StrAnonymizer(req.remote_addr, anonymization_method,
anonymization_salt),
'time': StrFormatTime(time.time()),
'method': req.method,
'path': StrAnonymizer(req.path, anonymization_method,
anonymization_salt),
'disk': disk,
'partition': partition,
'account': StrAnonymizer(account, anonymization_method,
anonymization_salt),
'container': StrAnonymizer(container, anonymization_method,
anonymization_salt),
'object': StrAnonymizer(obj, anonymization_method,
anonymization_salt),
'status': res.status.split()[0],
'content_length': res.content_length,
'referer': StrAnonymizer(req.referer, anonymization_method,
anonymization_salt),
'txn_id': req.headers.get('x-trans-id'),
'user_agent': StrAnonymizer(req.user_agent, anonymization_method,
anonymization_salt),
'trans_time': trans_time,
'additional_info': additional_info,
'pid': os.getpid(),
'policy_index': policy_index,
}
return LogStringFormatter(default='-').format(fmt, **replacements)
def get_trans_id_time(trans_id):
if len(trans_id) >= 34 and \
trans_id.startswith('tx') and trans_id[23] == '-':
try:
return int(trans_id[24:34], 16)
except ValueError:
pass
return None
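# Illustrative sketch (not part of the original module): round-tripping a
# transaction id through the two helpers above. The suffix is hypothetical.
def _example_trans_id_round_trip():
    trans_id = generate_trans_id('-edge1')
    started = get_trans_id_time(trans_id)
    # started is the integer epoch second embedded in the id, or None if the
    # id does not look like one generated by generate_trans_id().
    return trans_id, started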
def config_fallocate_value(reserve_value):
"""
Returns fallocate reserve_value as an int or float.
Returns is_percent as a boolean.
    Raises ValueError on an invalid fallocate_reserve value.
"""
try:
if str(reserve_value[-1:]) == '%':
reserve_value = float(reserve_value[:-1])
is_percent = True
else:
reserve_value = int(reserve_value)
is_percent = False
except ValueError:
raise ValueError('Error: %s is an invalid value for fallocate'
'_reserve.' % reserve_value)
return reserve_value, is_percent
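# Illustrative examples (not part of the original module) of parsing the
# fallocate_reserve setting in both of its accepted forms.
def _example_fallocate_reserve_parsing():
    assert config_fallocate_value('2%') == (2.0, True)
    assert config_fallocate_value('10240') == (10240, False)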
class FileLikeIter(object):
def __init__(self, iterable):
"""
Wraps an iterable to behave as a file-like object.
The iterable must be a byte string or yield byte strings.
"""
if isinstance(iterable, bytes):
iterable = (iterable, )
self.iterator = iter(iterable)
self.buf = None
self.closed = False
def __iter__(self):
return self
def next(self):
"""
next(x) -> the next value, or raise StopIteration
"""
if self.closed:
raise ValueError('I/O operation on closed file')
if self.buf:
rv = self.buf
self.buf = None
return rv
else:
return next(self.iterator)
__next__ = next
def read(self, size=-1):
"""
read([size]) -> read at most size bytes, returned as a bytes string.
If the size argument is negative or omitted, read until EOF is reached.
Notice that when in non-blocking mode, less data than what was
requested may be returned, even if no size parameter was given.
"""
if self.closed:
raise ValueError('I/O operation on closed file')
if size < 0:
return b''.join(self)
elif not size:
chunk = b''
elif self.buf:
chunk = self.buf
self.buf = None
else:
try:
chunk = next(self.iterator)
except StopIteration:
return b''
if len(chunk) > size:
self.buf = chunk[size:]
chunk = chunk[:size]
return chunk
def readline(self, size=-1):
"""
readline([size]) -> next line from the file, as a bytes string.
Retain newline. A non-negative size argument limits the maximum
number of bytes to return (an incomplete line may be returned then).
Return an empty string at EOF.
"""
if self.closed:
raise ValueError('I/O operation on closed file')
data = b''
while b'\n' not in data and (size < 0 or len(data) < size):
if size < 0:
chunk = self.read(1024)
else:
chunk = self.read(size - len(data))
if not chunk:
break
data += chunk
if b'\n' in data:
data, sep, rest = data.partition(b'\n')
data += sep
if self.buf:
self.buf = rest + self.buf
else:
self.buf = rest
return data
def readlines(self, sizehint=-1):
"""
readlines([size]) -> list of bytes strings, each a line from the file.
Call readline() repeatedly and return a list of the lines so read.
The optional size argument, if given, is an approximate bound on the
total number of bytes in the lines returned.
"""
if self.closed:
raise ValueError('I/O operation on closed file')
lines = []
while True:
line = self.readline(sizehint)
if not line:
break
lines.append(line)
if sizehint >= 0:
sizehint -= len(line)
if sizehint <= 0:
break
return lines
def close(self):
"""
close() -> None or (perhaps) an integer. Close the file.
Sets data attribute .closed to True. A closed file cannot be used for
further I/O operations. close() may be called more than once without
error. Some kinds of file objects (for example, opened by popen())
may return an exit status upon closing.
"""
self.iterator = None
self.closed = True
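# Illustrative sketch (not part of the original module): consuming a chunk
# iterator through file-style calls.
def _example_file_like_iter():
    flo = FileLikeIter([b'hello ', b'swift\n', b'bye\n'])
    first_line = flo.readline()  # b'hello swift\n'
    rest = flo.read()            # b'bye\n'
    flo.close()
    return first_line, rest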
def fs_has_free_space(fs_path, space_needed, is_percent):
"""
Check to see whether or not a filesystem has the given amount of space
free. Unlike fallocate(), this does not reserve any space.
:param fs_path: path to a file or directory on the filesystem; typically
the path to the filesystem's mount point
:param space_needed: minimum bytes or percentage of free space
:param is_percent: if True, then space_needed is treated as a percentage
of the filesystem's capacity; if False, space_needed is a number of
free bytes.
:returns: True if the filesystem has at least that much free space,
False otherwise
:raises OSError: if fs_path does not exist
"""
st = os.statvfs(fs_path)
free_bytes = st.f_frsize * st.f_bavail
if is_percent:
size_bytes = st.f_frsize * st.f_blocks
free_percent = float(free_bytes) / float(size_bytes) * 100
return free_percent >= space_needed
else:
return free_bytes >= space_needed
_fallocate_enabled = True
_fallocate_warned_about_missing = False
_sys_fallocate = _LibcWrapper('fallocate')
_sys_posix_fallocate = _LibcWrapper('posix_fallocate')
def disable_fallocate():
global _fallocate_enabled
_fallocate_enabled = False
def fallocate(fd, size, offset=0):
"""
Pre-allocate disk space for a file.
This function can be disabled by calling disable_fallocate(). If no
suitable C function is available in libc, this function is a no-op.
:param fd: file descriptor
:param size: size to allocate (in bytes)
"""
global _fallocate_enabled
if not _fallocate_enabled:
return
if size < 0:
size = 0 # Done historically; not really sure why
if size >= (1 << 63):
raise ValueError('size must be less than 2 ** 63')
if offset < 0:
raise ValueError('offset must be non-negative')
if offset >= (1 << 63):
raise ValueError('offset must be less than 2 ** 63')
# Make sure there's some (configurable) amount of free space in
# addition to the number of bytes we're allocating.
if FALLOCATE_RESERVE:
st = os.fstatvfs(fd)
free = st.f_frsize * st.f_bavail - size
if FALLOCATE_IS_PERCENT:
free = (float(free) / float(st.f_frsize * st.f_blocks)) * 100
if float(free) <= float(FALLOCATE_RESERVE):
raise OSError(
errno.ENOSPC,
'FALLOCATE_RESERVE fail %g <= %g' %
(free, FALLOCATE_RESERVE))
if _sys_fallocate.available:
# Parameters are (fd, mode, offset, length).
#
# mode=FALLOC_FL_KEEP_SIZE pre-allocates invisibly (without
# affecting the reported file size).
ret = _sys_fallocate(
fd, FALLOC_FL_KEEP_SIZE, ctypes.c_uint64(offset),
ctypes.c_uint64(size))
err = ctypes.get_errno()
elif _sys_posix_fallocate.available:
# Parameters are (fd, offset, length).
ret = _sys_posix_fallocate(fd, ctypes.c_uint64(offset),
ctypes.c_uint64(size))
err = ctypes.get_errno()
else:
# No suitable fallocate-like function is in our libc. Warn about it,
# but just once per process, and then do nothing.
global _fallocate_warned_about_missing
if not _fallocate_warned_about_missing:
logging.warning("Unable to locate fallocate, posix_fallocate in "
"libc. Leaving as a no-op.")
_fallocate_warned_about_missing = True
return
if ret and err not in (0, errno.ENOSYS, errno.EOPNOTSUPP,
errno.EINVAL):
raise OSError(err, 'Unable to fallocate(%s)' % size)
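# Illustrative sketch (not part of the original module): pre-allocating space
# for a file before streaming data into it. The path is hypothetical.
def _example_preallocate(path='/srv/node/sda/tmp/incoming.data',
                         size=4 * 1024 * 1024):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        fallocate(fd, size)
    finally:
        os.close(fd)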
def punch_hole(fd, offset, length):
"""
De-allocate disk space in the middle of a file.
:param fd: file descriptor
:param offset: index of first byte to de-allocate
:param length: number of bytes to de-allocate
"""
if offset < 0:
raise ValueError('offset must be non-negative')
if offset >= (1 << 63):
raise ValueError('offset must be less than 2 ** 63')
if length <= 0:
raise ValueError('length must be positive')
if length >= (1 << 63):
raise ValueError('length must be less than 2 ** 63')
if _sys_fallocate.available:
# Parameters are (fd, mode, offset, length).
ret = _sys_fallocate(
fd,
FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE,
ctypes.c_uint64(offset),
ctypes.c_uint64(length))
err = ctypes.get_errno()
if ret and err:
mode_str = "FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE"
raise OSError(err, "Unable to fallocate(%d, %s, %d, %d)" % (
fd, mode_str, offset, length))
else:
raise OSError(errno.ENOTSUP,
'No suitable C function found for hole punching')
def fsync(fd):
"""
Sync modified file data and metadata to disk.
:param fd: file descriptor
"""
if hasattr(fcntl, 'F_FULLSYNC'):
try:
fcntl.fcntl(fd, fcntl.F_FULLSYNC)
except IOError as e:
raise OSError(e.errno, 'Unable to F_FULLSYNC(%s)' % fd)
else:
os.fsync(fd)
def fdatasync(fd):
"""
Sync modified file data to disk.
:param fd: file descriptor
"""
try:
os.fdatasync(fd)
except AttributeError:
fsync(fd)
def fsync_dir(dirpath):
"""
Sync directory entries to disk.
:param dirpath: Path to the directory to be synced.
"""
dirfd = None
try:
dirfd = os.open(dirpath, os.O_DIRECTORY | os.O_RDONLY)
fsync(dirfd)
except OSError as err:
if err.errno == errno.ENOTDIR:
# Raise error if someone calls fsync_dir on a non-directory
raise
logging.warning('Unable to perform fsync() on directory %(dir)s:'
' %(err)s',
{'dir': dirpath, 'err': os.strerror(err.errno)})
finally:
if dirfd:
os.close(dirfd)
def mkdirs(path):
"""
Ensures the path is a directory or makes it if not. Errors if the path
exists but is a file or on permissions failure.
:param path: path to create
"""
if not os.path.isdir(path):
try:
os.makedirs(path)
except OSError as err:
if err.errno != errno.EEXIST or not os.path.isdir(path):
raise
def makedirs_count(path, count=0):
"""
Same as os.makedirs() except that this method returns the number of
new directories that had to be created.
Also, this does not raise an error if target directory already exists.
This behaviour is similar to Python 3.x's os.makedirs() called with
exist_ok=True. Also similar to swift.common.utils.mkdirs()
https://hg.python.org/cpython/file/v3.4.2/Lib/os.py#l212
"""
head, tail = os.path.split(path)
if not tail:
head, tail = os.path.split(head)
if head and tail and not os.path.exists(head):
count = makedirs_count(head, count)
if tail == os.path.curdir:
return
try:
os.mkdir(path)
except OSError as e:
# EEXIST may also be raised if path exists as a file
# Do not let that pass.
if e.errno != errno.EEXIST or not os.path.isdir(path):
raise
else:
count += 1
return count
def renamer(old, new, fsync=True):
"""
Attempt to fix / hide race conditions like empty object directories
being removed by backend processes during uploads, by retrying.
The containing directory of 'new' and of all newly created directories are
fsync'd by default. This _will_ come at a performance penalty. In cases
where these additional fsyncs are not necessary, it is expected that the
caller of renamer() turn it off explicitly.
:param old: old path to be renamed
:param new: new path to be renamed to
:param fsync: fsync on containing directory of new and also all
the newly created directories.
"""
dirpath = os.path.dirname(new)
try:
count = makedirs_count(dirpath)
os.rename(old, new)
except OSError:
count = makedirs_count(dirpath)
os.rename(old, new)
if fsync:
# If count=0, no new directories were created. But we still need to
# fsync leaf dir after os.rename().
# If count>0, starting from leaf dir, fsync parent dirs of all
# directories created by makedirs_count()
for i in range(0, count + 1):
fsync_dir(dirpath)
dirpath = os.path.dirname(dirpath)
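# Illustrative sketch (not part of the original module): the write-then-rename
# pattern used to make a finished file visible atomically. The paths are
# hypothetical.
def _example_commit_tempfile():
    renamer('/srv/node/sda/tmp/upload-123.data',
            '/srv/node/sda/objects/1/abc/d41d8cd98f00b204e9800998ecf8427e/'
            '1700000000.00000.data')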
def link_fd_to_path(fd, target_path, dirs_created=0, retries=2, fsync=True):
"""
    Creates a link at the specified target_path to the given file descriptor.
    This method does not close the fd for you. Unlike rename, linkat() cannot
    overwrite target_path if it exists, so we unlink and try again.
Attempts to fix / hide race conditions like empty object directories
being removed by backend processes during uploads, by retrying.
:param fd: File descriptor to be linked
:param target_path: Path in filesystem where fd is to be linked
    :param dirs_created: Number of newly created directories that need to
                         be fsync'd.
:param retries: number of retries to make
:param fsync: fsync on containing directory of target_path and also all
the newly created directories.
"""
dirpath = os.path.dirname(target_path)
for _junk in range(0, retries):
try:
linkat(linkat.AT_FDCWD, "/proc/self/fd/%d" % (fd),
linkat.AT_FDCWD, target_path, linkat.AT_SYMLINK_FOLLOW)
break
except IOError as err:
if err.errno == errno.ENOENT:
dirs_created = makedirs_count(dirpath)
elif err.errno == errno.EEXIST:
try:
os.unlink(target_path)
except OSError as e:
if e.errno != errno.ENOENT:
raise
else:
raise
if fsync:
for i in range(0, dirs_created + 1):
fsync_dir(dirpath)
dirpath = os.path.dirname(dirpath)
def split_path(path, minsegs=1, maxsegs=None, rest_with_last=False):
"""
Validate and split the given HTTP request path.
**Examples**::
['a'] = split_path('/a')
['a', None] = split_path('/a', 1, 2)
['a', 'c'] = split_path('/a/c', 1, 2)
['a', 'c', 'o/r'] = split_path('/a/c/o/r', 1, 3, True)
:param path: HTTP Request path to be split
:param minsegs: Minimum number of segments to be extracted
:param maxsegs: Maximum number of segments to be extracted
:param rest_with_last: If True, trailing data will be returned as part
of last segment. If False, and there is
trailing data, raises ValueError.
:returns: list of segments with a length of maxsegs (non-existent
segments will return as None)
:raises ValueError: if given an invalid path
"""
if not maxsegs:
maxsegs = minsegs
if minsegs > maxsegs:
raise ValueError('minsegs > maxsegs: %d > %d' % (minsegs, maxsegs))
if rest_with_last:
segs = path.split('/', maxsegs)
minsegs += 1
maxsegs += 1
count = len(segs)
if (segs[0] or count < minsegs or count > maxsegs or
'' in segs[1:minsegs]):
raise ValueError('Invalid path: %s' % quote(path))
else:
minsegs += 1
maxsegs += 1
segs = path.split('/', maxsegs)
count = len(segs)
if (segs[0] or count < minsegs or count > maxsegs + 1 or
'' in segs[1:minsegs] or
(count == maxsegs + 1 and segs[maxsegs])):
raise ValueError('Invalid path: %s' % quote(path))
segs = segs[1:maxsegs]
segs.extend([None] * (maxsegs - 1 - len(segs)))
return segs
def validate_device_partition(device, partition):
"""
Validate that a device and a partition are valid and won't lead to
directory traversal when used.
:param device: device to validate
:param partition: partition to validate
:raises ValueError: if given an invalid device or partition
"""
if not device or '/' in device or device in ['.', '..']:
raise ValueError('Invalid device: %s' % quote(device or ''))
if not partition or '/' in partition or partition in ['.', '..']:
raise ValueError('Invalid partition: %s' % quote(partition or ''))
class RateLimitedIterator(object):
"""
Wrap an iterator to only yield elements at a rate of N per second.
:param iterable: iterable to wrap
:param elements_per_second: the rate at which to yield elements
:param limit_after: rate limiting kicks in only after yielding
this many elements; default is 0 (rate limit
                        immediately)
    :param ratelimit_if: callable taking the next element; rate limiting is
                         only applied to elements for which it returns True.
                         Defaults to rate limiting every element.
    """
def __init__(self, iterable, elements_per_second, limit_after=0,
ratelimit_if=lambda _junk: True):
self.iterator = iter(iterable)
self.elements_per_second = elements_per_second
self.limit_after = limit_after
self.rate_limiter = EventletRateLimiter(elements_per_second)
self.ratelimit_if = ratelimit_if
def __iter__(self):
return self
def next(self):
next_value = next(self.iterator)
if self.ratelimit_if(next_value):
if self.limit_after > 0:
self.limit_after -= 1
else:
self.rate_limiter.wait()
return next_value
__next__ = next
class GreenthreadSafeIterator(object):
"""
Wrap an iterator to ensure that only one greenthread is inside its next()
method at a time.
This is useful if an iterator's next() method may perform network IO, as
that may trigger a greenthread context switch (aka trampoline), which can
give another greenthread a chance to call next(). At that point, you get
an error like "ValueError: generator already executing". By wrapping calls
to next() with a mutex, we avoid that error.
"""
def __init__(self, unsafe_iterable):
self.unsafe_iter = iter(unsafe_iterable)
self.semaphore = eventlet.semaphore.Semaphore(value=1)
def __iter__(self):
return self
def next(self):
with self.semaphore:
return next(self.unsafe_iter)
__next__ = next
class NullLogger(object):
"""A no-op logger for eventlet wsgi."""
def write(self, *args):
# "Logs" the args to nowhere
pass
def exception(self, *args):
pass
def critical(self, *args):
pass
def error(self, *args):
pass
def warning(self, *args):
pass
def info(self, *args):
pass
def debug(self, *args):
pass
def log(self, *args):
pass
class LoggerFileObject(object):
# Note: this is greenthread-local storage
_cls_thread_local = threading.local()
def __init__(self, logger, log_type='STDOUT'):
self.logger = logger
self.log_type = log_type
def write(self, value):
# We can get into a nasty situation when logs are going to syslog
# and syslog dies.
#
# It's something like this:
#
# (A) someone logs something
#
# (B) there's an exception in sending to /dev/log since syslog is
# not working
#
# (C) logging takes that exception and writes it to stderr (see
# logging.Handler.handleError)
#
# (D) stderr was replaced with a LoggerFileObject at process start,
# so the LoggerFileObject takes the provided string and tells
# its logger to log it (to syslog, naturally).
#
# Then, steps B through D repeat until we run out of stack.
if getattr(self._cls_thread_local, 'already_called_write', False):
return
self._cls_thread_local.already_called_write = True
try:
value = value.strip()
if value:
if 'Connection reset by peer' in value:
self.logger.error(
'%s: Connection reset by peer', self.log_type)
else:
self.logger.error('%(type)s: %(value)s',
{'type': self.log_type, 'value': value})
finally:
self._cls_thread_local.already_called_write = False
def writelines(self, values):
if getattr(self._cls_thread_local, 'already_called_writelines', False):
return
self._cls_thread_local.already_called_writelines = True
try:
self.logger.error('%(type)s: %(value)s',
{'type': self.log_type,
'value': '#012'.join(values)})
finally:
self._cls_thread_local.already_called_writelines = False
def close(self):
pass
def flush(self):
pass
def __iter__(self):
return self
def next(self):
raise IOError(errno.EBADF, 'Bad file descriptor')
__next__ = next
def read(self, size=-1):
raise IOError(errno.EBADF, 'Bad file descriptor')
def readline(self, size=-1):
raise IOError(errno.EBADF, 'Bad file descriptor')
def tell(self):
return 0
def xreadlines(self):
return self
class StatsdClient(object):
def __init__(self, host, port, base_prefix='', tail_prefix='',
default_sample_rate=1, sample_rate_factor=1, logger=None):
self._host = host
self._port = port
self._base_prefix = base_prefix
self._set_prefix(tail_prefix)
self._default_sample_rate = default_sample_rate
self._sample_rate_factor = sample_rate_factor
self.random = random
self.logger = logger
# Determine if host is IPv4 or IPv6
addr_info, self._sock_family = self._determine_sock_family(host, port)
# NOTE: we use the original host value, not the DNS-resolved one
# because if host is a hostname, we don't want to cache the DNS
# resolution for the entire lifetime of this process. Let standard
# name resolution caching take effect. This should help operators use
# DNS trickery if they want.
if addr_info is not None:
# addr_info is a list of 5-tuples with the following structure:
# (family, socktype, proto, canonname, sockaddr)
# where sockaddr is the only thing of interest to us, and we only
# use the first result. We want to use the originally supplied
# host (see note above) and the remainder of the variable-length
# sockaddr: IPv4 has (address, port) while IPv6 has (address,
# port, flow info, scope id).
sockaddr = addr_info[0][-1]
self._target = (host,) + (sockaddr[1:])
else:
self._target = (host, port)
def _determine_sock_family(self, host, port):
addr_info = sock_family = None
try:
addr_info = socket.getaddrinfo(host, port, socket.AF_INET)
sock_family = socket.AF_INET
except socket.gaierror:
try:
addr_info = socket.getaddrinfo(host, port, socket.AF_INET6)
sock_family = socket.AF_INET6
except socket.gaierror:
# Don't keep the server from starting from what could be a
# transient DNS failure. Any hostname will get re-resolved as
# necessary in the .sendto() calls.
# However, we don't know if we're IPv4 or IPv6 in this case, so
# we assume legacy IPv4.
sock_family = socket.AF_INET
return addr_info, sock_family
def _set_prefix(self, tail_prefix):
"""
Modifies the prefix that is added to metric names. The resulting prefix
is the concatenation of the component parts `base_prefix` and
`tail_prefix`. Only truthy components are included. Each included
component is followed by a period, e.g.::
<base_prefix>.<tail_prefix>.
<tail_prefix>.
<base_prefix>.
<the empty string>
Note: this method is expected to be called from the constructor only,
but exists to provide backwards compatible functionality for the
deprecated set_prefix() method.
:param tail_prefix: The new value of tail_prefix
"""
if tail_prefix and self._base_prefix:
self._prefix = '.'.join([self._base_prefix, tail_prefix, ''])
elif tail_prefix:
self._prefix = tail_prefix + '.'
elif self._base_prefix:
self._prefix = self._base_prefix + '.'
else:
self._prefix = ''
def set_prefix(self, tail_prefix):
"""
This method is deprecated; use the ``tail_prefix`` argument of the
constructor when instantiating the class instead.
"""
warnings.warn(
'set_prefix() is deprecated; use the ``tail_prefix`` argument of '
'the constructor when instantiating the class instead.',
DeprecationWarning, stacklevel=2
)
self._set_prefix(tail_prefix)
def _send(self, m_name, m_value, m_type, sample_rate):
if sample_rate is None:
sample_rate = self._default_sample_rate
sample_rate = sample_rate * self._sample_rate_factor
parts = ['%s%s:%s' % (self._prefix, m_name, m_value), m_type]
if sample_rate < 1:
if self.random() < sample_rate:
parts.append('@%s' % (sample_rate,))
else:
return
if six.PY3:
parts = [part.encode('utf-8') for part in parts]
# Ideally, we'd cache a sending socket in self, but that
# results in a socket getting shared by multiple green threads.
with closing(self._open_socket()) as sock:
try:
return sock.sendto(b'|'.join(parts), self._target)
except IOError as err:
if self.logger:
self.logger.warning(
'Error sending UDP message to %(target)r: %(err)s',
{'target': self._target, 'err': err})
def _open_socket(self):
return socket.socket(self._sock_family, socket.SOCK_DGRAM)
def update_stats(self, m_name, m_value, sample_rate=None):
return self._send(m_name, m_value, 'c', sample_rate)
def increment(self, metric, sample_rate=None):
return self.update_stats(metric, 1, sample_rate)
def decrement(self, metric, sample_rate=None):
return self.update_stats(metric, -1, sample_rate)
def _timing(self, metric, timing_ms, sample_rate):
        # This method was added to disaggregate timing metrics when testing
return self._send(metric, timing_ms, 'ms', sample_rate)
def timing(self, metric, timing_ms, sample_rate=None):
return self._timing(metric, timing_ms, sample_rate)
def timing_since(self, metric, orig_time, sample_rate=None):
return self._timing(metric, (time.time() - orig_time) * 1000,
sample_rate)
def transfer_rate(self, metric, elapsed_time, byte_xfer, sample_rate=None):
if byte_xfer:
return self.timing(metric,
elapsed_time * 1000 / byte_xfer * 1000,
sample_rate)
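# Illustrative sketch (not part of the original module): emitting metrics to a
# hypothetical local StatsD endpoint.
def _example_statsd_usage():
    client = StatsdClient('127.0.0.1', 8125, tail_prefix='proxy-server')
    client.increment('errors')
    start = time.time()
    # ... do some work here ...
    client.timing_since('object.GET.timing', start)
    client.transfer_rate('object.GET.xfer', time.time() - start, 1024 * 1024)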
def timing_stats(**dec_kwargs):
"""
Returns a decorator that logs timing events or errors for public methods in
swift's wsgi server controllers, based on response code.
"""
def decorating_func(func):
method = func.__name__
@functools.wraps(func)
def _timing_stats(ctrl, *args, **kwargs):
start_time = time.time()
resp = func(ctrl, *args, **kwargs)
# .timing is for successful responses *or* error codes that are
# not Swift's fault. For example, 500 is definitely the server's
# fault, but 412 is an error code (4xx are all errors) that is
# due to a header the client sent.
#
# .errors.timing is for failures that *are* Swift's fault.
# Examples include 507 for an unmounted drive or 500 for an
# unhandled exception.
if not is_server_error(resp.status_int):
ctrl.logger.timing_since(method + '.timing',
start_time, **dec_kwargs)
else:
ctrl.logger.timing_since(method + '.errors.timing',
start_time, **dec_kwargs)
return resp
return _timing_stats
return decorating_func
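# Illustrative sketch (not part of the original module): wrapping a
# controller-style method with timing_stats. _FakeResponse stands in for the
# swob response a real controller would return.
class _FakeResponse(object):
    def __init__(self, status_int):
        self.status_int = status_int
class _ExampleController(object):
    def __init__(self, logger):
        self.logger = logger
    @timing_stats(sample_rate=0.5)
    def GET(self, req):
        # a 2xx here feeds GET.timing; a 5xx would feed GET.errors.timing
        return _FakeResponse(200)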
def memcached_timing_stats(**dec_kwargs):
"""
Returns a decorator that logs timing events or errors for public methods in
    MemcacheRing class, such as memcached set, get, etc.
"""
def decorating_func(func):
method = func.__name__
@functools.wraps(func)
def _timing_stats(cache, *args, **kwargs):
start_time = time.time()
result = func(cache, *args, **kwargs)
cache.logger.timing_since(
'memcached.' + method + '.timing', start_time, **dec_kwargs)
return result
return _timing_stats
return decorating_func
class SwiftLoggerAdapter(logging.LoggerAdapter):
"""
A logging.LoggerAdapter subclass that also passes through StatsD method
calls.
Like logging.LoggerAdapter, you have to subclass this and override the
process() method to accomplish anything useful.
"""
@property
def name(self):
# py3 does this for us already; add it for py2
return self.logger.name
def get_metric_name(self, metric):
# subclasses may override this method to annotate the metric name
return metric
def update_stats(self, metric, *a, **kw):
return self.logger.update_stats(self.get_metric_name(metric), *a, **kw)
def increment(self, metric, *a, **kw):
return self.logger.increment(self.get_metric_name(metric), *a, **kw)
def decrement(self, metric, *a, **kw):
return self.logger.decrement(self.get_metric_name(metric), *a, **kw)
def timing(self, metric, *a, **kw):
return self.logger.timing(self.get_metric_name(metric), *a, **kw)
def timing_since(self, metric, *a, **kw):
return self.logger.timing_since(self.get_metric_name(metric), *a, **kw)
def transfer_rate(self, metric, *a, **kw):
return self.logger.transfer_rate(
self.get_metric_name(metric), *a, **kw)
@property
def thread_locals(self):
return self.logger.thread_locals
@thread_locals.setter
def thread_locals(self, thread_locals):
self.logger.thread_locals = thread_locals
def exception(self, msg, *a, **kw):
# We up-call to exception() where stdlib uses error() so we can get
# some of the traceback suppression from LogAdapter, below
self.logger.exception(msg, *a, **kw)
class PrefixLoggerAdapter(SwiftLoggerAdapter):
"""
Adds an optional prefix to all its log messages. When the prefix has not
been set, messages are unchanged.
"""
def set_prefix(self, prefix):
self.extra['prefix'] = prefix
def exception(self, msg, *a, **kw):
if 'prefix' in self.extra:
msg = self.extra['prefix'] + msg
super(PrefixLoggerAdapter, self).exception(msg, *a, **kw)
def process(self, msg, kwargs):
msg, kwargs = super(PrefixLoggerAdapter, self).process(msg, kwargs)
if 'prefix' in self.extra:
msg = self.extra['prefix'] + msg
return (msg, kwargs)
class MetricsPrefixLoggerAdapter(SwiftLoggerAdapter):
"""
Adds a prefix to all Statsd metrics' names.
"""
def __init__(self, logger, extra, metric_prefix):
"""
:param logger: an instance of logging.Logger
:param extra: a dict-like object
:param metric_prefix: A prefix that will be added to the start of each
metric name such that the metric name is transformed to:
``<metric_prefix>.<metric name>``. Note that the logger's
StatsdClient also adds its configured prefix to metric names.
"""
super(MetricsPrefixLoggerAdapter, self).__init__(logger, extra)
self.metric_prefix = metric_prefix
def get_metric_name(self, metric):
return '%s.%s' % (self.metric_prefix, metric)
# double inheritance to support property with setter
class LogAdapter(logging.LoggerAdapter, object):
"""
A Logger like object which performs some reformatting on calls to
:meth:`exception`. Can be used to store a threadlocal transaction id and
client ip.
"""
_cls_thread_local = threading.local()
def __init__(self, logger, server):
logging.LoggerAdapter.__init__(self, logger, {})
self.server = server
self.warn = self.warning
# There are a few properties needed for py35; see
# - https://bugs.python.org/issue31457
# - https://github.com/python/cpython/commit/1bbd482
# - https://github.com/python/cpython/commit/0b6a118
# - https://github.com/python/cpython/commit/ce9e625
def _log(self, level, msg, args, exc_info=None, extra=None,
stack_info=False):
"""
Low-level log implementation, proxied to allow nested logger adapters.
"""
return self.logger._log(
level,
msg,
args,
exc_info=exc_info,
extra=extra,
stack_info=stack_info,
)
@property
def manager(self):
return self.logger.manager
@manager.setter
def manager(self, value):
self.logger.manager = value
@property
def name(self):
return self.logger.name
@property
def txn_id(self):
if hasattr(self._cls_thread_local, 'txn_id'):
return self._cls_thread_local.txn_id
@txn_id.setter
def txn_id(self, value):
self._cls_thread_local.txn_id = value
@property
def client_ip(self):
if hasattr(self._cls_thread_local, 'client_ip'):
return self._cls_thread_local.client_ip
@client_ip.setter
def client_ip(self, value):
self._cls_thread_local.client_ip = value
@property
def thread_locals(self):
return (self.txn_id, self.client_ip)
@thread_locals.setter
def thread_locals(self, value):
self.txn_id, self.client_ip = value
def getEffectiveLevel(self):
return self.logger.getEffectiveLevel()
def process(self, msg, kwargs):
"""
Add extra info to message
"""
kwargs['extra'] = {'server': self.server, 'txn_id': self.txn_id,
'client_ip': self.client_ip}
return msg, kwargs
def notice(self, msg, *args, **kwargs):
"""
        Convenience function for syslog priority LOG_NOTICE. The python
        logging level is set to 25, just above INFO. SysLogHandler is
        monkey patched to map this level to the LOG_NOTICE syslog
        priority.
"""
self.log(NOTICE, msg, *args, **kwargs)
def _exception(self, msg, *args, **kwargs):
logging.LoggerAdapter.exception(self, msg, *args, **kwargs)
def exception(self, msg, *args, **kwargs):
_junk, exc, _junk = sys.exc_info()
call = self.error
emsg = ''
if isinstance(exc, (http_client.BadStatusLine,
green_http_client.BadStatusLine)):
# Use error(); not really exceptional
emsg = repr(exc)
# Note that on py3, we've seen a RemoteDisconnected error getting
# raised, which inherits from *both* BadStatusLine and OSError;
# we want it getting caught here
elif isinstance(exc, (OSError, socket.error)):
if exc.errno in (errno.EIO, errno.ENOSPC):
emsg = str(exc)
elif exc.errno == errno.ECONNREFUSED:
emsg = 'Connection refused'
elif exc.errno == errno.ECONNRESET:
emsg = 'Connection reset'
elif exc.errno == errno.EHOSTUNREACH:
emsg = 'Host unreachable'
elif exc.errno == errno.ENETUNREACH:
emsg = 'Network unreachable'
elif exc.errno == errno.ETIMEDOUT:
emsg = 'Connection timeout'
elif exc.errno == errno.EPIPE:
emsg = 'Broken pipe'
else:
call = self._exception
elif isinstance(exc, eventlet.Timeout):
emsg = exc.__class__.__name__
detail = '%ss' % exc.seconds
if hasattr(exc, 'created_at'):
detail += ' after %0.2fs' % (time.time() - exc.created_at)
emsg += ' (%s)' % detail
if isinstance(exc, swift.common.exceptions.MessageTimeout):
if exc.msg:
emsg += ' %s' % exc.msg
else:
call = self._exception
call('%s: %s' % (msg, emsg), *args, **kwargs)
def set_statsd_prefix(self, prefix):
"""
This method is deprecated. Callers should use the
``statsd_tail_prefix`` argument of ``get_logger`` when instantiating a
logger.
The StatsD client prefix defaults to the "name" of the logger. This
method may override that default with a specific value. Currently used
in the proxy-server to differentiate the Account, Container, and Object
controllers.
"""
warnings.warn(
'set_statsd_prefix() is deprecated; use the '
'``statsd_tail_prefix`` argument to ``get_logger`` instead.',
DeprecationWarning, stacklevel=2
)
if self.logger.statsd_client:
self.logger.statsd_client._set_prefix(prefix)
def statsd_delegate(statsd_func_name):
"""
Factory to create methods which delegate to methods on
self.logger.statsd_client (an instance of StatsdClient). The
created methods conditionally delegate to a method whose name is given
in 'statsd_func_name'. The created delegate methods are a no-op when
StatsD logging is not configured.
:param statsd_func_name: the name of a method on StatsdClient.
"""
func = getattr(StatsdClient, statsd_func_name)
@functools.wraps(func)
def wrapped(self, *a, **kw):
if getattr(self.logger, 'statsd_client'):
func = getattr(self.logger.statsd_client, statsd_func_name)
return func(*a, **kw)
return wrapped
update_stats = statsd_delegate('update_stats')
increment = statsd_delegate('increment')
decrement = statsd_delegate('decrement')
timing = statsd_delegate('timing')
timing_since = statsd_delegate('timing_since')
transfer_rate = statsd_delegate('transfer_rate')
class SwiftLogFormatter(logging.Formatter):
"""
Custom logging.Formatter that appends the txn_id to a log message if the
record has one and the message does not. Optionally it can shorten
overly long log lines.
"""
def __init__(self, fmt=None, datefmt=None, max_line_length=0):
logging.Formatter.__init__(self, fmt=fmt, datefmt=datefmt)
self.max_line_length = max_line_length
def format(self, record):
if not hasattr(record, 'server'):
# Catch log messages that were not initiated by swift
# (for example, the keystone auth middleware)
record.server = record.name
# Included from Python's logging.Formatter and then altered slightly to
# replace \n with #012
record.message = record.getMessage()
if self._fmt.find('%(asctime)') >= 0:
record.asctime = self.formatTime(record, self.datefmt)
msg = (self._fmt % record.__dict__).replace('\n', '#012')
if record.exc_info:
# Cache the traceback text to avoid converting it multiple times
# (it's constant anyway)
if not record.exc_text:
record.exc_text = self.formatException(
record.exc_info).replace('\n', '#012')
if record.exc_text:
if not msg.endswith('#012'):
msg = msg + '#012'
msg = msg + record.exc_text
if (hasattr(record, 'txn_id') and record.txn_id and
record.txn_id not in msg):
msg = "%s (txn: %s)" % (msg, record.txn_id)
if (hasattr(record, 'client_ip') and record.client_ip and
record.levelno != logging.INFO and
record.client_ip not in msg):
msg = "%s (client_ip: %s)" % (msg, record.client_ip)
if self.max_line_length > 0 and len(msg) > self.max_line_length:
if self.max_line_length < 7:
msg = msg[:self.max_line_length]
else:
approxhalf = (self.max_line_length - 5) // 2
msg = msg[:approxhalf] + " ... " + msg[-approxhalf:]
return msg
class LogLevelFilter(object):
"""
Drop messages for the logger based on level.
This is useful when dependencies log too much information.
:param level: All messages at or below this level are dropped
(DEBUG < INFO < WARN < ERROR < CRITICAL|FATAL)
Default: DEBUG
"""
def __init__(self, level=logging.DEBUG):
self.level = level
def filter(self, record):
if record.levelno <= self.level:
return 0
return 1
def get_logger(conf, name=None, log_to_console=False, log_route=None,
fmt="%(server)s: %(message)s", statsd_tail_prefix=None):
"""
Get the current system logger using config settings.
**Log config and defaults**::
log_facility = LOG_LOCAL0
log_level = INFO
log_name = swift
log_max_line_length = 0
log_udp_host = (disabled)
log_udp_port = logging.handlers.SYSLOG_UDP_PORT
log_address = /dev/log
log_statsd_host = (disabled)
log_statsd_port = 8125
log_statsd_default_sample_rate = 1.0
log_statsd_sample_rate_factor = 1.0
log_statsd_metric_prefix = (empty-string)
:param conf: Configuration dict to read settings from
:param name: This value is used to populate the ``server`` field in the log
format, as the prefix for statsd messages, and as the default
value for ``log_route``; defaults to the ``log_name`` value in
``conf``, if it exists, or to 'swift'.
:param log_to_console: Add handler which writes to console on stderr
:param log_route: Route for the logging, not emitted to the log, just used
to separate logging configurations; defaults to the value
of ``name`` or whatever ``name`` defaults to. This value
is used as the name attribute of the
``logging.LogAdapter`` that is returned.
:param fmt: Override log format
:param statsd_tail_prefix: tail prefix to pass to statsd client; if None
then the tail prefix defaults to the value of ``name``.
:return: an instance of ``LogAdapter``
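A minimal usage sketch (config values below are illustrative)::

    conf = {'log_name': 'object-server', 'log_level': 'DEBUG'}
    logger = get_logger(conf, log_route='object-server')
    logger.info('starting up')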
"""
# note: log_name is typically specified in conf (i.e. defined by
# operators), whereas log_route is typically hard-coded in callers of
# get_logger (i.e. defined by developers)
if not conf:
conf = {}
if name is None:
name = conf.get('log_name', 'swift')
if not log_route:
log_route = name
logger = logging.getLogger(log_route)
logger.propagate = False
# all new handlers will get the same formatter
formatter = SwiftLogFormatter(
fmt=fmt, max_line_length=int(conf.get('log_max_line_length', 0)))
# get_logger will only ever add one SysLog Handler to a logger
if not hasattr(get_logger, 'handler4logger'):
get_logger.handler4logger = {}
if logger in get_logger.handler4logger:
logger.removeHandler(get_logger.handler4logger[logger])
# the facility for this logger is whatever the most recent call sets
# (last call wins)
facility = getattr(SysLogHandler, conf.get('log_facility', 'LOG_LOCAL0'),
SysLogHandler.LOG_LOCAL0)
udp_host = conf.get('log_udp_host')
if udp_host:
udp_port = int(conf.get('log_udp_port',
logging.handlers.SYSLOG_UDP_PORT))
handler = ThreadSafeSysLogHandler(address=(udp_host, udp_port),
facility=facility)
else:
log_address = conf.get('log_address', '/dev/log')
handler = None
try:
mode = os.stat(log_address).st_mode
if stat.S_ISSOCK(mode):
handler = ThreadSafeSysLogHandler(address=log_address,
facility=facility)
except (OSError, socket.error) as e:
# If either /dev/log isn't a UNIX socket or it does not exist at
# all then py2 would raise an error
if e.errno not in [errno.ENOTSOCK, errno.ENOENT]:
raise
if handler is None:
# fallback to default UDP
handler = ThreadSafeSysLogHandler(facility=facility)
handler.setFormatter(formatter)
logger.addHandler(handler)
get_logger.handler4logger[logger] = handler
# setup console logging
if log_to_console or hasattr(get_logger, 'console_handler4logger'):
# remove pre-existing console handler for this logger
if not hasattr(get_logger, 'console_handler4logger'):
get_logger.console_handler4logger = {}
if logger in get_logger.console_handler4logger:
logger.removeHandler(get_logger.console_handler4logger[logger])
console_handler = logging.StreamHandler(sys.__stderr__)
console_handler.setFormatter(formatter)
logger.addHandler(console_handler)
get_logger.console_handler4logger[logger] = console_handler
# set the level for the logger
logger.setLevel(
getattr(logging, conf.get('log_level', 'INFO').upper(), logging.INFO))
# Setup logger with a StatsD client if so configured
statsd_host = conf.get('log_statsd_host')
if statsd_host:
statsd_port = int(conf.get('log_statsd_port', 8125))
base_prefix = conf.get('log_statsd_metric_prefix', '')
default_sample_rate = float(conf.get(
'log_statsd_default_sample_rate', 1))
sample_rate_factor = float(conf.get(
'log_statsd_sample_rate_factor', 1))
if statsd_tail_prefix is None:
statsd_tail_prefix = name
statsd_client = StatsdClient(statsd_host, statsd_port, base_prefix,
statsd_tail_prefix, default_sample_rate,
sample_rate_factor, logger=logger)
logger.statsd_client = statsd_client
else:
logger.statsd_client = None
adapted_logger = LogAdapter(logger, name)
other_handlers = conf.get('log_custom_handlers', None)
if other_handlers:
log_custom_handlers = [s.strip() for s in other_handlers.split(',')
if s.strip()]
for hook in log_custom_handlers:
try:
mod, fnc = hook.rsplit('.', 1)
logger_hook = getattr(__import__(mod, fromlist=[fnc]), fnc)
logger_hook(conf, name, log_to_console, log_route, fmt,
logger, adapted_logger)
except (AttributeError, ImportError):
print('Error calling custom handler [%s]' % hook,
file=sys.stderr)
except ValueError:
print('Invalid custom handler format [%s]' % hook,
file=sys.stderr)
return adapted_logger
def get_hub():
"""
Checks whether poll is available and falls back
on select if it isn't.
Note about epoll:
Review: https://review.opendev.org/#/c/18806/
There was a problem where once out of every 30 quadrillion
connections, a coroutine wouldn't wake up when the client
closed its end. Epoll was not reporting the event or it was
getting swallowed somewhere. Then when that file descriptor
was re-used, eventlet would freak right out because it still
thought it was waiting for activity from it in some other coro.
Another note about epoll: it's hard to use when forking. epoll works
like so:
* create an epoll instance: ``efd = epoll_create(...)``
* register file descriptors of interest with
``epoll_ctl(efd, EPOLL_CTL_ADD, fd, ...)``
* wait for events with ``epoll_wait(efd, ...)``
If you fork, you and all your child processes end up using the same
epoll instance, and everyone becomes confused. It is possible to use
epoll and fork and still have a correct program as long as you do the
right things, but eventlet doesn't do those things. Really, it can't
even try to do those things since it doesn't get notified of forks.
In contrast, both poll() and select() specify the set of interesting
file descriptors with each call, so there's no problem with forking.
As eventlet monkey patching is now done before calling get_hub() in
wsgi.py, using 'import select' would give us the eventlet version, and
since version 0.20.0 eventlet has removed the select.poll() function from
the patched select module (see:
http://eventlet.net/doc/changelog.html and
https://github.com/eventlet/eventlet/commit/614a20462).
We therefore use the eventlet.patcher.original function to get the
original Python select module and test whether poll() is available on
this platform.
"""
try:
select = eventlet.patcher.original('select')
if hasattr(select, "poll"):
return "poll"
return "selects"
except ImportError:
return None
def drop_privileges(user):
"""
Sets the userid/groupid of the current process and sets HOME to the
user's home directory.
:param user: User name to change privileges to
"""
if os.geteuid() == 0:
groups = [g.gr_gid for g in grp.getgrall() if user in g.gr_mem]
os.setgroups(groups)
user = pwd.getpwnam(user)
os.setgid(user[3])
os.setuid(user[2])
os.environ['HOME'] = user[5]
def clean_up_daemon_hygiene():
try:
os.setsid()
except OSError:
pass
os.chdir('/') # in case you need to rmdir on where you started the daemon
os.umask(0o22) # ensure files are created with the correct privileges
def capture_stdio(logger, **kwargs):
"""
Log unhandled exceptions, close stdio, capture stdout and stderr.
:param logger: Logger object to use
"""
# log uncaught exceptions
sys.excepthook = lambda * exc_info: \
logger.critical('UNCAUGHT EXCEPTION', exc_info=exc_info)
# collect stdio file desc not in use for logging
stdio_files = [sys.stdin, sys.stdout, sys.stderr]
console_fds = [h.stream.fileno() for _junk, h in getattr(
get_logger, 'console_handler4logger', {}).items()]
stdio_files = [f for f in stdio_files if f.fileno() not in console_fds]
with open(os.devnull, 'r+b') as nullfile:
# close stdio (excludes fds open for logging)
for f in stdio_files:
# some platforms throw an error when attempting an stdin flush
try:
f.flush()
except IOError:
pass
try:
os.dup2(nullfile.fileno(), f.fileno())
except OSError:
pass
# redirect stdio
if kwargs.pop('capture_stdout', True):
sys.stdout = LoggerFileObject(logger)
if kwargs.pop('capture_stderr', True):
sys.stderr = LoggerFileObject(logger, 'STDERR')
def parse_options(parser=None, once=False, test_config=False, test_args=None):
"""Parse standard swift server/daemon options with optparse.OptionParser.
:param parser: OptionParser to use. If not sent one will be created.
:param once: Boolean indicating the "once" option is available
:param test_config: Boolean indicating the "test-config" option is
available
:param test_args: Override sys.argv; used in testing
:returns: Tuple of (config, options); config is an absolute path to the
config file, options is the parser options as a dictionary.
:raises SystemExit: First arg (CONFIG) is required, file must exist
"""
if not parser:
parser = OptionParser(usage="%prog CONFIG [options]")
parser.add_option("-v", "--verbose", default=False, action="store_true",
help="log to console")
if once:
parser.add_option("-o", "--once", default=False, action="store_true",
help="only run one pass of daemon")
if test_config:
parser.add_option("-t", "--test-config",
default=False, action="store_true",
help="exit after loading and validating config; "
"do not run the daemon")
# if test_args is None, optparse will use sys.argv[:1]
options, args = parser.parse_args(args=test_args)
if not args:
parser.print_usage()
print("Error: missing config path argument")
sys.exit(1)
config = os.path.abspath(args.pop(0))
if not os.path.exists(config):
parser.print_usage()
print("Error: unable to locate %s" % config)
sys.exit(1)
extra_args = []
# if any named options appear in remaining args, set the option to True
for arg in args:
if arg in options.__dict__:
setattr(options, arg, True)
else:
extra_args.append(arg)
options = vars(options)
if extra_args:
options['extra_args'] = extra_args
return config, options
def select_ip_port(node_dict, use_replication=False):
"""
Get the ip address and port that should be used for the given
``node_dict``.
If ``use_replication`` is True then the replication ip address and port are
returned.
If ``use_replication`` is False (the default) and the ``node`` dict has an
item with key ``use_replication`` then that item's value will determine if
the replication ip address and port are returned.
If neither ``use_replication`` nor ``node_dict['use_replication']``
indicate otherwise then the normal ip address and port are returned.
:param node_dict: a dict describing a node
:param use_replication: if True then the replication ip address and port
are returned.
:return: a tuple of (ip address, port)
"""
if use_replication or node_dict.get('use_replication', False):
node_ip = node_dict['replication_ip']
node_port = node_dict['replication_port']
else:
node_ip = node_dict['ip']
node_port = node_dict['port']
return node_ip, node_port
def node_to_string(node_dict, replication=False):
"""
Get a string representation of a node's location.
:param node_dict: a dict describing a node
:param replication: if True then the replication ip address and port are
used, otherwise the normal ip address and port are used.
:return: a string of the form <ip address>:<port>/<device>
"""
node_ip, node_port = select_ip_port(node_dict, use_replication=replication)
if ':' in node_ip:
# IPv6
node_ip = '[%s]' % node_ip
return '{}:{}/{}'.format(node_ip, node_port, node_dict['device'])
def storage_directory(datadir, partition, name_hash):
"""
Get the storage directory
:param datadir: Base data directory
:param partition: Partition
:param name_hash: Account, container or object name hash
:returns: Storage directory
"""
return os.path.join(datadir, str(partition), name_hash[-3:], name_hash)
def hash_path(account, container=None, object=None, raw_digest=False):
"""
Get the canonical hash for an account/container/object
:param account: Account
:param container: Container
:param object: Object
:param raw_digest: If True, return the raw version rather than a hex digest
:returns: hash string
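For example (the actual digest depends on the configured hash path
prefix/suffix)::

    hash_path('AUTH_test', 'cont', 'obj')  # -> 32-character hex digest
    hash_path('AUTH_test', 'cont', 'obj', raw_digest=True)  # -> 16 raw bytes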
"""
if object and not container:
raise ValueError('container is required if object is provided')
paths = [account if isinstance(account, six.binary_type)
else account.encode('utf8')]
if container:
paths.append(container if isinstance(container, six.binary_type)
else container.encode('utf8'))
if object:
paths.append(object if isinstance(object, six.binary_type)
else object.encode('utf8'))
if raw_digest:
return md5(HASH_PATH_PREFIX + b'/' + b'/'.join(paths)
+ HASH_PATH_SUFFIX, usedforsecurity=False).digest()
else:
return md5(HASH_PATH_PREFIX + b'/' + b'/'.join(paths)
+ HASH_PATH_SUFFIX, usedforsecurity=False).hexdigest()
def get_zero_indexed_base_string(base, index):
"""
This allows the caller to make a list of things with indexes, where the
first item (zero indexed) is just the bare base string, and subsequent
indexes are appended '-1', '-2', etc.
e.g.::
'lock', None => 'lock'
'lock', 0 => 'lock'
'lock', 1 => 'lock-1'
'object', 2 => 'object-2'
:param base: a string, the base string; when ``index`` is 0 (or None) this
is the identity function.
:param index: a digit, typically an integer (or None); for values other
than 0 or None this digit is appended to the base string
separated by a hyphen.
"""
if index == 0 or index is None:
return_string = base
else:
return_string = base + "-%d" % int(index)
return return_string
def _get_any_lock(fds):
for fd in fds:
try:
fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
return True
except IOError as err:
if err.errno != errno.EAGAIN:
raise
return False
@contextmanager
def lock_path(directory, timeout=None, timeout_class=None,
limit=1, name=None):
"""
Context manager that acquires a lock on a directory. This will block until
the lock can be acquired, or the timeout time has expired (whichever occurs
first).
To lock exclusively, a file or directory has to be opened in write mode.
Python doesn't allow directories to be opened in write mode, so we
work around this by locking a hidden file in the directory.
:param directory: directory to be locked
:param timeout: timeout (in seconds). If None, defaults to
DEFAULT_LOCK_TIMEOUT
:param timeout_class: The class of the exception to raise if the
lock cannot be granted within the timeout. Will be
constructed as timeout_class(timeout, lockpath). Default:
LockTimeout
:param limit: The maximum number of locks that may be held concurrently on
the same directory at the time this method is called. Note that this
limit is only applied during the current call to this method and does
not prevent subsequent calls giving a larger limit. Defaults to 1.
:param name: A string used to distinguish different types of locks in a
directory
:raises TypeError: if limit is not an int.
:raises ValueError: if limit is less than 1.
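A minimal usage sketch (the path below is illustrative)::

    with lock_path('/srv/node/sda1/tmp', timeout=10):
        pass  # exclusive access to the directory while the lock is held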
"""
if timeout is None:
timeout = DEFAULT_LOCK_TIMEOUT
if timeout_class is None:
timeout_class = swift.common.exceptions.LockTimeout
if limit < 1:
raise ValueError('limit must be greater than or equal to 1')
mkdirs(directory)
lockpath = '%s/.lock' % directory
if name:
lockpath += '-%s' % str(name)
fds = [os.open(get_zero_indexed_base_string(lockpath, i),
os.O_WRONLY | os.O_CREAT)
for i in range(limit)]
sleep_time = 0.01
slower_sleep_time = max(timeout * 0.01, sleep_time)
slowdown_at = timeout * 0.01
time_slept = 0
try:
with timeout_class(timeout, lockpath):
while True:
if _get_any_lock(fds):
break
if time_slept > slowdown_at:
sleep_time = slower_sleep_time
sleep(sleep_time)
time_slept += sleep_time
yield True
finally:
for fd in fds:
os.close(fd)
@contextmanager
def lock_file(filename, timeout=None, append=False, unlink=True):
"""
Context manager that acquires a lock on a file. This will block until
the lock can be acquired, or the timeout time has expired (whichever occurs
first).
:param filename: file to be locked
:param timeout: timeout (in seconds). If None, defaults to
DEFAULT_LOCK_TIMEOUT
:param append: True if file should be opened in append mode
:param unlink: True if the file should be unlinked at the end
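A minimal usage sketch (the path below is illustrative)::

    with lock_file('/var/cache/swift/object.recon', unlink=False) as fp:
        data = fp.read()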
"""
if timeout is None:
timeout = DEFAULT_LOCK_TIMEOUT
flags = os.O_CREAT | os.O_RDWR
if append:
flags |= os.O_APPEND
mode = 'a+b'
else:
mode = 'r+b'
while True:
fd = os.open(filename, flags)
file_obj = os.fdopen(fd, mode)
try:
with swift.common.exceptions.LockTimeout(timeout, filename):
while True:
try:
fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
break
except IOError as err:
if err.errno != errno.EAGAIN:
raise
sleep(0.01)
try:
if os.stat(filename).st_ino != os.fstat(fd).st_ino:
continue
except OSError as err:
if err.errno == errno.ENOENT:
continue
raise
yield file_obj
if unlink:
os.unlink(filename)
break
finally:
file_obj.close()
def lock_parent_directory(filename, timeout=None):
"""
Context manager that acquires a lock on the parent directory of the given
file path. This will block until the lock can be acquired, or the timeout
time has expired (whichever occurs first).
:param filename: file path of the parent directory to be locked
:param timeout: timeout (in seconds). If None, defaults to
DEFAULT_LOCK_TIMEOUT
"""
return lock_path(os.path.dirname(filename), timeout=timeout)
def get_time_units(time_amount):
"""
Get a normalized length of time in the largest unit of time (hours,
minutes, or seconds).
:param time_amount: length of time in seconds
:returns: A tuple of (length of time, unit of time) where unit of time is
one of ('h', 'm', 's')
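For example, 7200 seconds is reported as two hours, i.e. ``(2, 'h')``
(the value may be a float under Python 3).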
"""
time_unit = 's'
if time_amount > 60:
time_amount /= 60
time_unit = 'm'
if time_amount > 60:
time_amount /= 60
time_unit = 'h'
return time_amount, time_unit
def compute_eta(start_time, current_value, final_value):
"""
Compute an ETA. Now if only we could also have a progress bar...
:param start_time: Unix timestamp when the operation began
:param current_value: Current value
:param final_value: Final value
:returns: ETA as a tuple of (length of time, unit of time) where unit of
time is one of ('h', 'm', 's')
"""
elapsed = time.time() - start_time
completion = (float(current_value) / final_value) or 0.00001
return get_time_units(1.0 / completion * elapsed - elapsed)
def unlink_older_than(path, mtime):
"""
Remove any file in a given path that was last modified before mtime.
:param path: path to remove file from
:param mtime: timestamp of oldest file to keep
"""
filepaths = map(functools.partial(os.path.join, path), listdir(path))
return unlink_paths_older_than(filepaths, mtime)
def unlink_paths_older_than(filepaths, mtime):
"""
Remove any files from the given list that were
last modified before mtime.
:param filepaths: a list of strings, the full paths of files to check
:param mtime: timestamp of oldest file to keep
"""
for fpath in filepaths:
try:
if os.path.getmtime(fpath) < mtime:
os.unlink(fpath)
except OSError:
pass
def item_from_env(env, item_name, allow_none=False):
"""
Get a value from the wsgi environment
:param env: wsgi environment dict
:param item_name: name of item to get
:returns: the value from the environment
"""
item = env.get(item_name, None)
if item is None and not allow_none:
logging.error("ERROR: %s could not be found in env!", item_name)
return item
def cache_from_env(env, allow_none=False):
"""
Get memcache connection pool from the environment (which had been
previously set by the memcache middleware)
:param env: wsgi environment dict
:returns: swift.common.memcached.MemcacheRing from environment
"""
return item_from_env(env, 'swift.cache', allow_none)
def read_conf_dir(parser, conf_dir):
conf_files = []
for f in os.listdir(conf_dir):
if f.endswith('.conf') and not f.startswith('.'):
conf_files.append(os.path.join(conf_dir, f))
return parser.read(sorted(conf_files))
if six.PY2:
NicerInterpolation = None # just don't cause ImportErrors over in wsgi.py
else:
class NicerInterpolation(configparser.BasicInterpolation):
def before_get(self, parser, section, option, value, defaults):
if '%(' not in value:
return value
return super(NicerInterpolation, self).before_get(
parser, section, option, value, defaults)
def readconf(conf_path, section_name=None, log_name=None, defaults=None,
raw=False):
"""
Read config file(s) and return config items as a dict
:param conf_path: path to config file/directory, or a file-like object
(hasattr readline)
:param section_name: config section to read (will return all sections if
not defined)
:param log_name: name to be used with logging (will use section_name if
not defined)
:param defaults: dict of default values to pre-populate the config with
:returns: dict of config items
:raises ValueError: if section_name does not exist
:raises IOError: if reading the file failed
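A minimal usage sketch (the path and section below are illustrative)::

    conf = readconf('/etc/swift/object-server.conf', 'app:object-server')
    workers = int(conf.get('workers', 1))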
"""
if defaults is None:
defaults = {}
if raw:
c = RawConfigParser(defaults)
else:
if six.PY2:
c = ConfigParser(defaults)
else:
# In general, we haven't really thought much about interpolation
# in configs. Python's default ConfigParser has always supported
# it, though, so *we* got it "for free". Unfortunately, since we
# "supported" interpolation, we have to assume there are
# deployments in the wild that use it, and try not to break them.
# So, do what we can to mimic the py2 behavior of passing through
# values like "1%" (which we want to support for
# fallocate_reserve).
c = ConfigParser(defaults, interpolation=NicerInterpolation())
c.optionxform = str # Don't lower-case keys
if hasattr(conf_path, 'readline'):
if hasattr(conf_path, 'seek'):
conf_path.seek(0)
if six.PY2:
c.readfp(conf_path)
else:
c.read_file(conf_path)
else:
if os.path.isdir(conf_path):
# read all configs in directory
success = read_conf_dir(c, conf_path)
else:
success = c.read(conf_path)
if not success:
raise IOError("Unable to read config from %s" %
conf_path)
if section_name:
if c.has_section(section_name):
conf = dict(c.items(section_name))
else:
raise ValueError(
"Unable to find %(section)s config section in %(conf)s" %
{'section': section_name, 'conf': conf_path})
if "log_name" not in conf:
if log_name is not None:
conf['log_name'] = log_name
else:
conf['log_name'] = section_name
else:
conf = {}
for s in c.sections():
conf.update({s: dict(c.items(s))})
if 'log_name' not in conf:
conf['log_name'] = log_name
conf['__file__'] = conf_path
return conf
def parse_prefixed_conf(conf_file, prefix):
"""
Search the config file for any common-prefix sections and load those
sections to a dict mapping the after-prefix reference to options.
:param conf_file: the file name of the config to parse
:param prefix: the common prefix of the sections
:return: a dict mapping policy reference -> dict of policy options
:raises ValueError: if a policy config section has an invalid name
"""
ret_config = {}
all_conf = readconf(conf_file)
for section, options in all_conf.items():
if not section.startswith(prefix):
continue
target_ref = section[len(prefix):]
ret_config[target_ref] = options
return ret_config
def write_pickle(obj, dest, tmp=None, pickle_protocol=0):
"""
Ensure that a pickle file gets written to disk. The file
is first written to a tmp location, synced to disk, and then
moved to its final location.
:param obj: python object to be pickled
:param dest: path of final destination file
:param tmp: path to tmp to use, defaults to None
:param pickle_protocol: protocol to pickle the obj with, defaults to 0
"""
if tmp is None:
tmp = os.path.dirname(dest)
mkdirs(tmp)
fd, tmppath = mkstemp(dir=tmp, suffix='.tmp')
with os.fdopen(fd, 'wb') as fo:
pickle.dump(obj, fo, pickle_protocol)
fo.flush()
os.fsync(fd)
renamer(tmppath, dest)
def search_tree(root, glob_match, ext='', exts=None, dir_ext=None):
"""Look in root, for any files/dirs matching glob, recursively traversing
any found directories looking for files ending with ext
:param root: start of search path
:param glob_match: glob to match in root, matching dirs are traversed with
os.walk
:param ext: only files that end in ext will be returned
:param exts: a list of file extensions; only files that end in one of these
extensions will be returned; if set this list overrides any
extension specified using the 'ext' param.
:param dir_ext: if present directories that end with dir_ext will not be
traversed and instead will be returned as a matched path
:returns: list of full paths to matching files, sorted
"""
exts = exts or [ext]
found_files = []
for path in glob.glob(os.path.join(root, glob_match)):
if os.path.isdir(path):
for root, dirs, files in os.walk(path):
if dir_ext and root.endswith(dir_ext):
found_files.append(root)
# the root is a config dir, descend no further
break
for file_ in files:
if any(exts) and not any(file_.endswith(e) for e in exts):
continue
found_files.append(os.path.join(root, file_))
found_dir = False
for dir_ in dirs:
if dir_ext and dir_.endswith(dir_ext):
found_dir = True
found_files.append(os.path.join(root, dir_))
if found_dir:
# do not descend further into matching directories
break
else:
if ext and not path.endswith(ext):
continue
found_files.append(path)
return sorted(found_files)
def write_file(path, contents):
"""Write contents to file at path
:param path: any path, subdirs will be created as needed
:param contents: data to write to file, will be converted to string
"""
dirname, name = os.path.split(path)
if not os.path.exists(dirname):
try:
os.makedirs(dirname)
except OSError as err:
if err.errno == errno.EACCES:
sys.exit('Unable to create %s. Running as '
'non-root?' % dirname)
with open(path, 'w') as f:
f.write('%s' % contents)
def remove_file(path):
"""Quiet wrapper for os.unlink, OSErrors are suppressed
:param path: first and only argument passed to os.unlink
"""
try:
os.unlink(path)
except OSError:
pass
def remove_directory(path):
"""Wrapper for os.rmdir, ENOENT and ENOTEMPTY are ignored
:param path: first and only argument passed to os.rmdir
"""
try:
os.rmdir(path)
except OSError as e:
if e.errno not in (errno.ENOENT, errno.ENOTEMPTY):
raise
def is_file_older(path, age):
"""
Test if a file mtime is older than the given age, suppressing any OSErrors.
:param path: first and only argument passed to os.stat
:param age: age in seconds
:return: True if age is less than or equal to zero or if the file mtime is
more than ``age`` in the past; False if age is greater than zero and
the file mtime is less than or equal to ``age`` in the past or if there
is an OSError while stat'ing the file.
"""
if age <= 0:
return True
try:
return time.time() - os.stat(path).st_mtime > age
except OSError:
return False
def audit_location_generator(devices, datadir, suffix='',
mount_check=True, logger=None,
devices_filter=None, partitions_filter=None,
suffixes_filter=None, hashes_filter=None,
hook_pre_device=None, hook_post_device=None,
hook_pre_partition=None, hook_post_partition=None,
hook_pre_suffix=None, hook_post_suffix=None,
hook_pre_hash=None, hook_post_hash=None,
error_counter=None, yield_hash_dirs=False):
"""
Given a devices path and a data directory, yield (path, device,
partition) for all files in that directory
(devices|partitions|suffixes|hashes)_filter are meant to modify the list of
elements that will be iterated, e.g. they can be used to exclude some
elements based on a custom condition defined by the caller.
hook_pre_(device|partition|suffix|hash) are called before yielding the
element, hook_post_(device|partition|suffix|hash) are called after the
element has been yielded. They are meant to do some pre/post processing,
e.g. saving a progress status.
:param devices: parent directory of the devices to be audited
:param datadir: a directory located under self.devices. This should be
one of the DATADIR constants defined in the account,
container, and object servers.
:param suffix: path name suffix required for all names returned
(ignored if yield_hash_dirs is True)
:param mount_check: Flag to check if a mount check should be performed
on devices
:param logger: a logger object
:param devices_filter: a callable taking (devices, [list of devices]) as
parameters and returning a [list of devices]
:param partitions_filter: a callable taking (datadir_path, [list of parts])
as parameters and returning a [list of parts]
:param suffixes_filter: a callable taking (part_path, [list of suffixes])
as parameters and returning a [list of suffixes]
:param hashes_filter: a callable taking (suff_path, [list of hashes]) as
parameters and returning a [list of hashes]
:param hook_pre_device: a callable taking device_path as parameter
:param hook_post_device: a callable taking device_path as parameter
:param hook_pre_partition: a callable taking part_path as parameter
:param hook_post_partition: a callable taking part_path as parameter
:param hook_pre_suffix: a callable taking suff_path as parameter
:param hook_post_suffix: a callable taking suff_path as parameter
:param hook_pre_hash: a callable taking hash_path as parameter
:param hook_post_hash: a callable taking hash_path as parameter
:param error_counter: a dictionary used to accumulate error counts; may
add keys 'unmounted' and 'unlistable_partitions'
:param yield_hash_dirs: if True, yield hash dirs instead of individual
files
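A minimal usage sketch (paths below are illustrative)::

    locations = audit_location_generator('/srv/node', 'objects',
                                         suffix='.data', mount_check=False)
    for path, device, partition in locations:
        pass  # audit the file at path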
"""
device_dir = listdir(devices)
# randomize devices in case of process restart before sweep completed
shuffle(device_dir)
if devices_filter:
device_dir = devices_filter(devices, device_dir)
for device in device_dir:
if mount_check and not ismount(os.path.join(devices, device)):
if error_counter is not None:
error_counter.setdefault('unmounted', [])
error_counter['unmounted'].append(device)
if logger:
logger.warning(
'Skipping %s as it is not mounted', device)
continue
if hook_pre_device:
hook_pre_device(os.path.join(devices, device))
datadir_path = os.path.join(devices, device, datadir)
try:
partitions = listdir(datadir_path)
except OSError as e:
# NB: listdir ignores non-existent datadir_path
if error_counter is not None:
error_counter.setdefault('unlistable_partitions', [])
error_counter['unlistable_partitions'].append(datadir_path)
if logger:
logger.warning('Skipping %(datadir)s because %(err)s',
{'datadir': datadir_path, 'err': e})
continue
if partitions_filter:
partitions = partitions_filter(datadir_path, partitions)
for partition in partitions:
part_path = os.path.join(datadir_path, partition)
if hook_pre_partition:
hook_pre_partition(part_path)
try:
suffixes = listdir(part_path)
except OSError as e:
if e.errno != errno.ENOTDIR:
raise
continue
if suffixes_filter:
suffixes = suffixes_filter(part_path, suffixes)
for asuffix in suffixes:
suff_path = os.path.join(part_path, asuffix)
if hook_pre_suffix:
hook_pre_suffix(suff_path)
try:
hashes = listdir(suff_path)
except OSError as e:
if e.errno != errno.ENOTDIR:
raise
continue
if hashes_filter:
hashes = hashes_filter(suff_path, hashes)
for hsh in hashes:
hash_path = os.path.join(suff_path, hsh)
if hook_pre_hash:
hook_pre_hash(hash_path)
if yield_hash_dirs:
if os.path.isdir(hash_path):
yield hash_path, device, partition
else:
try:
files = sorted(listdir(hash_path), reverse=True)
except OSError as e:
if e.errno != errno.ENOTDIR:
raise
continue
for fname in files:
if suffix and not fname.endswith(suffix):
continue
path = os.path.join(hash_path, fname)
yield path, device, partition
if hook_post_hash:
hook_post_hash(hash_path)
if hook_post_suffix:
hook_post_suffix(suff_path)
if hook_post_partition:
hook_post_partition(part_path)
if hook_post_device:
hook_post_device(os.path.join(devices, device))
class AbstractRateLimiter(object):
# 1,000 milliseconds = 1 second
clock_accuracy = 1000.0
def __init__(self, max_rate, rate_buffer=5, burst_after_idle=False,
running_time=0):
"""
:param max_rate: The maximum rate per second allowed for the process.
Must be > 0 to engage rate-limiting behavior.
:param rate_buffer: Number of seconds the rate counter can drop and be
allowed to catch up (at a faster than listed rate). A larger number
will result in larger spikes in rate but better average accuracy.
:param burst_after_idle: If False (the default) then the rate_buffer
allowance is lost after the rate limiter has not been called for
more than rate_buffer seconds. If True then the rate_buffer
allowance is preserved during idle periods which means that a burst
of requests may be granted immediately after the idle period.
:param running_time: The running time in milliseconds of the next
allowable request. Setting this to any time in the past will cause
the rate limiter to immediately allow requests; setting this to a
future time will cause the rate limiter to deny requests until that
time. If ``burst_after_idle`` is True then this can
be set to current time (ms) to avoid an initial burst, or set to
running_time < (current time - rate_buffer ms) to allow an initial
burst.
"""
self.max_rate = max_rate
self.rate_buffer_ms = rate_buffer * self.clock_accuracy
self.burst_after_idle = burst_after_idle
self.running_time = running_time
self.time_per_incr = (self.clock_accuracy / self.max_rate
if self.max_rate else 0)
def _sleep(self, seconds):
# subclasses should override to implement a sleep
raise NotImplementedError
def is_allowed(self, incr_by=1, now=None, block=False):
"""
Check if the calling process is allowed to proceed according to the
rate limit.
:param incr_by: How much to increment the counter. Useful if you want
to ratelimit 1024 bytes/sec and have differing sizes
of requests. Must be > 0 to engage rate-limiting
behavior.
:param now: The time in seconds; defaults to time.time()
:param block: if True, the call will sleep until the calling process
is allowed to proceed; otherwise the call returns immediately.
:return: True if the calling process is allowed to proceed, False
otherwise.
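A usage sketch using the ``EventletRateLimiter`` subclass defined below
(``work_remains`` and ``do_work`` are illustrative)::

    limiter = EventletRateLimiter(max_rate=10)
    while work_remains():
        limiter.wait()  # sleeps as needed to stay under 10 per second
        do_work()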
"""
if self.max_rate <= 0 or incr_by <= 0:
return True
now = now or time.time()
# Convert seconds to milliseconds
now = now * self.clock_accuracy
# Calculate time per request in milliseconds
time_per_request = self.time_per_incr * float(incr_by)
# Convert rate_buffer to milliseconds and compare
if now - self.running_time > self.rate_buffer_ms:
self.running_time = now
if self.burst_after_idle:
self.running_time -= self.rate_buffer_ms
if now >= self.running_time:
self.running_time += time_per_request
allowed = True
elif block:
sleep_time = (self.running_time - now) / self.clock_accuracy
# increment running time before sleeping in case the sleep allows
# another thread to inspect the rate limiter state
self.running_time += time_per_request
# Convert diff to a floating point number of seconds and sleep
self._sleep(sleep_time)
allowed = True
else:
allowed = False
return allowed
def wait(self, incr_by=1, now=None):
self.is_allowed(incr_by=incr_by, now=now, block=True)
class EventletRateLimiter(AbstractRateLimiter):
def __init__(self, max_rate, rate_buffer=5, running_time=0,
burst_after_idle=False):
super(EventletRateLimiter, self).__init__(
max_rate, rate_buffer=rate_buffer, running_time=running_time,
burst_after_idle=burst_after_idle)
def _sleep(self, seconds):
eventlet.sleep(seconds)
def ratelimit_sleep(running_time, max_rate, incr_by=1, rate_buffer=5):
"""
Will eventlet.sleep() for the appropriate time so that the max_rate
is never exceeded. If max_rate is 0, will not ratelimit. The
maximum recommended rate should not exceed (1000 * incr_by) a second
as eventlet.sleep() does involve some overhead. Returns running_time
that should be used for subsequent calls.
:param running_time: the running time in milliseconds of the next
allowable request. Best to start at zero.
:param max_rate: The maximum rate per second allowed for the process.
:param incr_by: How much to increment the counter. Useful if you want
to ratelimit 1024 bytes/sec and have differing sizes
of requests. Must be > 0 to engage rate-limiting
behavior.
:param rate_buffer: Number of seconds the rate counter can drop and be
allowed to catch up (at a faster than listed rate).
A larger number will result in larger spikes in rate
but better average accuracy. Must be > 0 to engage
rate-limiting behavior.
:return: The absolute time for the next interval in milliseconds; note
that time could have passed well beyond that point, but the next call
will catch that and skip the sleep.
"""
warnings.warn(
'ratelimit_sleep() is deprecated; use the ``EventletRateLimiter`` '
'class instead.', DeprecationWarning, stacklevel=2
)
rate_limit = EventletRateLimiter(max_rate, rate_buffer=rate_buffer,
running_time=running_time)
rate_limit.wait(incr_by=incr_by)
return rate_limit.running_time
class ContextPool(GreenPool):
"""GreenPool subclassed to kill its coros when it gets gc'ed"""
def __enter__(self):
return self
def __exit__(self, type, value, traceback):
self.close()
def close(self):
for coro in list(self.coroutines_running):
coro.kill()
class GreenAsyncPileWaitallTimeout(Timeout):
pass
DEAD = object()
class GreenAsyncPile(object):
"""
Runs jobs in a pool of green threads, and the results can be retrieved by
using this object as an iterator.
This is very similar in principle to eventlet.GreenPile, except it returns
results as they become available rather than in the order they were
launched.
Correlating results with jobs (if necessary) is left to the caller.
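A usage sketch (``fetch`` and ``urls`` below are illustrative)::

    pile = GreenAsyncPile(10)
    for url in urls:
        pile.spawn(fetch, url)
    for result in pile:
        pass  # results arrive as they complete, not in submission order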
"""
def __init__(self, size_or_pool):
"""
:param size_or_pool: thread pool size or a pool to use
"""
if isinstance(size_or_pool, GreenPool):
self._pool = size_or_pool
size = self._pool.size
else:
self._pool = GreenPool(size_or_pool)
size = size_or_pool
self._responses = eventlet.queue.LightQueue(size)
self._inflight = 0
self._pending = 0
def _run_func(self, func, args, kwargs):
try:
self._responses.put(func(*args, **kwargs))
except Exception:
if eventlet.hubs.get_hub().debug_exceptions:
traceback.print_exception(*sys.exc_info())
self._responses.put(DEAD)
finally:
self._inflight -= 1
@property
def inflight(self):
return self._inflight
def spawn(self, func, *args, **kwargs):
"""
Spawn a job in a green thread on the pile.
"""
self._pending += 1
self._inflight += 1
self._pool.spawn(self._run_func, func, args, kwargs)
def waitfirst(self, timeout):
"""
Wait up to timeout seconds for first result to come in.
:param timeout: seconds to wait for results
:returns: first item to come back, or None
"""
for result in self._wait(timeout, first_n=1):
return result
def waitall(self, timeout):
"""
Wait timeout seconds for any results to come in.
:param timeout: seconds to wait for results
:returns: list of results accrued in that time
"""
return self._wait(timeout)
def _wait(self, timeout, first_n=None):
results = []
try:
with GreenAsyncPileWaitallTimeout(timeout):
while True:
results.append(next(self))
if first_n and len(results) >= first_n:
break
except (GreenAsyncPileWaitallTimeout, StopIteration):
pass
return results
def __iter__(self):
return self
def next(self):
while True:
try:
rv = self._responses.get_nowait()
except eventlet.queue.Empty:
if self._inflight == 0:
raise StopIteration()
rv = self._responses.get()
self._pending -= 1
if rv is DEAD:
continue
return rv
__next__ = next
class StreamingPile(GreenAsyncPile):
"""
Runs jobs in a pool of green threads, spawning more jobs as results are
retrieved and worker threads become available.
When used as a context manager, has the same worker-killing properties as
:class:`ContextPool`.
"""
def __init__(self, size):
""":param size: number of worker threads to use"""
self.pool = ContextPool(size)
super(StreamingPile, self).__init__(self.pool)
def asyncstarmap(self, func, args_iter):
"""
This is the same as :func:`itertools.starmap`, except that *func* is
executed in a separate green thread for each item, and results won't
necessarily have the same order as inputs.
"""
args_iter = iter(args_iter)
# Initialize the pile
for args in itertools.islice(args_iter, self.pool.size):
self.spawn(func, *args)
# Keep populating the pile as greenthreads become available
for args in args_iter:
try:
to_yield = next(self)
except StopIteration:
break
yield to_yield
self.spawn(func, *args)
# Drain the pile
for result in self:
yield result
def __enter__(self):
self.pool.__enter__()
return self
def __exit__(self, type, value, traceback):
self.pool.__exit__(type, value, traceback)
def validate_sync_to(value, allowed_sync_hosts, realms_conf):
"""
Validates an X-Container-Sync-To header value, returning the
validated endpoint, realm, and realm_key, or an error string.
:param value: The X-Container-Sync-To header value to validate.
:param allowed_sync_hosts: A list of allowed hosts in endpoints,
if realms_conf does not apply.
:param realms_conf: An instance of
swift.common.container_sync_realms.ContainerSyncRealms to
validate against.
:returns: A tuple of (error_string, validated_endpoint, realm,
realm_key). The error_string will be None if the rest of the
values have been validated. The validated_endpoint will be
the validated endpoint to sync to. The realm and realm_key
will be set if validation was done through realms_conf.
"""
orig_value = value
value = value.rstrip('/')
if not value:
return (None, None, None, None)
if value.startswith('//'):
if not realms_conf:
return (None, None, None, None)
data = value[2:].split('/')
if len(data) != 4:
return (
'Invalid X-Container-Sync-To format %r' % orig_value,
None, None, None)
realm, cluster, account, container = data
realm_key = realms_conf.key(realm)
if not realm_key:
return ('No realm key for %r' % realm, None, None, None)
endpoint = realms_conf.endpoint(realm, cluster)
if not endpoint:
return (
'No cluster endpoint for %(realm)r %(cluster)r'
% {'realm': realm, 'cluster': cluster},
None, None, None)
return (
None,
'%s/%s/%s' % (endpoint.rstrip('/'), account, container),
realm.upper(), realm_key)
p = urlparse(value)
if p.scheme not in ('http', 'https'):
return (
'Invalid scheme %r in X-Container-Sync-To, must be "//", '
'"http", or "https".' % p.scheme,
None, None, None)
if not p.path:
return ('Path required in X-Container-Sync-To', None, None, None)
if p.params or p.query or p.fragment:
return (
'Params, queries, and fragments not allowed in '
'X-Container-Sync-To',
None, None, None)
if p.hostname not in allowed_sync_hosts:
return (
'Invalid host %r in X-Container-Sync-To' % p.hostname,
None, None, None)
return (None, value, None, None)
def affinity_key_function(affinity_str):
"""Turns an affinity config value into a function suitable for passing to
sort(). After doing so, the array will be sorted with respect to the given
ordering.
For example, if affinity_str is "r1=1, r2z7=2, r2z8=2", then the array
will be sorted with all nodes from region 1 (r1=1) first, then all the
nodes from region 2 zones 7 and 8 (r2z7=2 and r2z8=2), then everything
else.
Note that the order of the pieces of affinity_str is irrelevant; the
priority values are what comes after the equals sign.
If affinity_str is empty or all whitespace, then the resulting function
will not alter the ordering of the nodes.
:param affinity_str: affinity config value, e.g. "r1z2=3"
or "r1=1, r2z1=2, r2z2=2"
:returns: single-argument function
:raises ValueError: if argument invalid
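A usage sketch (``nodes`` is an illustrative list of ring node dicts)::

    keyfn = affinity_key_function("r1=1, r2z7=2")
    nodes.sort(key=keyfn)  # r1 nodes first, then r2z7, then everything else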
"""
affinity_str = affinity_str.strip()
if not affinity_str:
return lambda x: 0
priority_matchers = []
pieces = [s.strip() for s in affinity_str.split(',')]
for piece in pieces:
# matches r<number>=<number> or r<number>z<number>=<number>
match = re.match(r"r(\d+)(?:z(\d+))?=(\d+)$", piece)
if match:
region, zone, priority = match.groups()
region = int(region)
priority = int(priority)
zone = int(zone) if zone else None
matcher = {'region': region, 'priority': priority}
if zone is not None:
matcher['zone'] = zone
priority_matchers.append(matcher)
else:
raise ValueError("Invalid affinity value: %r" % affinity_str)
priority_matchers.sort(key=operator.itemgetter('priority'))
def keyfn(ring_node):
for matcher in priority_matchers:
if (matcher['region'] == ring_node['region']
and ('zone' not in matcher
or matcher['zone'] == ring_node['zone'])):
return matcher['priority']
return 4294967296 # 2^32, i.e. "a big number"
return keyfn
def affinity_locality_predicate(write_affinity_str):
"""
Turns a write-affinity config value into a predicate function for nodes.
The returned value will be a 1-arg function that takes a node dictionary
and returns a true value if it is "local" and a false value otherwise. The
definition of "local" comes from the affinity_str argument passed in here.
For example, if affinity_str is "r1, r2z2", then only nodes where region=1
or where (region=2 and zone=2) are considered local.
If affinity_str is empty or all whitespace, then the resulting function
will consider everything local
:param write_affinity_str: affinity config value, e.g. "r1z2"
or "r1, r2z1, r2z2"
:returns: single-argument function, or None if affinity_str is empty
:raises ValueError: if argument invalid
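A usage sketch::

    is_local = affinity_locality_predicate("r1, r2z2")
    is_local({'region': 1, 'zone': 5})   # True
    is_local({'region': 2, 'zone': 1})   # False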
"""
affinity_str = write_affinity_str.strip()
if not affinity_str:
return None
matchers = []
pieces = [s.strip() for s in affinity_str.split(',')]
for piece in pieces:
# matches r<number> or r<number>z<number>
match = re.match(r"r(\d+)(?:z(\d+))?$", piece)
if match:
region, zone = match.groups()
region = int(region)
zone = int(zone) if zone else None
matcher = {'region': region}
if zone is not None:
matcher['zone'] = zone
matchers.append(matcher)
else:
raise ValueError("Invalid write-affinity value: %r" % affinity_str)
def is_local(ring_node):
for matcher in matchers:
if (matcher['region'] == ring_node['region']
and ('zone' not in matcher
or matcher['zone'] == ring_node['zone'])):
return True
return False
return is_local
def get_remote_client(req):
# remote host for zeus
client = req.headers.get('x-cluster-client-ip')
if not client and 'x-forwarded-for' in req.headers:
# remote host for other lbs
client = req.headers['x-forwarded-for'].split(',')[0].strip()
if not client:
client = req.remote_addr
return client
def human_readable(value):
"""
Returns the number in a human readable format; for example 1048576 = "1Mi".
"""
value = float(value)
index = -1
suffixes = 'KMGTPEZY'
while value >= 1024 and index + 1 < len(suffixes):
index += 1
value = round(value / 1024)
if index == -1:
return '%d' % value
return '%d%si' % (round(value), suffixes[index])
def put_recon_cache_entry(cache_entry, key, item):
"""
Update a recon cache entry item.
If ``item`` is an empty dict then any existing ``key`` in ``cache_entry``
will be deleted. Similarly if ``item`` is a dict and any of its values are
empty dicts then the corresponding key will be deleted from the nested dict
in ``cache_entry``.
We use nested recon cache entries when the object auditor
runs in parallel or else in 'once' mode with a specified subset of devices.
:param cache_entry: a dict of existing cache entries
:param key: key for item to update
:param item: value for item to update
"""
if isinstance(item, dict):
if not item:
cache_entry.pop(key, None)
return
if key not in cache_entry or key in cache_entry and not \
isinstance(cache_entry[key], dict):
cache_entry[key] = {}
for k, v in item.items():
if v == {}:
cache_entry[key].pop(k, None)
else:
cache_entry[key][k] = v
else:
cache_entry[key] = item
def dump_recon_cache(cache_dict, cache_file, logger, lock_timeout=2,
set_owner=None):
"""Update recon cache values
:param cache_dict: Dictionary of cache key/value pairs to write out
:param cache_file: cache file to update
:param logger: the logger to use to log an encountered error
:param lock_timeout: timeout (in seconds)
:param set_owner: Set owner of recon cache file
"""
try:
with lock_file(cache_file, lock_timeout, unlink=False) as cf:
cache_entry = {}
try:
existing_entry = cf.readline()
if existing_entry:
cache_entry = json.loads(existing_entry)
except ValueError:
# file doesn't have a valid entry, we'll recreate it
pass
for cache_key, cache_value in cache_dict.items():
put_recon_cache_entry(cache_entry, cache_key, cache_value)
tf = None
try:
with NamedTemporaryFile(dir=os.path.dirname(cache_file),
delete=False) as tf:
cache_data = json.dumps(cache_entry, ensure_ascii=True,
sort_keys=True)
tf.write(cache_data.encode('ascii') + b'\n')
if set_owner:
os.chown(tf.name, pwd.getpwnam(set_owner).pw_uid, -1)
renamer(tf.name, cache_file, fsync=False)
finally:
if tf is not None:
try:
os.unlink(tf.name)
except OSError as err:
if err.errno != errno.ENOENT:
raise
except (Exception, Timeout) as err:
logger.exception('Exception dumping recon cache: %s' % err)
def load_recon_cache(cache_file):
"""
Load a recon cache file. Treats missing file as empty.
"""
try:
with open(cache_file) as fh:
return json.load(fh)
except IOError as e:
if e.errno == errno.ENOENT:
return {}
else:
raise
except ValueError: # invalid JSON
return {}
def listdir(path):
try:
return os.listdir(path)
except OSError as err:
if err.errno != errno.ENOENT:
raise
return []
def streq_const_time(s1, s2):
"""Constant-time string comparison.
:params s1: the first string
:params s2: the second string
:return: True if the strings are equal.
This function takes two strings and compares them. It is intended to be
used when doing a comparison for authentication purposes to help guard
against timing attacks.
"""
if len(s1) != len(s2):
return False
result = 0
for (a, b) in zip(s1, s2):
result |= ord(a) ^ ord(b)
return result == 0
def pairs(item_list):
"""
Returns an iterator of all pairs of elements from item_list.
:param item_list: items (no duplicates allowed)
"""
for i, item1 in enumerate(item_list):
for item2 in item_list[(i + 1):]:
yield (item1, item2)
def replication(func):
"""
Decorator to declare which methods are accessible for different
types of servers:
* If option replication_server is None then this decorator
doesn't matter.
* If option replication_server is True then ONLY methods decorated
with this decorator will be started.
* If option replication_server is False then methods decorated with this
decorator will NOT be started.
:param func: function to mark accessible for replication
"""
func.replication = True
return func
def public(func):
"""
Decorator to declare which methods are publicly accessible as HTTP
requests
:param func: function to make public
"""
func.publicly_accessible = True
return func
def private(func):
"""
Decorator to declare which methods are privately accessible as HTTP
requests with an ``X-Backend-Allow-Private-Methods: True`` override
:param func: function to make private
"""
func.privately_accessible = True
return func
def majority_size(n):
return (n // 2) + 1
def quorum_size(n):
"""
quorum size as it applies to services that use 'replication' for data
integrity (Account/Container services). Object quorum_size is defined
on a storage policy basis.
Number of successful backend requests needed for the proxy to consider
the client request successful.
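For example, ``quorum_size(4)`` is 2, whereas ``majority_size(4)`` is 3.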
"""
return (n + 1) // 2
def rsync_ip(ip):
"""
Transform ip string to an rsync-compatible form
Will return ipv4 addresses unchanged, but will nest ipv6 addresses
inside square brackets.
:param ip: an ip string (ipv4 or ipv6)
:returns: a string ip address
"""
return '[%s]' % ip if is_valid_ipv6(ip) else ip
def rsync_module_interpolation(template, device):
"""
Interpolate device variables inside an rsync module template
:param template: rsync module template as a string
:param device: a device from a ring
:returns: a string with all variables replaced by device attributes
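For example (device values below are illustrative)::

    rsync_module_interpolation('{replication_ip}::object_{device}',
                               {'replication_ip': '127.0.0.1',
                                'device': 'sda1'})
    # -> '127.0.0.1::object_sda1'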
"""
replacements = {
'ip': rsync_ip(device.get('ip', '')),
'port': device.get('port', ''),
'replication_ip': rsync_ip(device.get('replication_ip', '')),
'replication_port': device.get('replication_port', ''),
'region': device.get('region', ''),
'zone': device.get('zone', ''),
'device': device.get('device', ''),
'meta': device.get('meta', ''),
}
try:
module = template.format(**replacements)
except KeyError as e:
raise ValueError('Cannot interpolate rsync_module, invalid variable: '
'%s' % e)
return module
def get_valid_utf8_str(str_or_unicode):
"""
Get the valid parts of a utf-8 str from a str, unicode, or even an invalid
utf-8 str
:param str_or_unicode: a str or unicode object which may contain invalid utf-8
"""
if six.PY2:
if isinstance(str_or_unicode, six.text_type):
(str_or_unicode, _len) = utf8_encoder(str_or_unicode, 'replace')
(valid_unicode_str, _len) = utf8_decoder(str_or_unicode, 'replace')
else:
# Apparently under py3 we need to go to utf-16 to collapse surrogates?
if isinstance(str_or_unicode, six.binary_type):
try:
(str_or_unicode, _len) = utf8_decoder(str_or_unicode,
'surrogatepass')
except UnicodeDecodeError:
(str_or_unicode, _len) = utf8_decoder(str_or_unicode,
'replace')
(str_or_unicode, _len) = utf16_encoder(str_or_unicode, 'surrogatepass')
(valid_unicode_str, _len) = utf16_decoder(str_or_unicode, 'replace')
return valid_unicode_str.encode('utf-8')
class Everything(object):
"""
A container that contains everything. If "e" is an instance of
Everything, then "x in e" is true for all x.
"""
def __contains__(self, element):
return True
def list_from_csv(comma_separated_str):
"""
Splits the str given and returns a properly stripped list of the comma
separated values.
"""
if comma_separated_str:
return [v.strip() for v in comma_separated_str.split(',') if v.strip()]
return []
def csv_append(csv_string, item):
"""
Appends an item to a comma-separated string.
If the comma-separated string is empty/None, just returns item.
"""
if csv_string:
return ",".join((csv_string, item))
else:
return item
class CloseableChain(object):
"""
Like itertools.chain, but with a close method that will attempt to invoke
its sub-iterators' close methods, if any.
"""
def __init__(self, *iterables):
self.iterables = iterables
self.chained_iter = itertools.chain(*self.iterables)
def __iter__(self):
return self
def __next__(self):
return next(self.chained_iter)
next = __next__ # py2
def close(self):
for it in self.iterables:
close_if_possible(it)
def reiterate(iterable):
"""
Consume the first truthy item from an iterator, then re-chain it to the
rest of the iterator. This is useful when you want to make sure the
prologue to downstream generators has been executed before continuing.
:param iterable: an iterable object
"""
if isinstance(iterable, (list, tuple)):
return iterable
else:
iterator = iter(iterable)
try:
chunk = next(iterator)
while not chunk:
chunk = next(iterator)
return CloseableChain([chunk], iterator)
except StopIteration:
close_if_possible(iterable)
return iter([])
class InputProxy(object):
"""
File-like object that counts bytes read.
To be swapped in for wsgi.input for accounting purposes.
"""
def __init__(self, wsgi_input):
"""
:param wsgi_input: file-like object to wrap the functionality of
"""
self.wsgi_input = wsgi_input
self.bytes_received = 0
self.client_disconnect = False
def read(self, *args, **kwargs):
"""
Pass read request to the underlying file-like object and
add bytes read to total.
"""
try:
chunk = self.wsgi_input.read(*args, **kwargs)
except Exception:
self.client_disconnect = True
raise
self.bytes_received += len(chunk)
return chunk
def readline(self, *args, **kwargs):
"""
Pass readline request to the underlying file-like object and
add bytes read to total.
"""
try:
line = self.wsgi_input.readline(*args, **kwargs)
except Exception:
self.client_disconnect = True
raise
self.bytes_received += len(line)
return line
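# Illustrative example (uses the stdlib io module; not part of the original
# source): InputProxy counts every byte read from the wrapped wsgi.input.
#   >>> import io
#   >>> proxy = InputProxy(io.BytesIO(b'hello world'))
#   >>> proxy.read(5)
#   b'hello'
#   >>> proxy.readline()
#   b' world'
#   >>> proxy.bytes_received
#   11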
class LRUCache(object):
"""
Decorator for size/time bound memoization that evicts the least
recently used members.
"""
PREV, NEXT, KEY, CACHED_AT, VALUE = 0, 1, 2, 3, 4 # link fields
def __init__(self, maxsize=1000, maxtime=3600):
self.maxsize = maxsize
self.maxtime = maxtime
self.reset()
def reset(self):
self.mapping = {}
self.head = [None, None, None, None, None] # oldest
self.tail = [self.head, None, None, None, None] # newest
self.head[self.NEXT] = self.tail
def set_cache(self, value, *key):
while len(self.mapping) >= self.maxsize:
old_next, old_key = self.head[self.NEXT][self.NEXT:self.NEXT + 2]
self.head[self.NEXT], old_next[self.PREV] = old_next, self.head
del self.mapping[old_key]
last = self.tail[self.PREV]
link = [last, self.tail, key, time.time(), value]
self.mapping[key] = last[self.NEXT] = self.tail[self.PREV] = link
return value
def get_cached(self, link, *key):
link_prev, link_next, key, cached_at, value = link
if cached_at + self.maxtime < time.time():
raise KeyError('%r has timed out' % (key,))
link_prev[self.NEXT] = link_next
link_next[self.PREV] = link_prev
last = self.tail[self.PREV]
last[self.NEXT] = self.tail[self.PREV] = link
link[self.PREV] = last
link[self.NEXT] = self.tail
return value
def __call__(self, f):
class LRUCacheWrapped(object):
@functools.wraps(f)
def __call__(im_self, *key):
link = self.mapping.get(key, self.head)
if link is not self.head:
try:
return self.get_cached(link, *key)
except KeyError:
pass
value = f(*key)
self.set_cache(value, *key)
return value
def size(im_self):
"""
Return the size of the cache
"""
return len(self.mapping)
def reset(im_self):
return self.reset()
def get_maxsize(im_self):
return self.maxsize
def set_maxsize(im_self, i):
self.maxsize = i
def get_maxtime(im_self):
return self.maxtime
def set_maxtime(im_self, i):
self.maxtime = i
maxsize = property(get_maxsize, set_maxsize)
maxtime = property(get_maxtime, set_maxtime)
def __repr__(im_self):
return '<%s %r>' % (im_self.__class__.__name__, f)
return LRUCacheWrapped()
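# Illustrative usage (added example, not from the original source): memoize a
# pure function, keeping at most 2 entries for at most 60 seconds each.
#   >>> @LRUCache(maxsize=2, maxtime=60)
#   ... def square(x):
#   ...     return x * x
#   >>> square(3)
#   9
#   >>> square.size()
#   1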
class Spliterator(object):
"""
Takes an iterator yielding sliceable things (e.g. strings or lists) and
yields subiterators, each yielding up to the requested number of items
from the source.
>>> si = Spliterator(["abcde", "fg", "hijkl"])
    >>> ''.join(si.take(4))
    'abcd'
    >>> ''.join(si.take(3))
    'efg'
    >>> ''.join(si.take(1))
    'h'
    >>> ''.join(si.take(3))
    'ijk'
    >>> ''.join(si.take(3))
    'l'  # shorter than requested; this can happen with the last iterator
"""
def __init__(self, source_iterable):
self.input_iterator = iter(source_iterable)
self.leftovers = None
self.leftovers_index = 0
self._iterator_in_progress = False
def take(self, n):
if self._iterator_in_progress:
raise ValueError(
"cannot call take() again until the first iterator is"
" exhausted (has raised StopIteration)")
self._iterator_in_progress = True
try:
if self.leftovers:
# All this string slicing is a little awkward, but it's for
# a good reason. Consider a length N string that someone is
# taking k bytes at a time.
#
# With this implementation, we create one new string of
# length k (copying the bytes) on each call to take(). Once
# the whole input has been consumed, each byte has been
# copied exactly once, giving O(N) bytes copied.
#
# If, instead of this, we were to set leftovers =
# leftovers[k:] and omit leftovers_index, then each call to
# take() would copy k bytes to create the desired substring,
# then copy all the remaining bytes to reset leftovers,
# resulting in an overall O(N^2) bytes copied.
llen = len(self.leftovers) - self.leftovers_index
if llen <= n:
n -= llen
to_yield = self.leftovers[self.leftovers_index:]
self.leftovers = None
self.leftovers_index = 0
yield to_yield
else:
to_yield = self.leftovers[
self.leftovers_index:(self.leftovers_index + n)]
self.leftovers_index += n
n = 0
yield to_yield
while n > 0:
try:
chunk = next(self.input_iterator)
except StopIteration:
return
cl = len(chunk)
if cl <= n:
n -= cl
yield chunk
else:
self.leftovers = chunk
self.leftovers_index = n
yield chunk[:n]
n = 0
finally:
self._iterator_in_progress = False
def ismount(path):
"""
Test whether a path is a mount point. This will catch any
    exceptions and translate them into a False return value.
Use ismount_raw to have the exceptions raised instead.
"""
try:
return ismount_raw(path)
except OSError:
return False
def ismount_raw(path):
"""
Test whether a path is a mount point. Whereas ismount will catch
any exceptions and just return False, this raw version will not
catch exceptions.
This is code hijacked from C Python 2.6.8, adapted to remove the extra
lstat() system call.
"""
try:
s1 = os.lstat(path)
except os.error as err:
if err.errno == errno.ENOENT:
# It doesn't exist -- so not a mount point :-)
return False
raise
if stat.S_ISLNK(s1.st_mode):
# Some environments (like vagrant-swift-all-in-one) use a symlink at
# the device level but could still provide a stubfile in the target
# to indicate that it should be treated as a mount point for swift's
# purposes.
if os.path.isfile(os.path.join(path, ".ismount")):
return True
# Otherwise, a symlink can never be a mount point
return False
s2 = os.lstat(os.path.join(path, '..'))
dev1 = s1.st_dev
dev2 = s2.st_dev
if dev1 != dev2:
        # path/.. on a different device than path
return True
ino1 = s1.st_ino
ino2 = s2.st_ino
if ino1 == ino2:
# path/.. is the same i-node as path
return True
# Device and inode checks are not properly working inside containerized
# environments, therefore using a workaround to check if there is a
# stubfile placed by an operator
if os.path.isfile(os.path.join(path, ".ismount")):
return True
return False
def close_if_possible(maybe_closable):
close_method = getattr(maybe_closable, 'close', None)
if callable(close_method):
return close_method()
@contextmanager
def closing_if_possible(maybe_closable):
"""
Like contextlib.closing(), but doesn't crash if the object lacks a close()
method.
PEP 333 (WSGI) says: "If the iterable returned by the application has a
close() method, the server or gateway must call that method upon
completion of the current request[.]" This function makes that easier.
"""
try:
yield maybe_closable
finally:
close_if_possible(maybe_closable)
def drain_and_close(response_or_app_iter):
"""
Drain and close a swob or WSGI response.
This ensures we don't log a 499 in the proxy just because we realized we
don't care about the body of an error.
"""
app_iter = getattr(response_or_app_iter, 'app_iter', response_or_app_iter)
if app_iter is None: # for example, if we used the Response.body property
return
for _chunk in app_iter:
pass
close_if_possible(app_iter)
_rfc_token = r'[^()<>@,;:\"/\[\]?={}\x00-\x20\x7f]+'
_rfc_extension_pattern = re.compile(
r'(?:\s*;\s*(' + _rfc_token + r")\s*(?:=\s*(" + _rfc_token +
r'|"(?:[^"\\]|\\.)*"))?)')
_content_range_pattern = re.compile(r'^bytes (\d+)-(\d+)/(\d+)$')
def parse_content_range(content_range):
"""
Parse a content-range header into (first_byte, last_byte, total_size).
See RFC 7233 section 4.2 for details on the header format, but it's
basically "Content-Range: bytes ${start}-${end}/${total}".
:param content_range: Content-Range header value to parse,
e.g. "bytes 100-1249/49004"
:returns: 3-tuple (start, end, total)
:raises ValueError: if malformed
"""
found = re.search(_content_range_pattern, content_range)
if not found:
raise ValueError("malformed Content-Range %r" % (content_range,))
return tuple(int(x) for x in found.groups())
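# Illustrative, doctest-style (added example, not from the original source):
#   >>> parse_content_range('bytes 100-1249/49004')
#   (100, 1249, 49004)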
def parse_content_type(content_type):
"""
Parse a content-type and its parameters into values.
RFC 2616 sec 14.17 and 3.7 are pertinent.
**Examples**::
        'text/plain; charset=UTF-8' -> ('text/plain', [('charset', 'UTF-8')])
        'text/plain; charset=UTF-8; level=1' ->
            ('text/plain', [('charset', 'UTF-8'), ('level', '1')])
:param content_type: content_type to parse
:returns: a tuple containing (content type, list of k, v parameter tuples)
"""
parm_list = []
if ';' in content_type:
content_type, parms = content_type.split(';', 1)
parms = ';' + parms
for m in _rfc_extension_pattern.findall(parms):
key = m[0].strip()
value = m[1].strip()
parm_list.append((key, value))
return content_type, parm_list
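# Illustrative, doctest-style (added example, not from the original source):
#   >>> parse_content_type('text/plain; charset=UTF-8; level=1')
#   ('text/plain', [('charset', 'UTF-8'), ('level', '1')])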
def extract_swift_bytes(content_type):
"""
Parse a content-type and return a tuple containing:
- the content_type string minus any swift_bytes param,
- the swift_bytes value or None if the param was not found
:param content_type: a content-type string
:return: a tuple of (content-type, swift_bytes or None)
"""
content_type, params = parse_content_type(content_type)
swift_bytes = None
for k, v in params:
if k == 'swift_bytes':
swift_bytes = v
else:
content_type += ';%s=%s' % (k, v)
return content_type, swift_bytes
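# Illustrative, doctest-style (added example, not from the original source):
# the swift_bytes parameter is removed and any other parameters are kept.
#   >>> extract_swift_bytes('text/plain; swift_bytes=1024; charset=UTF-8')
#   ('text/plain;charset=UTF-8', '1024')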
def override_bytes_from_content_type(listing_dict, logger=None):
"""
Takes a dict from a container listing and overrides the content_type,
bytes fields if swift_bytes is set.
"""
listing_dict['content_type'], swift_bytes = extract_swift_bytes(
listing_dict['content_type'])
if swift_bytes is not None:
try:
listing_dict['bytes'] = int(swift_bytes)
except ValueError:
if logger:
logger.exception("Invalid swift_bytes")
def clean_content_type(value):
if ';' in value:
left, right = value.rsplit(';', 1)
if right.lstrip().startswith('swift_bytes='):
return left
return value
def quote(value, safe='/'):
"""
Patched version of urllib.quote that encodes utf-8 strings before quoting
"""
quoted = _quote(get_valid_utf8_str(value), safe)
if isinstance(value, six.binary_type):
quoted = quoted.encode('utf-8')
return quoted
def get_expirer_container(x_delete_at, expirer_divisor, acc, cont, obj):
"""
Returns an expiring object container name for given X-Delete-At and
(native string) a/c/o.
"""
shard_int = int(hash_path(acc, cont, obj), 16) % 100
return normalize_delete_at_timestamp(
int(x_delete_at) // expirer_divisor * expirer_divisor - shard_int)
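# Illustrative sketch (added note, not from the original source): the container
# name is X-Delete-At rounded down to a multiple of expirer_divisor, minus a
# 0-99 offset derived from hash_path(acc, cont, obj), so expiring objects are
# spread across up to 100 containers per divisor window. The exact value
# depends on the configured hash path prefix/suffix.
#   >>> name = get_expirer_container(1700003000, 86400, 'AUTH_test', 'c', 'o')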
class _MultipartMimeFileLikeObject(object):
def __init__(self, wsgi_input, boundary, input_buffer, read_chunk_size):
self.no_more_data_for_this_file = False
self.no_more_files = False
self.wsgi_input = wsgi_input
self.boundary = boundary
self.input_buffer = input_buffer
self.read_chunk_size = read_chunk_size
def read(self, length=None):
if not length:
length = self.read_chunk_size
if self.no_more_data_for_this_file:
return b''
# read enough data to know whether we're going to run
# into a boundary in next [length] bytes
if len(self.input_buffer) < length + len(self.boundary) + 2:
to_read = length + len(self.boundary) + 2
while to_read > 0:
try:
chunk = self.wsgi_input.read(to_read)
except (IOError, ValueError) as e:
raise swift.common.exceptions.ChunkReadError(str(e))
to_read -= len(chunk)
self.input_buffer += chunk
if not chunk:
self.no_more_files = True
break
boundary_pos = self.input_buffer.find(self.boundary)
# boundary does not exist in the next (length) bytes
if boundary_pos == -1 or boundary_pos > length:
ret = self.input_buffer[:length]
self.input_buffer = self.input_buffer[length:]
# if it does, just return data up to the boundary
else:
ret, self.input_buffer = self.input_buffer.split(self.boundary, 1)
self.no_more_files = self.input_buffer.startswith(b'--')
self.no_more_data_for_this_file = True
self.input_buffer = self.input_buffer[2:]
return ret
def readline(self):
if self.no_more_data_for_this_file:
return b''
boundary_pos = newline_pos = -1
while newline_pos < 0 and boundary_pos < 0:
try:
chunk = self.wsgi_input.read(self.read_chunk_size)
except (IOError, ValueError) as e:
raise swift.common.exceptions.ChunkReadError(str(e))
self.input_buffer += chunk
newline_pos = self.input_buffer.find(b'\r\n')
boundary_pos = self.input_buffer.find(self.boundary)
if not chunk:
self.no_more_files = True
break
# found a newline
if newline_pos >= 0 and \
(boundary_pos < 0 or newline_pos < boundary_pos):
# Use self.read to ensure any logic there happens...
ret = b''
to_read = newline_pos + 2
while to_read > 0:
chunk = self.read(to_read)
# Should never happen since we're reading from input_buffer,
# but just for completeness...
if not chunk:
break
to_read -= len(chunk)
ret += chunk
return ret
else: # no newlines, just return up to next boundary
return self.read(len(self.input_buffer))
def iter_multipart_mime_documents(wsgi_input, boundary, read_chunk_size=4096):
"""
Given a multi-part-mime-encoded input file object and boundary,
yield file-like objects for each part. Note that this does not
split each part into headers and body; the caller is responsible
for doing that if necessary.
:param wsgi_input: The file-like object to read from.
:param boundary: The mime boundary to separate new file-like objects on.
:returns: A generator of file-like objects for each part.
:raises MimeInvalid: if the document is malformed
"""
boundary = b'--' + boundary
blen = len(boundary) + 2 # \r\n
try:
got = wsgi_input.readline(blen)
while got == b'\r\n':
got = wsgi_input.readline(blen)
except (IOError, ValueError) as e:
raise swift.common.exceptions.ChunkReadError(str(e))
if got.strip() != boundary:
raise swift.common.exceptions.MimeInvalid(
'invalid starting boundary: wanted %r, got %r' % (boundary, got))
boundary = b'\r\n' + boundary
input_buffer = b''
done = False
while not done:
it = _MultipartMimeFileLikeObject(wsgi_input, boundary, input_buffer,
read_chunk_size)
yield it
done = it.no_more_files
input_buffer = it.input_buffer
def parse_mime_headers(doc_file):
"""
Takes a file-like object containing a MIME document and returns a
HeaderKeyDict containing the headers. The body of the message is not
consumed: the position in doc_file is left at the beginning of the body.
This function was inspired by the Python standard library's
http.client.parse_headers.
:param doc_file: binary file-like object containing a MIME document
:returns: a swift.common.swob.HeaderKeyDict containing the headers
"""
headers = []
while True:
line = doc_file.readline()
done = line in (b'\r\n', b'\n', b'')
if six.PY3:
try:
line = line.decode('utf-8')
except UnicodeDecodeError:
line = line.decode('latin1')
headers.append(line)
if done:
break
if six.PY3:
header_string = ''.join(headers)
else:
header_string = b''.join(headers)
headers = email.parser.Parser().parsestr(header_string)
return HeaderKeyDict(headers)
def mime_to_document_iters(input_file, boundary, read_chunk_size=4096):
"""
Takes a file-like object containing a multipart MIME document and
returns an iterator of (headers, body-file) tuples.
:param input_file: file-like object with the MIME doc in it
:param boundary: MIME boundary, sans dashes
(e.g. "divider", not "--divider")
:param read_chunk_size: size of strings read via input_file.read()
"""
if six.PY3 and isinstance(boundary, str):
# Since the boundary is in client-supplied headers, it can contain
        # garbage that trips us up, and we don't want a client-induced 500.
boundary = boundary.encode('latin-1', errors='replace')
doc_files = iter_multipart_mime_documents(input_file, boundary,
read_chunk_size)
for i, doc_file in enumerate(doc_files):
# this consumes the headers and leaves just the body in doc_file
headers = parse_mime_headers(doc_file)
yield (headers, doc_file)
def maybe_multipart_byteranges_to_document_iters(app_iter, content_type):
"""
Takes an iterator that may or may not contain a multipart MIME document
as well as content type and returns an iterator of body iterators.
:param app_iter: iterator that may contain a multipart MIME document
:param content_type: content type of the app_iter, used to determine
                         whether it contains a multipart document and, if
so, what the boundary is between documents
"""
content_type, params_list = parse_content_type(content_type)
if content_type != 'multipart/byteranges':
yield app_iter
return
body_file = FileLikeIter(app_iter)
boundary = dict(params_list)['boundary']
for _headers, body in mime_to_document_iters(body_file, boundary):
yield (chunk for chunk in iter(lambda: body.read(65536), b''))
def document_iters_to_multipart_byteranges(ranges_iter, boundary):
"""
Takes an iterator of range iters and yields a multipart/byteranges MIME
document suitable for sending as the body of a multi-range 206 response.
See document_iters_to_http_response_body for parameter descriptions.
"""
if not isinstance(boundary, bytes):
boundary = boundary.encode('ascii')
divider = b"--" + boundary + b"\r\n"
terminator = b"--" + boundary + b"--"
for range_spec in ranges_iter:
start_byte = range_spec["start_byte"]
end_byte = range_spec["end_byte"]
entity_length = range_spec.get("entity_length", "*")
content_type = range_spec["content_type"]
part_iter = range_spec["part_iter"]
if not isinstance(content_type, bytes):
content_type = str(content_type).encode('utf-8')
if not isinstance(entity_length, bytes):
entity_length = str(entity_length).encode('utf-8')
part_header = b''.join((
divider,
b"Content-Type: ", content_type, b"\r\n",
b"Content-Range: ", b"bytes %d-%d/%s\r\n" % (
start_byte, end_byte, entity_length),
b"\r\n"
))
yield part_header
for chunk in part_iter:
yield chunk
yield b"\r\n"
yield terminator
def document_iters_to_http_response_body(ranges_iter, boundary, multipart,
logger):
"""
Takes an iterator of range iters and turns it into an appropriate
HTTP response body, whether that's multipart/byteranges or not.
This is almost, but not quite, the inverse of
request_helpers.http_response_to_document_iters(). This function only
yields chunks of the body, not any headers.
:param ranges_iter: an iterator of dictionaries, one per range.
Each dictionary must contain at least the following key:
"part_iter": iterator yielding the bytes in the range
Additionally, if multipart is True, then the following other keys
are required:
"start_byte": index of the first byte in the range
"end_byte": index of the last byte in the range
"content_type": value for the range's Content-Type header
Finally, there is one optional key that is used in the
multipart/byteranges case:
"entity_length": length of the requested entity (not necessarily
equal to the response length). If omitted, "*" will be used.
Each part_iter will be exhausted prior to calling next(ranges_iter).
:param boundary: MIME boundary to use, sans dashes (e.g. "boundary", not
"--boundary").
:param multipart: True if the response should be multipart/byteranges,
False otherwise. This should be True if and only if you have 2 or
more ranges.
:param logger: a logger
"""
if multipart:
return document_iters_to_multipart_byteranges(ranges_iter, boundary)
else:
try:
response_body_iter = next(ranges_iter)['part_iter']
except StopIteration:
return ''
# We need to make sure ranges_iter does not get garbage-collected
# before response_body_iter is exhausted. The reason is that
# ranges_iter has a finally block that calls close_swift_conn, and
# so if that finally block fires before we read response_body_iter,
# there's nothing there.
def string_along(useful_iter, useless_iter_iter, logger):
with closing_if_possible(useful_iter):
for x in useful_iter:
yield x
try:
next(useless_iter_iter)
except StopIteration:
pass
else:
logger.warning(
"More than one part in a single-part response?")
return string_along(response_body_iter, ranges_iter, logger)
def multipart_byteranges_to_document_iters(input_file, boundary,
read_chunk_size=4096):
"""
Takes a file-like object containing a multipart/byteranges MIME document
(see RFC 7233, Appendix A) and returns an iterator of (first-byte,
last-byte, length, document-headers, body-file) 5-tuples.
:param input_file: file-like object with the MIME doc in it
:param boundary: MIME boundary, sans dashes
(e.g. "divider", not "--divider")
:param read_chunk_size: size of strings read via input_file.read()
"""
for headers, body in mime_to_document_iters(input_file, boundary,
read_chunk_size):
first_byte, last_byte, length = parse_content_range(
headers.get('content-range'))
yield (first_byte, last_byte, length, headers.items(), body)
#: Regular expression to match form attributes.
ATTRIBUTES_RE = re.compile(r'(\w+)=(".*?"|[^";]+)(; ?|$)')
def parse_content_disposition(header):
"""
Given the value of a header like:
Content-Disposition: form-data; name="somefile"; filename="test.html"
Return data like
("form-data", {"name": "somefile", "filename": "test.html"})
:param header: Value of a header (the part after the ': ').
:returns: (value name, dict) of the attribute data parsed (see above).
"""
attributes = {}
attrs = ''
if ';' in header:
header, attrs = [x.strip() for x in header.split(';', 1)]
m = True
while m:
m = ATTRIBUTES_RE.match(attrs)
if m:
attrs = attrs[len(m.group(0)):]
attributes[m.group(1)] = m.group(2).strip('"')
return header, attributes
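# Illustrative, doctest-style (added example, not from the original source):
#   >>> parse_content_disposition(
#   ...     'form-data; name="somefile"; filename="test.html"')
#   ('form-data', {'name': 'somefile', 'filename': 'test.html'})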
try:
_test_md5 = hashlib.md5(usedforsecurity=False) # nosec
def md5(string=b'', usedforsecurity=True):
"""Return an md5 hashlib object using usedforsecurity parameter
For python distributions that support the usedforsecurity keyword
parameter, this passes the parameter through as expected.
See https://bugs.python.org/issue9216
"""
return hashlib.md5(string, usedforsecurity=usedforsecurity) # nosec
except TypeError:
def md5(string=b'', usedforsecurity=True):
"""Return an md5 hashlib object without usedforsecurity parameter
For python distributions that do not yet support this keyword
parameter, we drop the parameter
"""
return hashlib.md5(string) # nosec
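# Illustrative usage (added example, not from the original source): callers
# hashing for non-security purposes (e.g. ETags, ring placement) pass
# usedforsecurity=False so the call also works on FIPS-enabled builds where a
# plain hashlib.md5() call may be rejected.
#   >>> digest = md5(b'some bytes', usedforsecurity=False).hexdigest()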
class NamespaceOuterBound(object):
"""
A custom singleton type to be subclassed for the outer bounds of
Namespaces.
"""
_singleton = None
def __new__(cls):
if cls is NamespaceOuterBound:
raise TypeError('NamespaceOuterBound is an abstract class; '
'only subclasses should be instantiated')
if cls._singleton is None:
cls._singleton = super(NamespaceOuterBound, cls).__new__(cls)
return cls._singleton
def __str__(self):
return ''
def __repr__(self):
return type(self).__name__
def __bool__(self):
return False
__nonzero__ = __bool__
@functools.total_ordering
class Namespace(object):
"""
A Namespace encapsulates parameters that define a range of the object
namespace.
:param name: the name of the ``Namespace``.
:param lower: the lower bound of object names contained in the namespace;
the lower bound *is not* included in the namespace.
:param upper: the upper bound of object names contained in the namespace;
the upper bound *is* included in the namespace.
"""
__slots__ = ('_lower', '_upper', 'name')
@functools.total_ordering
class MaxBound(NamespaceOuterBound):
# singleton for maximum bound
def __ge__(self, other):
return True
@functools.total_ordering
class MinBound(NamespaceOuterBound):
# singleton for minimum bound
def __le__(self, other):
return True
MIN = MinBound()
MAX = MaxBound()
def __init__(self, name, lower, upper):
self._lower = Namespace.MIN
self._upper = Namespace.MAX
self.lower = lower
self.upper = upper
self.name = name
def __iter__(self):
yield 'name', str(self.name)
yield 'lower', self.lower_str
yield 'upper', self.upper_str
def __repr__(self):
return '%s(%s)' % (self.__class__.__name__, ', '.join(
'%s=%r' % prop for prop in self))
def __lt__(self, other):
# a Namespace is less than other if its entire namespace is less than
# other; if other is another Namespace that implies that this
# Namespace's upper must be less than or equal to the other
# Namespace's lower
if self.upper == Namespace.MAX:
return False
if isinstance(other, Namespace):
return self.upper <= other.lower
elif other is None:
return True
else:
return self.upper < self._encode(other)
def __gt__(self, other):
# a Namespace is greater than other if its entire namespace is greater
# than other; if other is another Namespace that implies that this
        # Namespace's lower must be greater than or equal to the other
# Namespace's upper
if self.lower == Namespace.MIN:
return False
if isinstance(other, Namespace):
return self.lower >= other.upper
elif other is None:
return False
else:
return self.lower >= self._encode(other)
def __eq__(self, other):
# test for equality of range bounds only
if not isinstance(other, Namespace):
return False
return self.lower == other.lower and self.upper == other.upper
def __ne__(self, other):
return not (self == other)
def __contains__(self, item):
# test if the given item is within the namespace
if item == '':
return False
item = self._encode_bound(item)
return self.lower < item <= self.upper
@classmethod
def _encode(cls, value):
if six.PY2 and isinstance(value, six.text_type):
return value.encode('utf-8')
if six.PY3 and isinstance(value, six.binary_type):
# This should never fail -- the value should always be coming from
# valid swift paths, which means UTF-8
return value.decode('utf-8')
return value
def _encode_bound(self, bound):
if isinstance(bound, NamespaceOuterBound):
return bound
if not (isinstance(bound, six.text_type) or
isinstance(bound, six.binary_type)):
raise TypeError('must be a string type')
return self._encode(bound)
@property
def lower(self):
return self._lower
@property
def lower_str(self):
return str(self.lower)
@lower.setter
def lower(self, value):
if value is None or (value == b"" if isinstance(value, bytes) else
value == u""):
value = Namespace.MIN
try:
value = self._encode_bound(value)
except TypeError as err:
raise TypeError('lower %s' % err)
if value > self._upper:
raise ValueError(
'lower (%r) must be less than or equal to upper (%r)' %
(value, self.upper))
self._lower = value
@property
def upper(self):
return self._upper
@property
def upper_str(self):
return str(self.upper)
@upper.setter
def upper(self, value):
if value is None or (value == b"" if isinstance(value, bytes) else
value == u""):
value = Namespace.MAX
try:
value = self._encode_bound(value)
except TypeError as err:
raise TypeError('upper %s' % err)
if value < self._lower:
raise ValueError(
'upper (%r) must be greater than or equal to lower (%r)' %
(value, self.lower))
self._upper = value
@property
def end_marker(self):
return self.upper_str + '\x00' if self.upper else ''
def entire_namespace(self):
"""
Returns True if this namespace includes the entire namespace, False
otherwise.
"""
return (self.lower == Namespace.MIN and
self.upper == Namespace.MAX)
def overlaps(self, other):
"""
Returns True if this namespace overlaps with the other namespace.
:param other: an instance of :class:`~swift.common.utils.Namespace`
"""
if not isinstance(other, Namespace):
return False
return max(self.lower, other.lower) < min(self.upper, other.upper)
def includes(self, other):
"""
Returns True if this namespace includes the whole of the other
namespace, False otherwise.
:param other: an instance of :class:`~swift.common.utils.Namespace`
"""
return (self.lower <= other.lower) and (other.upper <= self.upper)
def expand(self, donors):
"""
Expands the bounds as necessary to match the minimum and maximum bounds
of the given donors.
:param donors: A list of :class:`~swift.common.utils.Namespace`
:return: True if the bounds have been modified, False otherwise.
"""
modified = False
new_lower = self.lower
new_upper = self.upper
for donor in donors:
new_lower = min(new_lower, donor.lower)
new_upper = max(new_upper, donor.upper)
if self.lower > new_lower or self.upper < new_upper:
self.lower = new_lower
self.upper = new_upper
modified = True
return modified
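# Illustrative usage (added example, not from the original source): a
# Namespace excludes its lower bound and includes its upper bound.
#   >>> ns = Namespace('a/c', 'cat', 'dog')
#   >>> ('cat' in ns, 'cheese' in ns, 'dog' in ns, 'dove' in ns)
#   (False, True, True, False)
#   >>> ns.overlaps(Namespace('a/c2', 'dog', 'emu'))
#   False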
class NamespaceBoundList(object):
def __init__(self, bounds):
"""
Encapsulate a compact representation of namespaces. Each item in the
list is a list [lower bound, name].
:param bounds: a list of lists ``[lower bound, name]``. The list
should be ordered by ``lower bound``.
"""
self.bounds = [] if bounds is None else bounds
def __eq__(self, other):
# test for equality of NamespaceBoundList objects only
if not isinstance(other, NamespaceBoundList):
return False
return self.bounds == other.bounds
@classmethod
def parse(cls, namespaces):
"""
Create a NamespaceBoundList object by parsing a list of Namespaces or
shard ranges and only storing the compact bounds list.
Each Namespace in the given list of ``namespaces`` provides the next
[lower bound, name] list to append to the NamespaceBoundList. The
given ``namespaces`` should be contiguous because the
NamespaceBoundList only stores lower bounds; if ``namespaces`` has
overlaps then at least one of the overlapping namespaces may be
ignored; similarly, gaps between namespaces are not represented in the
NamespaceBoundList.
:param namespaces: A list of Namespace instances. The list should be
ordered by namespace bounds.
:return: a NamespaceBoundList.
"""
if not namespaces:
return None
bounds = []
upper = namespaces[0].lower
for ns in namespaces:
if ns.lower < upper:
# Discard overlapping namespace.
# Overlapping namespaces are expected in lists of shard ranges
# fetched from the backend. For example, while a parent
# container is in the process of sharding, the parent shard
# range and its children shard ranges may be returned in the
# list of shard ranges. However, the backend sorts the list by
# (upper, state, lower, name) such that the children precede
# the parent, and it is the children that we prefer to retain
# in the NamespaceBoundList. For example, these namespaces:
# (a-b, "child1"), (b-c, "child2"), (a-c, "parent")
# would result in a NamespaceBoundList:
# (a, "child1"), (b, "child2")
# Unexpected overlaps or gaps may result in namespaces being
# 'extended' because only lower bounds are stored. For example,
# these namespaces:
# (a-b, "ns1"), (d-e, "ns2")
# would result in a NamespaceBoundList:
# (a, "ns1"), (d, "ns2")
# When used to find a target namespace for an object update
# that lies in a gap, the NamespaceBoundList will map the
# object name to the preceding namespace. In the example, an
# object named "c" would be mapped to "ns1". (In previous
# versions, an object update lying in a gap would have been
# mapped to the root container.)
continue
bounds.append([ns.lower_str, str(ns.name)])
upper = ns.upper
return cls(bounds)
def get_namespace(self, item):
"""
Get a Namespace instance that contains ``item`` by bisecting on the
        lower bounds directly. This function is used on performance-sensitive
        paths, for example, '_get_update_shard' in the proxy object controller.
        For normal paths, convert NamespaceBoundList to a list of Namespaces and
use `~swift.common.utils.find_namespace` or
`~swift.common.utils.filter_namespaces`.
        :param item: The item for which a Namespace is to be found.
:return: the Namespace that contains ``item``.
"""
pos = bisect.bisect(self.bounds, [item]) - 1
lower, name = self.bounds[pos]
upper = ('' if pos + 1 == len(self.bounds)
else self.bounds[pos + 1][0])
return Namespace(name, lower, upper)
def get_namespaces(self):
"""
Get the contained namespaces as a list of contiguous Namespaces ordered
by lower bound.
:return: A list of Namespace objects which are ordered by
``lower bound``.
"""
if not self.bounds:
return []
namespaces = []
num_ns = len(self.bounds)
for i in range(num_ns):
lower, name = self.bounds[i]
upper = ('' if i + 1 == num_ns else self.bounds[i + 1][0])
namespaces.append(Namespace(name, lower, upper))
return namespaces
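# Illustrative sketch (added example, not from the original source): parse
# contiguous namespaces into a compact bounds list, then find which namespace
# owns a given object name.
#   >>> nbl = NamespaceBoundList.parse([
#   ...     Namespace('.shards_a/c1', '', 'm'),
#   ...     Namespace('.shards_a/c2', 'm', '')])
#   >>> nbl.bounds
#   [['', '.shards_a/c1'], ['m', '.shards_a/c2']]
#   >>> nbl.get_namespace('kitten').name
#   '.shards_a/c1'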
class ShardName(object):
"""
Encapsulates the components of a shard name.
Instances of this class would typically be constructed via the create() or
parse() class methods.
Shard names have the form:
<account>/<root_container>-<parent_container_hash>-<timestamp>-<index>
Note: some instances of :class:`~swift.common.utils.ShardRange` have names
that will NOT parse as a :class:`~swift.common.utils.ShardName`; e.g. a
root container's own shard range will have a name format of
<account>/<root_container> which will raise ValueError if passed to parse.
"""
def __init__(self, account, root_container,
parent_container_hash,
timestamp,
index):
self.account = self._validate(account)
self.root_container = self._validate(root_container)
self.parent_container_hash = self._validate(parent_container_hash)
self.timestamp = Timestamp(timestamp)
self.index = int(index)
@classmethod
def _validate(cls, arg):
if arg is None:
raise ValueError('arg must not be None')
return arg
def __str__(self):
return '%s/%s-%s-%s-%s' % (self.account,
self.root_container,
self.parent_container_hash,
self.timestamp.internal,
self.index)
@classmethod
def hash_container_name(cls, container_name):
"""
Calculates the hash of a container name.
:param container_name: name to be hashed.
:return: the hexdigest of the md5 hash of ``container_name``.
:raises ValueError: if ``container_name`` is None.
"""
cls._validate(container_name)
if not isinstance(container_name, bytes):
container_name = container_name.encode('utf-8')
hash = md5(container_name, usedforsecurity=False).hexdigest()
return hash
@classmethod
def create(cls, account, root_container, parent_container,
timestamp, index):
"""
Create an instance of :class:`~swift.common.utils.ShardName`.
:param account: the hidden internal account to which the shard
container belongs.
:param root_container: the name of the root container for the shard.
:param parent_container: the name of the parent container for the
shard; for initial first generation shards this should be the same
as ``root_container``; for shards of shards this should be the name
of the sharding shard container.
:param timestamp: an instance of :class:`~swift.common.utils.Timestamp`
:param index: a unique index that will distinguish the path from any
other path generated using the same combination of
``account``, ``root_container``, ``parent_container`` and
``timestamp``.
:return: an instance of :class:`~swift.common.utils.ShardName`.
:raises ValueError: if any argument is None
"""
# we make the shard name unique with respect to other shards names by
# embedding a hash of the parent container name; we use a hash (rather
        # than the actual parent container name) to prevent shard names from
        # becoming longer with every generation.
parent_container_hash = cls.hash_container_name(parent_container)
return cls(account, root_container, parent_container_hash, timestamp,
index)
@classmethod
def parse(cls, name):
"""
Parse ``name`` to an instance of
:class:`~swift.common.utils.ShardName`.
:param name: a shard name which should have the form:
<account>/
<root_container>-<parent_container_hash>-<timestamp>-<index>
:return: an instance of :class:`~swift.common.utils.ShardName`.
:raises ValueError: if ``name`` is not a valid shard name.
"""
try:
account, container = name.split('/', 1)
root_container, parent_container_hash, timestamp, index = \
container.rsplit('-', 3)
return cls(account, root_container, parent_container_hash,
timestamp, index)
except ValueError:
raise ValueError('invalid name: %s' % name)
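# Illustrative sketch (added example, not from the original source): create()
# and parse() round-trip; the parent container is stored only as a hash so
# shard names stay a fixed length across generations.
#   >>> name = ShardName.create('.shards_a', 'root_c', 'root_c',
#   ...                         Timestamp(1577836800.0), 0)
#   >>> parsed = ShardName.parse(str(name))
#   >>> (parsed.root_container, parsed.index)
#   ('root_c', 0)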
class ShardRange(Namespace):
"""
A ShardRange encapsulates sharding state related to a container including
lower and upper bounds that define the object namespace for which the
container is responsible.
Shard ranges may be persisted in a container database. Timestamps
associated with subsets of the shard range attributes are used to resolve
conflicts when a shard range needs to be merged with an existing shard
range record and the most recent version of an attribute should be
persisted.
:param name: the name of the shard range; this should take the form of a
path to a container i.e. <account_name>/<container_name>.
:param timestamp: a timestamp that represents the time at which the
shard range's ``lower``, ``upper`` or ``deleted`` attributes were
last modified.
:param lower: the lower bound of object names contained in the shard range;
the lower bound *is not* included in the shard range namespace.
:param upper: the upper bound of object names contained in the shard range;
the upper bound *is* included in the shard range namespace.
:param object_count: the number of objects in the shard range; defaults to
zero.
:param bytes_used: the number of bytes in the shard range; defaults to
zero.
:param meta_timestamp: a timestamp that represents the time at which the
shard range's ``object_count`` and ``bytes_used`` were last updated;
defaults to the value of ``timestamp``.
:param deleted: a boolean; if True the shard range is considered to be
deleted.
:param state: the state; must be one of ShardRange.STATES; defaults to
CREATED.
:param state_timestamp: a timestamp that represents the time at which
``state`` was forced to its current value; defaults to the value of
``timestamp``. This timestamp is typically not updated with every
change of ``state`` because in general conflicts in ``state``
attributes are resolved by choosing the larger ``state`` value.
However, when this rule does not apply, for example when changing state
from ``SHARDED`` to ``ACTIVE``, the ``state_timestamp`` may be advanced
so that the new ``state`` value is preferred over any older ``state``
value.
:param epoch: optional epoch timestamp which represents the time at which
sharding was enabled for a container.
:param reported: optional indicator that this shard and its stats have
been reported to the root container.
:param tombstones: the number of tombstones in the shard range; defaults to
-1 to indicate that the value is unknown.
"""
FOUND = 10
CREATED = 20
CLEAVED = 30
ACTIVE = 40
SHRINKING = 50
SHARDING = 60
SHARDED = 70
SHRUNK = 80
STATES = {FOUND: 'found',
CREATED: 'created',
CLEAVED: 'cleaved',
ACTIVE: 'active',
SHRINKING: 'shrinking',
SHARDING: 'sharding',
SHARDED: 'sharded',
SHRUNK: 'shrunk'}
STATES_BY_NAME = dict((v, k) for k, v in STATES.items())
SHRINKING_STATES = (SHRINKING, SHRUNK)
SHARDING_STATES = (SHARDING, SHARDED)
CLEAVING_STATES = SHRINKING_STATES + SHARDING_STATES
__slots__ = (
'account', 'container',
'_timestamp', '_meta_timestamp', '_state_timestamp', '_epoch',
'_deleted', '_state', '_count', '_bytes',
'_tombstones', '_reported')
def __init__(self, name, timestamp=0,
lower=Namespace.MIN, upper=Namespace.MAX,
object_count=0, bytes_used=0, meta_timestamp=None,
deleted=False, state=None, state_timestamp=None, epoch=None,
reported=False, tombstones=-1, **kwargs):
super(ShardRange, self).__init__(name=name, lower=lower, upper=upper)
self.account = self.container = self._timestamp = \
self._meta_timestamp = self._state_timestamp = self._epoch = None
self._deleted = False
self._state = None
self.name = name
self.timestamp = timestamp
self.deleted = deleted
self.object_count = object_count
self.bytes_used = bytes_used
self.meta_timestamp = meta_timestamp
self.state = self.FOUND if state is None else state
self.state_timestamp = state_timestamp
self.epoch = epoch
self.reported = reported
self.tombstones = tombstones
@classmethod
def sort_key(cls, sr):
# defines the sort order for shard ranges
# note if this ever changes to *not* sort by upper first then it breaks
# a key assumption for bisect, which is used by utils.find_namespace
# with shard ranges.
return sr.upper, sr.state, sr.lower, sr.name
def is_child_of(self, parent):
"""
Test if this shard range is a child of another shard range. The
parent-child relationship is inferred from the names of the shard
ranges. This method is limited to work only within the scope of the
same user-facing account (with and without shard prefix).
:param parent: an instance of ``ShardRange``.
:return: True if ``parent`` is the parent of this shard range, False
otherwise, assuming that they are within the same account.
"""
# note: We limit the usages of this method to be within the same
# account, because account shard prefix is configurable and it's hard
# to perform checking without breaking backward-compatibility.
try:
self_parsed_name = ShardName.parse(self.name)
except ValueError:
# self is not a shard and therefore not a child.
return False
try:
parsed_parent_name = ShardName.parse(parent.name)
parent_root_container = parsed_parent_name.root_container
except ValueError:
# parent is a root container.
parent_root_container = parent.container
return (
self_parsed_name.root_container == parent_root_container
and self_parsed_name.parent_container_hash
== ShardName.hash_container_name(parent.container)
)
def _find_root(self, parsed_name, shard_ranges):
for sr in shard_ranges:
if parsed_name.root_container == sr.container:
return sr
return None
def find_root(self, shard_ranges):
"""
Find this shard range's root shard range in the given ``shard_ranges``.
:param shard_ranges: a list of instances of
:class:`~swift.common.utils.ShardRange`
:return: this shard range's root shard range if it is found in the
list, otherwise None.
"""
try:
self_parsed_name = ShardName.parse(self.name)
except ValueError:
# not a shard
return None
return self._find_root(self_parsed_name, shard_ranges)
def find_ancestors(self, shard_ranges):
"""
Find this shard range's ancestor ranges in the given ``shard_ranges``.
This method makes a best-effort attempt to identify this shard range's
parent shard range, the parent's parent, etc., up to and including the
root shard range. It is only possible to directly identify the parent
of a particular shard range, so the search is recursive; if any member
of the ancestry is not found then the search ends and older ancestors
that may be in the list are not identified. The root shard range,
however, will always be identified if it is present in the list.
For example, given a list that contains parent, grandparent,
great-great-grandparent and root shard ranges, but is missing the
great-grandparent shard range, only the parent, grand-parent and root
shard ranges will be identified.
:param shard_ranges: a list of instances of
:class:`~swift.common.utils.ShardRange`
:return: a list of instances of
:class:`~swift.common.utils.ShardRange` containing items in the
given ``shard_ranges`` that can be identified as ancestors of this
shard range. The list may not be complete if there are gaps in the
ancestry, but is guaranteed to contain at least the parent and
root shard ranges if they are present.
"""
if not shard_ranges:
return []
try:
self_parsed_name = ShardName.parse(self.name)
except ValueError:
# not a shard
return []
ancestors = []
for sr in shard_ranges:
if self.is_child_of(sr):
ancestors.append(sr)
break
if ancestors:
ancestors.extend(ancestors[0].find_ancestors(shard_ranges))
else:
root_sr = self._find_root(self_parsed_name, shard_ranges)
if root_sr:
ancestors.append(root_sr)
return ancestors
@classmethod
def make_path(cls, shards_account, root_container, parent_container,
timestamp, index):
"""
Returns a path for a shard container that is valid to use as a name
when constructing a :class:`~swift.common.utils.ShardRange`.
:param shards_account: the hidden internal account to which the shard
container belongs.
:param root_container: the name of the root container for the shard.
:param parent_container: the name of the parent container for the
shard; for initial first generation shards this should be the same
as ``root_container``; for shards of shards this should be the name
of the sharding shard container.
:param timestamp: an instance of :class:`~swift.common.utils.Timestamp`
:param index: a unique index that will distinguish the path from any
other path generated using the same combination of
``shards_account``, ``root_container``, ``parent_container`` and
``timestamp``.
:return: a string of the form <account_name>/<container_name>
"""
timestamp = cls._to_timestamp(timestamp)
return str(ShardName.create(shards_account,
root_container,
parent_container,
timestamp,
index))
@classmethod
def _to_timestamp(cls, timestamp):
if timestamp is None or isinstance(timestamp, Timestamp):
return timestamp
return Timestamp(timestamp)
@property
def name(self):
return '%s/%s' % (self.account, self.container)
@name.setter
def name(self, path):
path = self._encode(path)
if not path or len(path.split('/')) != 2 or not all(path.split('/')):
raise ValueError(
"Name must be of the form '<account>/<container>', got %r" %
path)
self.account, self.container = path.split('/')
@property
def timestamp(self):
return self._timestamp
@timestamp.setter
def timestamp(self, ts):
if ts is None:
raise TypeError('timestamp cannot be None')
self._timestamp = self._to_timestamp(ts)
@property
def meta_timestamp(self):
if self._meta_timestamp is None:
return self.timestamp
return self._meta_timestamp
@meta_timestamp.setter
def meta_timestamp(self, ts):
self._meta_timestamp = self._to_timestamp(ts)
@property
def object_count(self):
return self._count
@object_count.setter
def object_count(self, count):
count = int(count)
if count < 0:
raise ValueError('object_count cannot be < 0')
self._count = count
@property
def bytes_used(self):
return self._bytes
@bytes_used.setter
def bytes_used(self, bytes_used):
bytes_used = int(bytes_used)
if bytes_used < 0:
raise ValueError('bytes_used cannot be < 0')
self._bytes = bytes_used
@property
def tombstones(self):
return self._tombstones
@tombstones.setter
def tombstones(self, tombstones):
self._tombstones = int(tombstones)
@property
def row_count(self):
"""
Returns the total number of rows in the shard range i.e. the sum of
objects and tombstones.
:return: the row count
"""
return self.object_count + max(self.tombstones, 0)
def update_meta(self, object_count, bytes_used, meta_timestamp=None):
"""
Set the object stats metadata to the given values and update the
meta_timestamp to the current time.
:param object_count: should be an integer
:param bytes_used: should be an integer
:param meta_timestamp: timestamp for metadata; if not given the
current time will be set.
:raises ValueError: if ``object_count`` or ``bytes_used`` cannot be
cast to an int, or if meta_timestamp is neither None nor can be
cast to a :class:`~swift.common.utils.Timestamp`.
"""
if self.object_count != int(object_count):
self.object_count = int(object_count)
self.reported = False
if self.bytes_used != int(bytes_used):
self.bytes_used = int(bytes_used)
self.reported = False
if meta_timestamp is None:
self.meta_timestamp = Timestamp.now()
else:
self.meta_timestamp = meta_timestamp
def update_tombstones(self, tombstones, meta_timestamp=None):
"""
Set the tombstones metadata to the given values and update the
meta_timestamp to the current time.
:param tombstones: should be an integer
:param meta_timestamp: timestamp for metadata; if not given the
current time will be set.
:raises ValueError: if ``tombstones`` cannot be cast to an int, or
if meta_timestamp is neither None nor can be cast to a
:class:`~swift.common.utils.Timestamp`.
"""
tombstones = int(tombstones)
if 0 <= tombstones != self.tombstones:
self.tombstones = tombstones
self.reported = False
if meta_timestamp is None:
self.meta_timestamp = Timestamp.now()
else:
self.meta_timestamp = meta_timestamp
def increment_meta(self, object_count, bytes_used):
"""
Increment the object stats metadata by the given values and update the
meta_timestamp to the current time.
:param object_count: should be an integer
:param bytes_used: should be an integer
:raises ValueError: if ``object_count`` or ``bytes_used`` cannot be
cast to an int.
"""
self.update_meta(self.object_count + int(object_count),
self.bytes_used + int(bytes_used))
@classmethod
def resolve_state(cls, state):
"""
Given a value that may be either the name or the number of a state
return a tuple of (state number, state name).
:param state: Either a string state name or an integer state number.
:return: A tuple (state number, state name)
:raises ValueError: if ``state`` is neither a valid state name nor a
valid state number.
"""
try:
try:
# maybe it's a number
float_state = float(state)
state_num = int(float_state)
if state_num != float_state:
raise ValueError('Invalid state %r' % state)
state_name = cls.STATES[state_num]
except (ValueError, TypeError):
# maybe it's a state name
state_name = state.lower()
state_num = cls.STATES_BY_NAME[state_name]
except (KeyError, AttributeError):
raise ValueError('Invalid state %r' % state)
return state_num, state_name
@property
def state(self):
return self._state
@state.setter
def state(self, state):
self._state = self.resolve_state(state)[0]
@property
def state_text(self):
return self.STATES[self.state]
@property
def state_timestamp(self):
if self._state_timestamp is None:
return self.timestamp
return self._state_timestamp
@state_timestamp.setter
def state_timestamp(self, ts):
self._state_timestamp = self._to_timestamp(ts)
@property
def epoch(self):
return self._epoch
@epoch.setter
def epoch(self, epoch):
self._epoch = self._to_timestamp(epoch)
@property
def reported(self):
return self._reported
@reported.setter
def reported(self, value):
self._reported = bool(value)
def update_state(self, state, state_timestamp=None):
"""
Set state to the given value and optionally update the state_timestamp
to the given time.
:param state: new state, should be an integer
:param state_timestamp: timestamp for state; if not given the
state_timestamp will not be changed.
:return: True if the state or state_timestamp was changed, False
otherwise
"""
if state_timestamp is None and self.state == state:
return False
self.state = state
if state_timestamp is not None:
self.state_timestamp = state_timestamp
self.reported = False
return True
@property
def deleted(self):
return self._deleted
@deleted.setter
def deleted(self, value):
self._deleted = bool(value)
def set_deleted(self, timestamp=None):
"""
Mark the shard range deleted and set timestamp to the current time.
:param timestamp: optional timestamp to set; if not given the
current time will be set.
:return: True if the deleted attribute or timestamp was changed, False
otherwise
"""
if timestamp is None and self.deleted:
return False
self.deleted = True
self.timestamp = timestamp or Timestamp.now()
return True
# A by-the-book implementation should probably hash the value, which
# in our case would be account+container+lower+upper (+timestamp ?).
# But we seem to be okay with just the identity.
def __hash__(self):
return id(self)
def __repr__(self):
return '%s<%r to %r as of %s, (%d, %d) as of %s, %s as of %s>' % (
self.__class__.__name__, self.lower, self.upper,
self.timestamp.internal, self.object_count, self.bytes_used,
self.meta_timestamp.internal, self.state_text,
self.state_timestamp.internal)
def __iter__(self):
yield 'name', self.name
yield 'timestamp', self.timestamp.internal
yield 'lower', str(self.lower)
yield 'upper', str(self.upper)
yield 'object_count', self.object_count
yield 'bytes_used', self.bytes_used
yield 'meta_timestamp', self.meta_timestamp.internal
yield 'deleted', 1 if self.deleted else 0
yield 'state', self.state
yield 'state_timestamp', self.state_timestamp.internal
yield 'epoch', self.epoch.internal if self.epoch is not None else None
yield 'reported', 1 if self.reported else 0
yield 'tombstones', self.tombstones
def copy(self, timestamp=None, **kwargs):
"""
Creates a copy of the ShardRange.
:param timestamp: (optional) If given, the returned ShardRange will
have all of its timestamps set to this value. Otherwise the
returned ShardRange will have the original timestamps.
:return: an instance of :class:`~swift.common.utils.ShardRange`
"""
new = ShardRange.from_dict(dict(self, **kwargs))
if timestamp:
new.timestamp = timestamp
new.meta_timestamp = new.state_timestamp = None
return new
@classmethod
def from_dict(cls, params):
"""
Return an instance constructed using the given dict of params. This
method is deliberately less flexible than the class `__init__()` method
and requires all of the `__init__()` args to be given in the dict of
params.
:param params: a dict of parameters
:return: an instance of this class
"""
return cls(
params['name'], params['timestamp'], params['lower'],
params['upper'], params['object_count'], params['bytes_used'],
params['meta_timestamp'], params['deleted'], params['state'],
params['state_timestamp'], params['epoch'],
params.get('reported', 0), params.get('tombstones', -1))
class ShardRangeList(UserList):
"""
This class provides some convenience functions for working with lists of
:class:`~swift.common.utils.ShardRange`.
This class does not enforce ordering or continuity of the list items:
callers should ensure that items are added in order as appropriate.
"""
def __getitem__(self, index):
# workaround for py3 - not needed for py2.7,py3.8
result = self.data[index]
return ShardRangeList(result) if type(result) == list else result
@property
def lower(self):
"""
Returns the lower bound of the first item in the list. Note: this will
only be equal to the lowest bound of all items in the list if the list
        contents have been sorted.
:return: lower bound of first item in the list, or Namespace.MIN
if the list is empty.
"""
if not self:
# empty list has range MIN->MIN
return Namespace.MIN
return self[0].lower
@property
def upper(self):
"""
Returns the upper bound of the last item in the list. Note: this will
only be equal to the uppermost bound of all items in the list if the
list has previously been sorted.
:return: upper bound of last item in the list, or Namespace.MIN
if the list is empty.
"""
if not self:
# empty list has range MIN->MIN
return Namespace.MIN
return self[-1].upper
@property
def object_count(self):
"""
Returns the total number of objects of all items in the list.
:return: total object count
"""
return sum(sr.object_count for sr in self)
@property
def row_count(self):
"""
Returns the total number of rows of all items in the list.
:return: total row count
"""
return sum(sr.row_count for sr in self)
@property
def bytes_used(self):
"""
Returns the total number of bytes in all items in the list.
:return: total bytes used
"""
return sum(sr.bytes_used for sr in self)
@property
def timestamps(self):
return set(sr.timestamp for sr in self)
@property
def states(self):
return set(sr.state for sr in self)
def includes(self, other):
"""
Check if another ShardRange namespace is enclosed between the list's
``lower`` and ``upper`` properties. Note: the list's ``lower`` and
``upper`` properties will only equal the outermost bounds of all items
in the list if the list has previously been sorted.
Note: the list does not need to contain an item matching ``other`` for
this method to return True, although if the list has been sorted and
does contain an item matching ``other`` then the method will return
True.
:param other: an instance of :class:`~swift.common.utils.ShardRange`
:return: True if other's namespace is enclosed, False otherwise.
"""
return self.lower <= other.lower and self.upper >= other.upper
def filter(self, includes=None, marker=None, end_marker=None):
"""
Filter the list for those shard ranges whose namespace includes the
``includes`` name or any part of the namespace between ``marker`` and
``end_marker``. If none of ``includes``, ``marker`` or ``end_marker``
are specified then all shard ranges will be returned.
:param includes: a string; if not empty then only the shard range, if
any, whose namespace includes this string will be returned, and
``marker`` and ``end_marker`` will be ignored.
:param marker: if specified then only shard ranges whose upper bound is
greater than this value will be returned.
:param end_marker: if specified then only shard ranges whose lower
bound is less than this value will be returned.
:return: A new instance of :class:`~swift.common.utils.ShardRangeList`
containing the filtered shard ranges.
"""
return ShardRangeList(
filter_namespaces(self, includes, marker, end_marker))
def find_lower(self, condition):
"""
        Finds the first shard range that satisfies the given condition and
        returns its lower bound.
:param condition: A function that must accept a single argument of type
:class:`~swift.common.utils.ShardRange` and return True if the
shard range satisfies the condition or False otherwise.
:return: The lower bound of the first shard range to satisfy the
condition, or the ``upper`` value of this list if no such shard
range is found.
"""
for sr in self:
if condition(sr):
return sr.lower
return self.upper
def find_namespace(item, namespaces):
"""
Find a Namespace/ShardRange in given list of ``namespaces`` whose namespace
contains ``item``.
    :param item: The item for which a Namespace is to be found.
    :param namespaces: a sorted list of Namespaces.
:return: the Namespace/ShardRange whose namespace contains ``item``, or
None if no suitable Namespace is found.
"""
index = bisect.bisect_left(namespaces, item)
if index != len(namespaces) and item in namespaces[index]:
return namespaces[index]
return None
def filter_namespaces(namespaces, includes, marker, end_marker):
"""
Filter the given Namespaces/ShardRanges to those whose namespace includes
the ``includes`` name or any part of the namespace between ``marker`` and
``end_marker``. If none of ``includes``, ``marker`` or ``end_marker`` are
specified then all Namespaces will be returned.
:param namespaces: A list of :class:`~swift.common.utils.Namespace` or
:class:`~swift.common.utils.ShardRange`.
:param includes: a string; if not empty then only the Namespace,
if any, whose namespace includes this string will be returned,
        and ``marker`` and ``end_marker`` will be ignored.
:param marker: if specified then only shard ranges whose upper bound is
greater than this value will be returned.
:param end_marker: if specified then only shard ranges whose lower bound is
less than this value will be returned.
:return: A filtered list of :class:`~swift.common.utils.Namespace`.
"""
if includes:
namespace = find_namespace(includes, namespaces)
return [namespace] if namespace else []
def namespace_filter(sr):
end = start = True
if end_marker:
end = end_marker > sr.lower
if marker:
start = marker < sr.upper
return start and end
if marker or end_marker:
return list(filter(namespace_filter, namespaces))
if marker == Namespace.MAX or end_marker == Namespace.MIN:
# MIN and MAX are both Falsy so not handled by namespace_filter
return []
return namespaces
def o_tmpfile_in_path_supported(dirpath):
fd = None
try:
fd = os.open(dirpath, os.O_WRONLY | O_TMPFILE)
return True
except OSError as e:
if e.errno in (errno.EINVAL, errno.EISDIR, errno.EOPNOTSUPP):
return False
else:
raise Exception("Error on '%(path)s' while checking "
"O_TMPFILE: '%(ex)s'" %
{'path': dirpath, 'ex': e})
finally:
if fd is not None:
os.close(fd)
def o_tmpfile_in_tmpdir_supported():
return o_tmpfile_in_path_supported(gettempdir())
def safe_json_loads(value):
if value:
try:
return json.loads(value)
except (TypeError, ValueError):
pass
return None
def strict_b64decode(value, allow_line_breaks=False):
'''
Validate and decode Base64-encoded data.
The stdlib base64 module silently discards bad characters, but we often
want to treat them as an error.
:param value: some base64-encoded data
:param allow_line_breaks: if True, ignore carriage returns and newlines
:returns: the decoded data
:raises ValueError: if ``value`` is not a string, contains invalid
characters, or has insufficient padding
'''
if isinstance(value, bytes):
try:
value = value.decode('ascii')
except UnicodeDecodeError:
raise ValueError
if not isinstance(value, six.text_type):
raise ValueError
# b64decode will silently discard bad characters, but we want to
# treat them as an error
valid_chars = string.digits + string.ascii_letters + '/+'
strip_chars = '='
if allow_line_breaks:
valid_chars += '\r\n'
strip_chars += '\r\n'
if any(c not in valid_chars for c in value.strip(strip_chars)):
raise ValueError
try:
return base64.b64decode(value)
except (TypeError, binascii.Error): # (py2 error, py3 error)
raise ValueError
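# Usage sketch (illustrative)::
#
#   >>> strict_b64decode('AAAA')
#   b'\x00\x00\x00'
#
# whereas a value containing characters outside the base64 alphabet, such as
# 'AA$A', raises ValueError instead of being silently accepted.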
def cap_length(value, max_length):
if value and len(value) > max_length:
if isinstance(value, bytes):
return value[:max_length] + b'...'
else:
return value[:max_length] + '...'
return value
MD5_BLOCK_READ_BYTES = 4096
def md5_hash_for_file(fname):
"""
Get the MD5 checksum of a file.
:param fname: path to file
:returns: MD5 checksum, hex encoded
"""
with open(fname, 'rb') as f:
md5sum = md5(usedforsecurity=False)
for block in iter(lambda: f.read(MD5_BLOCK_READ_BYTES), b''):
md5sum.update(block)
return md5sum.hexdigest()
def get_partition_for_hash(hex_hash, part_power):
"""
Return partition number for given hex hash and partition power.
:param hex_hash: A hash string
:param part_power: partition power
:returns: partition number
"""
raw_hash = binascii.unhexlify(hex_hash)
part_shift = 32 - int(part_power)
return struct.unpack_from('>I', raw_hash)[0] >> part_shift
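# Worked example (illustrative): the partition is the top ``part_power`` bits
# of the first four bytes of the hash. For the md5 of an empty string::
#
#   >>> get_partition_for_hash('d41d8cd98f00b204e9800998ecf8427e', 10)
#   848
#
# because 0xd41d8cd9 >> (32 - 10) == 848.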
def get_partition_from_path(devices, path):
"""
:param devices: directory where devices are mounted (e.g. /srv/node)
:param path: full path to an object file or hashdir
:returns: the (integer) partition from the path
"""
offset_parts = devices.rstrip(os.sep).split(os.sep)
path_components = path.split(os.sep)
if offset_parts == path_components[:len(offset_parts)]:
offset = len(offset_parts)
else:
raise ValueError('Path %r is not under device dir %r' % (
path, devices))
return int(path_components[offset + 2])
def replace_partition_in_path(devices, path, part_power):
"""
Takes a path and a partition power and returns the same path, but with the
correct partition number. Most useful when increasing the partition power.
:param devices: directory where devices are mounted (e.g. /srv/node)
:param path: full path to an object file or hashdir
:param part_power: partition power to compute correct partition number
:returns: Path with re-computed partition power
"""
offset_parts = devices.rstrip(os.sep).split(os.sep)
path_components = path.split(os.sep)
if offset_parts == path_components[:len(offset_parts)]:
offset = len(offset_parts)
else:
raise ValueError('Path %r is not under device dir %r' % (
path, devices))
part = get_partition_for_hash(path_components[offset + 4], part_power)
path_components[offset + 2] = "%d" % part
return os.sep.join(path_components)
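# Illustrative sketch with a hypothetical object path: when the part power is
# increased from 9 to 10, the partition component is recomputed from the hash
# directory name, so partition 424 becomes 848 for this hash::
#
#   path = ('/srv/node/sdb/objects/424/27e/'
#           'd41d8cd98f00b204e9800998ecf8427e/1525354555.65758.data')
#   replace_partition_in_path('/srv/node', path, 10)
#   # -> '/srv/node/sdb/objects/848/27e/'
#   #    'd41d8cd98f00b204e9800998ecf8427e/1525354555.65758.data'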
def load_pkg_resource(group, uri):
if '#' in uri:
uri, name = uri.split('#', 1)
else:
name = uri
uri = 'egg:swift'
if ':' in uri:
scheme, dist = uri.split(':', 1)
scheme = scheme.lower()
else:
scheme = 'egg'
dist = uri
if scheme != 'egg':
raise TypeError('Unhandled URI scheme: %r' % scheme)
if pkg_resources:
# python < 3.8
return pkg_resources.load_entry_point(dist, group, name)
# May raise importlib.metadata.PackageNotFoundError
meta = importlib.metadata.distribution(dist)
entry_points = [ep for ep in meta.entry_points
if ep.group == group and ep.name == name]
if not entry_points:
raise ImportError("Entry point %r not found" % ((group, name),))
return entry_points[0].load()
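# Usage sketch (illustrative; the group and entry-point names below are shown
# only to illustrate the accepted URI forms and are assumptions, not an
# exhaustive list)::
#
#   load_pkg_resource('paste.filter_factory', 'egg:swift#healthcheck')
#   load_pkg_resource('paste.filter_factory', 'healthcheck')  # implies egg:swift
#   load_pkg_resource('console_scripts', 'egg:some_dist#some_tool')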
class PipeMutex(object):
"""
Mutex using a pipe. Works across both greenlets and real threads, even
at the same time.
"""
def __init__(self):
self.rfd, self.wfd = os.pipe()
# You can't create a pipe in non-blocking mode; you must set it
# later.
rflags = fcntl.fcntl(self.rfd, fcntl.F_GETFL)
fcntl.fcntl(self.rfd, fcntl.F_SETFL, rflags | os.O_NONBLOCK)
os.write(self.wfd, b'-') # start unlocked
self.owner = None
self.recursion_depth = 0
# Usually, it's an error to have multiple greenthreads all waiting
# to read the same file descriptor. It's often a sign of inadequate
# concurrency control; for example, if you have two greenthreads
# trying to use the same memcache connection, they'll end up writing
# interleaved garbage to the socket or stealing part of each others'
# responses.
#
# In this case, we have multiple greenthreads waiting on the same
# file descriptor by design. This lets greenthreads in real thread A
# wait with greenthreads in real thread B for the same mutex.
# Therefore, we must turn off eventlet's multiple-reader detection.
#
# It would be better to turn off multiple-reader detection for only
# our calls to trampoline(), but eventlet does not support that.
eventlet.debug.hub_prevent_multiple_readers(False)
def acquire(self, blocking=True):
"""
Acquire the mutex.
If called with blocking=False, returns True if the mutex was
acquired and False if it wasn't. Otherwise, blocks until the mutex
is acquired and returns True.
This lock is recursive; the same greenthread may acquire it as many
times as it wants to, though it must then release it that many times
too.
"""
current_greenthread_id = id(eventlet.greenthread.getcurrent())
if self.owner == current_greenthread_id:
self.recursion_depth += 1
return True
while True:
try:
# If there is a byte available, this will read it and remove
# it from the pipe. If not, this will raise OSError with
# errno=EAGAIN.
os.read(self.rfd, 1)
self.owner = current_greenthread_id
return True
except OSError as err:
if err.errno != errno.EAGAIN:
raise
if not blocking:
return False
# Tell eventlet to suspend the current greenthread until
# self.rfd becomes readable. This will happen when someone
# else writes to self.wfd.
eventlet.hubs.trampoline(self.rfd, read=True)
def release(self):
"""
Release the mutex.
"""
current_greenthread_id = id(eventlet.greenthread.getcurrent())
if self.owner != current_greenthread_id:
raise RuntimeError("cannot release un-acquired lock")
if self.recursion_depth > 0:
self.recursion_depth -= 1
return
self.owner = None
os.write(self.wfd, b'X')
def close(self):
"""
Close the mutex. This releases its file descriptors.
You can't use a mutex after it's been closed.
"""
if self.wfd is not None:
os.close(self.rfd)
self.rfd = None
os.close(self.wfd)
self.wfd = None
self.owner = None
self.recursion_depth = 0
def __del__(self):
# We need this so we don't leak file descriptors. Otherwise, if you
# call get_logger() and don't explicitly dispose of it by calling
# logger.logger.handlers[0].lock.close() [1], the pipe file
# descriptors are leaked.
#
# This only really comes up in tests. Swift processes tend to call
# get_logger() once and then hang on to it until they exit, but the
# test suite calls get_logger() a lot.
#
# [1] and that's a completely ridiculous thing to expect callers to
# do, so nobody does it and that's okay.
self.close()
class NoopMutex(object):
"""
"Mutex" that doesn't lock anything.
We only allow our syslog logging to be configured via UDS or UDP, neither
of which have the message-interleaving trouble you'd expect from TCP or
file handlers.
"""
def __init__(self):
# Usually, it's an error to have multiple greenthreads all waiting
# to write to the same file descriptor. It's often a sign of inadequate
# concurrency control; for example, if you have two greenthreads
# trying to use the same memcache connection, they'll end up writing
# interleaved garbage to the socket or stealing part of each others'
# responses.
#
# In this case, we have multiple greenthreads waiting on the same
# (logging) file descriptor by design. So, similar to the PipeMutex,
# we must turn off eventlet's multiple-waiter detection.
#
# It would be better to turn off multiple-reader detection for only
# the logging socket fd, but eventlet does not support that.
eventlet.debug.hub_prevent_multiple_readers(False)
def acquire(self, blocking=True):
pass
def release(self):
pass
class ThreadSafeSysLogHandler(SysLogHandler):
def createLock(self):
if config_true_value(os.environ.get(
'SWIFT_NOOP_LOGGING_MUTEX') or 'true'):
self.lock = NoopMutex()
else:
self.lock = PipeMutex()
def round_robin_iter(its):
"""
Takes a list of iterators and yields an element from each in a round-robin
fashion until all of them are exhausted.
:param its: list of iterators
"""
while its:
for it in its:
try:
yield next(it)
except StopIteration:
its.remove(it)
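# Example (illustrative)::
#
#   >>> list(round_robin_iter([iter('ad'), iter('be'), iter('cf')]))
#   ['a', 'b', 'c', 'd', 'e', 'f']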
OverrideOptions = collections.namedtuple(
'OverrideOptions', ['devices', 'partitions', 'policies'])
def parse_override_options(**kwargs):
"""
Figure out which policies, devices, and partitions we should operate on,
based on kwargs.
If 'override_policies' is already present in kwargs, then return that
value. This happens when using multiple worker processes; the parent
process supplies override_policies=X to each child process.
Otherwise, in run-once mode, look at the 'policies' keyword argument.
This is the value of the "--policies" command-line option. In
run-forever mode or if no --policies option was provided, an empty list
will be returned.
The procedures for devices and partitions are similar.
:returns: a named tuple with fields "devices", "partitions", and
"policies".
"""
run_once = kwargs.get('once', False)
if 'override_policies' in kwargs:
policies = kwargs['override_policies']
elif run_once:
policies = [
int(p) for p in list_from_csv(kwargs.get('policies'))]
else:
policies = []
if 'override_devices' in kwargs:
devices = kwargs['override_devices']
elif run_once:
devices = list_from_csv(kwargs.get('devices'))
else:
devices = []
if 'override_partitions' in kwargs:
partitions = kwargs['override_partitions']
elif run_once:
partitions = [
int(p) for p in list_from_csv(kwargs.get('partitions'))]
else:
partitions = []
return OverrideOptions(devices=devices, partitions=partitions,
policies=policies)
def distribute_evenly(items, num_buckets):
"""
Distribute items as evenly as possible into N buckets.
"""
out = [[] for _ in range(num_buckets)]
for index, item in enumerate(items):
out[index % num_buckets].append(item)
return out
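# Example (illustrative)::
#
#   >>> distribute_evenly(range(7), 3)
#   [[0, 3, 6], [1, 4], [2, 5]]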
def get_redirect_data(response):
"""
Extract a redirect location from a response's headers.
:param response: a response
:return: a tuple of (path, Timestamp) if a Location header is found,
otherwise None
:raises ValueError: if the Location header is found but a
X-Backend-Redirect-Timestamp is not found, or if there is a problem
with the format of either header
"""
headers = HeaderKeyDict(response.getheaders())
if 'Location' not in headers:
return None
location = urlparse(headers['Location']).path
if config_true_value(headers.get('X-Backend-Location-Is-Quoted',
'false')):
location = unquote(location)
account, container, _junk = split_path(location, 2, 3, True)
timestamp_val = headers.get('X-Backend-Redirect-Timestamp')
try:
timestamp = Timestamp(timestamp_val)
except (TypeError, ValueError):
raise ValueError('Invalid timestamp value: %s' % timestamp_val)
return '%s/%s' % (account, container), timestamp
def parse_db_filename(filename):
"""
Splits a db filename into three parts: the hash, the epoch, and the
extension.
>>> parse_db_filename("ab2134.db")
('ab2134', None, '.db')
>>> parse_db_filename("ab2134_1234567890.12345.db")
('ab2134', '1234567890.12345', '.db')
:param filename: A db file basename or path to a db file.
:return: A tuple of (hash, epoch, extension). ``epoch`` may be None.
:raises ValueError: if ``filename`` is not a path to a file.
"""
filename = os.path.basename(filename)
if not filename:
raise ValueError('Path to a file required.')
name, ext = os.path.splitext(filename)
parts = name.split('_')
hash_ = parts.pop(0)
epoch = parts[0] if parts else None
return hash_, epoch, ext
def make_db_file_path(db_path, epoch):
"""
Given a path to a db file, return a modified path whose filename part has
the given epoch.
A db filename takes the form ``<hash>[_<epoch>].db``; this method replaces
the ``<epoch>`` part of the given ``db_path`` with the given ``epoch``
value, or drops the epoch part if the given ``epoch`` is ``None``.
:param db_path: Path to a db file that does not necessarily exist.
:param epoch: A string (or ``None``) that will be used as the epoch
in the new path's filename; non-``None`` values will be
normalized to the normal string representation of a
:class:`~swift.common.utils.Timestamp`.
:return: A modified path to a db file.
:raises ValueError: if the ``epoch`` is not valid for constructing a
:class:`~swift.common.utils.Timestamp`.
"""
hash_, _, ext = parse_db_filename(db_path)
db_dir = os.path.dirname(db_path)
if epoch is None:
return os.path.join(db_dir, hash_ + ext)
epoch = Timestamp(epoch).normal
return os.path.join(db_dir, '%s_%s%s' % (hash_, epoch, ext))
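# Usage sketch (illustrative, hypothetical paths)::
#
#   >>> make_db_file_path('/srv/containers/ab2134.db', '1234567890.12345')
#   '/srv/containers/ab2134_1234567890.12345.db'
#   >>> make_db_file_path('/srv/containers/ab2134_1234567890.12345.db', None)
#   '/srv/containers/ab2134.db'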
def get_db_files(db_path):
"""
Given the path to a db file, return a sorted list of all valid db files
that actually exist in that path's dir. A valid db filename has the form::
<hash>[_<epoch>].db
where <hash> matches the <hash> part of the given db_path as would be
parsed by :meth:`~swift.common.utils.parse_db_filename`.
:param db_path: Path to a db file that does not necessarily exist.
:return: List of valid db files that do exist in the dir of the
``db_path``. This list may be empty.
"""
db_dir, db_file = os.path.split(db_path)
try:
files = os.listdir(db_dir)
except OSError as err:
if err.errno == errno.ENOENT:
return []
raise
if not files:
return []
match_hash, epoch, ext = parse_db_filename(db_file)
results = []
for f in files:
hash_, epoch, ext = parse_db_filename(f)
if ext != '.db':
continue
if hash_ != match_hash:
continue
results.append(os.path.join(db_dir, f))
return sorted(results)
def systemd_notify(logger=None):
"""
Notify the service manager that started this process, if it is
systemd-compatible, that this process has started correctly. To do so,
it communicates through a Unix socket whose address is stored in the
environment variable NOTIFY_SOCKET. More information can be found in
the systemd documentation:
https://www.freedesktop.org/software/systemd/man/sd_notify.html
:param logger: a logger object
"""
msg = b'READY=1'
notify_socket = os.getenv('NOTIFY_SOCKET')
if notify_socket:
if notify_socket.startswith('@'):
# abstract namespace socket
notify_socket = '\0%s' % notify_socket[1:]
sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
with closing(sock):
try:
sock.connect(notify_socket)
sock.sendall(msg)
del os.environ['NOTIFY_SOCKET']
except EnvironmentError:
if logger:
logger.debug("Systemd notification failed", exc_info=True)
class Watchdog(object):
"""
Implements a watchdog to efficiently manage concurrent timeouts.
Compared to eventlet.timeouts.Timeout, it reduces the amount of context
switching in eventlet by avoiding the need to schedule actions (throwing an
exception) and then unschedule them if the timeouts are cancelled.
1. at T+0, request timeout(10)
=> watchdog greenlet sleeps 10 seconds
2. at T+1, request timeout(15)
=> the timeout will expire after the current, no need to wake up the
watchdog greenlet
3. at T+2, request timeout(5)
=> the timeout will expire before the first timeout, wake up the
watchdog greenlet to calculate a new sleep period
4. at T+7, the 3rd timeout expires
=> the exception is raised, then the watchdog greenlet sleeps 3 seconds
to wake up for the 1st timeout expiration
"""
def __init__(self):
# key => (timeout, timeout_at, caller_greenthread, exception)
self._timeouts = dict()
self._evt = Event()
self._next_expiration = None
self._run_gth = None
def start(self, timeout, exc, timeout_at=None):
"""
Schedule a timeout action
:param timeout: duration before the timeout expires
:param exc: exception to throw when the timeout expires, must inherit
from eventlet.Timeout
:param timeout_at: optional explicit expiration timestamp, overriding
``now + timeout``
:return: id of the scheduled timeout, needed to cancel it
"""
now = time.time()
if not timeout_at:
timeout_at = now + timeout
gth = eventlet.greenthread.getcurrent()
timeout_definition = (timeout, timeout_at, gth, exc, now)
key = id(timeout_definition)
self._timeouts[key] = timeout_definition
# Wake up the watchdog loop only when there is a new shorter timeout
if (self._next_expiration is None
or self._next_expiration > timeout_at):
# There could be concurrency on .send(), so wrap it in a try
try:
if not self._evt.ready():
self._evt.send()
except AssertionError:
pass
return key
def stop(self, key):
"""
Cancel a scheduled timeout
:param key: timeout id, as returned by start()
"""
try:
del self._timeouts[key]
except KeyError:
pass
def spawn(self):
"""
Start the watchdog greenthread.
"""
if self._run_gth is None:
self._run_gth = eventlet.spawn(self.run)
def run(self):
while True:
self._run()
def _run(self):
now = time.time()
self._next_expiration = None
if self._evt.ready():
self._evt.reset()
for k, (timeout, timeout_at, gth, exc,
created_at) in list(self._timeouts.items()):
if timeout_at <= now:
self.stop(k)
e = exc()
# set this after __init__ to keep it off the eventlet scheduler
e.seconds = timeout
e.created_at = created_at
eventlet.hubs.get_hub().schedule_call_global(0, gth.throw, e)
else:
if (self._next_expiration is None
or self._next_expiration > timeout_at):
self._next_expiration = timeout_at
if self._next_expiration is None:
sleep_duration = self._next_expiration
else:
sleep_duration = self._next_expiration - now
self._evt.wait(sleep_duration)
class WatchdogTimeout(object):
"""
Context manager to schedule a timeout in a Watchdog instance
"""
def __init__(self, watchdog, timeout, exc, timeout_at=None):
"""
Schedule a timeout in a Watchdog instance
:param watchdog: Watchdog instance
:param timeout: duration before the timeout expires
:param exc: exception to throw when the timeout expires, must inherit
from eventlet.timeouts.Timeout
:param timeout_at: optional explicit expiration timestamp, overriding
``now + timeout``
"""
self.watchdog = watchdog
self.key = watchdog.start(timeout, exc, timeout_at=timeout_at)
def __enter__(self):
pass
def __exit__(self, type, value, traceback):
self.watchdog.stop(self.key)
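# Usage sketch (illustrative; assumes ``watchdog`` is a Watchdog instance on
# which spawn() has been called, and that ChunkReadTimeout, as defined in
# swift.common.exceptions, inherits from eventlet.Timeout)::
#
#   with WatchdogTimeout(watchdog, 10.0, ChunkReadTimeout):
#       chunk = read_next_chunk()  # hypothetical blocking call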
class CooperativeIterator(object):
"""
Wrapper to make a deliberate periodic call to ``sleep()`` while iterating
over the wrapped iterator, providing an opportunity to switch greenthreads.
This is for fairness; if the network is outpacing the CPU, we'll always be
able to read and write data without encountering an EWOULDBLOCK, and so
eventlet will not switch greenthreads on its own. We do it manually so that
clients don't starve.
The number 5 here was chosen by making stuff up. It's not every single
chunk, but it's not too big either, so it seemed like it would probably be
an okay choice.
Note that we may trampoline to other greenthreads more often than once
every 5 chunks, depending on how blocking our network IO is; the explicit
sleep here simply provides a lower bound on the rate of trampolining.
:param iterable: iterator to wrap.
:param period: number of items yielded from this iterator between calls to
``sleep()``.
"""
__slots__ = ('period', 'count', 'wrapped_iter')
def __init__(self, iterable, period=5):
self.wrapped_iter = iterable
self.count = 0
self.period = period
def __iter__(self):
return self
def next(self):
if self.count >= self.period:
self.count = 0
sleep()
self.count += 1
return next(self.wrapped_iter)
__next__ = next
def close(self):
close_if_possible(self.wrapped_iter)
| swift-master | swift/common/utils/__init__.py |
# Copyright (c) 2010-2023 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Timestamp-related functions for use with Swift."""
import datetime
import functools
import math
import time
import six
NORMAL_FORMAT = "%016.05f"
INTERNAL_FORMAT = NORMAL_FORMAT + '_%016x'
SHORT_FORMAT = NORMAL_FORMAT + '_%x'
MAX_OFFSET = (16 ** 16) - 1
PRECISION = 1e-5
# Setting this to True will cause the internal format to always display
# extended digits - even when the value is equivalent to the normalized form.
# This isn't ideal during an upgrade when some servers might not understand
# the new time format - but flipping it to True works great for testing.
FORCE_INTERNAL = False # or True
@functools.total_ordering
class Timestamp(object):
"""
Internal Representation of Swift Time.
The normalized form of the X-Timestamp header looks like a float
with a fixed width to ensure stable string sorting - normalized
timestamps look like "1402464677.04188"
To support overwrites of existing data without modifying the original
timestamp but still maintain consistency, a second internal offset vector
is appended to the normalized timestamp form which compares and sorts
greater than the fixed width float format but less than a newer timestamp.
The internalized format of timestamps looks like
"1402464677.04188_0000000000000000" - the portion after the underscore is
the offset and is a formatted hexadecimal integer.
The internalized form is not exposed to clients in responses from
Swift. Normal client operations will not create a timestamp with an
offset.
The Timestamp class in common.utils supports internalized and
normalized formatting of timestamps and also comparison of timestamp
values. When the offset value of a Timestamp is 0 - it's considered
insignificant and need not be represented in the string format; to
support backwards compatibility during a Swift upgrade the
internalized and normalized form of a Timestamp with an
insignificant offset are identical. When a timestamp includes an
offset it will always be represented in the internalized form, but
is still excluded from the normalized form. Timestamps with an
equivalent timestamp portion (the float part) will compare and order
by their offset. Timestamps with a greater timestamp portion will
always compare and order greater than a Timestamp with a lesser
timestamp regardless of its offset. String comparison and ordering
is guaranteed for the internalized string format, and is backwards
compatible for normalized timestamps which do not include an offset.
"""
def __init__(self, timestamp, offset=0, delta=0, check_bounds=True):
"""
Create a new Timestamp.
:param timestamp: time in seconds since the Epoch, may be any of:
* a float or integer
* normalized/internalized string
* another instance of this class (offset is preserved)
:param offset: the second internal offset vector, an int
:param delta: deca-microsecond difference from the base timestamp
parameter, an int
"""
if isinstance(timestamp, bytes):
timestamp = timestamp.decode('ascii')
if isinstance(timestamp, six.string_types):
base, base_offset = timestamp.partition('_')[::2]
self.timestamp = float(base)
if '_' in base_offset:
raise ValueError('invalid literal for int() with base 16: '
'%r' % base_offset)
if base_offset:
self.offset = int(base_offset, 16)
else:
self.offset = 0
else:
self.timestamp = float(timestamp)
self.offset = getattr(timestamp, 'offset', 0)
# increment offset
if offset >= 0:
self.offset += offset
else:
raise ValueError('offset must be non-negative')
if self.offset > MAX_OFFSET:
raise ValueError('offset must be smaller than %d' % MAX_OFFSET)
self.raw = int(round(self.timestamp / PRECISION))
# add delta
if delta:
self.raw = self.raw + delta
if self.raw <= 0:
raise ValueError(
'delta must be greater than %d' % (-1 * self.raw))
self.timestamp = float(self.raw * PRECISION)
if check_bounds:
if self.timestamp < 0:
raise ValueError('timestamp cannot be negative')
if self.timestamp >= 10000000000:
raise ValueError('timestamp too large')
@classmethod
def now(cls, offset=0, delta=0):
return cls(time.time(), offset=offset, delta=delta)
def __repr__(self):
return INTERNAL_FORMAT % (self.timestamp, self.offset)
def __str__(self):
raise TypeError('You must specify which string format is required')
def __float__(self):
return self.timestamp
def __int__(self):
return int(self.timestamp)
def __nonzero__(self):
return bool(self.timestamp or self.offset)
def __bool__(self):
return self.__nonzero__()
@property
def normal(self):
return NORMAL_FORMAT % self.timestamp
@property
def internal(self):
if self.offset or FORCE_INTERNAL:
return INTERNAL_FORMAT % (self.timestamp, self.offset)
else:
return self.normal
@property
def short(self):
if self.offset or FORCE_INTERNAL:
return SHORT_FORMAT % (self.timestamp, self.offset)
else:
return self.normal
@property
def isoformat(self):
"""
Get an isoformat string representation of the 'normal' part of the
Timestamp with microsecond precision and no trailing timezone, for
example::
1970-01-01T00:00:00.000000
:return: an isoformat string
"""
t = float(self.normal)
if six.PY3:
# On Python 3, round manually using ROUND_HALF_EVEN rounding
# method, to use the same rounding method than Python 2. Python 3
# used a different rounding method, but Python 3.4.4 and 3.5.1 use
# again ROUND_HALF_EVEN as Python 2.
# See https://bugs.python.org/issue23517
frac, t = math.modf(t)
us = round(frac * 1e6)
if us >= 1000000:
t += 1
us -= 1000000
elif us < 0:
t -= 1
us += 1000000
dt = datetime.datetime.utcfromtimestamp(t)
dt = dt.replace(microsecond=us)
else:
dt = datetime.datetime.utcfromtimestamp(t)
isoformat = dt.isoformat()
# python isoformat() doesn't include msecs when zero
if len(isoformat) < len("1970-01-01T00:00:00.000000"):
isoformat += ".000000"
return isoformat
@classmethod
def from_isoformat(cls, date_string):
"""
Parse an isoformat string representation of time to a Timestamp object.
:param date_string: a string formatted as per a Timestamp.isoformat
property.
:return: an instance of this class.
"""
start = datetime.datetime.strptime(date_string, "%Y-%m-%dT%H:%M:%S.%f")
delta = start - EPOCH
# This calculation is based on Python 2.7's Modules/datetimemodule.c,
# function delta_to_microseconds(), but written in Python.
return cls(delta.total_seconds())
def ceil(self):
"""
Return the 'normal' part of the timestamp rounded up to the nearest
integer number of seconds.
This value should be used whenever the second-precision Last-Modified
time of a resource is required.
:return: a float value with second precision.
"""
return math.ceil(float(self))
def __eq__(self, other):
if other is None:
return False
if not isinstance(other, Timestamp):
try:
other = Timestamp(other, check_bounds=False)
except ValueError:
return False
return self.internal == other.internal
def __ne__(self, other):
return not (self == other)
def __lt__(self, other):
if other is None:
return False
if not isinstance(other, Timestamp):
other = Timestamp(other, check_bounds=False)
if other.timestamp < 0:
return False
if other.timestamp >= 10000000000:
return True
return self.internal < other.internal
def __hash__(self):
return hash(self.internal)
def __invert__(self):
if self.offset:
raise ValueError('Cannot invert timestamps with offsets')
return Timestamp((999999999999999 - self.raw) * PRECISION)
def encode_timestamps(t1, t2=None, t3=None, explicit=False):
"""
Encode up to three timestamps into a string. Unlike a Timestamp object, the
encoded string does NOT use fixed width fields and consequently no
relative chronology of the timestamps can be inferred from lexicographic
sorting of encoded timestamp strings.
The format of the encoded string is:
<t1>[<+/-><t2 - t1>[<+/-><t3 - t2>]]
i.e. if t1 = t2 = t3 then just the string representation of t1 is returned,
otherwise the time offsets for t2 and t3 are appended. If explicit is True
then the offsets for t2 and t3 are always appended even if zero.
Note: any offset value in t1 will be preserved, but offsets on t2 and t3
are not preserved. In the anticipated use cases for this method (and the
inverse decode_timestamps method) the timestamps passed as t2 and t3 are
not expected to have offsets as they will be timestamps associated with a
POST request. In the case where the encoding is used in a container objects
table row, t1 could be the PUT or DELETE time but t2 and t3 represent the
content type and metadata times (if different from the data file) i.e.
correspond to POST timestamps. In the case where the encoded form is used
in a .meta file name, t1 and t2 both correspond to POST timestamps.
"""
form = '{0}'
values = [t1.short]
if t2 is not None:
t2_t1_delta = t2.raw - t1.raw
explicit = explicit or (t2_t1_delta != 0)
values.append(t2_t1_delta)
if t3 is not None:
t3_t2_delta = t3.raw - t2.raw
explicit = explicit or (t3_t2_delta != 0)
values.append(t3_t2_delta)
if explicit:
form += '{1:+x}'
if t3 is not None:
form += '{2:+x}'
return form.format(*values)
def decode_timestamps(encoded, explicit=False):
"""
Parses a string of the form generated by encode_timestamps and returns
a tuple of the three component timestamps. If explicit is False, component
timestamps that are not explicitly encoded will be assumed to have zero
delta from the previous component and therefore take the value of the
previous component. If explicit is True, component timestamps that are
not explicitly encoded will be returned with value None.
"""
# TODO: some tests, e.g. in test_replicator, put float timestamp values
# into container dbs, hence this defensive check, but in the real world
# this may never happen.
if not isinstance(encoded, six.string_types):
ts = Timestamp(encoded)
return ts, ts, ts
parts = []
signs = []
pos_parts = encoded.split('+')
for part in pos_parts:
# parse time components and their signs
# e.g. x-y+z --> parts = [x, y, z] and signs = [+1, -1, +1]
neg_parts = part.split('-')
parts = parts + neg_parts
signs = signs + [1] + [-1] * (len(neg_parts) - 1)
t1 = Timestamp(parts[0])
t2 = t3 = None
if len(parts) > 1:
t2 = t1
delta = signs[1] * int(parts[1], 16)
# if delta = 0 we want t2 = t3 = t1 in order to
# preserve any offset in t1 - only construct a distinct
# timestamp if there is a non-zero delta.
if delta:
t2 = Timestamp((t1.raw + delta) * PRECISION)
elif not explicit:
t2 = t1
if len(parts) > 2:
t3 = t2
delta = signs[2] * int(parts[2], 16)
if delta:
t3 = Timestamp((t2.raw + delta) * PRECISION)
elif not explicit:
t3 = t2
return t1, t2, t3
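# Worked example (illustrative): deltas are hex-encoded relative to the
# preceding component, in units of the Timestamp ``raw`` resolution::
#
#   >>> t1 = Timestamp(0.00001)  # raw == 1
#   >>> t2 = Timestamp(0.00004)  # raw == 4
#   >>> t3 = Timestamp(0.00003)  # raw == 3
#   >>> encode_timestamps(t1, t2, t3)
#   '0000000000.00001+3-1'
#   >>> decode_timestamps('0000000000.00001+3-1') == (t1, t2, t3)
#   True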
def normalize_timestamp(timestamp):
"""
Format a timestamp (string or numeric) into a standardized
xxxxxxxxxx.xxxxx (10.5) format.
Note that timestamps using values greater than or equal to November 20th,
2286 at 17:46 UTC will use 11 digits to represent the number of
seconds.
:param timestamp: unix timestamp
:returns: normalized timestamp as a string
"""
return Timestamp(timestamp).normal
EPOCH = datetime.datetime(1970, 1, 1)
def last_modified_date_to_timestamp(last_modified_date_str):
"""
Convert a last modified date (like you'd get from a container listing,
e.g. 2014-02-28T23:22:36.698390) to a float.
"""
return Timestamp.from_isoformat(last_modified_date_str)
def normalize_delete_at_timestamp(timestamp, high_precision=False):
"""
Format a timestamp (string or numeric) into a standardized
xxxxxxxxxx (10) or xxxxxxxxxx.xxxxx (10.5) format.
Note that timestamps less than 0000000000 are raised to
0000000000 and values greater than November 20th, 2286 at
17:46:39 UTC will be capped at that date and time, resulting in
no return value exceeding 9999999999.99999 (or 9999999999 if
using low-precision).
This cap is because the expirer is already working through a
sorted list of strings that were all a length of 10. Adding
another digit would mess up the sort and cause the expirer to
break from processing early. By 2286, this problem will need to
be fixed, probably by creating an additional .expiring_objects
account to work from with 11 (or more) digit container names.
:param timestamp: unix timestamp
:returns: normalized timestamp as a string
"""
fmt = '%016.5f' if high_precision else '%010d'
return fmt % min(max(0, float(timestamp)), 9999999999.99999)
| swift-master | swift/common/utils/timestamp.py |
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import ctypes
import ctypes.util
import os
import platform
import re
import socket
import warnings
# Used by the parse_socket_string() function to validate IPv6 addresses
IPV6_RE = re.compile(r"^\[(?P<address>.*)\](:(?P<port>[0-9]+))?$")
def is_valid_ip(ip):
"""
Return True if the provided ip is a valid IP-address
"""
return is_valid_ipv4(ip) or is_valid_ipv6(ip)
def is_valid_ipv4(ip):
"""
Return True if the provided ip is a valid IPv4-address
"""
try:
socket.inet_pton(socket.AF_INET, ip)
except socket.error: # not a valid IPv4 address
return False
return True
def is_valid_ipv6(ip):
"""
Returns True if the provided ip is a valid IPv6-address
"""
try:
socket.inet_pton(socket.AF_INET6, ip)
except socket.error: # not a valid IPv6 address
return False
return True
def expand_ipv6(address):
"""
Expand ipv6 address.
:param address: a string indicating valid ipv6 address
:returns: a string indicating fully expanded ipv6 address
"""
packed_ip = socket.inet_pton(socket.AF_INET6, address)
return socket.inet_ntop(socket.AF_INET6, packed_ip)
libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
try:
getifaddrs = libc.getifaddrs
freeifaddrs = libc.freeifaddrs
netifaces = None # for patching
except AttributeError:
getifaddrs = None
freeifaddrs = None
try:
import netifaces
except ImportError:
raise ImportError('C function getifaddrs not available, '
'and netifaces not installed')
else:
warnings.warn('getifaddrs is not available; falling back to the '
'archived and no longer maintained netifaces project. '
'This fallback will be removed in a future release; '
'see https://bugs.launchpad.net/swift/+bug/2019233 for '
'more information.', FutureWarning)
else:
class sockaddr_in4(ctypes.Structure):
if platform.system() == 'Linux':
_fields_ = [
("sin_family", ctypes.c_uint16),
("sin_port", ctypes.c_uint16),
("sin_addr", ctypes.c_ubyte * 4),
]
else:
# Assume BSD / OS X
_fields_ = [
("sin_len", ctypes.c_uint8),
("sin_family", ctypes.c_uint8),
("sin_port", ctypes.c_uint16),
("sin_addr", ctypes.c_ubyte * 4),
]
class sockaddr_in6(ctypes.Structure):
if platform.system() == 'Linux':
_fields_ = [
("sin6_family", ctypes.c_uint16),
("sin6_port", ctypes.c_uint16),
("sin6_flowinfo", ctypes.c_uint32),
("sin6_addr", ctypes.c_ubyte * 16),
]
else:
# Assume BSD / OS X
_fields_ = [
("sin6_len", ctypes.c_uint8),
("sin6_family", ctypes.c_uint8),
("sin6_port", ctypes.c_uint16),
("sin6_flowinfo", ctypes.c_uint32),
("sin6_addr", ctypes.c_ubyte * 16),
]
class ifaddrs(ctypes.Structure):
pass
# Have to do this a little later so we can self-reference
ifaddrs._fields_ = [
("ifa_next", ctypes.POINTER(ifaddrs)),
("ifa_name", ctypes.c_char_p),
("ifa_flags", ctypes.c_int),
# Use the smaller of the two to start, can cast later
# when we *know* we're looking at INET6
("ifa_addr", ctypes.POINTER(sockaddr_in4)),
# Don't care about the rest of the fields
]
def errcheck(result, func, arguments):
if result != 0:
errno = ctypes.set_errno(0)
raise OSError(errno, "getifaddrs: %s" % os.strerror(errno))
return result
getifaddrs.errcheck = errcheck
def whataremyips(ring_ip=None):
"""
Get "our" IP addresses ("us" being the set of services configured by
one `*.conf` file). If our REST listens on a specific address, return it.
Otherwise, if it listens on '0.0.0.0' or '::', return all addresses, including
the loopback.
:param str ring_ip: Optional ring_ip/bind_ip from a config file; may be
IP address or hostname.
:returns: list of Strings of ip addresses
"""
if ring_ip:
# See if bind_ip is '0.0.0.0'/'::'
try:
_, _, _, _, sockaddr = socket.getaddrinfo(
ring_ip, None, 0, socket.SOCK_STREAM, 0,
socket.AI_NUMERICHOST)[0]
if sockaddr[0] not in ('0.0.0.0', '::'):
return [ring_ip]
except socket.gaierror:
pass
addresses = []
if getifaddrs:
addrs = ctypes.POINTER(ifaddrs)()
getifaddrs(ctypes.byref(addrs))
try:
cur = addrs
while cur:
if not cur.contents.ifa_addr:
# Not all interfaces will have addresses; move on
cur = cur.contents.ifa_next
continue
sa_family = cur.contents.ifa_addr.contents.sin_family
if sa_family == socket.AF_INET:
addresses.append(
socket.inet_ntop(
socket.AF_INET,
cur.contents.ifa_addr.contents.sin_addr,
)
)
elif sa_family == socket.AF_INET6:
addr = ctypes.cast(cur.contents.ifa_addr,
ctypes.POINTER(sockaddr_in6))
addresses.append(
socket.inet_ntop(
socket.AF_INET6,
addr.contents.sin6_addr,
)
)
cur = cur.contents.ifa_next
finally:
freeifaddrs(addrs)
return addresses
# getifaddrs not available; try netifaces
for interface in netifaces.interfaces():
try:
iface_data = netifaces.ifaddresses(interface)
for family in iface_data:
if family not in (netifaces.AF_INET, netifaces.AF_INET6):
continue
for address in iface_data[family]:
addr = address['addr']
# If we have an ipv6 address remove the
# %ether_interface at the end
if family == netifaces.AF_INET6:
addr = expand_ipv6(addr.split('%')[0])
addresses.append(addr)
except ValueError:
pass
return addresses
def parse_socket_string(socket_string, default_port):
"""
Given a string representing a socket, returns a tuple of (host, port).
Valid strings are DNS names, IPv4 addresses, or IPv6 addresses, with an
optional port. If an IPv6 address is specified it **must** be enclosed in
[], like *[::1]* or *[::1]:11211*. This follows the accepted prescription
for `IPv6 host literals`_.
Examples::
server.org
server.org:1337
127.0.0.1:1337
[::1]:1337
[::1]
.. _IPv6 host literals: https://tools.ietf.org/html/rfc3986#section-3.2.2
"""
port = default_port
# IPv6 addresses must be between '[]'
if socket_string.startswith('['):
match = IPV6_RE.match(socket_string)
if not match:
raise ValueError("Invalid IPv6 address: %s" % socket_string)
host = match.group('address')
port = match.group('port') or port
else:
if ':' in socket_string:
tokens = socket_string.split(':')
if len(tokens) > 2:
raise ValueError("IPv6 addresses must be between '[]'")
host, port = tokens
else:
host = socket_string
return (host, port)
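# Usage sketch (illustrative): note that a port parsed from the string is
# returned as a string, while a defaulted port keeps the type it was given::
#
#   >>> parse_socket_string('server.org:1337', 11211)
#   ('server.org', '1337')
#   >>> parse_socket_string('[::1]', 11211)
#   ('::1', 11211)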
| swift-master | swift/common/utils/ipaddrs.py |
# Copyright (c) 2010-2017 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
A standard ring built using the :ref:`ring-builder <ring_builder>` will attempt
to randomly disperse replicas or erasure-coded fragments across failure
domains, but does not provide any guarantees such as placing at least one
replica of every partition into each region. Composite rings are intended to
provide operators with greater control over the dispersion of object replicas
or fragments across a cluster, in particular when there is a desire to
have strict guarantees that some replicas or fragments are placed in certain
failure domains. This is particularly important for policies with duplicated
erasure-coded fragments.
A composite ring comprises two or more component rings that are combined to
form a single ring with a replica count equal to the sum of replica counts
from the component rings. The component rings are built independently, using
distinct devices in distinct regions, which means that the dispersion of
replicas between the components can be guaranteed. The ``composite_builder``
utilities may then be used to combine components into a composite ring.
For example, consider a normal ring ``ring0`` with replica count of 4 and
devices in two regions ``r1`` and ``r2``. Despite the best efforts of the
ring-builder, it is possible for there to be three replicas of a particular
partition placed in one region and only one replica placed in the other region.
For example::
part_n -> r1z1h110/sdb r1z2h12/sdb r1z3h13/sdb r2z1h21/sdb
Now consider two normal rings each with replica count of 2: ``ring1`` has
devices in only ``r1``; ``ring2`` has devices in only ``r2``.
When these rings are combined into a composite ring then every partition is
guaranteed to be mapped to two devices in each of ``r1`` and ``r2``, for
example::
part_n -> r1z1h10/sdb r1z2h20/sdb r2z1h21/sdb r2z2h22/sdb
|_____________________| |_____________________|
| |
ring1 ring2
The dispersion of partition replicas across failure domains within each of the
two component rings may change as they are modified and rebalanced, but the
dispersion of replicas between the two regions is guaranteed by the use of a
composite ring.
For rings to be formed into a composite they must satisfy the following
requirements:
* All component rings must have the same part power (and therefore number of
partitions)
* All component rings must have an integer replica count
* Each region may only be used in one component ring
* Each device may only be used in one component ring
Under the hood, the composite ring has a ``_replica2part2dev_id`` table that is
the union of the tables from the component rings. Whenever the component rings
are rebalanced, the composite ring must be rebuilt. There is no dynamic
rebuilding of the composite ring.
.. note::
The order in which component rings are combined into a composite ring is
very significant because it determines the order in which the
Ring.get_part_nodes() method will provide primary nodes for the composite
ring and consequently the node indexes assigned to the primary nodes. For
an erasure-coded policy, inadvertent changes to the primary node indexes
could result in large amounts of data movement due to fragments being moved
to their new correct primary.
The ``id`` of each component RingBuilder is therefore stored in metadata of
the composite and used to check for the component ordering when the same
composite ring is re-composed. RingBuilder ``id``\\s are normally assigned
when a RingBuilder instance is first saved. Older RingBuilder instances
loaded from file may not have an ``id`` assigned and will need to be saved
before they can be used as components of a composite ring. This can be
achieved by, for example::
swift-ring-builder <builder-file> rebalance --force
"""
import copy
import json
import os
from random import shuffle
from swift.common.exceptions import RingBuilderError
from swift.common.ring import RingBuilder
from swift.common.ring import RingData
from collections import defaultdict
from itertools import combinations
MUST_MATCH_ATTRS = (
'part_power',
)
def pre_validate_all_builders(builders):
"""
Pre-validation for all component ring builders that are to be included in
the composite ring. Checks that all component rings are valid with respect
to each other.
:param builders: a list of :class:`swift.common.ring.builder.RingBuilder`
instances
:raises ValueError: if the builders are invalid with respect to each other
"""
if len(builders) < 2:
raise ValueError('Two or more component builders are required.')
# all ring builders should be consistent for each MUST_MATCH_ATTRS
for attr in MUST_MATCH_ATTRS:
attr_dict = defaultdict(list)
for i, builder in enumerate(builders):
value = getattr(builder, attr, None)
attr_dict[value].append(i)
if len(attr_dict) > 1:
variations = ['%s=%s found at indexes %s' %
(attr, val, indexes)
for val, indexes in attr_dict.items()]
raise ValueError(
'All builders must have same value for %r.\n%s'
% (attr, '\n '.join(variations)))
# all ring builders should have int replica count and not have dirty mods
errors = []
for index, builder in enumerate(builders):
if int(builder.replicas) != builder.replicas:
errors.append(
'Non integer replica count %s found at index %s' %
(builder.replicas, index))
if builder.devs_changed:
errors.append(
'Builder needs rebalance to apply changes at index %s' %
index)
if errors:
raise ValueError(
'Problem with builders.\n%s' % ('\n '.join(errors)))
# check regions
regions_info = {}
for builder in builders:
regions_info[builder] = set(
dev['region'] for dev in builder._iter_devs())
for first_region_set, second_region_set in combinations(
regions_info.values(), 2):
inter = first_region_set & second_region_set
if inter:
raise ValueError('Same region found in different rings')
# check device uniqueness
check_for_dev_uniqueness(builders)
def check_for_dev_uniqueness(builders):
"""
Check that no device appears in more than one of the given list of
builders.
:param builders: a list of :class:`swift.common.ring.builder.RingBuilder`
instances
:raises ValueError: if the same device is found in more than one builder
"""
builder2devs = []
for i, builder in enumerate(builders):
dev_set = set()
for dev in builder._iter_devs():
ip, port, device = (dev['ip'], dev['port'], dev['device'])
for j, (other_builder, devs) in enumerate(builder2devs):
if (ip, port, device) in devs:
raise ValueError(
'Duplicate ip/port/device combination %s/%s/%s found '
'in builders at indexes %s and %s' %
(ip, port, device, j, i)
)
dev_set.add((ip, port, device))
builder2devs.append((builder, dev_set))
def _make_composite_ring(builders):
"""
Given a list of component ring builders, return a composite RingData
instance.
:param builders: a list of
:class:`swift.common.ring.builder.RingBuilder` instances
:return: a new RingData instance built from the component builders
:raises ValueError: if the builders are invalid with respect to each other
"""
composite_r2p2d = []
composite_devs = []
device_offset = 0
for builder in builders:
# copy all devs list and replica2part2dev table to be able
# to modify the id for each dev
devs = copy.deepcopy(builder.devs)
r2p2d = copy.deepcopy(builder._replica2part2dev)
for part2dev in r2p2d:
for part, dev in enumerate(part2dev):
part2dev[part] += device_offset
for dev in [d for d in devs if d]:
# note that some devs may not be referenced in r2p2d but update
# their dev id nonetheless
dev['id'] += device_offset
composite_r2p2d.extend(r2p2d)
composite_devs.extend(devs)
device_offset += len(builder.devs)
return RingData(composite_r2p2d, composite_devs, builders[0].part_shift)
def compose_rings(builders):
"""
Given a list of component ring builders, perform validation on the list of
builders and return a composite RingData instance.
:param builders: a list of
:class:`swift.common.ring.builder.RingBuilder` instances
:return: a new RingData instance built from the component builders
:raises ValueError: if the builders are invalid with respect to each other
"""
pre_validate_all_builders(builders)
rd = _make_composite_ring(builders)
return rd
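# Usage sketch (illustrative; assumes two component builder files, one per
# region, that have already been built and rebalanced)::
#
#   builders = [RingBuilder.load('region1.builder'),
#               RingBuilder.load('region2.builder')]
#   ring_data = compose_rings(builders)
#   ring_data.save('composite.ring.gz')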
def _make_component_meta(builder):
"""
Return a dict of selected builder attributes to save in composite meta. The
dict has keys ``version``, ``replicas`` and ``id``.
:param builder: a :class:`swift.common.ring.builder.RingBuilder`
instance
:return: a dict of component metadata
"""
attrs = ['version', 'replicas', 'id']
metadata = dict((attr, getattr(builder, attr)) for attr in attrs)
return metadata
def _make_composite_metadata(builders):
"""
Return a dict with key ``components`` that maps to a list of dicts, each
dict being of the form returned by :func:`_make_component_meta`.
:param builders: a list of
:class:`swift.common.ring.builder.RingBuilder` instances
:return: a dict of composite metadata
"""
component_meta = [_make_component_meta(builder) for builder in builders]
return {'components': component_meta}
def check_same_builder(old_component, new_component):
"""
Check that the given new_component metadata describes the same builder as
the given old_component metadata. The new_component builder does not
necessarily need to be in the same state as when the old_component metadata
was created to satisfy this check e.g. it may have changed devs and been
rebalanced.
:param old_component: a dict of metadata describing a component builder
:param new_component: a dict of metadata describing a component builder
:raises ValueError: if the new_component is not the same as that described
by the old_component
"""
for key in ['replicas', 'id']:
if old_component[key] != new_component[key]:
raise ValueError("Attribute mismatch for %s: %r != %r" %
(key, old_component[key], new_component[key]))
def is_builder_newer(old_component, new_component):
"""
Return True if the given builder has been modified with respect to its
state when the given component_meta was created.
:param old_component: a dict of metadata describing a component ring
:param new_component: a dict of metadata describing a component ring
:return: True if the builder has been modified, False otherwise.
:raises ValueError: if the version of the new_component is older than the
version of the existing component.
"""
if new_component['version'] < old_component['version']:
raise ValueError('Older builder version: %s < %s' %
(new_component['version'], old_component['version']))
return old_component['version'] < new_component['version']
def check_against_existing(old_composite_meta, new_composite_meta):
"""
Check that the given builders and their order are the same as that
used to build an existing composite ring. Return True if any of the given
builders has been modified with respect to its state when the given
component_meta was created.
:param old_composite_meta: a dict of the form returned by
:func:`_make_composite_meta`
:param new_composite_meta: a dict of the form returned by
:func:`_make_composite_meta`
:return: True if any of the components has been modified, False otherwise.
:raises ValueError: if proposed new components do not match any existing
components.
"""
errors = []
newer = False
old_components = old_composite_meta['components']
new_components = new_composite_meta['components']
for i, old_component in enumerate(old_components):
try:
new_component = new_components[i]
except IndexError:
errors.append("Missing builder at index %d" % i)
continue
try:
# check we have same component builder in this position vs existing
check_same_builder(old_component, new_component)
newer |= is_builder_newer(old_component, new_component)
except ValueError as err:
errors.append("Invalid builder change at index %d: %s" % (i, err))
for j, new_component in enumerate(new_components[i + 1:], start=i + 1):
errors.append("Unexpected extra builder at index %d: %r" %
(j, new_component))
if errors:
raise ValueError('\n'.join(errors))
return newer
def check_builder_ids(builders):
"""
Check that all builders in the given list have id's assigned and that no
id appears more than once in the list.
:param builders: a list of instances of
:class:`swift.common.ring.builder.RingBuilder`
:raises ValueError: if any builder id is missing or repeated
"""
id2index = defaultdict(list)
errors = []
for i, builder in enumerate(builders):
try:
id2index[builder.id].append(str(i))
except AttributeError as err:
errors.append("Problem with builder at index %d: %s" % (i, err))
for builder_id, index in id2index.items():
if len(index) > 1:
errors.append("Builder id %r used at indexes %s" %
(builder_id, ', '.join(index)))
if errors:
raise ValueError('\n'.join(errors))
class CompositeRingBuilder(object):
"""
Provides facility to create, persist, load, rebalance and update composite
rings, for example::
# create a CompositeRingBuilder instance with a list of
# component builder files
crb = CompositeRingBuilder(["region1.builder", "region2.builder"])
# perform a cooperative rebalance of the component builders
crb.rebalance()
# call compose which will make a new RingData instance
ring_data = crb.compose()
# save the composite ring file
ring_data.save("composite_ring.gz")
# save the composite metadata file
crb.save("composite_builder.composite")
# load the persisted composite metadata file
crb = CompositeRingBuilder.load("composite_builder.composite")
# compose (optionally update the paths to the component builder files)
crb.compose(["/path/to/region1.builder", "/path/to/region2.builder"])
Composite ring metadata is persisted to file in JSON format. The metadata
has the structure shown below (using example values)::
{
"version": 4,
"components": [
{
"version": 3,
"id": "8e56f3b692d43d9a666440a3d945a03a",
"replicas": 1
},
{
"version": 5,
"id": "96085923c2b644999dbfd74664f4301b",
"replicas": 1
}
]
"component_builder_files": {
"8e56f3b692d43d9a666440a3d945a03a": "/etc/swift/region1.builder",
"96085923c2b644999dbfd74664f4301b": "/etc/swift/region2.builder",
}
"serialization_version": 1,
"saved_path": "/etc/swift/multi-ring-1.composite",
}
`version` is an integer representing the current version of the composite
ring, which increments each time the ring is successfully (re)composed.
`components` is a list of dicts, each of which describes relevant
properties of a component ring
`component_builder_files` is a dict that maps component ring builder ids to
the file from which that component ring builder was loaded.
`serialization_version` is an integer constant.
`saved_path` is the path to which the metadata was written.
:param builder_files: a list of paths to builder files that will be used
as components of the composite ring.
"""
def __init__(self, builder_files=None):
self.version = 0
self.components = []
self.ring_data = None
self._builder_files = None
self._set_builder_files(builder_files or [])
self._builders = None # these are lazy loaded in _load_components
def _set_builder_files(self, builder_files):
self._builder_files = [os.path.abspath(bf) for bf in builder_files]
@classmethod
def load(cls, path_to_file):
"""
Load composite ring metadata.
:param path_to_file: Absolute path to a composite ring JSON file.
:return: an instance of :class:`CompositeRingBuilder`
:raises IOError: if there is a problem opening the file
:raises ValueError: if the file does not contain valid composite ring
metadata
"""
try:
with open(path_to_file, 'rt') as fp:
metadata = json.load(fp)
builder_files = [metadata['component_builder_files'][comp['id']]
for comp in metadata['components']]
builder = CompositeRingBuilder(builder_files)
builder.components = metadata['components']
builder.version = metadata['version']
except (ValueError, TypeError, KeyError):
raise ValueError("File does not contain valid composite ring data")
return builder
def to_dict(self):
"""
Transform the composite ring attributes to a dict. See
:class:`CompositeRingBuilder` for details of the persisted metadata
format.
:return: a composite ring metadata dict
"""
id2builder_file = dict((component['id'], self._builder_files[i])
for i, component in enumerate(self.components))
return {'components': self.components,
'component_builder_files': id2builder_file,
'version': self.version}
def save(self, path_to_file):
"""
Save composite ring metadata to given file. See
:class:`CompositeRingBuilder` for details of the persisted metadata
format.
:param path_to_file: Absolute path to a composite ring file
:raises ValueError: if no composite ring has been built yet with this
instance
"""
if not self.components or not self._builder_files:
raise ValueError("No composed ring to save.")
# persist relative paths to builder files
with open(path_to_file, 'wt') as fp:
metadata = self.to_dict()
# future-proofing:
# - saving abs path to component builder files and this file should
# allow the relative paths to be derived if required when loading
# a set of {composite builder file, component builder files} that
# has been moved, so long as their relative locations are
# unchanged.
# - save a serialization format version number
metadata['saved_path'] = os.path.abspath(path_to_file)
metadata['serialization_version'] = 1
json.dump(metadata, fp)
def _load_components(self, builder_files=None, force=False,
require_modified=False):
if self._builders:
return self._builder_files, self._builders
builder_files = builder_files or self._builder_files
if len(builder_files) < 2:
raise ValueError('Two or more component builders are required.')
builders = []
for builder_file in builder_files:
# each component builder gets a reference to this composite builder
# so that it can delegate part movement decisions to the composite
# builder during rebalance
builders.append(CooperativeRingBuilder.load(builder_file,
parent_builder=self))
check_builder_ids(builders)
new_metadata = _make_composite_metadata(builders)
if self.components and self._builder_files and not force:
modified = check_against_existing(self.to_dict(), new_metadata)
if require_modified and not modified:
raise ValueError(
"None of the component builders has been modified"
" since the existing composite ring was built.")
self._set_builder_files(builder_files)
self._builders = builders
return self._builder_files, self._builders
def load_components(self, builder_files=None, force=False,
require_modified=False):
"""
Loads component ring builders from builder files. Previously loaded
        component ring builders will be discarded and reloaded.
If a list of component ring builder files is given then that will be
used to load component ring builders. Otherwise, component ring
builders will be loaded using the list of builder files that was set
when the instance was constructed.
In either case, if metadata for an existing composite ring has been
loaded then the component ring builders are verified for consistency
with the existing composition of builders, unless the optional
        ``force`` flag is set True.
:param builder_files: Optional list of paths to ring builder
files that will be used to load the component ring builders.
Typically the list of component builder files will have been set
when the instance was constructed, for example when using the
load() class method. However, this parameter may be used if the
component builder file paths have moved, or, in conjunction with
the ``force`` parameter, if a new list of component builders is to
be used.
:param force: if True then do not verify given builders are
consistent with any existing composite ring (default is False).
:param require_modified: if True and ``force`` is False, then
verify that at least one of the given builders has been modified
since the composite ring was last built (default is False).
:return: A tuple of (builder files, loaded builders)
:raises: ValueError if the component ring builders are not suitable for
composing with each other, or are inconsistent with any existing
composite ring, or if require_modified is True and there has been
no change with respect to the existing ring.
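        A sketch of reloading component builders whose files have been moved
        (the paths shown are hypothetical)::
            crb = CompositeRingBuilder.load('multi-ring.composite')
            crb.load_components(['/new/path/region1.builder',
                                 '/new/path/region2.builder'])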
"""
self._builders = None # force a reload of builders
return self._load_components(
builder_files, force, require_modified)
def compose(self, builder_files=None, force=False, require_modified=False):
"""
Builds a composite ring using component ring builders loaded from a
list of builder files and updates composite ring metadata.
If a list of component ring builder files is given then that will be
used to load component ring builders. Otherwise, component ring
builders will be loaded using the list of builder files that was set
when the instance was constructed.
In either case, if metadata for an existing composite ring has been
loaded then the component ring builders are verified for consistency
with the existing composition of builders, unless the optional
        ``force`` flag is set True.
:param builder_files: Optional list of paths to ring builder
files that will be used to load the component ring builders.
Typically the list of component builder files will have been set
when the instance was constructed, for example when using the
load() class method. However, this parameter may be used if the
component builder file paths have moved, or, in conjunction with
the ``force`` parameter, if a new list of component builders is to
be used.
:param force: if True then do not verify given builders are
consistent with any existing composite ring (default is False).
:param require_modified: if True and ``force`` is False, then
verify that at least one of the given builders has been modified
since the composite ring was last built (default is False).
:return: An instance of :class:`swift.common.ring.ring.RingData`
:raises: ValueError if the component ring builders are not suitable for
composing with each other, or are inconsistent with any existing
composite ring, or if require_modified is True and there has been
no change with respect to the existing ring.
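        A minimal sketch of composing and writing out a ring (file names are
        illustrative)::
            ring_data = crb.compose()
            ring_data.save('/etc/swift/container.ring.gz')
            crb.save('multi-ring.composite')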
"""
self.load_components(builder_files, force=force,
require_modified=require_modified)
self.ring_data = compose_rings(self._builders)
self.version += 1
new_metadata = _make_composite_metadata(self._builders)
self.components = new_metadata['components']
return self.ring_data
def rebalance(self):
"""
Cooperatively rebalances all component ring builders.
This method does not change the state of the composite ring; a
subsequent call to :meth:`compose` is required to generate updated
composite :class:`RingData`.
:return: A list of dicts, one per component builder, each having the
following keys:
* 'builder_file' maps to the component builder file;
* 'builder' maps to the corresponding instance of
:class:`swift.common.ring.builder.RingBuilder`;
* 'result' maps to the results of the rebalance of that component
i.e. a tuple of: `(number_of_partitions_altered,
resulting_balance, number_of_removed_devices)`
The list has the same order as components in the composite ring.
:raises RingBuilderError: if there is an error while rebalancing any
component builder.
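        For illustration (assuming a previously saved composite builder, with
        a hypothetical file name), the results might be consumed like this::
            crb = CompositeRingBuilder.load('multi-ring.composite')
            for result in crb.rebalance():
                parts_moved, balance, removed = result['result']
                print('%s: moved %d parts, balance %.2f' %
                      (result['builder_file'], parts_moved, balance))
            ring_data = crb.compose()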
"""
self._load_components()
self.update_last_part_moves()
component_builders = list(zip(self._builder_files, self._builders))
# don't let the same builder go first each time
shuffle(component_builders)
results = {}
for builder_file, builder in component_builders:
try:
results[builder] = {
'builder': builder,
'builder_file': builder_file,
'result': builder.rebalance()
}
builder.validate()
except RingBuilderError as err:
self._builders = None
raise RingBuilderError(
'An error occurred while rebalancing component %s: %s' %
(builder_file, err))
for builder_file, builder in component_builders:
builder.save(builder_file)
# return results in component order
return [results[builder] for builder in self._builders]
def can_part_move(self, part):
"""
Check with all component builders that it is ok to move a partition.
:param part: The partition to check.
:return: True if all component builders agree that the partition can be
moved, False otherwise.
"""
# Called by component builders.
return all(b.can_part_move(part) for b in self._builders)
def update_last_part_moves(self):
"""
Updates the record of how many hours ago each partition was moved in
all component builders.
"""
# Called at start of each composite rebalance. We need all component
# builders to be at same last_part_moves epoch before any builder
# starts moving parts; this will effectively be a no-op for builders
# that have already been updated in last hour
for b in self._builders:
b.update_last_part_moves()
class CooperativeRingBuilder(RingBuilder):
"""
A subclass of :class:`RingBuilder` that participates in cooperative
rebalance.
During rebalance this subclass will consult with its `parent_builder`
before moving a partition. The `parent_builder` may in turn check with
co-builders (including this instance) to verify that none have moved that
partition in the last `min_part_hours`.
:param part_power: number of partitions = 2**part_power.
:param replicas: number of replicas for each partition.
:param min_part_hours: minimum number of hours between partition changes.
:param parent_builder: an instance of :class:`CompositeRingBuilder`.
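    Instances are normally created for you when a
    :class:`CompositeRingBuilder` loads its component builder files, but a
    standalone construction sketch (parameter values are illustrative) looks
    like::
        crb = CompositeRingBuilder()
        builder = CooperativeRingBuilder(16, 3, 1, crb)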
"""
def __init__(self, part_power, replicas, min_part_hours, parent_builder):
super(CooperativeRingBuilder, self).__init__(
part_power, replicas, min_part_hours)
self.parent_builder = parent_builder
def _can_part_move(self, part):
# override superclass method to delegate to the parent builder
return self.parent_builder.can_part_move(part)
def can_part_move(self, part):
"""
Check that in the context of this builder alone it is ok to move a
partition.
:param part: The partition to check.
:return: True if the partition can be moved, False otherwise.
"""
# called by parent_builder - now forward to the superclass
return (not self.ever_rebalanced or
super(CooperativeRingBuilder, self)._can_part_move(part))
def _update_last_part_moves(self):
# overrides superclass method - parent builder should have called
# update_last_part_moves() before rebalance; calling the superclass
# method here would reset _part_moved_bitmap which is state we rely on
# when min_part_hours is zero
pass
def update_last_part_moves(self):
"""
        Updates the record of how many hours ago each partition was moved
        in this builder.
"""
# called by parent_builder - now forward to the superclass
return super(CooperativeRingBuilder, self)._update_last_part_moves()
| swift-master | swift/common/ring/composite_builder.py |
# Copyright (c) 2010-2017 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from swift.common.ring.ring import RingData, Ring
from swift.common.ring.builder import RingBuilder
__all__ = [
'RingData',
'Ring',
'RingBuilder',
]
| swift-master | swift/common/ring/__init__.py |
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
import copy
import errno
import itertools
import logging
import math
import random
import uuid
import six.moves.cPickle as pickle
from copy import deepcopy
from contextlib import contextmanager
from array import array
from collections import defaultdict
import six
from six.moves import range
from time import time
from swift.common import exceptions
from swift.common.ring.ring import RingData
from swift.common.ring.utils import tiers_for_dev, build_tier_tree, \
validate_and_normalize_address, validate_replicas_by_tier, pretty_dev
# we can't store None's in the replica2part2dev array, so we hijack
# the max value as a magic number to represent that the part is not
# currently assigned to any device.
NONE_DEV = 2 ** 16 - 1
MAX_BALANCE = 999.99
MAX_BALANCE_GATHER_COUNT = 3
class RingValidationWarning(Warning):
pass
@contextlib.contextmanager
def _set_random_seed(seed):
# If random seed is set when entering this context then reset original
# random state when exiting the context. This avoids a test calling this
# method with a fixed seed value causing all subsequent tests to use a
# repeatable random sequence.
random_state = None
if seed is not None:
random_state = random.getstate()
random.seed(seed)
try:
yield
finally:
if random_state:
# resetting state rather than calling seed() eases unit testing
random.setstate(random_state)
class RingBuilder(object):
"""
Used to build swift.common.ring.RingData instances to be written to disk
and used with swift.common.ring.Ring instances. See bin/swift-ring-builder
for example usage.
The instance variable devs_changed indicates if the device information has
changed since the last balancing. This can be used by tools to know whether
a rebalance request is an isolated request or due to added, changed, or
removed devices.
:param part_power: number of partitions = 2**part_power.
:param replicas: number of replicas for each partition
:param min_part_hours: minimum number of hours between partition changes
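    A minimal programmatic sketch (device values are illustrative)::
        rb = RingBuilder(part_power=10, replicas=3, min_part_hours=1)
        for i in range(3):
            rb.add_dev({'id': i, 'region': 1, 'zone': i, 'weight': 100,
                        'ip': '127.0.0.1', 'port': 6200 + i,
                        'device': 'sdb1'})
        rb.rebalance()
        ring_data = rb.get_ring()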
"""
def __init__(self, part_power, replicas, min_part_hours):
if part_power > 32:
raise ValueError("part_power must be at most 32 (was %d)"
% (part_power,))
if part_power < 0:
raise ValueError("part_power must be at least 0 (was %d)"
% (part_power,))
if replicas < 1:
raise ValueError("replicas must be at least 1 (was %.6f)"
% (replicas,))
if min_part_hours < 0:
raise ValueError("min_part_hours must be non-negative (was %d)"
% (min_part_hours,))
self.part_power = part_power
self.next_part_power = None
self.replicas = replicas
self.min_part_hours = min_part_hours
self.parts = 2 ** self.part_power
self.devs = []
self.devs_changed = False
self.version = 0
self.overload = 0.0
self._id = None
# _replica2part2dev maps from replica number to partition number to
# device id. So, for a three replica, 2**23 ring, it's an array of
# three 2**23 arrays of device ids (unsigned shorts). This can work a
# bit faster than the 2**23 array of triplet arrays of device ids in
# many circumstances. Making one big 2**23 * 3 array didn't seem to
# have any speed change; though you're welcome to try it again (it was
# a while ago, code-wise, when I last tried it).
self._replica2part2dev = None
# _last_part_moves is an array of unsigned bytes representing
# the number of hours since a given partition was last moved.
# This is used to guarantee we don't move a partition twice
# within a given number of hours (24 is my usual test). Removing
# a device overrides this behavior as it's assumed that's only
# done because of device failure.
self._last_part_moves = array('B', itertools.repeat(0, self.parts))
        # _part_moved_bitmap records which parts have been moved
self._part_moved_bitmap = None
# _last_part_moves_epoch indicates the time the offsets in
# _last_part_moves is based on.
self._last_part_moves_epoch = 0
self._last_part_gather_start = 0
self._dispersion_graph = {}
self.dispersion = 0.0
self._remove_devs = []
self._ring = None
self.logger = logging.getLogger("swift.ring.builder")
if not self.logger.handlers:
self.logger.disabled = True
# silence "no handler for X" error messages
self.logger.addHandler(logging.NullHandler())
@property
def id(self):
if self._id is None:
# We don't automatically assign an id here because we want a caller
# to explicitly know when a builder needs an id to be assigned. In
# that case the caller must save the builder in order that a newly
# assigned id is persisted.
raise AttributeError(
'id attribute has not been initialised by calling save()')
return self._id
@property
def part_shift(self):
return 32 - self.part_power
@property
def ever_rebalanced(self):
return self._replica2part2dev is not None
def _set_part_moved(self, part):
self._last_part_moves[part] = 0
byte, bit = divmod(part, 8)
self._part_moved_bitmap[byte] |= (128 >> bit)
def _has_part_moved(self, part):
byte, bit = divmod(part, 8)
return bool(self._part_moved_bitmap[byte] & (128 >> bit))
def _can_part_move(self, part):
# if min_part_hours is zero then checking _last_part_moves will not
# indicate if the part has already moved during the current rebalance,
# but _has_part_moved will.
return (self._last_part_moves[part] >= self.min_part_hours and
not self._has_part_moved(part))
@contextmanager
def debug(self):
"""
Temporarily enables debug logging, useful in tests, e.g.::
with rb.debug():
rb.rebalance()
"""
old_val, self.logger.disabled = self.logger.disabled, False
try:
yield
finally:
self.logger.disabled = old_val
@property
def min_part_seconds_left(self):
"""Get the total seconds until a rebalance can be performed"""
elapsed_seconds = int(time() - self._last_part_moves_epoch)
return max((self.min_part_hours * 3600) - elapsed_seconds, 0)
def weight_of_one_part(self):
"""
Returns the weight of each partition as calculated from the
total weight of all the devices.
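        For example (illustrative numbers), a ring with ``2 ** 8`` partitions,
        3 replicas and a total device weight of 400 yields
        ``256 * 3 / 400 == 1.92`` partitions per unit of weight.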
"""
try:
return self.parts * self.replicas / \
sum(d['weight'] for d in self._iter_devs())
except ZeroDivisionError:
raise exceptions.EmptyRingError('There are no devices in this '
'ring, or all devices have been '
'deleted')
@classmethod
def from_dict(cls, builder_data):
b = cls(1, 1, 1) # Dummy values
b.copy_from(builder_data)
return b
def copy_from(self, builder):
"""
Reinitializes this RingBuilder instance from data obtained from the
builder dict given. Code example::
b = RingBuilder(1, 1, 1) # Dummy values
b.copy_from(builder)
This is to restore a RingBuilder that has had its b.to_dict()
previously saved.
"""
if hasattr(builder, 'devs'):
self.part_power = builder.part_power
self.next_part_power = builder.next_part_power
self.replicas = builder.replicas
self.min_part_hours = builder.min_part_hours
self.parts = builder.parts
self.devs = builder.devs
self.devs_changed = builder.devs_changed
self.overload = builder.overload
self.version = builder.version
self._replica2part2dev = builder._replica2part2dev
self._last_part_moves_epoch = builder._last_part_moves_epoch
if builder._last_part_moves is None:
self._last_part_moves = array(
'B', itertools.repeat(0, self.parts))
else:
self._last_part_moves = builder._last_part_moves
self._last_part_gather_start = builder._last_part_gather_start
self._remove_devs = builder._remove_devs
self._id = getattr(builder, '_id', None)
else:
self.part_power = builder['part_power']
self.next_part_power = builder.get('next_part_power')
self.replicas = builder['replicas']
self.min_part_hours = builder['min_part_hours']
self.parts = builder['parts']
self.devs = builder['devs']
self.devs_changed = builder['devs_changed']
self.overload = builder.get('overload', 0.0)
self.version = builder['version']
self._replica2part2dev = builder['_replica2part2dev']
self._last_part_moves_epoch = builder['_last_part_moves_epoch']
if builder['_last_part_moves'] is None:
self._last_part_moves = array(
'B', itertools.repeat(0, self.parts))
else:
self._last_part_moves = builder['_last_part_moves']
self._last_part_gather_start = builder['_last_part_gather_start']
self._dispersion_graph = builder.get('_dispersion_graph', {})
self.dispersion = builder.get('dispersion')
self._remove_devs = builder['_remove_devs']
self._id = builder.get('id')
self._ring = None
# Old builders may not have a region defined for their devices, in
# which case we default it to 1.
for dev in self._iter_devs():
dev.setdefault("region", 1)
if not self._last_part_moves_epoch:
self._last_part_moves_epoch = 0
def __deepcopy__(self, memo):
return type(self).from_dict(deepcopy(self.to_dict(), memo))
def to_dict(self):
"""
Returns a dict that can be used later with copy_from to
restore a RingBuilder. swift-ring-builder uses this to
pickle.dump the dict to a file and later load that dict into
copy_from.
"""
return {'part_power': self.part_power,
'next_part_power': self.next_part_power,
'replicas': self.replicas,
'min_part_hours': self.min_part_hours,
'parts': self.parts,
'devs': self.devs,
'devs_changed': self.devs_changed,
'version': self.version,
'overload': self.overload,
'_replica2part2dev': self._replica2part2dev,
'_last_part_moves_epoch': self._last_part_moves_epoch,
'_last_part_moves': self._last_part_moves,
'_last_part_gather_start': self._last_part_gather_start,
'_dispersion_graph': self._dispersion_graph,
'dispersion': self.dispersion,
'_remove_devs': self._remove_devs,
'id': self._id}
def change_min_part_hours(self, min_part_hours):
"""
Changes the value used to decide if a given partition can be moved
again. This restriction is to give the overall system enough time to
settle a partition to its new location before moving it to yet another
location. While no data would be lost if a partition is moved several
times quickly, it could make that data unreachable for a short period
of time.
This should be set to at least the average full partition replication
time. Starting it at 24 hours and then lowering it to what the
replicator reports as the longest partition cycle is best.
:param min_part_hours: new value for min_part_hours
"""
self.min_part_hours = min_part_hours
def set_replicas(self, new_replica_count):
"""
Changes the number of replicas in this ring.
If the new replica count is sufficiently different that
self._replica2part2dev will change size, sets
self.devs_changed. This is so tools like
bin/swift-ring-builder can know to write out the new ring
rather than bailing out due to lack of balance change.
"""
old_slots_used = int(self.parts * self.replicas)
new_slots_used = int(self.parts * new_replica_count)
if old_slots_used != new_slots_used:
self.devs_changed = True
self.replicas = new_replica_count
def set_overload(self, overload):
self.overload = overload
def get_ring(self):
"""
Get the ring, or more specifically, the swift.common.ring.RingData.
This ring data is the minimum required for use of the ring. The ring
builder itself keeps additional data such as when partitions were last
moved.
"""
# We cache the self._ring value so multiple requests for it don't build
# it multiple times. Be sure to set self._ring = None whenever the ring
# will need to be rebuilt.
if not self._ring:
# Make devs list (with holes for deleted devices) and not including
# builder-specific extra attributes.
devs = [None] * len(self.devs)
for dev in self._iter_devs():
devs[dev['id']] = dict((k, v) for k, v in dev.items()
if k not in ('parts', 'parts_wanted'))
# Copy over the replica+partition->device assignments, the device
# information, and the part_shift value (the number of bits to
# shift an unsigned int >I right to obtain the partition for the
# int).
if not self._replica2part2dev:
self._ring = RingData([], devs, self.part_shift,
version=self.version)
else:
self._ring = \
RingData([array('H', p2d) for p2d in
self._replica2part2dev],
devs, self.part_shift,
self.next_part_power,
self.version)
return self._ring
def add_dev(self, dev):
"""
Add a device to the ring. This device dict should have a minimum of the
following keys:
====== ===============================================================
id unique integer identifier amongst devices. Defaults to the next
id if the 'id' key is not provided in the dict
weight a float of the relative weight of this device as compared to
others; this indicates how many partitions the builder will try
to assign to this device
region integer indicating which region the device is in
zone integer indicating which zone the device is in; a given
partition will not be assigned to multiple devices within the
same (region, zone) pair if there is any alternative
ip the ip address of the device
port the tcp port of the device
device the device's name on disk (sdb1, for example)
meta general use 'extra' field; for example: the online date, the
hardware description
====== ===============================================================
.. note::
This will not rebalance the ring immediately as you may want to
make multiple changes for a single rebalance.
:param dev: device dict
:returns: id of device (not used in the tree anymore, but unknown
users may depend on it)
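        An illustrative device dict (values are examples only)::
            {'id': 0, 'region': 1, 'zone': 1, 'weight': 100.0,
             'ip': '127.0.0.1', 'port': 6200, 'device': 'sdb1',
             'meta': 'rack 1, slot 3'}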
"""
if 'id' not in dev:
dev['id'] = 0
if self.devs:
try:
dev['id'] = self.devs.index(None)
except ValueError:
dev['id'] = len(self.devs)
if dev['id'] < len(self.devs) and self.devs[dev['id']] is not None:
raise exceptions.DuplicateDeviceError(
'Duplicate device id: %d' % dev['id'])
# Add holes to self.devs to ensure self.devs[dev['id']] will be the dev
while dev['id'] >= len(self.devs):
self.devs.append(None)
required_keys = ('region', 'zone', 'ip', 'port', 'device', 'weight')
missing = tuple(key for key in required_keys if key not in dev)
if missing:
raise ValueError('%r is missing required key(s): %s' % (
dev, ', '.join(missing)))
dev['weight'] = float(dev['weight'])
dev['parts'] = 0
dev.setdefault('meta', '')
self.devs[dev['id']] = dev
self.devs_changed = True
self.version += 1
return dev['id']
def set_dev_weight(self, dev_id, weight):
"""
Set the weight of a device. This should be called rather than just
altering the weight key in the device dict directly, as the builder
will need to rebuild some internal state to reflect the change.
.. note::
This will not rebalance the ring immediately as you may want to
make multiple changes for a single rebalance.
:param dev_id: device id
:param weight: new weight for device
"""
if any(dev_id == d['id'] for d in self._remove_devs):
raise ValueError("Can not set weight of dev_id %s because it "
"is marked for removal" % (dev_id,))
self.devs[dev_id]['weight'] = weight
self.devs_changed = True
self.version += 1
def set_dev_region(self, dev_id, region):
"""
Set the region of a device. This should be called rather than just
altering the region key in the device dict directly, as the builder
will need to rebuild some internal state to reflect the change.
.. note::
This will not rebalance the ring immediately as you may want to
make multiple changes for a single rebalance.
:param dev_id: device id
:param region: new region for device
"""
if any(dev_id == d['id'] for d in self._remove_devs):
raise ValueError("Can not set region of dev_id %s because it "
"is marked for removal" % (dev_id,))
self.devs[dev_id]['region'] = region
self.devs_changed = True
self.version += 1
def set_dev_zone(self, dev_id, zone):
"""
Set the zone of a device. This should be called rather than just
altering the zone key in the device dict directly, as the builder
will need to rebuild some internal state to reflect the change.
.. note::
This will not rebalance the ring immediately as you may want to
make multiple changes for a single rebalance.
:param dev_id: device id
:param zone: new zone for device
"""
if any(dev_id == d['id'] for d in self._remove_devs):
raise ValueError("Can not set zone of dev_id %s because it "
"is marked for removal" % (dev_id,))
self.devs[dev_id]['zone'] = zone
self.devs_changed = True
self.version += 1
def remove_dev(self, dev_id):
"""
Remove a device from the ring.
.. note::
This will not rebalance the ring immediately as you may want to
make multiple changes for a single rebalance.
:param dev_id: device id
"""
dev = self.devs[dev_id]
dev['weight'] = 0
self._remove_devs.append(dev)
self.devs_changed = True
self.version += 1
def rebalance(self, seed=None):
"""
Rebalance the ring.
This is the main work function of the builder, as it will assign and
reassign partitions to devices in the ring based on weights, distinct
zones, recent reassignments, etc.
The process doesn't always perfectly assign partitions (that'd take a
lot more analysis and therefore a lot more time -- I had code that did
that before). Because of this, it keeps rebalancing until the device
skew (number of partitions a device wants compared to what it has) gets
below 1% or doesn't change by more than 1% (only happens with a ring
that can't be balanced no matter what).
:param seed: a value for the random seed (optional)
:returns: (number_of_partitions_altered, resulting_balance,
number_of_removed_devices)
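        A usage sketch, assuming a builder ``rb`` that already has devices
        added::
            parts_moved, balance, removed_devs = rb.rebalance(seed=1)
            rb.validate()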
"""
# count up the devs, and cache some stuff
num_devices = 0
for dev in self._iter_devs():
dev['tiers'] = tiers_for_dev(dev)
if dev['weight'] > 0:
num_devices += 1
if num_devices < self.replicas:
raise exceptions.RingValidationError(
"Replica count of %(replicas)s requires more "
"than %(num_devices)s devices" % {
'replicas': self.replicas,
'num_devices': num_devices,
})
self._ring = None
old_replica2part2dev = copy.deepcopy(self._replica2part2dev)
if not self.ever_rebalanced:
self.logger.debug("New builder; performing initial balance")
self._update_last_part_moves()
with _set_random_seed(seed):
replica_plan = self._build_replica_plan()
self._set_parts_wanted(replica_plan)
assign_parts = defaultdict(list)
# gather parts from replica count adjustment
self._adjust_replica2part2dev_size(assign_parts)
# gather parts from failed devices
removed_devs = self._gather_parts_from_failed_devices(assign_parts)
# gather parts for dispersion (N.B. this only picks up parts that
# *must* disperse according to the replica plan)
self._gather_parts_for_dispersion(assign_parts, replica_plan)
            # we'll gather a few times, or until we achieve the plan
for gather_count in range(MAX_BALANCE_GATHER_COUNT):
self._gather_parts_for_balance(assign_parts, replica_plan,
                                               # first attempt goes for dispersion
gather_count == 0)
if not assign_parts:
# most likely min part hours
finish_status = 'Unable to finish'
break
assign_parts_list = list(assign_parts.items())
# shuffle the parts to be reassigned, we have no preference on
# the order in which the replica plan is fulfilled.
random.shuffle(assign_parts_list)
# reset assign_parts map for next iteration
assign_parts = defaultdict(list)
num_part_replicas = sum(len(r) for p, r in assign_parts_list)
self.logger.debug("Gathered %d parts", num_part_replicas)
self._reassign_parts(assign_parts_list, replica_plan)
self.logger.debug("Assigned %d parts", num_part_replicas)
if not sum(d['parts_wanted'] < 0 for d in
self._iter_devs()):
finish_status = 'Finished'
break
else:
finish_status = 'Unable to finish'
self.logger.debug(
'%(status)s rebalance plan after %(count)s attempts',
{'status': finish_status, 'count': gather_count + 1})
self.devs_changed = False
changed_parts = self._build_dispersion_graph(old_replica2part2dev)
# clean up the cache
for dev in self._iter_devs():
dev.pop('tiers', None)
return changed_parts, self.get_balance(), removed_devs
def _build_dispersion_graph(self, old_replica2part2dev=None):
"""
Build a dict of all tiers in the cluster to a list of the number of
parts with a replica count at each index. The values of the dict will
be lists of length the maximum whole replica + 1 so that the
graph[tier][3] is the number of parts within the tier with 3 replicas
        and graph[tier][0] is the number of parts not assigned in this tier.
i.e.
{
<tier>: [
<number_of_parts_with_0_replicas>,
<number_of_parts_with_1_replicas>,
...
<number_of_parts_with_n_replicas>,
],
...
}
:param old_replica2part2dev: if called from rebalance, the
old_replica2part2dev can be used to count moved parts.
:returns: number of parts with different assignments than
old_replica2part2dev if provided
"""
# Since we're going to loop over every replica of every part we'll
# also count up changed_parts if old_replica2part2dev is passed in
old_replica2part2dev = old_replica2part2dev or []
# Compare the partition allocation before and after the rebalance
# Only changed device ids are taken into account; devices might be
# "touched" during the rebalance, but actually not really moved
changed_parts = 0
int_replicas = int(math.ceil(self.replicas))
max_allowed_replicas = self._build_max_replicas_by_tier()
parts_at_risk = 0
dispersion_graph = {}
# go over all the devices holding each replica part by part
for part_id, dev_ids in enumerate(
six.moves.zip(*self._replica2part2dev)):
# count the number of replicas of this part for each tier of each
# device, some devices may have overlapping tiers!
replicas_at_tier = defaultdict(int)
for rep_id, dev in enumerate(iter(
self.devs[dev_id] for dev_id in dev_ids)):
for tier in (dev.get('tiers') or tiers_for_dev(dev)):
replicas_at_tier[tier] += 1
# IndexErrors will be raised if the replicas are increased or
# decreased, and that actually means the partition has changed
try:
old_device = old_replica2part2dev[rep_id][part_id]
except IndexError:
changed_parts += 1
continue
if old_device != dev['id']:
changed_parts += 1
            # update running totals for each tier's number of parts with a
# given replica count
part_risk_depth = defaultdict(int)
part_risk_depth[0] = 0
for tier, replicas in replicas_at_tier.items():
if tier not in dispersion_graph:
dispersion_graph[tier] = [self.parts] + [0] * int_replicas
dispersion_graph[tier][0] -= 1
dispersion_graph[tier][replicas] += 1
if replicas > max_allowed_replicas[tier]:
part_risk_depth[len(tier)] += (
replicas - max_allowed_replicas[tier])
# count each part-replica once at tier where dispersion is worst
parts_at_risk += max(part_risk_depth.values())
self._dispersion_graph = dispersion_graph
self.dispersion = 100.0 * parts_at_risk / (self.parts * self.replicas)
self.version += 1
return changed_parts
def validate(self, stats=False):
"""
Validate the ring.
This is a safety function to try to catch any bugs in the building
process. It ensures partitions have been assigned to real devices,
aren't doubly assigned, etc. It can also optionally check the even
distribution of partitions across devices.
:param stats: if True, check distribution of partitions across devices
:returns: if stats is True, a tuple of (device_usage, worst_stat), else
(None, None). device_usage[dev_id] will equal the number of
partitions assigned to that device. worst_stat will equal the
number of partitions the worst device is skewed from the
number it should have.
:raises RingValidationError: problem was found with the ring.
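        A usage sketch on a builder ``rb`` that has been rebalanced::
            dev_usage, worst = rb.validate(stats=True)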
"""
# "len" showed up in profiling, so it's just computed once.
dev_len = len(self.devs)
parts_on_devs = sum(d['parts'] for d in self._iter_devs())
if not self._replica2part2dev:
raise exceptions.RingValidationError(
'_replica2part2dev empty; did you forget to rebalance?')
parts_in_map = sum(len(p2d) for p2d in self._replica2part2dev)
if parts_on_devs != parts_in_map:
raise exceptions.RingValidationError(
'All partitions are not double accounted for: %d != %d' %
(parts_on_devs, parts_in_map))
if stats:
# dev_usage[dev_id] will equal the number of partitions assigned to
# that device.
dev_usage = array('I', (0 for _junk in range(dev_len)))
for part2dev in self._replica2part2dev:
for dev_id in part2dev:
dev_usage[dev_id] += 1
for dev in self._iter_devs():
if not isinstance(dev['port'], int):
raise exceptions.RingValidationError(
"Device %d has port %r, which is not an integer." %
(dev['id'], dev['port']))
int_replicas = int(math.ceil(self.replicas))
rep2part_len = list(map(len, self._replica2part2dev))
# check the assignments of each part's replicas
for part in range(self.parts):
devs_for_part = []
for replica, part_len in enumerate(rep2part_len):
if part_len <= part:
# last replica may be short on parts because of floating
# replica count
if replica + 1 < int_replicas:
raise exceptions.RingValidationError(
"The partition assignments of replica %r were "
"shorter than expected (%s < %s) - this should "
"only happen for the last replica" % (
replica,
len(self._replica2part2dev[replica]),
self.parts,
))
break
dev_id = self._replica2part2dev[replica][part]
if dev_id >= dev_len or not self.devs[dev_id]:
raise exceptions.RingValidationError(
"Partition %d, replica %d was not allocated "
"to a device." %
(part, replica))
devs_for_part.append(dev_id)
if len(devs_for_part) != len(set(devs_for_part)):
raise exceptions.RingValidationError(
"The partition %s has been assigned to "
"duplicate devices %r" % (
part, devs_for_part))
if stats:
weight_of_one_part = self.weight_of_one_part()
worst = 0
for dev in self._iter_devs():
if not dev['weight']:
if dev_usage[dev['id']]:
# If a device has no weight, but has partitions, then
# its overage is considered "infinity" and therefore
# always the worst possible. We show MAX_BALANCE for
# convenience.
worst = MAX_BALANCE
break
continue
skew = abs(100.0 * dev_usage[dev['id']] /
(dev['weight'] * weight_of_one_part) - 100.0)
if skew > worst:
worst = skew
return dev_usage, worst
return None, None
def _build_balance_per_dev(self):
"""
Build a map of <device_id> => <balance> where <balance> is a float
representing the percentage difference from the desired amount of
partitions a given device wants and the amount it has.
N.B. this method only considers a device's weight and the parts
assigned, not the parts wanted according to the replica plan.
"""
weight_of_one_part = self.weight_of_one_part()
balance_per_dev = {}
for dev in self._iter_devs():
if not dev['weight']:
if dev['parts']:
# If a device has no weight, but has partitions, then its
# overage is considered "infinity" and therefore always the
# worst possible. We show MAX_BALANCE for convenience.
balance = MAX_BALANCE
else:
balance = 0
else:
balance = 100.0 * dev['parts'] / (
dev['weight'] * weight_of_one_part) - 100.0
balance_per_dev[dev['id']] = balance
return balance_per_dev
def get_balance(self):
"""
Get the balance of the ring. The balance value is the highest
percentage of the desired amount of partitions a given device
wants. For instance, if the "worst" device wants (based on its
weight relative to the sum of all the devices' weights) 123
partitions and it has 124 partitions, the balance value would
be 0.83 (1 extra / 123 wanted * 100 for percentage).
:returns: balance of the ring
"""
balance_per_dev = self._build_balance_per_dev()
return max(abs(b) for b in balance_per_dev.values())
def get_required_overload(self, weighted=None, wanted=None):
"""
Returns the minimum overload value required to make the ring maximally
dispersed.
The required overload is the largest percentage change of any single
device from its weighted replicanth to its wanted replicanth (note:
        under-weighted devices have a negative percentage change) to achieve
        dispersion - that is to say a single device that must be overloaded by
5% is worse than 5 devices in a single tier overloaded by 1%.
"""
weighted = weighted or self._build_weighted_replicas_by_tier()
wanted = wanted or self._build_wanted_replicas_by_tier()
max_overload = 0.0
for dev in self._iter_devs():
tier = (dev['region'], dev['zone'], dev['ip'], dev['id'])
if not dev['weight']:
if tier not in wanted or not wanted[tier]:
continue
raise exceptions.RingValidationError(
'Device %s has zero weight and '
'should not want any replicas' % (tier,))
required = (wanted[tier] - weighted[tier]) / weighted[tier]
self.logger.debug('%(tier)s wants %(wanted)s and is weighted for '
'%(weight)s so therefore requires %(required)s '
'overload', {'tier': pretty_dev(dev),
'wanted': wanted[tier],
'weight': weighted[tier],
'required': required})
if required > max_overload:
max_overload = required
return max_overload
def pretend_min_part_hours_passed(self):
"""
Override min_part_hours by marking all partitions as having been moved
255 hours ago and last move epoch to 'the beginning of time'. This can
be used to force a full rebalance on the next call to rebalance.
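        For example, in a maintenance script (sketch only)::
            rb.pretend_min_part_hours_passed()
            rb.rebalance()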
"""
self._last_part_moves_epoch = 0
if not self._last_part_moves:
return
for part in range(self.parts):
self._last_part_moves[part] = 0xff
def get_part_devices(self, part):
"""
Get the devices that are responsible for the partition,
filtering out duplicates.
:param part: partition to get devices for
:returns: list of device dicts
"""
devices = []
for dev in self._devs_for_part(part):
if dev not in devices:
devices.append(dev)
return devices
def _iter_devs(self):
"""
        Returns an iterator over all the non-None devices in the ring. Note
        that this means list(b._iter_devs())[some_id] may not equal
        b.devs[some_id]; you will have to check the 'id' key of each device
        to obtain its dev_id.
"""
for dev in self.devs:
if dev is not None:
yield dev
def _build_tier2children(self):
"""
        Wrap the build_tier_tree helper so as to exclude zero-weight devices.
"""
return build_tier_tree(d for d in self._iter_devs() if d['weight'])
def _set_parts_wanted(self, replica_plan):
"""
Sets the parts_wanted key for each of the devices to the number of
partitions the device wants based on its relative weight. This key is
used to sort the devices according to "most wanted" during rebalancing
to best distribute partitions. A negative parts_wanted indicates the
device is "overweight" and wishes to give partitions away if possible.
:param replica_plan: a dict of dicts, as returned from
_build_replica_plan, that maps
                             each tier to its target replicanths.
"""
tier2children = self._build_tier2children()
parts_by_tier = defaultdict(int)
def place_parts(tier, parts):
parts_by_tier[tier] = parts
sub_tiers = sorted(tier2children[tier])
if not sub_tiers:
return
to_place = defaultdict(int)
for t in sub_tiers:
to_place[t] = min(parts, int(math.floor(
replica_plan[t]['target'] * self.parts)))
parts -= to_place[t]
# if there's some parts left over, just throw 'em about
sub_tier_gen = itertools.cycle(sorted(
sub_tiers, key=lambda t: replica_plan[t]['target']))
while parts > 0:
t = next(sub_tier_gen)
to_place[t] += 1
parts -= 1
for t, p in to_place.items():
place_parts(t, p)
total_parts = int(self.replicas * self.parts)
place_parts((), total_parts)
# belts & suspenders/paranoia - at every level, the sum of
# parts_by_tier should be total_parts for the ring
tiers = ['cluster', 'regions', 'zones', 'servers', 'devices']
for i, tier_name in enumerate(tiers):
parts_at_tier = sum(parts_by_tier[t] for t in parts_by_tier
if len(t) == i)
if parts_at_tier != total_parts:
raise exceptions.RingValidationError(
'%s != %s at tier %s' % (
parts_at_tier, total_parts, tier_name))
for dev in self._iter_devs():
if not dev['weight']:
# With no weight, that means we wish to "drain" the device. So
# we set the parts_wanted to a really large negative number to
# indicate its strong desire to give up everything it has.
dev['parts_wanted'] = -self.parts * self.replicas
else:
tier = (dev['region'], dev['zone'], dev['ip'], dev['id'])
dev['parts_wanted'] = parts_by_tier[tier] - dev['parts']
def _update_last_part_moves(self):
"""
Updates how many hours ago each partition was moved based on the
current time. The builder won't move a partition that has been moved
more recently than min_part_hours.
"""
self._part_moved_bitmap = bytearray(max(2 ** (self.part_power - 3), 1))
elapsed_hours = int(time() - self._last_part_moves_epoch) // 3600
if elapsed_hours <= 0:
return
for part in range(self.parts):
# The "min(self._last_part_moves[part] + elapsed_hours, 0xff)"
# which was here showed up in profiling, so it got inlined.
last_plus_elapsed = self._last_part_moves[part] + elapsed_hours
if last_plus_elapsed < 0xff:
self._last_part_moves[part] = last_plus_elapsed
else:
self._last_part_moves[part] = 0xff
self._last_part_moves_epoch = int(time())
def _gather_parts_from_failed_devices(self, assign_parts):
"""
Update the map of partition => [replicas] to be reassigned from
removed devices.
"""
# First we gather partitions from removed devices. Since removed
# devices usually indicate device failures, we have no choice but to
# reassign these partitions. However, we mark them as moved so later
# choices will skip other replicas of the same partition if possible.
if self._remove_devs:
dev_ids = [d['id'] for d in self._remove_devs if d['parts']]
if dev_ids:
for part, replica in self._each_part_replica():
dev_id = self._replica2part2dev[replica][part]
if dev_id in dev_ids:
self._replica2part2dev[replica][part] = NONE_DEV
self._set_part_moved(part)
assign_parts[part].append(replica)
self.logger.debug(
"Gathered %d/%d from dev %d [dev removed]",
part, replica, dev_id)
removed_devs = 0
while self._remove_devs:
remove_dev_id = self._remove_devs.pop()['id']
self.logger.debug("Removing dev %d", remove_dev_id)
self.devs[remove_dev_id] = None
removed_devs += 1
return removed_devs
def _adjust_replica2part2dev_size(self, to_assign):
"""
Make sure that the lengths of the arrays in _replica2part2dev
are correct for the current value of self.replicas.
Example:
self.part_power = 8
self.replicas = 2.25
self._replica2part2dev will contain 3 arrays: the first 2 of
length 256 (2**8), and the last of length 64 (0.25 * 2**8).
Update the mapping of partition => [replicas] that need assignment.
"""
fractional_replicas, whole_replicas = math.modf(self.replicas)
whole_replicas = int(whole_replicas)
removed_parts = 0
new_parts = 0
desired_lengths = [self.parts] * whole_replicas
if fractional_replicas:
desired_lengths.append(int(self.parts * fractional_replicas))
if self._replica2part2dev is not None:
# If we crossed an integer threshold (say, 4.1 --> 4),
# we'll have a partial extra replica clinging on here. Clean
# up any such extra stuff.
for part2dev in self._replica2part2dev[len(desired_lengths):]:
for dev_id in part2dev:
dev_losing_part = self.devs[dev_id]
dev_losing_part['parts'] -= 1
removed_parts -= 1
self._replica2part2dev = \
self._replica2part2dev[:len(desired_lengths)]
else:
self._replica2part2dev = []
for replica, desired_length in enumerate(desired_lengths):
if replica < len(self._replica2part2dev):
part2dev = self._replica2part2dev[replica]
if len(part2dev) < desired_length:
# Not long enough: needs to be extended and the
# newly-added pieces assigned to devices.
for part in range(len(part2dev), desired_length):
to_assign[part].append(replica)
part2dev.append(NONE_DEV)
new_parts += 1
elif len(part2dev) > desired_length:
# Too long: truncate this mapping.
for part in range(desired_length, len(part2dev)):
dev_losing_part = self.devs[part2dev[part]]
dev_losing_part['parts'] -= 1
removed_parts -= 1
self._replica2part2dev[replica] = part2dev[:desired_length]
else:
# Mapping not present at all: make one up and assign
# all of it.
for part in range(desired_length):
to_assign[part].append(replica)
new_parts += 1
self._replica2part2dev.append(
array('H', itertools.repeat(NONE_DEV, desired_length)))
self.logger.debug(
"%d new parts and %d removed parts from replica-count change",
new_parts, removed_parts)
def _gather_parts_for_dispersion(self, assign_parts, replica_plan):
"""
Update the map of partition => [replicas] to be reassigned from
insufficiently-far-apart replicas.
"""
# Now we gather partitions that are "at risk" because they aren't
        # currently sufficiently spread out across the cluster.
for part in range(self.parts):
if (not self._can_part_move(part)):
continue
# First, add up the count of replicas at each tier for each
# partition.
replicas_at_tier = defaultdict(int)
for dev in self._devs_for_part(part):
for tier in dev['tiers']:
replicas_at_tier[tier] += 1
# Now, look for partitions not yet spread out enough.
undispersed_dev_replicas = []
for replica in self._replicas_for_part(part):
dev_id = self._replica2part2dev[replica][part]
if dev_id == NONE_DEV:
continue
dev = self.devs[dev_id]
if all(replicas_at_tier[tier] <=
replica_plan[tier]['max']
for tier in dev['tiers']):
continue
undispersed_dev_replicas.append((dev, replica))
if not undispersed_dev_replicas:
continue
undispersed_dev_replicas.sort(
key=lambda dr: dr[0]['parts_wanted'])
for dev, replica in undispersed_dev_replicas:
# the min part hour check is ignored if and only if a device
# has more than one replica of a part assigned to it - which
# would have only been possible on rings built with an older
# version of the code
if (not self._can_part_move(part) and
not replicas_at_tier[dev['tiers'][-1]] > 1):
continue
dev['parts_wanted'] += 1
dev['parts'] -= 1
assign_parts[part].append(replica)
self.logger.debug(
"Gathered %d/%d from dev %s [dispersion]",
part, replica, pretty_dev(dev))
self._replica2part2dev[replica][part] = NONE_DEV
for tier in dev['tiers']:
replicas_at_tier[tier] -= 1
self._set_part_moved(part)
def _gather_parts_for_balance_can_disperse(self, assign_parts, start,
replica_plan):
"""
Update the map of partition => [replicas] to be reassigned from
overweight drives where the replicas can be better dispersed to
another failure domain.
:param assign_parts: the map of partition => [replica] to update
:param start: offset into self.parts to begin search
:param replica_plan: replicanth targets for tiers
"""
tier2children = self._build_tier2children()
parts_wanted_in_tier = defaultdict(int)
for dev in self._iter_devs():
wanted = max(dev['parts_wanted'], 0)
for tier in dev['tiers']:
parts_wanted_in_tier[tier] += wanted
# Last, we gather partitions from devices that are "overweight" because
# they have more partitions than their parts_wanted.
for offset in range(self.parts):
part = (start + offset) % self.parts
if (not self._can_part_move(part)):
continue
# For each part we'll look at the devices holding those parts and
# see if any are overweight, keeping track of replicas_at_tier as
# we go
overweight_dev_replica = []
replicas_at_tier = defaultdict(int)
for replica in self._replicas_for_part(part):
dev_id = self._replica2part2dev[replica][part]
if dev_id == NONE_DEV:
continue
dev = self.devs[dev_id]
for tier in dev['tiers']:
replicas_at_tier[tier] += 1
if dev['parts_wanted'] < 0:
overweight_dev_replica.append((dev, replica))
if not overweight_dev_replica:
continue
overweight_dev_replica.sort(
key=lambda dr: dr[0]['parts_wanted'])
for dev, replica in overweight_dev_replica:
if any(replica_plan[tier]['min'] <=
replicas_at_tier[tier] <
replica_plan[tier]['max']
for tier in dev['tiers']):
# we're stuck by replica plan
continue
for t in reversed(dev['tiers']):
if replicas_at_tier[t] - 1 < replica_plan[t]['min']:
# we're stuck at tier t
break
if sum(parts_wanted_in_tier[c]
for c in tier2children[t]
if c not in dev['tiers']) <= 0:
# we're stuck by weight
continue
# this is the most overweight_device holding a replica
# of this part that can shed it according to the plan
dev['parts_wanted'] += 1
dev['parts'] -= 1
assign_parts[part].append(replica)
self.logger.debug(
"Gathered %d/%d from dev %s [weight disperse]",
part, replica, pretty_dev(dev))
self._replica2part2dev[replica][part] = NONE_DEV
for tier in dev['tiers']:
replicas_at_tier[tier] -= 1
parts_wanted_in_tier[tier] -= 1
self._set_part_moved(part)
break
def _gather_parts_for_balance(self, assign_parts, replica_plan,
disperse_first):
"""
Gather parts that look like they should move for balance reasons.
        A simple gathering of parts that look dispersible normally works out;
        we'll switch strategies if things don't seem to move.
:param disperse_first: boolean, avoid replicas on overweight devices
that need to be there for dispersion
"""
# pick a random starting point on the other side of the ring
quarter_turn = (self.parts // 4)
random_half = random.randint(0, self.parts // 2)
start = (self._last_part_gather_start + quarter_turn +
random_half) % self.parts
self.logger.debug('Gather start is %(start)s '
'(Last start was %(last_start)s)',
{'start': start,
'last_start': self._last_part_gather_start})
self._last_part_gather_start = start
if disperse_first:
self._gather_parts_for_balance_can_disperse(
assign_parts, start, replica_plan)
self._gather_parts_for_balance_forced(assign_parts, start)
def _gather_parts_for_balance_forced(self, assign_parts, start, **kwargs):
"""
Update the map of partition => [replicas] to be reassigned from
overweight drives without restriction, parts gathered from this method
may be placed back onto devices that are no better (or worse) than the
device from which they are gathered.
This method allows devices to flop around enough to unlock replicas
that would have otherwise potentially been locked because of
dispersion - it should be used as a last resort.
:param assign_parts: the map of partition => [replica] to update
:param start: offset into self.parts to begin search
"""
for offset in range(self.parts):
part = (start + offset) % self.parts
if (not self._can_part_move(part)):
continue
overweight_dev_replica = []
for replica in self._replicas_for_part(part):
dev_id = self._replica2part2dev[replica][part]
if dev_id == NONE_DEV:
continue
dev = self.devs[dev_id]
if dev['parts_wanted'] < 0:
overweight_dev_replica.append((dev, replica))
if not overweight_dev_replica:
continue
overweight_dev_replica.sort(
key=lambda dr: dr[0]['parts_wanted'])
dev, replica = overweight_dev_replica[0]
            # this is the most overweight device holding a replica of this
            # part; we don't know where it's going to end up - but we'll
            # pick it up and hope for the best.
dev['parts_wanted'] += 1
dev['parts'] -= 1
assign_parts[part].append(replica)
self.logger.debug(
"Gathered %d/%d from dev %s [weight forced]",
part, replica, pretty_dev(dev))
self._replica2part2dev[replica][part] = NONE_DEV
self._set_part_moved(part)
def _reassign_parts(self, reassign_parts, replica_plan):
"""
For an existing ring data set, partitions are reassigned similar to
the initial assignment.
The devices are ordered by how many partitions they still want and
kept in that order throughout the process.
The gathered partitions are iterated through, assigning them to
devices according to the "most wanted" while keeping the replicas as
"far apart" as possible.
Two different regions are considered the farthest-apart things,
followed by zones, then different ip within a zone; the
least-far-apart things are different devices with the same ip in the
same zone.
:param reassign_parts: An iterable of (part, replicas_to_replace)
pairs. replicas_to_replace is an iterable of the
replica (an int) to replace for that partition.
replicas_to_replace may be shared for multiple
partitions, so be sure you do not modify it.
"""
parts_available_in_tier = defaultdict(int)
for dev in self._iter_devs():
dev['sort_key'] = self._sort_key_for(dev)
# Note: this represents how many partitions may be assigned to a
# given tier (region/zone/server/disk). It does not take into
# account how many partitions a given tier wants to shed.
#
# If we did not do this, we could have a zone where, at some
# point during an assignment, number-of-parts-to-gain equals
# number-of-parts-to-shed. At that point, no further placement
# into that zone would occur since its parts_available_in_tier
# would be 0. This would happen any time a zone had any device
# with partitions to shed, which is any time a device is being
# removed, which is a pretty frequent operation.
wanted = max(dev['parts_wanted'], 0)
for tier in dev['tiers']:
parts_available_in_tier[tier] += wanted
available_devs = \
sorted((d for d in self._iter_devs() if d['weight']),
key=lambda x: x['sort_key'])
tier2devs = defaultdict(list)
tier2sort_key = defaultdict(tuple)
tier2dev_sort_key = defaultdict(list)
max_tier_depth = 0
for dev in available_devs:
for tier in dev['tiers']:
tier2devs[tier].append(dev) # <-- starts out sorted!
tier2dev_sort_key[tier].append(dev['sort_key'])
tier2sort_key[tier] = dev['sort_key']
if len(tier) > max_tier_depth:
max_tier_depth = len(tier)
tier2children_sets = build_tier_tree(available_devs)
tier2children = defaultdict(list)
tier2children_sort_key = {}
tiers_list = [()]
depth = 1
while depth <= max_tier_depth:
new_tiers_list = []
for tier in tiers_list:
child_tiers = list(tier2children_sets[tier])
child_tiers.sort(key=tier2sort_key.__getitem__)
tier2children[tier] = child_tiers
tier2children_sort_key[tier] = map(
tier2sort_key.__getitem__, child_tiers)
new_tiers_list.extend(child_tiers)
tiers_list = new_tiers_list
depth += 1
for part, replace_replicas in reassign_parts:
# always update part_moves for min_part_hours
self._last_part_moves[part] = 0
            # count up where these replicas are
replicas_at_tier = defaultdict(int)
for dev in self._devs_for_part(part):
for tier in dev['tiers']:
replicas_at_tier[tier] += 1
for replica in replace_replicas:
# Find a new home for this replica
tier = ()
# This used to be a cute, recursive function, but it's been
# unrolled for performance.
depth = 1
while depth <= max_tier_depth:
# Choose the roomiest tier among those that don't
# already have their max replicas assigned according
# to the replica_plan.
candidates = [t for t in tier2children[tier] if
replicas_at_tier[t] <
replica_plan[t]['max']]
if not candidates:
raise Exception('no home for %s/%s %s' % (
part, replica, {t: (
replicas_at_tier[t],
replica_plan[t]['max'],
) for t in tier2children[tier]}))
tier = max(candidates, key=lambda t:
parts_available_in_tier[t])
depth += 1
dev = tier2devs[tier][-1]
dev['parts_wanted'] -= 1
dev['parts'] += 1
for tier in dev['tiers']:
parts_available_in_tier[tier] -= 1
replicas_at_tier[tier] += 1
self._replica2part2dev[replica][part] = dev['id']
self.logger.debug(
"Placed %d/%d onto dev %s", part, replica, pretty_dev(dev))
# Just to save memory and keep from accidental reuse.
for dev in self._iter_devs():
del dev['sort_key']
@staticmethod
def _sort_key_for(dev):
return (dev['parts_wanted'], random.randint(0, 0xFFFF), dev['id'])
def _build_max_replicas_by_tier(self, bound=math.ceil):
"""
Returns a defaultdict of (tier: replica_count) for all tiers in the
ring excluding zero weight devices.
There will always be a () entry as the root of the structure, whose
replica_count will equal the ring's replica_count.
Then there will be (region,) entries for each region, indicating the
maximum number of replicas the region might have for any given
partition.
        Next there will be (region, zone) entries for each zone, indicating
        the maximum number of replicas in a given region and zone. Anything
        greater than 1 indicates a partition at slightly elevated risk: if
        that zone were to fail, multiple replicas of that partition would be
        unreachable.
        Next there will be (region, zone, ip_port) entries for each node,
        indicating the maximum number of replicas stored on a node in a given
        region and zone. Anything greater than 1 indicates a partition at
        elevated risk: if that ip_port were to fail, multiple replicas of
        that partition would be unreachable.
Last there will be (region, zone, ip_port, device) entries for each
device, indicating the maximum number of replicas the device shares
with other devices on the same node for any given partition.
Anything greater than 1 indicates a partition at serious risk, as the
data on that partition will not be stored distinctly at the ring's
replica_count.
Example return dict for the common SAIO setup::
{(): 3.0,
(1,): 3.0,
(1, 1): 1.0,
(1, 1, '127.0.0.1:6210'): 1.0,
(1, 1, '127.0.0.1:6210', 0): 1.0,
(1, 2): 1.0,
(1, 2, '127.0.0.1:6220'): 1.0,
(1, 2, '127.0.0.1:6220', 1): 1.0,
(1, 3): 1.0,
(1, 3, '127.0.0.1:6230'): 1.0,
(1, 3, '127.0.0.1:6230', 2): 1.0,
(1, 4): 1.0,
(1, 4, '127.0.0.1:6240'): 1.0,
(1, 4, '127.0.0.1:6240', 3): 1.0}
"""
# Used by walk_tree to know what entries to create for each recursive
# call.
tier2children = self._build_tier2children()
def walk_tree(tier, replica_count):
if len(tier) == 4:
# special case for device, it's not recursive
replica_count = min(1, replica_count)
mr = {tier: replica_count}
if tier in tier2children:
subtiers = tier2children[tier]
for subtier in subtiers:
submax = bound(float(replica_count) / len(subtiers))
mr.update(walk_tree(subtier, submax))
return mr
mr = defaultdict(float)
mr.update(walk_tree((), self.replicas))
return mr
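    # A rough sketch of walk_tree with made-up numbers: for a ring with
    # 3 replicas and two populated regions, the root () gets 3 and each
    # region gets ceil(3 / 2) == 2, and the same split is then repeated
    # for the zones, nodes and devices beneath each region.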
def _build_weighted_replicas_by_tier(self):
"""
Returns a dict mapping <tier> => replicanths for all tiers in
the ring based on their weights.
"""
weight_of_one_part = self.weight_of_one_part()
# assign each device some replicanths by weight (can't be > 1)
weighted_replicas_for_dev = {}
devices_with_room = []
for dev in self._iter_devs():
if not dev['weight']:
continue
weighted_replicas = (
dev['weight'] * weight_of_one_part / self.parts)
if weighted_replicas < 1:
devices_with_room.append(dev['id'])
else:
weighted_replicas = 1
weighted_replicas_for_dev[dev['id']] = weighted_replicas
while True:
remaining = self.replicas - sum(weighted_replicas_for_dev.values())
if remaining < 1e-10:
break
devices_with_room = [d for d in devices_with_room if
weighted_replicas_for_dev[d] < 1]
rel_weight = remaining / sum(
weighted_replicas_for_dev[d] for d in devices_with_room)
for d in devices_with_room:
weighted_replicas_for_dev[d] = min(
1, weighted_replicas_for_dev[d] * (rel_weight + 1))
weighted_replicas_by_tier = defaultdict(float)
for dev in self._iter_devs():
if not dev['weight']:
continue
assigned_replicanths = weighted_replicas_for_dev[dev['id']]
dev_tier = (dev['region'], dev['zone'], dev['ip'], dev['id'])
for i in range(len(dev_tier) + 1):
tier = dev_tier[:i]
weighted_replicas_by_tier[tier] += assigned_replicanths
# belts & suspenders/paranoia - at every level, the sum of
# weighted_replicas should be very close to the total number of
# replicas for the ring
validate_replicas_by_tier(self.replicas, weighted_replicas_by_tier)
return weighted_replicas_by_tier
def _build_wanted_replicas_by_tier(self):
"""
Returns a defaultdict of (tier: replicanths) for all tiers in the ring
based on unique-as-possible (full dispersion) with respect to their
weights and device counts.
N.B. _build_max_replicas_by_tier calculates the upper bound on the
replicanths each tier may hold irrespective of the weights of the
tier; this method will calculate the minimum replicanth <=
max_replicas[tier] that will still solve dispersion. However, it is
not guaranteed to return a fully dispersed solution if failure domains
are over-weighted for their device count.
"""
weighted_replicas = self._build_weighted_replicas_by_tier()
dispersed_replicas = {
t: {
'min': math.floor(r),
'max': math.ceil(r),
} for (t, r) in
self._build_max_replicas_by_tier(bound=float).items()
}
# watch out for device limited tiers
num_devices = defaultdict(int)
for d in self._iter_devs():
if d['weight'] <= 0:
continue
for t in (d.get('tiers') or tiers_for_dev(d)):
num_devices[t] += 1
num_devices[()] += 1
tier2children = self._build_tier2children()
wanted_replicas = defaultdict(float)
def place_replicas(tier, replicanths):
if replicanths > num_devices[tier]:
raise exceptions.RingValidationError(
'More replicanths (%s) than devices (%s) '
'in tier (%s)' % (replicanths, num_devices[tier], tier))
wanted_replicas[tier] = replicanths
sub_tiers = sorted(tier2children[tier])
if not sub_tiers:
return
to_place = defaultdict(float)
remaining = replicanths
tiers_to_spread = sub_tiers
device_limited = False
while True:
rel_weight = remaining / sum(weighted_replicas[t]
for t in tiers_to_spread)
for t in tiers_to_spread:
replicas = to_place[t] + (
weighted_replicas[t] * rel_weight)
if replicas < dispersed_replicas[t]['min']:
replicas = dispersed_replicas[t]['min']
elif (replicas > dispersed_replicas[t]['max'] and
not device_limited):
replicas = dispersed_replicas[t]['max']
if replicas > num_devices[t]:
replicas = num_devices[t]
to_place[t] = replicas
remaining = replicanths - sum(to_place.values())
if remaining < -1e-10:
tiers_to_spread = [
t for t in sub_tiers
if to_place[t] > dispersed_replicas[t]['min']
]
elif remaining > 1e-10:
tiers_to_spread = [
t for t in sub_tiers
if (num_devices[t] > to_place[t] <
dispersed_replicas[t]['max'])
]
if not tiers_to_spread:
device_limited = True
tiers_to_spread = [
t for t in sub_tiers
if to_place[t] < num_devices[t]
]
else:
# remaining is "empty"
break
for t in sub_tiers:
self.logger.debug('Planning %s on %s',
to_place[t], t)
place_replicas(t, to_place[t])
# place all replicas in the cluster tier
place_replicas((), self.replicas)
# belts & suspenders/paranoia - at every level, the sum of
# wanted_replicas should be very close to the total number of
# replicas for the ring
validate_replicas_by_tier(self.replicas, wanted_replicas)
return wanted_replicas
def _build_target_replicas_by_tier(self):
"""
Build a map of <tier> => <target_replicas> accounting for device
weights, unique-as-possible dispersion and overload.
<tier> - a tuple, describing each tier in the ring topology
<target_replicas> - a float, the target replicanths at the tier
"""
weighted_replicas = self._build_weighted_replicas_by_tier()
wanted_replicas = self._build_wanted_replicas_by_tier()
max_overload = self.get_required_overload(weighted=weighted_replicas,
wanted=wanted_replicas)
if max_overload <= 0.0:
return wanted_replicas
else:
overload = min(self.overload, max_overload)
self.logger.debug("Using effective overload of %f", overload)
target_replicas = defaultdict(float)
for tier, weighted in weighted_replicas.items():
m = (wanted_replicas[tier] - weighted) / max_overload
target_replicas[tier] = m * overload + weighted
# belts & suspenders/paranoia - at every level, the sum of
# target_replicas should be very close to the total number
# of replicas for the ring
validate_replicas_by_tier(self.replicas, target_replicas)
return target_replicas
def _build_replica_plan(self):
"""
Wraps return value of _build_target_replicas_by_tier to include
pre-calculated min and max values for each tier.
:returns: a dict, mapping <tier> => <replica_plan>, where
<replica_plan> is itself a dict
        <replica_plan> includes at least the following keys:
min - the minimum number of replicas at the tier
target - the target replicanths at the tier
max - the maximum number of replicas at the tier
"""
# replica part-y planner!
target_replicas = self._build_target_replicas_by_tier()
replica_plan = defaultdict(
lambda: {'min': 0, 'target': 0, 'max': 0})
replica_plan.update({
t: {
'min': math.floor(r + 1e-10),
'target': r,
'max': math.ceil(r - 1e-10),
} for (t, r) in
target_replicas.items()
})
return replica_plan
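    # A minimal sketch of the epsilon handling above, with made-up numbers:
    # a tier with a target of 1.5 replicanths gets min == floor(1.5 + 1e-10)
    # == 1 and max == ceil(1.5 - 1e-10) == 2, while an integral target of
    # 2.0 collapses to min == max == 2; the 1e-10 nudges keep floating point
    # error from pushing floor/ceil off by one.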
def _devs_for_part(self, part):
"""
Returns a list of devices for a specified partition.
Deliberately includes duplicates.
"""
if self._replica2part2dev is None:
return []
devs = []
for part2dev in self._replica2part2dev:
if part >= len(part2dev):
continue
dev_id = part2dev[part]
if dev_id == NONE_DEV:
continue
devs.append(self.devs[dev_id])
return devs
def _replicas_for_part(self, part):
"""
Returns a list of replicas for a specified partition.
These can be used as indices into self._replica2part2dev
without worrying about IndexErrors.
"""
return [replica for replica, part2dev
in enumerate(self._replica2part2dev)
if part < len(part2dev)]
def _each_part_replica(self):
"""
Generator yielding every (partition, replica) pair in the ring.
"""
for replica, part2dev in enumerate(self._replica2part2dev):
for part in range(len(part2dev)):
yield (part, replica)
@classmethod
def load(cls, builder_file, open=open, **kwargs):
"""
        Obtain a RingBuilder instance from the provided builder file
:param builder_file: path to builder file to load
:return: RingBuilder instance
"""
try:
fp = open(builder_file, 'rb')
except IOError as e:
if e.errno == errno.ENOENT:
raise exceptions.FileNotFoundError(
'Ring Builder file does not exist: %s' % builder_file)
elif e.errno in [errno.EPERM, errno.EACCES]:
raise exceptions.PermissionError(
'Ring Builder file cannot be accessed: %s' % builder_file)
else:
raise
else:
with fp:
try:
builder = pickle.load(fp)
except Exception:
# raise error during unpickling as UnPicklingError
raise exceptions.UnPicklingError(
'Ring Builder file is invalid: %s' % builder_file)
if not hasattr(builder, 'devs'):
builder_dict = builder
builder = cls(1, 1, 1, **kwargs)
builder.copy_from(builder_dict)
if not hasattr(builder, '_id'):
builder._id = None
for dev in builder.devs:
# really old rings didn't have meta keys
if dev and 'meta' not in dev:
dev['meta'] = ''
            # NOTE(akscram): An old ring builder file doesn't contain
            #                replication parameters.
if dev:
dev.setdefault('replication_ip', dev['ip'])
dev.setdefault('replication_port', dev['port'])
return builder
def save(self, builder_file):
"""Serialize this RingBuilder instance to disk.
:param builder_file: path to builder file to save
"""
        # We want to be sure the builder ids are persistent, so this is the
# only place where the id is assigned. Newly created instances of this
# class, or instances loaded from legacy builder files that have no
# persisted id, must be saved in order for an id to be assigned.
id_persisted = True
if self._id is None:
id_persisted = False
self._id = uuid.uuid4().hex
try:
with open(builder_file, 'wb') as f:
pickle.dump(self.to_dict(), f, protocol=2)
except Exception:
if not id_persisted:
self._id = None
raise
def search_devs(self, search_values):
"""Search devices by parameters.
:param search_values: a dictionary with search values to filter
devices, supported parameters are id,
region, zone, ip, port, replication_ip,
replication_port, device, weight, meta
:returns: list of device dicts
"""
matched_devs = []
for dev in self.devs:
if not dev:
continue
matched = True
for key in ('id', 'region', 'zone', 'ip', 'port', 'replication_ip',
'replication_port', 'device', 'weight', 'meta'):
if key in search_values:
value = search_values.get(key)
if value is not None:
if key == 'meta':
if value not in dev.get(key):
matched = False
elif key == 'ip' or key == 'replication_ip':
cdev = ''
try:
cdev = validate_and_normalize_address(
dev.get(key, ''))
except ValueError:
pass
if cdev != value:
matched = False
elif dev.get(key) != value:
matched = False
if matched:
matched_devs.append(dev)
return matched_devs
def prepare_increase_partition_power(self):
"""
Prepares a ring for partition power increase.
This makes it possible to compute the future location of any object
based on the next partition power.
In this phase object servers should create hard links when finalizing a
write to the new location as well. A relinker will be run after
restarting object-servers, creating hard links to all existing objects
in their future location.
:returns: False if next_part_power was not set, otherwise True.
"""
if self.next_part_power:
return False
self.next_part_power = self.part_power + 1
self.version += 1
return True
def increase_partition_power(self):
"""
Increases ring partition power by one.
Devices will be assigned to partitions like this:
OLD: 0, 3, 7, 5, 2, 1, ...
NEW: 0, 0, 3, 3, 7, 7, 5, 5, 2, 2, 1, 1, ...
:returns: False if next_part_power was not set or is equal to current
part_power, None if something went wrong, otherwise True.
"""
if not self.next_part_power:
return False
if self.next_part_power != (self.part_power + 1):
return False
new_replica2part2dev = []
for replica in self._replica2part2dev:
new_replica = array('H')
for device in replica:
new_replica.append(device)
new_replica.append(device) # append device a second time
new_replica2part2dev.append(new_replica)
self._replica2part2dev = new_replica2part2dev
for device in self._iter_devs():
device['parts'] *= 2
# We need to update the time when a partition has been moved the last
# time. Since this is an array of all partitions, we need to double it
# too
new_last_part_moves = []
for partition in self._last_part_moves:
new_last_part_moves.append(partition)
new_last_part_moves.append(partition)
self._last_part_moves = new_last_part_moves
self.part_power = self.next_part_power
self.parts *= 2
self.version += 1
return True
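    # A rough sketch of the doubling above, with made-up numbers: going from
    # part_power 10 to 11 turns 1024 partitions into 2048, and the data of
    # old partition P is referenced by new partitions 2*P and 2*P + 1, which
    # is why every device assignment is appended twice.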
def cancel_increase_partition_power(self):
"""
        Cancels a ring partition power increase.
This sets the next_part_power to the current part_power. Object
replicators will still skip replication, and a cleanup is still
required. Finally, a finish_increase_partition_power needs to be run.
:returns: False if next_part_power was not set or is equal to current
part_power, otherwise True.
"""
if not self.next_part_power:
return False
if self.next_part_power != (self.part_power + 1):
return False
self.next_part_power = self.part_power
self.version += 1
return True
def finish_increase_partition_power(self):
"""Finish the partition power increase.
The hard links from the old object locations should be removed by now.
"""
if self.next_part_power and self.next_part_power == self.part_power:
self.next_part_power = None
self.version += 1
return True
return False
| swift-master | swift/common/ring/builder.py |
# Copyright (c) 2010-2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from collections import defaultdict
import optparse
import re
import socket
from swift.common import exceptions
from swift.common.utils import expand_ipv6, is_valid_ip, is_valid_ipv4, \
is_valid_ipv6
def tiers_for_dev(dev):
"""
Returns a tuple of tiers for a given device in ascending order by
length.
:returns: tuple of tiers
"""
t1 = dev['region']
t2 = dev['zone']
t3 = dev['ip']
t4 = dev['id']
return ((t1,),
(t1, t2),
(t1, t2, t3),
(t1, t2, t3, t4))
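# Illustrative example (device values are made up):
#
#   >>> tiers_for_dev({'region': 1, 'zone': 2, 'ip': '10.0.0.1', 'id': 3})
#   ((1,), (1, 2), (1, 2, '10.0.0.1'), (1, 2, '10.0.0.1', 3))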
def build_tier_tree(devices):
"""
Construct the tier tree from the zone layout.
The tier tree is a dictionary that maps tiers to their child tiers.
A synthetic root node of () is generated so that there's one tree,
not a forest.
Example:
region 1 -+---- zone 1 -+---- 192.168.101.1 -+---- device id 0
| | |
| | +---- device id 1
| | |
| | +---- device id 2
| |
| +---- 192.168.101.2 -+---- device id 3
| |
| +---- device id 4
| |
| +---- device id 5
|
+---- zone 2 -+---- 192.168.102.1 -+---- device id 6
| |
| +---- device id 7
| |
| +---- device id 8
|
+---- 192.168.102.2 -+---- device id 9
|
+---- device id 10
region 2 -+---- zone 1 -+---- 192.168.201.1 -+---- device id 12
| |
| +---- device id 13
| |
| +---- device id 14
|
+---- 192.168.201.2 -+---- device id 15
|
+---- device id 16
|
+---- device id 17
The tier tree would look like:
{
(): [(1,), (2,)],
(1,): [(1, 1), (1, 2)],
(2,): [(2, 1)],
(1, 1): [(1, 1, 192.168.101.1),
(1, 1, 192.168.101.2)],
(1, 2): [(1, 2, 192.168.102.1),
(1, 2, 192.168.102.2)],
(2, 1): [(2, 1, 192.168.201.1),
(2, 1, 192.168.201.2)],
(1, 1, 192.168.101.1): [(1, 1, 192.168.101.1, 0),
(1, 1, 192.168.101.1, 1),
(1, 1, 192.168.101.1, 2)],
(1, 1, 192.168.101.2): [(1, 1, 192.168.101.2, 3),
(1, 1, 192.168.101.2, 4),
(1, 1, 192.168.101.2, 5)],
(1, 2, 192.168.102.1): [(1, 2, 192.168.102.1, 6),
(1, 2, 192.168.102.1, 7),
(1, 2, 192.168.102.1, 8)],
(1, 2, 192.168.102.2): [(1, 2, 192.168.102.2, 9),
(1, 2, 192.168.102.2, 10)],
(2, 1, 192.168.201.1): [(2, 1, 192.168.201.1, 12),
(2, 1, 192.168.201.1, 13),
(2, 1, 192.168.201.1, 14)],
(2, 1, 192.168.201.2): [(2, 1, 192.168.201.2, 15),
(2, 1, 192.168.201.2, 16),
(2, 1, 192.168.201.2, 17)],
}
    :param devices: device dicts from which to generate the tree
:returns: tier tree
"""
tier2children = defaultdict(set)
for dev in devices:
for tier in tiers_for_dev(dev):
if len(tier) > 1:
tier2children[tier[0:-1]].add(tier)
else:
tier2children[()].add(tier)
return tier2children
def validate_and_normalize_ip(ip):
"""
    Return the normalized ip if the ip is a valid ip.
    Otherwise raise a ValueError. The input is lower-cased
    first; IPv6 addresses are converted to lowercase and
    fully expanded.
"""
# first convert to lower case
new_ip = ip.lower()
if is_valid_ipv4(new_ip):
return new_ip
elif is_valid_ipv6(new_ip):
return expand_ipv6(new_ip)
else:
raise ValueError('Invalid ip %s' % ip)
def validate_and_normalize_address(address):
"""
Return normalized address if the address is a valid ip or hostname.
    Otherwise raise a ValueError. The hostname is
normalized to all lower case. IPv6-addresses are converted to
lowercase and fully expanded.
    RFC1123 2.1 Host Names and Numbers
DISCUSSION
This last requirement is not intended to specify the complete
syntactic form for entering a dotted-decimal host number;
that is considered to be a user-interface issue. For
example, a dotted-decimal number must be enclosed within
"[ ]" brackets for SMTP mail (see Section 5.2.17). This
notation could be made universal within a host system,
simplifying the syntactic checking for a dotted-decimal
number.
If a dotted-decimal number can be entered without such
identifying delimiters, then a full syntactic check must be
made, because a segment of a host domain name is now allowed
to begin with a digit and could legally be entirely numeric
(see Section 6.1.2.4). However, a valid host name can never
have the dotted-decimal form #.#.#.#, since at least the
highest-level component label will be alphabetic.
"""
new_address = address.lstrip('[').rstrip(']')
if address.startswith('[') and address.endswith(']'):
return validate_and_normalize_ip(new_address)
new_address = new_address.lower()
if is_valid_ipv4(new_address):
return new_address
elif is_valid_ipv6(new_address):
return expand_ipv6(new_address)
elif is_valid_hostname(new_address):
return new_address
else:
raise ValueError('Invalid address %s' % address)
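# Illustrative example (the address is made up): a hostname is simply
# lower-cased, while a bracketed IPv6 literal such as '[::1]' is unwrapped
# and handed to validate_and_normalize_ip.
#
#   >>> validate_and_normalize_address('Storage-Node1.Example.COM')
#   'storage-node1.example.com'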
def is_valid_hostname(hostname):
"""
Return True if the provided hostname is a valid hostname
"""
if len(hostname) < 1 or len(hostname) > 255:
return False
if hostname.endswith('.'):
# strip exactly one dot from the right, if present
hostname = hostname[:-1]
allowed = re.compile(r"(?!-)[A-Z\d-]{1,63}(?<!-)$", re.IGNORECASE)
return all(allowed.match(x) for x in hostname.split("."))
def is_local_device(my_ips, my_port, dev_ip, dev_port):
"""
Return True if the provided dev_ip and dev_port are among the IP
addresses specified in my_ips and my_port respectively.
To support accurate locality determination in the server-per-port
deployment, when my_port is None, only IP addresses are used for
determining locality (dev_port is ignored).
If dev_ip is a hostname then it is first translated to an IP
address before checking it against my_ips.
"""
candidate_ips = []
if not is_valid_ip(dev_ip) and is_valid_hostname(dev_ip):
try:
# get the ip for this host; use getaddrinfo so that
# it works for both ipv4 and ipv6 addresses
addrinfo = socket.getaddrinfo(dev_ip, dev_port)
for addr in addrinfo:
family = addr[0]
dev_ip = addr[4][0] # get the ip-address
if family == socket.AF_INET6:
dev_ip = expand_ipv6(dev_ip)
candidate_ips.append(dev_ip)
except socket.gaierror:
return False
else:
if is_valid_ipv6(dev_ip):
dev_ip = expand_ipv6(dev_ip)
candidate_ips = [dev_ip]
for dev_ip in candidate_ips:
if dev_ip in my_ips and (my_port is None or dev_port == my_port):
return True
return False
def parse_search_value(search_value):
"""The <search-value> can be of the form::
d<device_id>r<region>z<zone>-<ip>:<port>R<r_ip>:<r_port>/
<device_name>_<meta>
Where <r_ip> and <r_port> are replication ip and port.
Any part is optional, but you must include at least one part.
Examples::
d74 Matches the device id 74
r4 Matches devices in region 4
z1 Matches devices in zone 1
z1-1.2.3.4 Matches devices in zone 1 with the ip 1.2.3.4
1.2.3.4 Matches devices in any zone with the ip 1.2.3.4
z1:5678 Matches devices in zone 1 using port 5678
:5678 Matches devices that use port 5678
R5.6.7.8 Matches devices that use replication ip 5.6.7.8
R:5678 Matches devices that use replication port 5678
1.2.3.4R5.6.7.8 Matches devices that use ip 1.2.3.4 and replication ip
5.6.7.8
/sdb1 Matches devices with the device name sdb1
_shiny Matches devices with shiny in the meta data
_"snet: 5.6.7.8" Matches devices with snet: 5.6.7.8 in the meta data
[::1] Matches devices in any zone with the ip ::1
z1-[::1]:5678 Matches devices in zone 1 with ip ::1 and port 5678
Most specific example::
d74r4z1-1.2.3.4:5678/sdb1_"snet: 5.6.7.8"
Nerd explanation:
All items require their single character prefix except the ip, in which
case the - is optional unless the device id or zone is also included.
"""
orig_search_value = search_value
match = {}
if search_value.startswith('d'):
i = 1
while i < len(search_value) and search_value[i].isdigit():
i += 1
match['id'] = int(search_value[1:i])
search_value = search_value[i:]
if search_value.startswith('r'):
i = 1
while i < len(search_value) and search_value[i].isdigit():
i += 1
match['region'] = int(search_value[1:i])
search_value = search_value[i:]
if search_value.startswith('z'):
i = 1
while i < len(search_value) and search_value[i].isdigit():
i += 1
match['zone'] = int(search_value[1:i])
search_value = search_value[i:]
if search_value.startswith('-'):
search_value = search_value[1:]
if search_value and search_value[0].isdigit():
i = 1
while i < len(search_value) and search_value[i] in '0123456789.':
i += 1
match['ip'] = search_value[:i]
search_value = search_value[i:]
elif search_value and search_value.startswith('['):
i = 1
while i < len(search_value) and search_value[i] != ']':
i += 1
i += 1
match['ip'] = search_value[:i].lstrip('[').rstrip(']')
search_value = search_value[i:]
if 'ip' in match:
# ipv6 addresses are converted to all lowercase
# and use the fully expanded representation
match['ip'] = validate_and_normalize_ip(match['ip'])
if search_value.startswith(':'):
i = 1
while i < len(search_value) and search_value[i].isdigit():
i += 1
match['port'] = int(search_value[1:i])
search_value = search_value[i:]
# replication parameters
if search_value.startswith('R'):
search_value = search_value[1:]
if search_value and search_value[0].isdigit():
i = 1
while (i < len(search_value) and
search_value[i] in '0123456789.'):
i += 1
match['replication_ip'] = search_value[:i]
search_value = search_value[i:]
elif search_value and search_value.startswith('['):
i = 1
while i < len(search_value) and search_value[i] != ']':
i += 1
i += 1
match['replication_ip'] = search_value[:i].lstrip('[').rstrip(']')
search_value = search_value[i:]
if 'replication_ip' in match:
# ipv6 addresses are converted to all lowercase
# and use the fully expanded representation
match['replication_ip'] = \
validate_and_normalize_ip(match['replication_ip'])
if search_value.startswith(':'):
i = 1
while i < len(search_value) and search_value[i].isdigit():
i += 1
match['replication_port'] = int(search_value[1:i])
search_value = search_value[i:]
if search_value.startswith('/'):
i = 1
while i < len(search_value) and search_value[i] != '_':
i += 1
match['device'] = search_value[1:i]
search_value = search_value[i:]
if search_value.startswith('_'):
match['meta'] = search_value[1:]
search_value = ''
if search_value:
raise ValueError('Invalid <search-value>: %s' %
repr(orig_search_value))
return match
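# Illustrative example (values are made up), combining several of the parts
# documented above:
#
#   >>> parse_search_value('r4z1-1.2.3.4:5678/sdb1')
#   {'region': 4, 'zone': 1, 'ip': '1.2.3.4', 'port': 5678, 'device': 'sdb1'}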
def parse_search_values_from_opts(opts):
"""
Convert optparse style options into a dictionary for searching.
:param opts: optparse style options
:returns: a dictionary with search values to filter devices,
supported parameters are id, region, zone, ip, port,
replication_ip, replication_port, device, weight, meta
"""
search_values = {}
for key in ('id', 'region', 'zone', 'ip', 'port', 'replication_ip',
'replication_port', 'device', 'weight', 'meta'):
value = getattr(opts, key, None)
if value:
if key == 'ip' or key == 'replication_ip':
value = validate_and_normalize_address(value)
search_values[key] = value
return search_values
def parse_change_values_from_opts(opts):
"""
Convert optparse style options into a dictionary for changing.
:param opts: optparse style options
    :returns: a dictionary with change values to apply to devices,
              supported parameters are ip, port, replication_ip,
              replication_port, device, meta
"""
change_values = {}
for key in ('change_ip', 'change_port', 'change_replication_ip',
'change_replication_port', 'change_device', 'change_meta'):
value = getattr(opts, key, None)
if value:
if key == 'change_ip' or key == 'change_replication_ip':
value = validate_and_normalize_address(value)
change_values[key.replace('change_', '')] = value
return change_values
def parse_add_value(add_value):
"""
Convert an add value, like 'r1z2-10.1.2.3:7878/sdf', to a dictionary.
If the string does not start with 'r<N>', then the value of 'region' in
the returned dictionary will be None. Callers should check for this and
set a reasonable default. This is done so callers can emit errors or
warnings if desired.
Similarly, 'replication_ip' and 'replication_port' will be None if not
specified.
:returns: dictionary with keys 'region', 'zone', 'ip', 'port', 'device',
'replication_ip', 'replication_port', 'meta'
:raises ValueError: if add_value is malformed
"""
region = None
rest = add_value
if add_value.startswith('r'):
i = 1
while i < len(add_value) and add_value[i].isdigit():
i += 1
region = int(add_value[1:i])
rest = add_value[i:]
if not rest.startswith('z'):
raise ValueError('Invalid add value: %s' % add_value)
i = 1
while i < len(rest) and rest[i].isdigit():
i += 1
zone = int(rest[1:i])
rest = rest[i:]
if not rest.startswith('-'):
raise ValueError('Invalid add value: %s' % add_value)
ip, port, rest = parse_address(rest[1:])
replication_ip = replication_port = None
if rest.startswith('R'):
replication_ip, replication_port, rest = \
parse_address(rest[1:])
if not rest.startswith('/'):
raise ValueError(
'Invalid add value: %s' % add_value)
i = 1
while i < len(rest) and rest[i] != '_':
i += 1
device_name = rest[1:i]
if not validate_device_name(device_name):
raise ValueError('Invalid device name')
rest = rest[i:]
meta = ''
if rest.startswith('_'):
meta = rest[1:]
return {'region': region, 'zone': zone, 'ip': ip, 'port': port,
'device': device_name, 'replication_ip': replication_ip,
'replication_port': replication_port, 'meta': meta}
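# Illustrative example, reusing the add value from the docstring above; the
# replication fields come back as None and meta as '' when the optional
# parts are omitted:
#
#   >>> parse_add_value('r1z2-10.1.2.3:7878/sdf')
#   {'region': 1, 'zone': 2, 'ip': '10.1.2.3', 'port': 7878, 'device': 'sdf',
#    'replication_ip': None, 'replication_port': None, 'meta': ''}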
def parse_address(rest):
if rest.startswith('['):
# remove first [] for ip
rest = rest.replace('[', '', 1).replace(']', '', 1)
pos = 0
while (pos < len(rest) and
not (rest[pos] == 'R' or rest[pos] == '/')):
pos += 1
address = rest[:pos]
rest = rest[pos:]
port_start = address.rfind(':')
if port_start == -1:
raise ValueError('Invalid port in add value')
ip = address[:port_start]
try:
port = int(address[(port_start + 1):])
except (TypeError, ValueError):
raise ValueError(
'Invalid port %s in add value' % address[port_start:])
# if this is an ipv6 address then we want to convert it
# to all lowercase and use its fully expanded representation
# to make searches easier
ip = validate_and_normalize_ip(ip)
return (ip, port, rest)
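# Illustrative example (values are made up): the address is consumed up to
# the first 'R' or '/', and whatever is left over is handed back to the
# caller.
#
#   >>> parse_address('10.1.2.3:7878/sdf')
#   ('10.1.2.3', 7878, '/sdf')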
def validate_args(argvish):
"""
    Build an OptionParser, evaluate the command line arguments and determine
    whether they use the new command-line format.
"""
opts, args = parse_args(argvish)
# id can be 0 (swift starts generating id from 0),
# also zone, region and weight can be set to zero.
new_cmd_format = opts.id is not None or opts.region is not None or \
opts.zone is not None or opts.ip or opts.port or \
opts.replication_ip or opts.replication_port or \
opts.device or opts.weight is not None or opts.meta
return (new_cmd_format, opts, args)
def parse_args(argvish):
"""
Build OptionParser and evaluate command line arguments.
"""
parser = optparse.OptionParser()
parser.add_option('-u', '--id', type="int",
help="Device ID")
parser.add_option('-r', '--region', type="int",
help="Region")
parser.add_option('-z', '--zone', type="int",
help="Zone")
parser.add_option('-i', '--ip', type="string",
help="IP address")
parser.add_option('-p', '--port', type="int",
help="Port number")
parser.add_option('-j', '--replication-ip', type="string",
help="Replication IP address")
parser.add_option('-q', '--replication-port', type="int",
help="Replication port number")
parser.add_option('-d', '--device', type="string",
help="Device name (e.g. md0, sdb1)")
parser.add_option('-w', '--weight', type="float",
help="Device weight")
parser.add_option('-m', '--meta', type="string", default="",
help="Extra device info (just a string)")
parser.add_option('-I', '--change-ip', type="string",
help="IP address for change")
parser.add_option('-P', '--change-port', type="int",
help="Port number for change")
parser.add_option('-J', '--change-replication-ip', type="string",
help="Replication IP address for change")
parser.add_option('-Q', '--change-replication-port', type="int",
help="Replication port number for change")
parser.add_option('-D', '--change-device', type="string",
help="Device name (e.g. md0, sdb1) for change")
parser.add_option('-M', '--change-meta', type="string", default="",
help="Extra device info (just a string) for change")
parser.add_option('-y', '--yes', default=False, action="store_true",
help="Assume a yes response to all questions")
return parser.parse_args(argvish)
def parse_builder_ring_filename_args(argvish):
first_arg = argvish[1]
if first_arg.endswith('.ring.gz'):
ring_file = first_arg
builder_file = first_arg[:-len('.ring.gz')] + '.builder'
else:
builder_file = first_arg
if not builder_file.endswith('.builder'):
ring_file = first_arg
else:
ring_file = builder_file[:-len('.builder')]
ring_file += '.ring.gz'
return builder_file, ring_file
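# Illustrative examples (file names are made up): either spelling of the
# first argument resolves to the same builder/ring pair.
#
#   >>> parse_builder_ring_filename_args(['swift-ring-builder', 'object.builder'])
#   ('object.builder', 'object.ring.gz')
#   >>> parse_builder_ring_filename_args(['swift-ring-builder', 'object.ring.gz'])
#   ('object.builder', 'object.ring.gz')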
def build_dev_from_opts(opts):
"""
    Convert optparse style options into a device dictionary.
"""
for attribute, shortopt, longopt in (['region', '-r', '--region'],
['zone', '-z', '--zone'],
['ip', '-i', '--ip'],
['port', '-p', '--port'],
['device', '-d', '--device'],
['weight', '-w', '--weight']):
if getattr(opts, attribute, None) is None:
raise ValueError('Required argument %s/%s not specified.' %
(shortopt, longopt))
ip = validate_and_normalize_address(opts.ip)
replication_ip = validate_and_normalize_address(
(opts.replication_ip or opts.ip))
replication_port = opts.replication_port or opts.port
if not validate_device_name(opts.device):
raise ValueError('Invalid device name')
return {'region': opts.region, 'zone': opts.zone, 'ip': ip,
'port': opts.port, 'device': opts.device, 'meta': opts.meta,
'replication_ip': replication_ip,
'replication_port': replication_port, 'weight': opts.weight}
def dispersion_report(builder, search_filter=None,
verbose=False, recalculate=False):
if recalculate or not builder._dispersion_graph:
builder._build_dispersion_graph()
max_allowed_replicas = builder._build_max_replicas_by_tier()
worst_tier = None
max_dispersion = 0.0
sorted_graph = []
for tier, replica_counts in sorted(builder._dispersion_graph.items()):
tier_name = get_tier_name(tier, builder)
if search_filter and not re.match(search_filter, tier_name):
continue
max_replicas = int(max_allowed_replicas[tier])
at_risk_parts = sum(replica_counts[i] * (i - max_replicas)
for i in range(max_replicas + 1,
len(replica_counts)))
placed_parts = sum(replica_counts[i] * i for i in range(
1, len(replica_counts)))
tier_dispersion = 100.0 * at_risk_parts / placed_parts
if tier_dispersion > max_dispersion:
max_dispersion = tier_dispersion
worst_tier = tier_name
if not verbose:
continue
tier_report = {
'max_replicas': max_replicas,
'placed_parts': placed_parts,
'dispersion': tier_dispersion,
'replicas': replica_counts,
}
sorted_graph.append((tier_name, tier_report))
return {
'max_dispersion': max_dispersion,
'worst_tier': worst_tier,
'graph': sorted_graph,
}
def validate_replicas_by_tier(replicas, replicas_by_tier):
"""
Validate the sum of the replicas at each tier.
    The sum of the replicas at each tier should be very close to the total
    number of replicas for the ring.
    :param replicas: float, the total number of replicas for the ring
    :param replicas_by_tier: defaultdict, the replicas by tier
"""
tiers = ['cluster', 'regions', 'zones', 'servers', 'devices']
for i, tier_name in enumerate(tiers):
replicas_at_tier = sum(replicas_by_tier[t] for t in
replicas_by_tier if len(t) == i)
if abs(replicas - replicas_at_tier) > 1e-10:
raise exceptions.RingValidationError(
'%s != %s at tier %s' % (
replicas_at_tier, replicas, tier_name))
def format_device(region=None, zone=None, ip=None, device=None, **kwargs):
"""
Convert device dict or tier attributes to a representative string.
:returns: a string, the normalized format of a device tier
"""
return "r%sz%s-%s/%s" % (region, zone, ip, device)
def get_tier_name(tier, builder):
if len(tier) == 1:
return "r%s" % (tier[0], )
if len(tier) == 2:
return "r%sz%s" % (tier[0], tier[1])
if len(tier) == 3:
return "r%sz%s-%s" % (tier[0], tier[1], tier[2])
if len(tier) == 4:
device = builder.devs[tier[3]] or {}
return format_device(tier[0], tier[1], tier[2], device.get(
'device', 'IDd%s' % tier[3]))
def validate_device_name(device_name):
return not (
device_name.startswith(' ') or
device_name.endswith(' ') or
len(device_name) == 0)
def pretty_dev(device):
return format_device(**device)
| swift-master | swift/common/ring/utils.py |
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import array
import contextlib
import six.moves.cPickle as pickle
import json
from collections import defaultdict
from gzip import GzipFile
from os.path import getmtime
import struct
from time import time
import os
from itertools import chain, count
from tempfile import NamedTemporaryFile
import sys
import zlib
import six
from six.moves import range
from swift.common.exceptions import RingLoadError
from swift.common.utils import hash_path, validate_configuration, md5
from swift.common.ring.utils import tiers_for_dev
DEFAULT_RELOAD_TIME = 15
def calc_replica_count(replica2part2dev_id):
if not replica2part2dev_id:
return 0
base = len(replica2part2dev_id) - 1
extra = 1.0 * len(replica2part2dev_id[-1]) / len(replica2part2dev_id[0])
return base + extra
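# Illustrative example (made-up data; plain lists work here because only
# len() is used): three rows of equal length give a replica count of 3.0,
# while a third row holding half as many partitions gives 2 + 0.5.
#
#   >>> calc_replica_count([[0] * 8, [0] * 8, [0] * 8])
#   3.0
#   >>> calc_replica_count([[0] * 8, [0] * 8, [0] * 4])
#   2.5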
def normalize_devices(devs):
    # NOTE(akscram): Replication parameters like replication_ip
    #                and replication_port are required for the
    #                replication process. An old replication
    #                ring doesn't contain these parameters in
    #                its device dicts. Old-style pickled rings
    #                won't have region information.
for dev in devs:
if dev is None:
continue
dev.setdefault('region', 1)
if 'ip' in dev:
dev.setdefault('replication_ip', dev['ip'])
if 'port' in dev:
dev.setdefault('replication_port', dev['port'])
class RingReader(object):
chunk_size = 2 ** 16
def __init__(self, filename):
self.fp = open(filename, 'rb')
self._reset()
def _reset(self):
self._buffer = b''
self.size = 0
self.raw_size = 0
self._md5 = md5(usedforsecurity=False)
self._decomp = zlib.decompressobj(32 + zlib.MAX_WBITS)
@property
def close(self):
return self.fp.close
def seek(self, pos, ref=0):
if (pos, ref) != (0, 0):
raise NotImplementedError
self._reset()
return self.fp.seek(pos, ref)
def _buffer_chunk(self):
chunk = self.fp.read(self.chunk_size)
if not chunk:
return False
self.size += len(chunk)
self._md5.update(chunk)
chunk = self._decomp.decompress(chunk)
self.raw_size += len(chunk)
self._buffer += chunk
return True
def read(self, amount=-1):
if amount < 0:
raise IOError("don't be greedy")
while amount > len(self._buffer):
if not self._buffer_chunk():
break
result, self._buffer = self._buffer[:amount], self._buffer[amount:]
return result
def readline(self):
# apparently pickle needs this?
while b'\n' not in self._buffer:
if not self._buffer_chunk():
break
line, sep, self._buffer = self._buffer.partition(b'\n')
return line + sep
def readinto(self, buffer):
chunk = self.read(len(buffer))
buffer[:len(chunk)] = chunk
return len(chunk)
@property
def md5(self):
return self._md5.hexdigest()
class RingData(object):
"""Partitioned consistent hashing ring data (used for serialization)."""
def __init__(self, replica2part2dev_id, devs, part_shift,
next_part_power=None, version=None):
normalize_devices(devs)
self.devs = devs
self._replica2part2dev_id = replica2part2dev_id
self._part_shift = part_shift
self.next_part_power = next_part_power
self.version = version
self.md5 = self.size = self.raw_size = None
@property
def replica_count(self):
"""Number of replicas (full or partial) used in the ring."""
return calc_replica_count(self._replica2part2dev_id)
@classmethod
def deserialize_v1(cls, gz_file, metadata_only=False):
"""
Deserialize a v1 ring file into a dictionary with `devs`, `part_shift`,
and `replica2part2dev_id` keys.
If the optional kwarg `metadata_only` is True, then the
`replica2part2dev_id` is not loaded and that key in the returned
dictionary just has the value `[]`.
:param file gz_file: An opened file-like object which has already
consumed the 6 bytes of magic and version.
:param bool metadata_only: If True, only load `devs` and `part_shift`
:returns: A dict containing `devs`, `part_shift`, and
`replica2part2dev_id`
"""
json_len, = struct.unpack('!I', gz_file.read(4))
ring_dict = json.loads(gz_file.read(json_len))
ring_dict['replica2part2dev_id'] = []
if metadata_only:
return ring_dict
byteswap = (ring_dict.get('byteorder', sys.byteorder) != sys.byteorder)
partition_count = 1 << (32 - ring_dict['part_shift'])
for x in range(ring_dict['replica_count']):
part2dev = array.array('H', gz_file.read(2 * partition_count))
if byteswap:
part2dev.byteswap()
ring_dict['replica2part2dev_id'].append(part2dev)
return ring_dict
@classmethod
def load(cls, filename, metadata_only=False):
"""
Load ring data from a file.
:param filename: Path to a file serialized by the save() method.
:param bool metadata_only: If True, only load `devs` and `part_shift`.
:returns: A RingData instance containing the loaded data.
"""
with contextlib.closing(RingReader(filename)) as gz_file:
# See if the file is in the new format
magic = gz_file.read(4)
if magic == b'R1NG':
format_version, = struct.unpack('!H', gz_file.read(2))
if format_version == 1:
ring_data = cls.deserialize_v1(
gz_file, metadata_only=metadata_only)
else:
raise Exception('Unknown ring format version %d' %
format_version)
else:
# Assume old-style pickled ring
gz_file.seek(0)
ring_data = pickle.load(gz_file)
if hasattr(ring_data, 'devs'):
# pickled RingData; make sure we've got region/replication info
normalize_devices(ring_data.devs)
else:
ring_data = RingData(ring_data['replica2part2dev_id'],
ring_data['devs'], ring_data['part_shift'],
ring_data.get('next_part_power'),
ring_data.get('version'))
for attr in ('md5', 'size', 'raw_size'):
setattr(ring_data, attr, getattr(gz_file, attr))
return ring_data
def serialize_v1(self, file_obj):
# Write out new-style serialization magic and version:
file_obj.write(struct.pack('!4sH', b'R1NG', 1))
ring = self.to_dict()
# Only include next_part_power if it is set in the
# builder, otherwise just ignore it
_text = {'devs': ring['devs'], 'part_shift': ring['part_shift'],
'replica_count': len(ring['replica2part2dev_id']),
'byteorder': sys.byteorder}
if ring['version'] is not None:
_text['version'] = ring['version']
next_part_power = ring.get('next_part_power')
if next_part_power is not None:
_text['next_part_power'] = next_part_power
json_text = json.dumps(_text, sort_keys=True,
ensure_ascii=True).encode('ascii')
json_len = len(json_text)
file_obj.write(struct.pack('!I', json_len))
file_obj.write(json_text)
for part2dev_id in ring['replica2part2dev_id']:
if six.PY2:
# Can't just use tofile() because a GzipFile apparently
# doesn't count as an 'open file'
file_obj.write(part2dev_id.tostring())
else:
part2dev_id.tofile(file_obj)
def save(self, filename, mtime=1300507380.0):
"""
Serialize this RingData instance to disk.
:param filename: File into which this instance should be serialized.
        :param mtime: time used to override mtime for gzip; defaults to a
                      fixed value, or pass None to embed the current time
"""
# Override the timestamp so that the same ring data creates
# the same bytes on disk. This makes a checksum comparison a
# good way to see if two rings are identical.
tempf = NamedTemporaryFile(dir=".", prefix=filename, delete=False)
gz_file = GzipFile(filename, mode='wb', fileobj=tempf, mtime=mtime)
self.serialize_v1(gz_file)
gz_file.close()
tempf.flush()
os.fsync(tempf.fileno())
tempf.close()
os.chmod(tempf.name, 0o644)
os.rename(tempf.name, filename)
def to_dict(self):
return {'devs': self.devs,
'replica2part2dev_id': self._replica2part2dev_id,
'part_shift': self._part_shift,
'next_part_power': self.next_part_power,
'version': self.version}
class Ring(object):
"""
Partitioned consistent hashing ring.
:param serialized_path: path to serialized RingData instance
:param reload_time: time interval in seconds to check for a ring change
:param ring_name: ring name string (basically specified from policy)
    :param validation_hook: hook point to validate ring configuration at
                            load time
:raises RingLoadError: if the loaded ring data violates its constraint
"""
def __init__(self, serialized_path, reload_time=None, ring_name=None,
validation_hook=lambda ring_data: None):
# can't use the ring unless HASH_PATH_SUFFIX is set
validate_configuration()
if ring_name:
self.serialized_path = os.path.join(serialized_path,
ring_name + '.ring.gz')
else:
self.serialized_path = os.path.join(serialized_path)
self.reload_time = (DEFAULT_RELOAD_TIME if reload_time is None
else reload_time)
self._validation_hook = validation_hook
self._reload(force=True)
def _reload(self, force=False):
self._rtime = time() + self.reload_time
if force or self.has_changed():
ring_data = RingData.load(self.serialized_path)
try:
self._validation_hook(ring_data)
except RingLoadError:
if force:
raise
else:
                    # During a runtime reload on a working server, it's ok to
                    # keep using the old ring data if the new data is invalid.
return
self._mtime = getmtime(self.serialized_path)
self._devs = ring_data.devs
self._replica2part2dev_id = ring_data._replica2part2dev_id
self._part_shift = ring_data._part_shift
self._rebuild_tier_data()
self._update_bookkeeping()
self._next_part_power = ring_data.next_part_power
self._version = ring_data.version
self._md5 = ring_data.md5
self._size = ring_data.size
self._raw_size = ring_data.raw_size
def _update_bookkeeping(self):
# Do this now, when we know the data has changed, rather than
# doing it on every call to get_more_nodes().
#
# Since this is to speed up the finding of handoffs, we only
# consider devices with at least one partition assigned. This
# way, a region, zone, or server with no partitions assigned
# does not count toward our totals, thereby keeping the early
# bailouts in get_more_nodes() working.
dev_ids_with_parts = set()
for part2dev_id in self._replica2part2dev_id:
for dev_id in part2dev_id:
dev_ids_with_parts.add(dev_id)
regions = set()
zones = set()
ips = set()
self._num_devs = 0
self._num_assigned_devs = 0
self._num_weighted_devs = 0
for dev in self._devs:
if dev is None:
continue
self._num_devs += 1
if dev.get('weight', 0) > 0:
self._num_weighted_devs += 1
if dev['id'] in dev_ids_with_parts:
regions.add(dev['region'])
zones.add((dev['region'], dev['zone']))
ips.add((dev['region'], dev['zone'], dev['ip']))
self._num_assigned_devs += 1
self._num_regions = len(regions)
self._num_zones = len(zones)
self._num_ips = len(ips)
@property
def next_part_power(self):
if time() > self._rtime:
self._reload()
return self._next_part_power
@property
def part_power(self):
return 32 - self._part_shift
@property
def version(self):
return self._version
@property
def md5(self):
return self._md5
@property
def size(self):
return self._size
@property
def raw_size(self):
return self._raw_size
def _rebuild_tier_data(self):
self.tier2devs = defaultdict(list)
for dev in self._devs:
if not dev:
continue
for tier in tiers_for_dev(dev):
self.tier2devs[tier].append(dev)
tiers_by_length = defaultdict(list)
for tier in self.tier2devs:
tiers_by_length[len(tier)].append(tier)
self.tiers_by_length = sorted(tiers_by_length.values(),
key=lambda x: len(x[0]))
for tiers in self.tiers_by_length:
tiers.sort()
@property
def replica_count(self):
"""Number of replicas (full or partial) used in the ring."""
return calc_replica_count(self._replica2part2dev_id)
@property
def partition_count(self):
"""Number of partitions in the ring."""
return len(self._replica2part2dev_id[0])
@property
def device_count(self):
"""Number of devices in the ring."""
return self._num_devs
@property
def weighted_device_count(self):
"""Number of devices with weight in the ring."""
return self._num_weighted_devs
@property
def assigned_device_count(self):
"""Number of devices with assignments in the ring."""
return self._num_assigned_devs
@property
def devs(self):
"""devices in the ring"""
if time() > self._rtime:
self._reload()
return self._devs
def has_changed(self):
"""
Check to see if the ring on disk is different than the current one in
memory.
:returns: True if the ring on disk has changed, False otherwise
"""
return getmtime(self.serialized_path) != self._mtime
def _get_part_nodes(self, part):
part_nodes = []
seen_ids = set()
for r2p2d in self._replica2part2dev_id:
if part < len(r2p2d):
dev_id = r2p2d[part]
if dev_id not in seen_ids:
part_nodes.append(self.devs[dev_id])
seen_ids.add(dev_id)
return [dict(node, index=i) for i, node in enumerate(part_nodes)]
def get_part(self, account, container=None, obj=None):
"""
Get the partition for an account/container/object.
:param account: account name
:param container: container name
:param obj: object name
:returns: the partition number
"""
key = hash_path(account, container, obj, raw_digest=True)
if time() > self._rtime:
self._reload()
part = struct.unpack_from('>I', key)[0] >> self._part_shift
return part
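    # A rough sketch of the mapping above, with made-up numbers: for a part
    # power of 10 the part shift is 32 - 10 == 22, so the top 10 bits of the
    # first four bytes of the path hash select one of the 1024 partitions.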
def get_part_nodes(self, part):
"""
Get the nodes that are responsible for the partition. If one
node is responsible for more than one replica of the same
partition, it will only appear in the output once.
:param part: partition to get nodes for
:returns: list of node dicts
See :func:`get_nodes` for a description of the node dicts.
"""
if time() > self._rtime:
self._reload()
return self._get_part_nodes(part)
def get_nodes(self, account, container=None, obj=None):
"""
Get the partition and nodes for an account/container/object.
If a node is responsible for more than one replica, it will
only appear in the output once.
:param account: account name
:param container: container name
:param obj: object name
:returns: a tuple of (partition, list of node dicts)
Each node dict will have at least the following keys:
====== ===============================================================
id unique integer identifier amongst devices
index offset into the primary node list for the partition
weight a float of the relative weight of this device as compared to
others; this indicates how many partitions the builder will try
to assign to this device
zone integer indicating which zone the device is in; a given
partition will not be assigned to multiple devices within the
same zone
ip the ip address of the device
port the tcp port of the device
device the device's name on disk (sdb1, for example)
meta general use 'extra' field; for example: the online date, the
hardware description
====== ===============================================================
"""
part = self.get_part(account, container, obj)
return part, self._get_part_nodes(part)
def get_more_nodes(self, part):
"""
Generator to get extra nodes for a partition for hinted handoff.
The handoff nodes will try to be in zones other than the
primary zones, will take into account the device weights, and
will usually keep the same sequences of handoffs even with
ring changes.
:param part: partition to get handoff nodes for
:returns: generator of node dicts
See :func:`get_nodes` for a description of the node dicts.
"""
if time() > self._rtime:
self._reload()
primary_nodes = self._get_part_nodes(part)
used = set(d['id'] for d in primary_nodes)
index = count()
same_regions = set(d['region'] for d in primary_nodes)
same_zones = set((d['region'], d['zone']) for d in primary_nodes)
same_ips = set(
(d['region'], d['zone'], d['ip']) for d in primary_nodes)
parts = len(self._replica2part2dev_id[0])
part_hash = md5(str(part).encode('ascii'),
usedforsecurity=False).digest()
start = struct.unpack_from('>I', part_hash)[0] >> self._part_shift
inc = int(parts / 65536) or 1
# Multiple loops for execution speed; the checks and bookkeeping get
# simpler as you go along
hit_all_regions = len(same_regions) == self._num_regions
for handoff_part in chain(range(start, parts, inc),
range(inc - ((parts - start) % inc),
start, inc)):
if hit_all_regions:
# At this point, there are no regions left untouched, so we
# can stop looking.
break
for part2dev_id in self._replica2part2dev_id:
if handoff_part < len(part2dev_id):
dev_id = part2dev_id[handoff_part]
dev = self._devs[dev_id]
region = dev['region']
if dev_id not in used and region not in same_regions:
yield dict(dev, handoff_index=next(index))
used.add(dev_id)
same_regions.add(region)
zone = dev['zone']
ip = (region, zone, dev['ip'])
same_zones.add((region, zone))
same_ips.add(ip)
if len(same_regions) == self._num_regions:
hit_all_regions = True
break
hit_all_zones = len(same_zones) == self._num_zones
for handoff_part in chain(range(start, parts, inc),
range(inc - ((parts - start) % inc),
start, inc)):
if hit_all_zones:
# Much like we stopped looking for fresh regions before, we
# can now stop looking for fresh zones; there are no more.
break
for part2dev_id in self._replica2part2dev_id:
if handoff_part < len(part2dev_id):
dev_id = part2dev_id[handoff_part]
dev = self._devs[dev_id]
zone = (dev['region'], dev['zone'])
if dev_id not in used and zone not in same_zones:
yield dict(dev, handoff_index=next(index))
used.add(dev_id)
same_zones.add(zone)
ip = zone + (dev['ip'],)
same_ips.add(ip)
if len(same_zones) == self._num_zones:
hit_all_zones = True
break
hit_all_ips = len(same_ips) == self._num_ips
for handoff_part in chain(range(start, parts, inc),
range(inc - ((parts - start) % inc),
start, inc)):
if hit_all_ips:
# We've exhausted the pool of unused backends, so stop
# looking.
break
for part2dev_id in self._replica2part2dev_id:
if handoff_part < len(part2dev_id):
dev_id = part2dev_id[handoff_part]
dev = self._devs[dev_id]
ip = (dev['region'], dev['zone'], dev['ip'])
if dev_id not in used and ip not in same_ips:
yield dict(dev, handoff_index=next(index))
used.add(dev_id)
same_ips.add(ip)
if len(same_ips) == self._num_ips:
hit_all_ips = True
break
hit_all_devs = len(used) == self._num_assigned_devs
for handoff_part in chain(range(start, parts, inc),
range(inc - ((parts - start) % inc),
start, inc)):
if hit_all_devs:
# We've used every device we have, so let's stop looking for
# unused devices now.
break
for part2dev_id in self._replica2part2dev_id:
if handoff_part < len(part2dev_id):
dev_id = part2dev_id[handoff_part]
if dev_id not in used:
dev = self._devs[dev_id]
yield dict(dev, handoff_index=next(index))
used.add(dev_id)
if len(used) == self._num_assigned_devs:
hit_all_devs = True
break
| swift-master | swift/common/ring/ring.py |
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import os
import time
import traceback
from eventlet import Timeout
import swift.common.db
from swift.account.backend import AccountBroker, DATADIR
from swift.account.utils import account_listing_response, get_response_headers
from swift.common.db import DatabaseConnectionError, DatabaseAlreadyExists
from swift.common.request_helpers import get_param, \
split_and_validate_path, validate_internal_account, \
validate_internal_container, constrain_req_limit
from swift.common.utils import get_logger, hash_path, public, \
Timestamp, storage_directory, config_true_value, \
timing_stats, replication, get_log_line, \
config_fallocate_value, fs_has_free_space
from swift.common.constraints import valid_timestamp, check_utf8, \
check_drive, AUTO_CREATE_ACCOUNT_PREFIX
from swift.common import constraints
from swift.common.db_replicator import ReplicatorRpc
from swift.common.base_storage_server import BaseStorageServer
from swift.common.middleware import listing_formats
from swift.common.swob import HTTPAccepted, HTTPBadRequest, \
HTTPCreated, HTTPForbidden, HTTPInternalServerError, \
HTTPMethodNotAllowed, HTTPNoContent, HTTPNotFound, \
HTTPPreconditionFailed, HTTPConflict, Request, \
HTTPInsufficientStorage, HTTPException, wsgi_to_str
from swift.common.request_helpers import is_sys_or_user_meta
def get_account_name_and_placement(req):
"""
Split and validate path for an account.
:param req: a swob request
:returns: a tuple of path parts as strings
"""
drive, part, account = split_and_validate_path(req, 3)
validate_internal_account(account)
return drive, part, account
def get_container_name_and_placement(req):
"""
Split and validate path for a container.
:param req: a swob request
:returns: a tuple of path parts as strings
"""
drive, part, account, container = split_and_validate_path(req, 3, 4)
validate_internal_container(account, container)
return drive, part, account, container
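# Illustrative example (path segments are made up): a request routed to
# /sdb1/123/AUTH_test/images would come back from the helper above as
# ('sdb1', '123', 'AUTH_test', 'images'), assuming the names pass the
# internal account/container validation.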
class AccountController(BaseStorageServer):
"""WSGI controller for the account server."""
server_type = 'account-server'
def __init__(self, conf, logger=None):
super(AccountController, self).__init__(conf)
self.logger = logger or get_logger(conf, log_route='account-server')
self.log_requests = config_true_value(conf.get('log_requests', 'true'))
self.root = conf.get('devices', '/srv/node')
self.mount_check = config_true_value(conf.get('mount_check', 'true'))
self.replicator_rpc = ReplicatorRpc(self.root, DATADIR, AccountBroker,
self.mount_check,
logger=self.logger)
if conf.get('auto_create_account_prefix'):
self.logger.warning('Option auto_create_account_prefix is '
'deprecated. Configure '
'auto_create_account_prefix under the '
'swift-constraints section of '
'swift.conf. This option will '
'be ignored in a future release.')
self.auto_create_account_prefix = \
conf['auto_create_account_prefix']
else:
self.auto_create_account_prefix = AUTO_CREATE_ACCOUNT_PREFIX
swift.common.db.DB_PREALLOCATION = \
config_true_value(conf.get('db_preallocation', 'f'))
swift.common.db.QUERY_LOGGING = \
config_true_value(conf.get('db_query_logging', 'f'))
self.fallocate_reserve, self.fallocate_is_percent = \
config_fallocate_value(conf.get('fallocate_reserve', '1%'))
def _get_account_broker(self, drive, part, account, **kwargs):
hsh = hash_path(account)
db_dir = storage_directory(DATADIR, part, hsh)
db_path = os.path.join(self.root, drive, db_dir, hsh + '.db')
kwargs.setdefault('account', account)
kwargs.setdefault('logger', self.logger)
return AccountBroker(db_path, **kwargs)
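    # A rough sketch of the resulting path, with made-up values and assuming
    # the usual storage_directory layout of <datadir>/<part>/<suffix>/<hash>:
    # with root /srv/node, drive sdb1 and partition 123, the broker would be
    # opened at /srv/node/sdb1/accounts/123/<hsh[-3:]>/<hsh>/<hsh>.db.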
def _deleted_response(self, broker, req, resp, body=''):
# We are here because either the account does not exist or
# it exists but is marked for deletion.
headers = {}
# Try to check if account exists and is marked for deletion
try:
if broker.is_status_deleted():
# Account does exist and is marked for deletion
headers = {'X-Account-Status': 'Deleted'}
except DatabaseConnectionError:
# Account does not exist!
pass
return resp(request=req, headers=headers, charset='utf-8', body=body)
def check_free_space(self, drive):
drive_root = os.path.join(self.root, drive)
return fs_has_free_space(
drive_root, self.fallocate_reserve, self.fallocate_is_percent)
@public
@timing_stats()
def DELETE(self, req):
"""Handle HTTP DELETE request."""
drive, part, account = get_account_name_and_placement(req)
try:
check_drive(self.root, drive, self.mount_check)
except ValueError:
return HTTPInsufficientStorage(drive=drive, request=req)
req_timestamp = valid_timestamp(req)
broker = self._get_account_broker(drive, part, account)
if broker.is_deleted():
return self._deleted_response(broker, req, HTTPNotFound)
broker.delete_db(req_timestamp.internal)
return self._deleted_response(broker, req, HTTPNoContent)
def _update_metadata(self, req, broker, req_timestamp):
metadata = {
wsgi_to_str(key): (wsgi_to_str(value), req_timestamp.internal)
for key, value in req.headers.items()
if is_sys_or_user_meta('account', key)}
if metadata:
broker.update_metadata(metadata, validate_metadata=True)
@public
@timing_stats()
def PUT(self, req):
"""Handle HTTP PUT request."""
drive, part, account, container = get_container_name_and_placement(req)
try:
check_drive(self.root, drive, self.mount_check)
except ValueError:
return HTTPInsufficientStorage(drive=drive, request=req)
if not self.check_free_space(drive):
return HTTPInsufficientStorage(drive=drive, request=req)
if container: # put account container
if 'x-timestamp' not in req.headers:
timestamp = Timestamp.now()
else:
timestamp = valid_timestamp(req)
pending_timeout = None
container_policy_index = \
req.headers.get('X-Backend-Storage-Policy-Index', 0)
if 'x-trans-id' in req.headers:
pending_timeout = 3
broker = self._get_account_broker(drive, part, account,
pending_timeout=pending_timeout)
if account.startswith(self.auto_create_account_prefix) and \
not os.path.exists(broker.db_file):
try:
broker.initialize(timestamp.internal)
except DatabaseAlreadyExists:
pass
if (req.headers.get('x-account-override-deleted', 'no').lower() !=
'yes' and broker.is_deleted()) \
or not os.path.exists(broker.db_file):
return HTTPNotFound(request=req)
broker.put_container(container, req.headers['x-put-timestamp'],
req.headers['x-delete-timestamp'],
req.headers['x-object-count'],
req.headers['x-bytes-used'],
container_policy_index)
if req.headers['x-delete-timestamp'] > \
req.headers['x-put-timestamp']:
return HTTPNoContent(request=req)
else:
return HTTPCreated(request=req)
else: # put account
timestamp = valid_timestamp(req)
broker = self._get_account_broker(drive, part, account)
if not os.path.exists(broker.db_file):
try:
broker.initialize(timestamp.internal)
created = True
except DatabaseAlreadyExists:
created = False
elif broker.is_status_deleted():
return self._deleted_response(broker, req, HTTPForbidden,
body='Recently deleted')
else:
created = broker.is_deleted()
broker.update_put_timestamp(timestamp.internal)
if broker.is_deleted():
return HTTPConflict(request=req)
self._update_metadata(req, broker, timestamp)
if created:
return HTTPCreated(request=req)
else:
return HTTPAccepted(request=req)
@public
@timing_stats()
def HEAD(self, req):
"""Handle HTTP HEAD request."""
drive, part, account = get_account_name_and_placement(req)
out_content_type = listing_formats.get_listing_content_type(req)
try:
check_drive(self.root, drive, self.mount_check)
except ValueError:
return HTTPInsufficientStorage(drive=drive, request=req)
broker = self._get_account_broker(drive, part, account,
pending_timeout=0.1,
stale_reads_ok=True)
if broker.is_deleted():
return self._deleted_response(broker, req, HTTPNotFound)
headers = get_response_headers(broker)
headers['Content-Type'] = out_content_type
return HTTPNoContent(request=req, headers=headers, charset='utf-8')
@public
@timing_stats()
def GET(self, req):
"""Handle HTTP GET request."""
drive, part, account = get_account_name_and_placement(req)
prefix = get_param(req, 'prefix')
delimiter = get_param(req, 'delimiter')
reverse = config_true_value(get_param(req, 'reverse'))
limit = constrain_req_limit(req, constraints.ACCOUNT_LISTING_LIMIT)
marker = get_param(req, 'marker', '')
end_marker = get_param(req, 'end_marker')
out_content_type = listing_formats.get_listing_content_type(req)
try:
check_drive(self.root, drive, self.mount_check)
except ValueError:
return HTTPInsufficientStorage(drive=drive, request=req)
broker = self._get_account_broker(drive, part, account,
pending_timeout=0.1,
stale_reads_ok=True)
if broker.is_deleted():
return self._deleted_response(broker, req, HTTPNotFound)
return account_listing_response(account, req, out_content_type, broker,
limit, marker, end_marker, prefix,
delimiter, reverse)
@public
@replication
@timing_stats()
def REPLICATE(self, req):
"""
Handle HTTP REPLICATE request.
Handler for RPC calls for account replication.
"""
post_args = split_and_validate_path(req, 3)
drive, partition, hash = post_args
try:
check_drive(self.root, drive, self.mount_check)
except ValueError:
return HTTPInsufficientStorage(drive=drive, request=req)
if not self.check_free_space(drive):
return HTTPInsufficientStorage(drive=drive, request=req)
try:
args = json.load(req.environ['wsgi.input'])
except ValueError as err:
return HTTPBadRequest(body=str(err), content_type='text/plain')
ret = self.replicator_rpc.dispatch(post_args, args)
ret.request = req
return ret
@public
@timing_stats()
def POST(self, req):
"""Handle HTTP POST request."""
drive, part, account = get_account_name_and_placement(req)
req_timestamp = valid_timestamp(req)
try:
check_drive(self.root, drive, self.mount_check)
except ValueError:
return HTTPInsufficientStorage(drive=drive, request=req)
if not self.check_free_space(drive):
return HTTPInsufficientStorage(drive=drive, request=req)
broker = self._get_account_broker(drive, part, account)
if broker.is_deleted():
return self._deleted_response(broker, req, HTTPNotFound)
self._update_metadata(req, broker, req_timestamp)
return HTTPNoContent(request=req)
def __call__(self, env, start_response):
start_time = time.time()
req = Request(env)
self.logger.txn_id = req.headers.get('x-trans-id', None)
if not check_utf8(wsgi_to_str(req.path_info), internal=True):
res = HTTPPreconditionFailed(body='Invalid UTF8')
else:
try:
# disallow methods which are not publicly accessible
if req.method not in self.allowed_methods:
res = HTTPMethodNotAllowed()
else:
res = getattr(self, req.method)(req)
except HTTPException as error_response:
res = error_response
except (Exception, Timeout):
self.logger.exception('ERROR __call__ error with %(method)s'
' %(path)s ',
{'method': req.method, 'path': req.path})
res = HTTPInternalServerError(body=traceback.format_exc())
if self.log_requests:
trans_time = time.time() - start_time
additional_info = ''
if res.headers.get('x-container-timestamp') is not None:
additional_info += 'x-container-timestamp: %s' % \
res.headers['x-container-timestamp']
log_msg = get_log_line(req, res, trans_time, additional_info,
self.log_format, self.anonymization_method,
self.anonymization_salt)
if req.method.upper() == 'REPLICATE':
self.logger.debug(log_msg)
else:
self.logger.info(log_msg)
return res(env, start_response)
def app_factory(global_conf, **local_conf):
"""paste.deploy app factory for creating WSGI account server apps"""
conf = global_conf.copy()
conf.update(local_conf)
return AccountController(conf)
| swift-master | swift/account/server.py |
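A minimal usage sketch for the file above: app_factory is the paste.deploy entry point that builds the account-server WSGI app. The conf keys mirror those read in AccountController.__init__; the values are assumptions for illustration, not a complete account-server.conf.

# Illustrative sketch: building the account-server WSGI app by hand.
from swift.account.server import app_factory

conf = {
    'devices': '/srv/node',     # root directory holding account DBs
    'mount_check': 'false',     # skip mount checks on a dev box
    'log_requests': 'true',
    'fallocate_reserve': '1%',
}
app = app_factory(conf)         # returns an AccountController instance
# app is a plain WSGI callable: app(environ, start_response)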
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from swift.account.backend import AccountBroker, DATADIR
from swift.common import db_replicator
class AccountReplicator(db_replicator.Replicator):
server_type = 'account'
brokerclass = AccountBroker
datadir = DATADIR
default_port = 6202
| swift-master | swift/account/replicator.py |
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Pluggable Back-end for Account Server
"""
import sqlite3
import six
from swift.common.utils import Timestamp, RESERVED_BYTE
from swift.common.db import DatabaseBroker, utf8encode, zero_like
DATADIR = 'accounts'
POLICY_STAT_TRIGGER_SCRIPT = """
CREATE TRIGGER container_insert_ps AFTER INSERT ON container
BEGIN
INSERT OR IGNORE INTO policy_stat
(storage_policy_index, container_count, object_count, bytes_used)
VALUES (new.storage_policy_index, 0, 0, 0);
UPDATE policy_stat
SET container_count = container_count + (1 - new.deleted),
object_count = object_count + new.object_count,
bytes_used = bytes_used + new.bytes_used
WHERE storage_policy_index = new.storage_policy_index;
END;
CREATE TRIGGER container_delete_ps AFTER DELETE ON container
BEGIN
UPDATE policy_stat
SET container_count = container_count - (1 - old.deleted),
object_count = object_count - old.object_count,
bytes_used = bytes_used - old.bytes_used
WHERE storage_policy_index = old.storage_policy_index;
END;
"""
class AccountBroker(DatabaseBroker):
"""Encapsulates working with an account database."""
db_type = 'account'
db_contains_type = 'container'
db_reclaim_timestamp = 'delete_timestamp'
def _initialize(self, conn, put_timestamp, **kwargs):
"""
Create a brand new account database (tables, indices, triggers, etc.)
:param conn: DB connection object
:param put_timestamp: put timestamp
"""
if not self.account:
raise ValueError(
'Attempting to create a new database with no account set')
self.create_container_table(conn)
self.create_account_stat_table(conn, put_timestamp)
self.create_policy_stat_table(conn)
def create_container_table(self, conn):
"""
Create container table which is specific to the account DB.
:param conn: DB connection object
"""
conn.executescript("""
CREATE TABLE container (
ROWID INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT,
put_timestamp TEXT,
delete_timestamp TEXT,
object_count INTEGER,
bytes_used INTEGER,
deleted INTEGER DEFAULT 0,
storage_policy_index INTEGER DEFAULT 0
);
CREATE INDEX ix_container_deleted_name ON
container (deleted, name);
CREATE TRIGGER container_insert AFTER INSERT ON container
BEGIN
UPDATE account_stat
SET container_count = container_count + (1 - new.deleted),
object_count = object_count + new.object_count,
bytes_used = bytes_used + new.bytes_used,
hash = chexor(hash, new.name,
new.put_timestamp || '-' ||
new.delete_timestamp || '-' ||
new.object_count || '-' || new.bytes_used);
END;
CREATE TRIGGER container_update BEFORE UPDATE ON container
BEGIN
SELECT RAISE(FAIL, 'UPDATE not allowed; DELETE and INSERT');
END;
CREATE TRIGGER container_delete AFTER DELETE ON container
BEGIN
UPDATE account_stat
SET container_count = container_count - (1 - old.deleted),
object_count = object_count - old.object_count,
bytes_used = bytes_used - old.bytes_used,
hash = chexor(hash, old.name,
old.put_timestamp || '-' ||
old.delete_timestamp || '-' ||
old.object_count || '-' || old.bytes_used);
END;
""" + POLICY_STAT_TRIGGER_SCRIPT)
def create_account_stat_table(self, conn, put_timestamp):
"""
Create account_stat table which is specific to the account DB.
Not a part of Pluggable Back-ends, internal to the baseline code.
:param conn: DB connection object
:param put_timestamp: put timestamp
"""
conn.executescript("""
CREATE TABLE account_stat (
account TEXT,
created_at TEXT,
put_timestamp TEXT DEFAULT '0',
delete_timestamp TEXT DEFAULT '0',
container_count INTEGER,
object_count INTEGER DEFAULT 0,
bytes_used INTEGER DEFAULT 0,
hash TEXT default '00000000000000000000000000000000',
id TEXT,
status TEXT DEFAULT '',
status_changed_at TEXT DEFAULT '0',
metadata TEXT DEFAULT ''
);
INSERT INTO account_stat (container_count) VALUES (0);
""")
conn.execute('''
UPDATE account_stat SET account = ?, created_at = ?, id = ?,
put_timestamp = ?, status_changed_at = ?
''', (self.account, Timestamp.now().internal, self._new_db_id(),
put_timestamp, put_timestamp))
def create_policy_stat_table(self, conn):
"""
Create policy_stat table which is specific to the account DB.
Not a part of Pluggable Back-ends, internal to the baseline code.
:param conn: DB connection object
"""
conn.executescript("""
CREATE TABLE policy_stat (
storage_policy_index INTEGER PRIMARY KEY,
container_count INTEGER DEFAULT 0,
object_count INTEGER DEFAULT 0,
bytes_used INTEGER DEFAULT 0
);
INSERT OR IGNORE INTO policy_stat (
storage_policy_index, container_count, object_count,
bytes_used
)
SELECT 0, container_count, object_count, bytes_used
FROM account_stat
WHERE container_count > 0;
""")
def get_db_version(self, conn):
if self._db_version == -1:
self._db_version = 0
for row in conn.execute('''
SELECT name FROM sqlite_master
WHERE name = 'ix_container_deleted_name' '''):
self._db_version = 1
return self._db_version
def _commit_puts_load(self, item_list, entry):
"""See :func:`swift.common.db.DatabaseBroker._commit_puts_load`"""
# check to see if the update includes policy_index or not
(name, put_timestamp, delete_timestamp, object_count, bytes_used,
deleted) = entry[:6]
if len(entry) > 6:
storage_policy_index = entry[6]
else:
# legacy support during upgrade until the first non-legacy storage
# policy is defined
storage_policy_index = 0
item_list.append(
{'name': name,
'put_timestamp': put_timestamp,
'delete_timestamp': delete_timestamp,
'object_count': object_count,
'bytes_used': bytes_used,
'deleted': deleted,
'storage_policy_index': storage_policy_index})
def empty(self):
"""
Check if the account DB is empty.
:returns: True if the database has no active containers.
"""
self._commit_puts_stale_ok()
with self.get() as conn:
row = conn.execute(
'SELECT container_count from account_stat').fetchone()
return zero_like(row[0])
def make_tuple_for_pickle(self, record):
return (record['name'], record['put_timestamp'],
record['delete_timestamp'], record['object_count'],
record['bytes_used'], record['deleted'],
record['storage_policy_index'])
def put_container(self, name, put_timestamp, delete_timestamp,
object_count, bytes_used, storage_policy_index):
"""
Create a container with the given attributes.
:param name: name of the container to create (a native string)
:param put_timestamp: put_timestamp of the container to create
:param delete_timestamp: delete_timestamp of the container to create
:param object_count: number of objects in the container
:param bytes_used: number of bytes used by the container
:param storage_policy_index: the storage policy for this container
"""
if Timestamp(delete_timestamp) > Timestamp(put_timestamp) and \
zero_like(object_count):
deleted = 1
else:
deleted = 0
record = {'name': name, 'put_timestamp': put_timestamp,
'delete_timestamp': delete_timestamp,
'object_count': object_count,
'bytes_used': bytes_used,
'deleted': deleted,
'storage_policy_index': storage_policy_index}
self.put_record(record)
def _is_deleted_info(self, status, container_count, delete_timestamp,
put_timestamp):
"""
Apply delete logic to database info.
:returns: True if the DB is considered to be deleted, False otherwise
"""
return status == 'DELETED' or zero_like(container_count) and (
Timestamp(delete_timestamp) > Timestamp(put_timestamp))
def _is_deleted(self, conn):
"""
Check account_stat table and evaluate info.
:param conn: database conn
:returns: True if the DB is considered to be deleted, False otherwise
"""
info = conn.execute('''
SELECT put_timestamp, delete_timestamp, container_count, status
FROM account_stat''').fetchone()
return self._is_deleted_info(**info)
def is_status_deleted(self):
"""Only returns true if the status field is set to DELETED."""
with self.get() as conn:
row = conn.execute('''
SELECT put_timestamp, delete_timestamp, status
FROM account_stat''').fetchone()
return row['status'] == "DELETED" or (
row['delete_timestamp'] > row['put_timestamp'])
def get_policy_stats(self, do_migrations=False):
"""
Get global policy stats for the account.
:param do_migrations: boolean, if True the policy stat dicts will
always include the 'container_count' key;
otherwise it may be omitted on legacy databases
until they are migrated.
:returns: dict of policy stats where the key is the policy index and
the value is a dictionary like {'object_count': M,
'bytes_used': N, 'container_count': L}
"""
columns = [
'storage_policy_index',
'container_count',
'object_count',
'bytes_used',
]
def run_query():
return (conn.execute('''
SELECT %s
FROM policy_stat
''' % ', '.join(columns)).fetchall())
self._commit_puts_stale_ok()
info = []
with self.get() as conn:
try:
info = run_query()
except sqlite3.OperationalError as err:
if "no such column: container_count" in str(err):
if do_migrations:
self._migrate_add_container_count(conn)
else:
columns.remove('container_count')
info = run_query()
elif "no such table: policy_stat" in str(err):
if do_migrations:
self.create_policy_stat_table(conn)
info = run_query()
# else, pass and let the results be empty
else:
raise
policy_stats = {}
for row in info:
stats = dict(row)
key = stats.pop('storage_policy_index')
policy_stats[key] = stats
return policy_stats
def get_info(self):
"""
Get global data for the account.
:returns: dict with keys: account, created_at, put_timestamp,
delete_timestamp, status_changed_at, container_count,
object_count, bytes_used, hash, id
"""
self._commit_puts_stale_ok()
with self.get() as conn:
return dict(conn.execute('''
SELECT account, created_at, put_timestamp, delete_timestamp,
status_changed_at, container_count, object_count,
bytes_used, hash, id
FROM account_stat
''').fetchone())
def list_containers_iter(self, limit, marker, end_marker, prefix,
delimiter, reverse=False, allow_reserved=False):
"""
Get a list of containers sorted by name starting at marker onward, up
to limit entries. Entries will begin with the prefix and will not have
the delimiter after the prefix.
:param limit: maximum number of entries to get
:param marker: marker query
:param end_marker: end marker query
:param prefix: prefix query
:param delimiter: delimiter for query
:param reverse: reverse the result order.
:param allow_reserved: whether to include names containing the reserved byte; excluded by default
:returns: list of tuples of (name, object_count, bytes_used,
put_timestamp, 0)
"""
delim_force_gte = False
if six.PY2:
(marker, end_marker, prefix, delimiter) = utf8encode(
marker, end_marker, prefix, delimiter)
if reverse:
# Reverse the markers if we are reversing the listing.
marker, end_marker = end_marker, marker
self._commit_puts_stale_ok()
if delimiter and not prefix:
prefix = ''
if prefix:
end_prefix = prefix[:-1] + chr(ord(prefix[-1]) + 1)
orig_marker = marker
with self.get() as conn:
results = []
while len(results) < limit:
query = """
SELECT name, object_count, bytes_used, put_timestamp, 0
FROM container
WHERE """
query_args = []
if end_marker and (not prefix or end_marker < end_prefix):
query += ' name < ? AND'
query_args.append(end_marker)
elif prefix:
query += ' name < ? AND'
query_args.append(end_prefix)
if delim_force_gte:
query += ' name >= ? AND'
query_args.append(marker)
# Always set back to False
delim_force_gte = False
elif marker and (not prefix or marker >= prefix):
query += ' name > ? AND'
query_args.append(marker)
elif prefix:
query += ' name >= ? AND'
query_args.append(prefix)
if not allow_reserved:
query += ' name >= ? AND'
query_args.append(chr(ord(RESERVED_BYTE) + 1))
if self.get_db_version(conn) < 1:
query += ' +deleted = 0'
else:
query += ' deleted = 0'
query += ' ORDER BY name %s LIMIT ?' % \
('DESC' if reverse else '')
query_args.append(limit - len(results))
curs = conn.execute(query, query_args)
curs.row_factory = None
# A delimiter without a prefix is ignored; further, if there
# is no delimiter then we can simply return the result, as
# prefixes are now handled in the SQL statement.
if prefix is None or not delimiter:
return [r for r in curs]
# We have a delimiter and a prefix (possibly empty string) to
# handle
rowcount = 0
for row in curs:
rowcount += 1
name = row[0]
if reverse:
end_marker = name
else:
marker = name
if len(results) >= limit:
curs.close()
return results
end = name.find(delimiter, len(prefix))
if end >= 0:
if reverse:
end_marker = name[:end + len(delimiter)]
else:
marker = ''.join([
name[:end],
delimiter[:-1],
chr(ord(delimiter[-1:]) + 1),
])
# we want result to be inclusive of delim+1
delim_force_gte = True
dir_name = name[:end + len(delimiter)]
if dir_name != orig_marker:
results.append([dir_name, 0, 0, '0', 1])
curs.close()
break
results.append(row)
if not rowcount:
break
return results
def merge_items(self, item_list, source=None):
"""
Merge items into the container table.
:param item_list: list of dictionaries of {'name', 'put_timestamp',
'delete_timestamp', 'object_count', 'bytes_used',
'deleted', 'storage_policy_index'}
:param source: if defined, update incoming_sync with the source
"""
def _really_merge_items(conn):
max_rowid = -1
curs = conn.cursor()
for rec in item_list:
rec.setdefault('storage_policy_index', 0) # legacy
record = [rec['name'], rec['put_timestamp'],
rec['delete_timestamp'], rec['object_count'],
rec['bytes_used'], rec['deleted'],
rec['storage_policy_index']]
query = '''
SELECT name, put_timestamp, delete_timestamp,
object_count, bytes_used, deleted,
storage_policy_index
FROM container WHERE name = ?
'''
if self.get_db_version(conn) >= 1:
query += ' AND deleted IN (0, 1)'
curs_row = curs.execute(query, (rec['name'],))
curs_row.row_factory = None
row = curs_row.fetchone()
if row:
row = list(row)
for i in range(5):
if record[i] is None and row[i] is not None:
record[i] = row[i]
if Timestamp(row[1]) > \
Timestamp(record[1]): # Keep newest put_timestamp
record[1] = row[1]
if Timestamp(row[2]) > \
Timestamp(record[2]): # Keep newest delete_timestamp
record[2] = row[2]
# If deleted, mark as such
if Timestamp(record[2]) > Timestamp(record[1]) and \
zero_like(record[3]):
record[5] = 1
else:
record[5] = 0
curs.execute('''
DELETE FROM container WHERE name = ? AND
deleted IN (0, 1)
''', (record[0],))
curs.execute('''
INSERT INTO container (name, put_timestamp,
delete_timestamp, object_count, bytes_used,
deleted, storage_policy_index)
VALUES (?, ?, ?, ?, ?, ?, ?)
''', record)
if source:
max_rowid = max(max_rowid, rec['ROWID'])
if source:
try:
curs.execute('''
INSERT INTO incoming_sync (sync_point, remote_id)
VALUES (?, ?)
''', (max_rowid, source))
except sqlite3.IntegrityError:
curs.execute('''
UPDATE incoming_sync
SET sync_point=max(?, sync_point)
WHERE remote_id=?
''', (max_rowid, source))
conn.commit()
with self.get() as conn:
# create the policy stat table if needed and add spi to container
try:
_really_merge_items(conn)
except sqlite3.OperationalError as err:
if 'no such column: storage_policy_index' not in str(err):
raise
self._migrate_add_storage_policy_index(conn)
_really_merge_items(conn)
def _migrate_add_container_count(self, conn):
"""
Add the container_count column to the 'policy_stat' table and
update it
:param conn: DB connection object
"""
# add the container_count column
curs = conn.cursor()
curs.executescript('''
DROP TRIGGER container_delete_ps;
DROP TRIGGER container_insert_ps;
ALTER TABLE policy_stat
ADD COLUMN container_count INTEGER DEFAULT 0;
''' + POLICY_STAT_TRIGGER_SCRIPT)
# Keep the simple case simple: if there's only one entry in the
# policy_stat table we just copy the total container count from the
# account_stat table.
# If that UPDATE changes a row, changes() <> 0 in the NOT EXISTS guard
# below, so the INSERT OR REPLACE built from the count subqueries won't
# execute.
curs.executescript("""
UPDATE policy_stat
SET container_count = (
SELECT container_count
FROM account_stat)
WHERE (
SELECT COUNT(storage_policy_index)
FROM policy_stat
) <= 1;
INSERT OR REPLACE INTO policy_stat (
storage_policy_index,
container_count,
object_count,
bytes_used
)
SELECT p.storage_policy_index,
c.count,
p.object_count,
p.bytes_used
FROM (
SELECT storage_policy_index,
COUNT(*) as count
FROM container
WHERE deleted = 0
GROUP BY storage_policy_index
) c
JOIN policy_stat p
ON p.storage_policy_index = c.storage_policy_index
WHERE NOT EXISTS(
SELECT changes() as change
FROM policy_stat
WHERE change <> 0
);
""")
conn.commit()
def _migrate_add_storage_policy_index(self, conn):
"""
Add the storage_policy_index column to the 'container' table and
set up triggers, creating the policy_stat table if needed.
:param conn: DB connection object
"""
try:
self.create_policy_stat_table(conn)
except sqlite3.OperationalError as err:
if 'table policy_stat already exists' not in str(err):
raise
conn.executescript('''
ALTER TABLE container
ADD COLUMN storage_policy_index INTEGER DEFAULT 0;
''' + POLICY_STAT_TRIGGER_SCRIPT)
| swift-master | swift/account/backend.py |
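A usage sketch for the AccountBroker defined above; the db path and account name are assumptions chosen for illustration.

# Illustrative sketch: exercising AccountBroker directly.  In a real
# deployment the account server derives the db path from hash_path() and
# storage_directory(); the temp path here is only for the example.
import os
import tempfile

from swift.account.backend import AccountBroker
from swift.common.utils import Timestamp

db_path = os.path.join(tempfile.mkdtemp(), 'AUTH_test.db')
broker = AccountBroker(db_path, account='AUTH_test')
broker.initialize(Timestamp.now().internal)

# A container row is considered deleted when its delete_timestamp is newer
# than its put_timestamp and it holds no objects (see put_container above).
broker.put_container('images', Timestamp.now().internal, '0',
                     object_count=0, bytes_used=0, storage_policy_index=0)

print(broker.get_info()['container_count'])               # expected: 1
print(broker.list_containers_iter(100, '', None, None, None))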
| swift-master | swift/account/__init__.py |
# Copyright (c) 2010-2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import six
from swift.common import constraints
from swift.common.middleware import listing_formats
from swift.common.swob import HTTPOk, HTTPNoContent, str_to_wsgi
from swift.common.utils import Timestamp
from swift.common.storage_policy import POLICIES
class FakeAccountBroker(object):
"""
Quacks like an account broker, but doesn't actually do anything. Responds
like an account broker would for a real, empty account with no metadata.
"""
def get_info(self):
now = Timestamp.now().internal
return {'container_count': 0,
'object_count': 0,
'bytes_used': 0,
'created_at': now,
'put_timestamp': now}
def list_containers_iter(self, *_, **__):
return []
@property
def metadata(self):
return {}
def get_policy_stats(self):
return {}
def get_response_headers(broker):
info = broker.get_info()
resp_headers = {
'X-Account-Container-Count': info['container_count'],
'X-Account-Object-Count': info['object_count'],
'X-Account-Bytes-Used': info['bytes_used'],
'X-Timestamp': Timestamp(info['created_at']).normal,
'X-PUT-Timestamp': Timestamp(info['put_timestamp']).normal}
policy_stats = broker.get_policy_stats()
for policy_idx, stats in policy_stats.items():
policy = POLICIES.get_by_index(policy_idx)
if not policy:
continue
header_prefix = 'X-Account-Storage-Policy-%s-%%s' % policy.name
for key, value in stats.items():
header_name = header_prefix % key.replace('_', '-')
resp_headers[header_name] = value
resp_headers.update((str_to_wsgi(key), str_to_wsgi(value))
for key, (value, _timestamp) in
broker.metadata.items() if value != '')
return resp_headers
def account_listing_response(account, req, response_content_type, broker=None,
limit=constraints.ACCOUNT_LISTING_LIMIT,
marker='', end_marker='', prefix='', delimiter='',
reverse=False):
if broker is None:
broker = FakeAccountBroker()
resp_headers = get_response_headers(broker)
account_list = broker.list_containers_iter(limit, marker, end_marker,
prefix, delimiter, reverse,
req.allow_reserved_names)
data = []
for (name, object_count, bytes_used, put_timestamp, is_subdir) \
in account_list:
name_ = name.decode('utf8') if six.PY2 else name
if is_subdir:
data.append({'subdir': name_})
else:
data.append(
{'name': name_, 'count': object_count, 'bytes': bytes_used,
'last_modified': Timestamp(put_timestamp).isoformat})
if response_content_type.endswith('/xml'):
account_list = listing_formats.account_to_xml(data, account)
ret = HTTPOk(body=account_list, request=req, headers=resp_headers)
elif response_content_type.endswith('/json'):
account_list = json.dumps(data).encode('ascii')
ret = HTTPOk(body=account_list, request=req, headers=resp_headers)
elif data:
account_list = listing_formats.listing_to_text(data)
ret = HTTPOk(body=account_list, request=req, headers=resp_headers)
else:
ret = HTTPNoContent(request=req, headers=resp_headers)
ret.content_type = response_content_type
ret.charset = 'utf-8'
return ret
| swift-master | swift/account/utils.py |
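A small sketch of the headers get_response_headers builds, using the FakeAccountBroker defined above; the timestamp values in the comment are illustrative.

# Illustrative sketch: response headers for an empty account.  Real accounts
# additionally get one X-Account-Storage-Policy-<name>-* header per stat for
# each storage policy that holds data, plus any user/sys metadata.
from swift.account.utils import FakeAccountBroker, get_response_headers

headers = get_response_headers(FakeAccountBroker())
# e.g. {'X-Account-Container-Count': 0,
#       'X-Account-Object-Count': 0,
#       'X-Account-Bytes-Used': 0,
#       'X-Timestamp': '1700000000.00000',
#       'X-PUT-Timestamp': '1700000000.00000'}
print(headers)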
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import random
import socket
from logging import DEBUG
from math import sqrt
from time import time
import itertools
from eventlet import GreenPool, sleep, Timeout
import six
import swift.common.db
from swift.account.backend import AccountBroker, DATADIR
from swift.common.constraints import check_drive
from swift.common.direct_client import direct_delete_container, \
direct_delete_object, direct_get_container
from swift.common.exceptions import ClientException
from swift.common.request_helpers import USE_REPLICATION_NETWORK_HEADER
from swift.common.ring import Ring
from swift.common.ring.utils import is_local_device
from swift.common.utils import get_logger, whataremyips, config_true_value, \
Timestamp, md5, node_to_string
from swift.common.daemon import Daemon
from swift.common.storage_policy import POLICIES, PolicyError
class AccountReaper(Daemon):
"""
Removes data from status=DELETED accounts. These are accounts that have
been asked to be removed by the reseller via the services
remove_storage_account XMLRPC call.
The account is not deleted immediately by the services call, but instead
the account is simply marked for deletion by setting the status column in
the account_stat table of the account database. This account reaper scans
for such accounts and removes the data in the background. The background
deletion process will occur on the primary account server for the account.
:param server_conf: The [account-server] dictionary of the account server
configuration file
:param reaper_conf: The [account-reaper] dictionary of the account server
configuration file
See the etc/account-server.conf-sample for information on the possible
configuration parameters.
"""
def __init__(self, conf, logger=None):
self.conf = conf
self.logger = logger or get_logger(conf, log_route='account-reaper')
self.devices = conf.get('devices', '/srv/node')
self.mount_check = config_true_value(conf.get('mount_check', 'true'))
self.interval = float(conf.get('interval', 3600))
self.swift_dir = conf.get('swift_dir', '/etc/swift')
self.account_ring = None
self.container_ring = None
self.object_ring = None
self.node_timeout = float(conf.get('node_timeout', 10))
self.conn_timeout = float(conf.get('conn_timeout', 0.5))
self.myips = whataremyips(conf.get('bind_ip', '0.0.0.0'))
self.bind_port = int(conf.get('bind_port', 6202))
self.concurrency = int(conf.get('concurrency', 25))
self.container_concurrency = self.object_concurrency = \
sqrt(self.concurrency)
self.container_pool = GreenPool(size=self.container_concurrency)
swift.common.db.DB_PREALLOCATION = \
config_true_value(conf.get('db_preallocation', 'f'))
self.delay_reaping = int(conf.get('delay_reaping') or 0)
reap_warn_after = float(conf.get('reap_warn_after') or 86400 * 30)
self.reap_not_done_after = reap_warn_after + self.delay_reaping
self.start_time = time()
self.reset_stats()
def get_account_ring(self):
"""The account :class:`swift.common.ring.Ring` for the cluster."""
if not self.account_ring:
self.account_ring = Ring(self.swift_dir, ring_name='account')
return self.account_ring
def get_container_ring(self):
"""The container :class:`swift.common.ring.Ring` for the cluster."""
if not self.container_ring:
self.container_ring = Ring(self.swift_dir, ring_name='container')
return self.container_ring
def get_object_ring(self, policy_idx):
"""
Get the ring identified by the policy index
:param policy_idx: Storage policy index
:returns: A ring matching the storage policy
"""
return POLICIES.get_object_ring(policy_idx, self.swift_dir)
def run_forever(self, *args, **kwargs):
"""Main entry point when running the reaper in normal daemon mode.
This repeatedly calls :func:`run_once` no quicker than the
configuration interval.
"""
self.logger.debug('Daemon started.')
sleep(random.random() * self.interval)
while True:
begin = time()
self.run_once()
elapsed = time() - begin
if elapsed < self.interval:
sleep(self.interval - elapsed)
def run_once(self, *args, **kwargs):
"""
Main entry point when running the reaper in 'once' mode, where it will
do a single pass over all accounts on the server. This is called
repeatedly by :func:`run_forever`. This will call :func:`reap_device`
once for each device on the server.
"""
self.logger.debug('Begin devices pass: %s', self.devices)
begin = time()
try:
for device in os.listdir(self.devices):
try:
check_drive(self.devices, device, self.mount_check)
except ValueError as err:
self.logger.increment('errors')
self.logger.debug('Skipping: %s', err)
continue
self.reap_device(device)
except (Exception, Timeout):
self.logger.exception("Exception in top-level account reaper "
"loop")
elapsed = time() - begin
self.logger.info('Devices pass completed: %.02fs', elapsed)
def reap_device(self, device):
"""
Called once per pass for each device on the server. This will scan the
accounts directory for the device, looking for partitions this device
is the primary for, then looking for account databases that are marked
status=DELETED and still have containers and calling
:func:`reap_account`. Account databases marked status=DELETED that no
longer have containers will eventually be permanently removed by the
reclaim process within the account replicator (see
:mod:`swift.common.db_replicator`).
:param device: The device to look for accounts to be deleted.
"""
datadir = os.path.join(self.devices, device, DATADIR)
if not os.path.exists(datadir):
return
for partition in os.listdir(datadir):
partition_path = os.path.join(datadir, partition)
if not partition.isdigit():
continue
nodes = self.get_account_ring().get_part_nodes(int(partition))
if not os.path.isdir(partition_path):
continue
container_shard = None
for container_shard, node in enumerate(nodes):
if is_local_device(self.myips, None, node['ip'], None) and \
(not self.bind_port or
self.bind_port == node['port']) and \
(device == node['device']):
break
else:
continue
for suffix in os.listdir(partition_path):
suffix_path = os.path.join(partition_path, suffix)
if not os.path.isdir(suffix_path):
continue
for hsh in os.listdir(suffix_path):
hsh_path = os.path.join(suffix_path, hsh)
if not os.path.isdir(hsh_path):
continue
for fname in sorted(os.listdir(hsh_path), reverse=True):
if fname.endswith('.ts'):
break
elif fname.endswith('.db'):
self.start_time = time()
broker = \
AccountBroker(os.path.join(hsh_path, fname),
logger=self.logger)
if broker.is_status_deleted() and \
not broker.empty():
self.reap_account(
broker, partition, nodes,
container_shard=container_shard)
def reset_stats(self):
self.stats_return_codes = {}
self.stats_containers_deleted = 0
self.stats_objects_deleted = 0
self.stats_containers_remaining = 0
self.stats_objects_remaining = 0
self.stats_containers_possibly_remaining = 0
self.stats_objects_possibly_remaining = 0
def reap_account(self, broker, partition, nodes, container_shard=None):
"""
Called once per pass for each account this server is the primary for
and attempts to delete the data for the given account. The reaper will
only delete one account at any given time. It will call
:func:`reap_container` up to sqrt(self.concurrency) times concurrently
while reaping the account.
If there is any exception while deleting a single container, the
process will continue for any other containers and the failed
containers will be tried again the next time this function is called
with the same parameters.
If there is any exception while listing the containers for deletion,
the process will stop (but will obviously be tried again the next time
this function is called with the same parameters). This isn't likely
since the listing comes from the local database.
After the process completes (successfully or not) statistics about what
was accomplished will be logged.
This function returns nothing and should raise no exception but only
update various self.stats_* values for what occurs.
:param broker: The AccountBroker for the account to delete.
:param partition: The partition in the account ring the account is on.
:param nodes: The primary node dicts for the account to delete.
:param container_shard: int used to shard containers reaped. If None,
will reap all containers.
.. seealso::
:class:`swift.account.backend.AccountBroker` for the broker class.
.. seealso::
:func:`swift.common.ring.Ring.get_nodes` for a description
of the node dicts.
"""
begin = time()
info = broker.get_info()
if time() - float(Timestamp(info['delete_timestamp'])) <= \
self.delay_reaping:
return False
account = info['account']
self.logger.info('Beginning pass on account %s', account)
self.reset_stats()
container_limit = 1000
if container_shard is not None:
container_limit *= len(nodes)
try:
containers = list(broker.list_containers_iter(
container_limit, '', None, None, None, allow_reserved=True))
while containers:
try:
for (container, _junk, _junk, _junk, _junk) in containers:
if six.PY3:
container_ = container.encode('utf-8')
else:
container_ = container
this_shard = (
int(md5(container_, usedforsecurity=False)
.hexdigest(), 16) % len(nodes))
if container_shard not in (this_shard, None):
continue
self.container_pool.spawn(self.reap_container, account,
partition, nodes, container)
self.container_pool.waitall()
except (Exception, Timeout):
self.logger.exception(
'Exception with containers for account %s', account)
containers = list(broker.list_containers_iter(
container_limit, containers[-1][0], None, None, None,
allow_reserved=True))
log_buf = ['Completed pass on account %s' % account]
except (Exception, Timeout):
self.logger.exception('Exception with account %s', account)
log_buf = ['Incomplete pass on account %s' % account]
if self.stats_containers_deleted:
log_buf.append(', %s containers deleted' %
self.stats_containers_deleted)
if self.stats_objects_deleted:
log_buf.append(', %s objects deleted' % self.stats_objects_deleted)
if self.stats_containers_remaining:
log_buf.append(', %s containers remaining' %
self.stats_containers_remaining)
if self.stats_objects_remaining:
log_buf.append(', %s objects remaining' %
self.stats_objects_remaining)
if self.stats_containers_possibly_remaining:
log_buf.append(', %s containers possibly remaining' %
self.stats_containers_possibly_remaining)
if self.stats_objects_possibly_remaining:
log_buf.append(', %s objects possibly remaining' %
self.stats_objects_possibly_remaining)
if self.stats_return_codes:
log_buf.append(', return codes: ')
for code in sorted(self.stats_return_codes):
log_buf.append('%s %sxxs, ' % (self.stats_return_codes[code],
code))
log_buf[-1] = log_buf[-1][:-2]
log_buf.append(', elapsed: %.02fs' % (time() - begin))
self.logger.info(''.join(log_buf))
self.logger.timing_since('timing', self.start_time)
delete_timestamp = Timestamp(info['delete_timestamp'])
if self.stats_containers_remaining and \
begin - float(delete_timestamp) >= self.reap_not_done_after:
self.logger.warning(
'Account %(account)s has not been reaped since %(time)s' %
{'account': account, 'time': delete_timestamp.isoformat})
return True
def reap_container(self, account, account_partition, account_nodes,
container):
"""
Deletes the data and the container itself for the given container. This
will call :func:`reap_object` up to sqrt(self.concurrency) times
concurrently for the objects in the container.
If there is any exception while deleting a single object, the process
will continue for any other objects in the container and the failed
objects will be tried again the next time this function is called with
the same parameters.
If there is any exception while listing the objects for deletion, the
process will stop (but will obviously be tried again the next time this
function is called with the same parameters). This is a possibility
since the listing comes from querying just the primary remote container
server.
Once deletion has been attempted for every object, deletion of the
container itself will be attempted by sending a delete request to
all container nodes. The format of the delete request is such that each
container server will update a corresponding account server, removing
the container from the account's listing.
This function returns nothing and should raise no exception but only
update various self.stats_* values for what occurs.
:param account: The name of the account for the container.
:param account_partition: The partition for the account on the account
ring.
:param account_nodes: The primary node dicts for the account.
:param container: The name of the container to delete.
* See also: :func:`swift.common.ring.Ring.get_nodes` for a description
of the account node dicts.
"""
account_nodes = list(account_nodes)
part, nodes = self.get_container_ring().get_nodes(account, container)
node = nodes[-1]
pool = GreenPool(size=self.object_concurrency)
marker = ''
while True:
objects = None
try:
headers, objects = direct_get_container(
node, part, account, container,
marker=marker,
conn_timeout=self.conn_timeout,
response_timeout=self.node_timeout,
headers={USE_REPLICATION_NETWORK_HEADER: 'true'})
self.stats_return_codes[2] = \
self.stats_return_codes.get(2, 0) + 1
self.logger.increment('return_codes.2')
except ClientException as err:
if self.logger.getEffectiveLevel() <= DEBUG:
self.logger.exception(
'Exception with %s', node_to_string(node))
self.stats_return_codes[err.http_status // 100] = \
self.stats_return_codes.get(err.http_status // 100, 0) + 1
self.logger.increment(
'return_codes.%d' % (err.http_status // 100,))
except (Timeout, socket.error):
self.logger.error(
'Timeout Exception with %s', node_to_string(node))
if not objects:
break
try:
policy_index = headers.get('X-Backend-Storage-Policy-Index', 0)
policy = POLICIES.get_by_index(policy_index)
if not policy:
self.logger.error('ERROR: invalid storage policy index: %r'
% policy_index)
for obj in objects:
pool.spawn(self.reap_object, account, container, part,
nodes, obj['name'], policy_index)
pool.waitall()
except (Exception, Timeout):
self.logger.exception('Exception with objects for container '
'%(container)s for account %(account)s',
{'container': container,
'account': account})
marker = objects[-1]['name']
successes = 0
failures = 0
timestamp = Timestamp.now()
for node in nodes:
anode = account_nodes.pop()
try:
direct_delete_container(
node, part, account, container,
conn_timeout=self.conn_timeout,
response_timeout=self.node_timeout,
headers={'X-Account-Host': '%(ip)s:%(port)s' % anode,
'X-Account-Partition': str(account_partition),
'X-Account-Device': anode['device'],
'X-Account-Override-Deleted': 'yes',
'X-Timestamp': timestamp.internal,
USE_REPLICATION_NETWORK_HEADER: 'true'})
successes += 1
self.stats_return_codes[2] = \
self.stats_return_codes.get(2, 0) + 1
self.logger.increment('return_codes.2')
except ClientException as err:
if self.logger.getEffectiveLevel() <= DEBUG:
self.logger.exception(
'Exception with %s', node_to_string(node))
failures += 1
self.logger.increment('containers_failures')
self.stats_return_codes[err.http_status // 100] = \
self.stats_return_codes.get(err.http_status // 100, 0) + 1
self.logger.increment(
'return_codes.%d' % (err.http_status // 100,))
except (Timeout, socket.error):
self.logger.error(
'Timeout Exception with %s', node_to_string(node))
failures += 1
self.logger.increment('containers_failures')
if successes > failures:
self.stats_containers_deleted += 1
self.logger.increment('containers_deleted')
elif not successes:
self.stats_containers_remaining += 1
self.logger.increment('containers_remaining')
else:
self.stats_containers_possibly_remaining += 1
self.logger.increment('containers_possibly_remaining')
def reap_object(self, account, container, container_partition,
container_nodes, obj, policy_index):
"""
Deletes the given object by issuing a delete request to each node for
the object. The format of the delete request is such that each object
server will update a corresponding container server, removing the
object from the container's listing.
This function returns nothing and should raise no exception but only
update various self.stats_* values for what occurs.
:param account: The name of the account for the object.
:param container: The name of the container for the object.
:param container_partition: The partition for the container on the
container ring.
:param container_nodes: The primary node dicts for the container.
:param obj: The name of the object to delete.
:param policy_index: The storage policy index of the object's container
* See also: :func:`swift.common.ring.Ring.get_nodes` for a description
of the container node dicts.
"""
cnodes = itertools.cycle(container_nodes)
try:
ring = self.get_object_ring(policy_index)
except PolicyError:
self.stats_objects_remaining += 1
self.logger.increment('objects_remaining')
return
part, nodes = ring.get_nodes(account, container, obj)
successes = 0
failures = 0
timestamp = Timestamp.now()
for node in nodes:
cnode = next(cnodes)
try:
direct_delete_object(
node, part, account, container, obj,
conn_timeout=self.conn_timeout,
response_timeout=self.node_timeout,
headers={'X-Container-Host': '%(ip)s:%(port)s' % cnode,
'X-Container-Partition': str(container_partition),
'X-Container-Device': cnode['device'],
'X-Backend-Storage-Policy-Index': policy_index,
'X-Timestamp': timestamp.internal,
USE_REPLICATION_NETWORK_HEADER: 'true'})
successes += 1
self.stats_return_codes[2] = \
self.stats_return_codes.get(2, 0) + 1
self.logger.increment('return_codes.2')
except ClientException as err:
if self.logger.getEffectiveLevel() <= DEBUG:
self.logger.exception(
'Exception with %s', node_to_string(node))
failures += 1
self.logger.increment('objects_failures')
self.stats_return_codes[err.http_status // 100] = \
self.stats_return_codes.get(err.http_status // 100, 0) + 1
self.logger.increment(
'return_codes.%d' % (err.http_status // 100,))
except (Timeout, socket.error):
failures += 1
self.logger.increment('objects_failures')
self.logger.error(
'Timeout Exception with %s', node_to_string(node))
if successes > failures:
self.stats_objects_deleted += 1
self.logger.increment('objects_deleted')
elif not successes:
self.stats_objects_remaining += 1
self.logger.increment('objects_remaining')
else:
self.stats_objects_possibly_remaining += 1
self.logger.increment('objects_possibly_remaining')
| swift-master | swift/account/reaper.py |
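A sketch of the container-sharding rule used by reap_device/reap_account above, so that each primary account node reaps a disjoint subset of containers; hashlib.md5 stands in for swift's FIPS-aware md5 wrapper only to keep the sketch self-contained.

# Illustrative sketch of the shard assignment in reap_account(): a container
# is reaped by the primary node whose index in the node list equals
# md5(container) mod the number of primaries; the other primaries skip it.
from hashlib import md5

def example_container_shard(container_name, replica_count):
    digest = md5(container_name.encode('utf-8')).hexdigest()
    return int(digest, 16) % replica_count

print(example_container_shard('images', 3))   # one of 0, 1, 2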
# Copyright (c) 2010-2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from swift.account.backend import AccountBroker
from swift.common.exceptions import InvalidAccountInfo
from swift.common.db_auditor import DatabaseAuditor
class AccountAuditor(DatabaseAuditor):
"""Audit accounts."""
server_type = "account"
broker_class = AccountBroker
def _audit(self, info, broker):
# Validate per policy counts
policy_stats = broker.get_policy_stats(do_migrations=True)
policy_totals = {
'container_count': 0,
'object_count': 0,
'bytes_used': 0,
}
for policy_stat in policy_stats.values():
for key in policy_totals:
policy_totals[key] += policy_stat[key]
for key in policy_totals:
if policy_totals[key] == info[key]:
continue
return InvalidAccountInfo(
'The total %(key)s for the account %(account)s (%(total)s) '
'does not match the sum of %(key)s across policies (%(sum)s)'
% {'key': key, 'account': info.get('account'),
'total': info[key], 'sum': policy_totals[key]})
| swift-master | swift/account/auditor.py |
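A worked sketch of the consistency check _audit performs above: the account-level totals must equal the sums across storage policies. The numbers below are made up for the example.

# Illustrative sketch of the per-policy totals check in AccountAuditor._audit.
info = {'account': 'AUTH_test', 'container_count': 2,
        'object_count': 10, 'bytes_used': 2048}
policy_stats = {
    0: {'container_count': 1, 'object_count': 4, 'bytes_used': 1024},
    1: {'container_count': 1, 'object_count': 6, 'bytes_used': 1024},
}

for key in ('container_count', 'object_count', 'bytes_used'):
    total = sum(stats[key] for stats in policy_stats.values())
    assert total == info[key], \
        '%s mismatch: account says %s, policies sum to %s' % (
            key, info[key], total)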
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Copyright (c) 2010-2012 OpenStack Foundation.
#
# Swift documentation build configuration file, created by
# sphinx-quickstart on Tue May 18 13:50:15 2010.
#
# This file is execfile()d with the current directory set to its containing
# dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import datetime
import logging
import os
import sys
# NOTE(amotoki): Our current doc build job uses an older version of
# liberasurecode which comes from Ubuntu 16.04.
# pyeclib emits a warning message if liberasurecode <1.3.1 is used [1] and
# this causes the doc build failure if warning-is-error is enabled in Sphinx.
# As a workaround we suppress the warning message from pyeclib until we use
# a newer version of liberasurecode in our doc build job.
# [1] https://github.com/openstack/pyeclib/commit/d163972b
logging.getLogger('pyeclib').setLevel(logging.ERROR)
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.extend([os.path.abspath('../swift'), os.path.abspath('..'),
os.path.abspath('../bin')])
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc',
'sphinx.ext.todo',
'sphinx.ext.coverage',
'sphinx.ext.ifconfig',
'openstackdocstheme',
'sphinxcontrib.rsvgconverter']
todo_include_todos = True
# Add any paths that contain templates here, relative to this directory.
# templates_path = []
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = 'Swift'
if 'SOURCE_DATE_EPOCH' in os.environ:
now = float(os.environ.get('SOURCE_DATE_EPOCH'))
now = datetime.datetime.utcfromtimestamp(now)
else:
now = datetime.date.today()
copyright = '%d, OpenStack Foundation' % now.year
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of documents that shouldn't be included in the build.
# unused_docs = []
# List of directories, relative to source directory, that shouldn't be searched
# for source files.
exclude_trees = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
show_authors = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'native'
# A list of ignored prefixes for module index sorting.
modindex_common_prefix = ['swift.']
# -- Options for HTML output -----------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme = 'default'
# html_theme_path = ["."]
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {
# turn off the "these docs aren't current" banner
'display_badge': False,
}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']
# Add any paths that contain "extra" files, such as .htaccess or
# robots.txt.
html_extra_path = ['_extra']
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_use_modindex = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = ''
# Output file base name for HTML help builder.
htmlhelp_basename = 'swiftdoc'
# -- Options for LaTeX output -------------------------------------------------
# The paper size ('letter' or 'a4').
# latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
# latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index', 'doc-swift.tex', 'Swift Documentation',
'Swift Team', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# Additional stuff for the LaTeX preamble.
# latex_preamble = ''
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_use_modindex = True
latex_use_xindy = False
# -- Options for openstackdocstheme -------------------------------------------
openstackdocs_repo_name = 'openstack/swift'
openstackdocs_pdf_link = True
openstackdocs_auto_name = False
openstackdocs_bug_project = 'swift'
openstackdocs_bug_tag = ''
| swift-master | doc/source/conf.py |
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# swift documentation build configuration file, created by
# sphinx-quickstart on Mon Oct 3 17:01:55 2016.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
import datetime
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'reno.sphinxext',
'openstackdocstheme',
]
# Add any paths that contain templates here, relative to this directory.
# templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffixes as a list of strings:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The encoding of source files.
#
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = 'Swift Release Notes'
copyright = '%d, OpenStack Foundation' % datetime.datetime.now().year
# Release notes do not need a version number in the title, they
# cover multiple releases.
# The short X.Y version.
version = ''
# The full version, including alpha/beta/rc tags.
release = ''
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#
# today = ''
#
# Else, today_fmt is used as the format for a strftime call.
#
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# These patterns also affect html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# The reST default role (used for this markup: `text`) to use for all
# documents.
#
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'native'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# If true, `todo` and `todoList` produce output, else they produce nothing.
# todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents.
# "<project> v<release> documentation" by default.
#
# html_title = u'swift v2.10.0'
# A shorter title for the navigation bar. Default is the same as html_title.
#
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#
# html_logo = None
# The name of an image file (relative to this directory) to use as a favicon of
# the docs. This file should be a Windows icon file (.ico), 16x16 or 32x32
# pixels in size.
#
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#
# html_extra_path = []
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#
# html_additional_pages = {}
# If false, no module index is generated.
#
# html_domain_indices = True
# If false, no index is generated.
#
# html_use_index = True
# If true, the index is split into individual pages for each letter.
#
# html_split_index = False
# If true, links to the reST sources are added to the pages.
#
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Language to be used for generating the HTML full-text search index.
# Sphinx supports the following languages:
# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr', 'zh'
#
# html_search_language = 'en'
# A dictionary with options for the search language support, empty by default.
# 'ja' uses this config value.
# 'zh' users can customize the `jieba` dictionary path.
#
# html_search_options = {'type': 'default'}
# The name of a javascript file (relative to the configuration directory) that
# implements a search results scorer. If empty, the default will be used.
#
# html_search_scorer = 'scorer.js'
# Output file base name for HTML help builder.
htmlhelp_basename = 'SwiftReleaseNotesdoc'
# -- Options for LaTeX output ---------------------------------------------
# latex_elements = {
# # The paper size ('letterpaper' or 'a4paper').
# #
# # 'papersize': 'letterpaper',
# # The font size ('10pt', '11pt' or '12pt').
# #
# # 'pointsize': '10pt',
# # Additional stuff for the LaTeX preamble.
# #
# # 'preamble': '',
# # Latex figure (float) alignment
# #
# # 'figure_align': 'htbp',
# }
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
# latex_documents = [
# (master_doc, 'swift.tex', u'swift Documentation',
# u'swift', 'manual'),
# ]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#
# latex_use_parts = False
# If true, show page references after internal links.
#
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
#
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
#
# latex_appendices = []
# If false, will not define \strong, \code, \titleref, \crossref ... but only
# \sphinxstrong, ..., \sphinxtitleref, ... to help avoid clashes with user-added
# packages.
#
# latex_keep_old_macro_names = True
# If false, no module index is generated.
#
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
# man_pages = [
# (master_doc, 'swift', u'swift Documentation',
# [author], 1)
# ]
# If true, show URL addresses after external links.
#
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
# texinfo_documents = [
# (master_doc, 'swift', u'swift Documentation',
# author, 'swift', 'One line description of project.',
# 'Miscellaneous'),
# ]
# Documents to append as an appendix to all manuals.
#
# texinfo_appendices = []
# If false, no module index is generated.
#
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#
# texinfo_no_detailmenu = False
locale_dirs = ['locale/']
# -- Options for openstackdocstheme -------------------------------------------
openstackdocs_repo_name = 'openstack/swift'
openstackdocs_auto_name = False
openstackdocs_bug_project = 'swift'
openstackdocs_bug_tag = ''
| swift-master | releasenotes/source/conf.py |
r"""
Parse additional arguments along with the setup.py arguments such as install, build, distribute, sdist, etc.
Usage:
python setup.py install <additional_flags>..<additional_flags> <additional_arg>=<value>..<additional_arg>=<value>
export CC=<C++ compiler>; python setup.py install <additional_flags>..<additional_flags> <additional_arg>=<value>..<additional_arg>=<value>
Examples:
python setup.py install --force_cuda --cuda_home=/usr/local/cuda
export CC=g++7; python setup.py install --force_cuda --cuda_home=/usr/local/cuda
Additional flags:
--cpu_only: Force building only a CPU version. However, if
torch.cuda.is_available() is False, it will default to CPU_ONLY.
--force_cuda: If torch.cuda.is_available() is false, but you have a working
nvcc, compile CUDA files. --force_cuda will supersede --cpu_only.
Additional arguments:
--blas=<value> : type of blas library to use for CPU matrix multiplications.
Options: [flexiblas, openblas, mkl, atlas, blas]. By default, it will use the first
numpy blas library it finds.
--cuda_home=<value> : a directory that contains <value>/bin/nvcc and
<value>/lib64/libcudart.so. By default, use
`torch.utils.cpp_extension._find_cuda_home()`.
--blas_include_dirs=<comma_separated_values> : additional include dirs. Only
activated when --blas=<value> is set.
--blas_library_dirs=<comma_separated_values> : additional library dirs. Only
activated when --blas=<value> is set.
"""
import sys
if sys.version_info < (3, 6):
sys.stdout.write(
"Minkowski Engine requires Python 3.6 or higher. Please use anaconda https://www.anaconda.com/distribution/ for an isolated python environment.\n"
)
sys.exit(1)
try:
import torch
except ImportError:
raise ImportError("Pytorch not found. Please install pytorch first.")
import codecs
import os
import re
import subprocess
import warnings
from pathlib import Path
from sys import argv, platform
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension, CUDAExtension
if platform == "win32":
raise ImportError("Windows is currently not supported.")
elif platform == "darwin":
    # Make distutils use clang instead of g++ so a valid C++ standard library is used
if "CC" not in os.environ:
os.environ["CC"] = "/usr/local/opt/llvm/bin/clang"
here = os.path.abspath(os.path.dirname(__file__))
def read(*parts):
with codecs.open(os.path.join(here, *parts), "r") as fp:
return fp.read()
def find_version(*file_paths):
version_file = read(*file_paths)
version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", version_file, re.M)
if version_match:
return version_match.group(1)
raise RuntimeError("Unable to find version string.")
def run_command(*args):
subprocess.check_call(args)
def _argparse(pattern, argv, is_flag=True, is_list=False):
if is_flag:
found = pattern in argv
if found:
argv.remove(pattern)
return found, argv
else:
arr = [arg for arg in argv if pattern == arg.split("=")[0]]
if is_list:
if len(arr) == 0: # not found
return False, argv
else:
assert "=" in arr[0], f"{arr[0]} requires a value."
argv.remove(arr[0])
val = arr[0].split("=")[1]
if "," in val:
return val.split(","), argv
else:
return [val], argv
else:
if len(arr) == 0: # not found
return False, argv
else:
assert "=" in arr[0], f"{arr[0]} requires a value."
argv.remove(arr[0])
return arr[0].split("=")[1], argv
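# A minimal illustration of how _argparse consumes argv (comments only, nothing is
# executed here; the argv below is hypothetical):
#   argv = ["setup.py", "install", "--force_cuda", "--blas=mkl"]
#   _argparse("--force_cuda", argv)            -> (True, argv) with the flag removed
#   _argparse("--blas", argv, False)           -> ("mkl", argv)
#   _argparse("--blas_include_dirs", argv, False, is_list=True)
#                                              -> (False, argv) when the option is absent;
#                                                 a list of the comma-separated values when given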
run_command("rm", "-rf", "build")
run_command("pip", "uninstall", "MinkowskiEngine", "-y")
# For cpu only build
CPU_ONLY, argv = _argparse("--cpu_only", argv)
FORCE_CUDA, argv = _argparse("--force_cuda", argv)
if not torch.cuda.is_available() and not FORCE_CUDA:
warnings.warn(
"torch.cuda.is_available() is False. MinkowskiEngine will compile with CPU_ONLY. Please use `--force_cuda` to compile with CUDA."
)
CPU_ONLY = CPU_ONLY or not torch.cuda.is_available()
if FORCE_CUDA:
print("--------------------------------")
print("| FORCE_CUDA set |")
print("--------------------------------")
CPU_ONLY = False
# args with return value
CUDA_HOME, argv = _argparse("--cuda_home", argv, False)
BLAS, argv = _argparse("--blas", argv, False)
BLAS_INCLUDE_DIRS, argv = _argparse("--blas_include_dirs", argv, False, is_list=True)
BLAS_LIBRARY_DIRS, argv = _argparse("--blas_library_dirs", argv, False, is_list=True)
MAX_COMPILATION_THREADS = 12
Extension = CUDAExtension
extra_link_args = []
include_dirs = []
libraries = []
CC_FLAGS = []
NVCC_FLAGS = []
if CPU_ONLY:
print("--------------------------------")
print("| WARNING: CPU_ONLY build set |")
print("--------------------------------")
Extension = CppExtension
else:
print("--------------------------------")
print("| CUDA compilation set |")
print("--------------------------------")
# system python installation
libraries.append("cusparse")
if not (CUDA_HOME is False): # False when not set, str otherwise
print(f"Using CUDA_HOME={CUDA_HOME}")
if sys.platform == "win32":
vc_version = os.getenv("VCToolsVersion", "")
if vc_version.startswith("14.16."):
CC_FLAGS += ["/sdl"]
else:
CC_FLAGS += ["/sdl", "/permissive-"]
else:
CC_FLAGS += ["-fopenmp"]
if "darwin" in platform:
CC_FLAGS += ["-stdlib=libc++", "-std=c++17"]
NVCC_FLAGS += ["--expt-relaxed-constexpr", "--expt-extended-lambda"]
FAST_MATH, argv = _argparse("--fast_math", argv)
if FAST_MATH:
NVCC_FLAGS.append("--use_fast_math")
BLAS_LIST = ["flexiblas", "openblas", "mkl", "atlas", "blas"]
if not (BLAS is False): # False only when not set, str otherwise
assert BLAS in BLAS_LIST, f"Blas option {BLAS} not in valid options {BLAS_LIST}"
if BLAS == "mkl":
libraries.append("mkl_rt")
CC_FLAGS.append("-DUSE_MKL")
NVCC_FLAGS.append("-DUSE_MKL")
else:
libraries.append(BLAS)
if not (BLAS_INCLUDE_DIRS is False):
include_dirs += BLAS_INCLUDE_DIRS
if not (BLAS_LIBRARY_DIRS is False):
extra_link_args += [f"-Wl,-rpath,{BLAS_LIBRARY_DIRS}"]
else:
# find the default BLAS library
import numpy.distutils.system_info as sysinfo
# Search blas in this order
for blas in BLAS_LIST:
if "libraries" in sysinfo.get_info(blas):
BLAS = blas
libraries += sysinfo.get_info(blas)["libraries"]
break
else:
# BLAS not found
raise ImportError(
' \
\nBLAS not found from numpy.distutils.system_info.get_info. \
            \nPlease specify BLAS with: python setup.py install --blas=openblas \
\nfor more information, please visit https://github.com/NVIDIA/MinkowskiEngine/wiki/Installation'
)
print(f"\nUsing BLAS={BLAS}")
# Ninja cannot correctly compile files that share a name but differ in extension, and it
# picks nvcc or CC based on the extension alone. #include a .cpp file from the
# corresponding .cu file to force nvcc compilation.
SOURCE_SETS = {
"cpu": [
CppExtension,
[
"math_functions_cpu.cpp",
"coordinate_map_manager.cpp",
"convolution_cpu.cpp",
"convolution_transpose_cpu.cpp",
"local_pooling_cpu.cpp",
"local_pooling_transpose_cpu.cpp",
"global_pooling_cpu.cpp",
"broadcast_cpu.cpp",
"pruning_cpu.cpp",
"interpolation_cpu.cpp",
"quantization.cpp",
"direct_max_pool.cpp",
],
["pybind/minkowski.cpp"],
["-DCPU_ONLY"],
],
"gpu": [
CUDAExtension,
[
"math_functions_cpu.cpp",
"math_functions_gpu.cu",
"coordinate_map_manager.cu",
"coordinate_map_gpu.cu",
"convolution_kernel.cu",
"convolution_gpu.cu",
"convolution_transpose_gpu.cu",
"pooling_avg_kernel.cu",
"pooling_max_kernel.cu",
"local_pooling_gpu.cu",
"local_pooling_transpose_gpu.cu",
"global_pooling_gpu.cu",
"broadcast_kernel.cu",
"broadcast_gpu.cu",
"pruning_gpu.cu",
"interpolation_gpu.cu",
"spmm.cu",
"gpu.cu",
"quantization.cpp",
"direct_max_pool.cpp",
],
["pybind/minkowski.cu"],
[],
],
}
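# Each SOURCE_SETS entry is a 4-tuple: (Extension class, backend sources under src/,
# pybind entry points, extra compile definitions). The tuple for the selected target
# ("cpu" or "gpu") is unpacked below into Extension, SRC_FILES, BIND_FILES and ARGS.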
debug, argv = _argparse("--debug", argv)
HERE = Path(os.path.dirname(__file__)).absolute()
SRC_PATH = HERE / "src"
if "CC" in os.environ or "CXX" in os.environ:
# distutils only checks CC not CXX
if "CXX" in os.environ:
os.environ["CC"] = os.environ["CXX"]
CC = os.environ["CXX"]
else:
CC = os.environ["CC"]
print(f"Using {CC} for c++ compilation")
if torch.__version__ < "1.7.0":
NVCC_FLAGS += [f"-ccbin={CC}"]
else:
print("Using the default compiler")
if debug:
CC_FLAGS += ["-g", "-DDEBUG"]
NVCC_FLAGS += ["-g", "-DDEBUG", "-Xcompiler=-fno-gnu-unique"]
else:
CC_FLAGS += ["-O3"]
NVCC_FLAGS += ["-O3", "-Xcompiler=-fno-gnu-unique"]
if "MAX_JOBS" not in os.environ and os.cpu_count() > MAX_COMPILATION_THREADS:
    # Clip the number of compilation threads to MAX_COMPILATION_THREADS
os.environ["MAX_JOBS"] = str(MAX_COMPILATION_THREADS)
target = "cpu" if CPU_ONLY else "gpu"
Extension = SOURCE_SETS[target][0]
SRC_FILES = SOURCE_SETS[target][1]
BIND_FILES = SOURCE_SETS[target][2]
ARGS = SOURCE_SETS[target][3]
CC_FLAGS += ARGS
NVCC_FLAGS += ARGS
ext_modules = [
Extension(
name="MinkowskiEngineBackend._C",
sources=[*[str(SRC_PATH / src_file) for src_file in SRC_FILES], *BIND_FILES],
extra_compile_args={"cxx": CC_FLAGS, "nvcc": NVCC_FLAGS},
libraries=libraries,
),
]
# Python interface
setup(
name="MinkowskiEngine",
version=find_version("MinkowskiEngine", "__init__.py"),
install_requires=["torch", "numpy"],
packages=["MinkowskiEngine", "MinkowskiEngine.utils", "MinkowskiEngine.modules"],
package_dir={"MinkowskiEngine": "./MinkowskiEngine"},
ext_modules=ext_modules,
include_dirs=[str(SRC_PATH), str(SRC_PATH / "3rdparty"), *include_dirs],
cmdclass={"build_ext": BuildExtension.with_options(use_ninja=True)},
author="Christopher Choy",
author_email="[email protected]",
description="a convolutional neural network library for sparse tensors",
long_description=read("README.md"),
long_description_content_type="text/markdown",
url="https://github.com/NVIDIA/MinkowskiEngine",
keywords=[
"pytorch",
"Minkowski Engine",
"Sparse Tensor",
"Convolutional Neural Networks",
"3D Vision",
"Deep Learning",
],
zip_safe=False,
classifiers=[
        # https://pypi.org/classifiers/
"Environment :: Console",
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"Intended Audience :: Other Audience",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Programming Language :: C++",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Topic :: Multimedia :: Graphics",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Scientific/Engineering :: Physics",
"Topic :: Scientific/Engineering :: Visualization",
],
python_requires=">=3.6",
)
| MinkowskiEngine-master | setup.py |
import unittest
if __name__ == '__main__':
unittest.main()
| MinkowskiEngine-master | tests/run_test.py |
# Copyright (c) 2020 NVIDIA CORPORATION.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of
# this software and associated documentation files (the "Software"), to deal in
# the Software without restriction, including without limitation the rights to
# use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
# of the Software, and to permit persons to whom the Software is furnished to do
# so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
# Please cite "4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural
# Networks", CVPR'19 (https://arxiv.org/abs/1904.08755) if you use any part
# of the code.
import torch
import unittest
import torch.nn as nn
from tests.python.common import load_file
from MinkowskiEngine.utils import batched_coordinates, sparse_quantize
from MinkowskiTensor import SparseTensorQuantizationMode
from MinkowskiTensorField import TensorField
from MinkowskiOps import MinkowskiLinear, MinkowskiToSparseTensor
from MinkowskiNonlinearity import MinkowskiReLU
from MinkowskiNormalization import MinkowskiBatchNorm
from MinkowskiConvolution import MinkowskiConvolution, MinkowskiConvolutionTranspose
class TestTensorField(unittest.TestCase):
def test(self):
coords = torch.IntTensor(
[[0, 1], [0, 1], [0, 2], [0, 2], [1, 0], [1, 0], [1, 1]]
)
feats = torch.FloatTensor([[0, 1, 2, 3, 5, 6, 7]]).T
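        # Note on the expected result: the coordinates [0, 1], [0, 2] and [1, 0] each
        # appear twice, so UNWEIGHTED_AVERAGE quantization should average the duplicated
        # features: (0 + 1) / 2, (2 + 3) / 2, (5 + 6) / 2 and 7, i.e. {0.5, 2.5, 5.5, 7},
        # which the assertions below check.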
sfield = TensorField(feats, coords)
# Convert to a sparse tensor
stensor = sfield.sparse(
quantization_mode=SparseTensorQuantizationMode.UNWEIGHTED_AVERAGE
)
print(stensor)
self.assertTrue(
{0.5, 2.5, 5.5, 7} == {a for a in stensor.F.squeeze().detach().numpy()}
)
# device cuda
if not torch.cuda.is_available():
return
sfield = TensorField(feats, coords, device="cuda")
# Convert to a sparse tensor
stensor = sfield.sparse(
quantization_mode=SparseTensorQuantizationMode.UNWEIGHTED_AVERAGE
)
print(stensor)
self.assertTrue(
{0.5, 2.5, 5.5, 7}
== {a for a in stensor.F.squeeze().detach().cpu().numpy()}
)
def test_maxpool(self):
coords = torch.IntTensor(
[[0, 1], [0, 1], [0, 2], [0, 2], [1, 0], [1, 0], [1, 1]]
)
feats = torch.FloatTensor([[0, 1, 2, 3, 5, 6, 7]]).T
sfield = TensorField(feats, coords)
# Convert to a sparse tensor
stensor = sfield.sparse(quantization_mode=SparseTensorQuantizationMode.MAX_POOL)
print(stensor)
self.assertTrue(
{1, 3, 6, 7} == {a for a in stensor.F.squeeze().detach().numpy()}
)
# device cuda
if not torch.cuda.is_available():
return
sfield = TensorField(feats, coords, device="cuda")
# Convert to a sparse tensor
stensor = sfield.sparse(quantization_mode=SparseTensorQuantizationMode.MAX_POOL)
print(stensor)
self.assertTrue(
{1, 3, 6, 7} == {a for a in stensor.F.squeeze().detach().cpu().numpy()}
)
def test_pcd(self):
coords, colors, pcd = load_file("1.ply")
voxel_size = 0.02
colors = torch.from_numpy(colors)
bcoords = batched_coordinates([coords / voxel_size])
tfield = TensorField(colors, bcoords)
self.assertTrue(len(tfield) == len(colors))
stensor = tfield.sparse()
print(stensor)
def test_network(self):
coords, colors, pcd = load_file("1.ply")
voxel_size = 0.02
colors = torch.from_numpy(colors)
bcoords = batched_coordinates([coords / voxel_size])
tfield = TensorField(colors, bcoords).float()
network = nn.Sequential(
MinkowskiLinear(3, 16),
MinkowskiBatchNorm(16),
MinkowskiReLU(),
MinkowskiLinear(16, 32),
MinkowskiBatchNorm(32),
MinkowskiReLU(),
MinkowskiToSparseTensor(),
MinkowskiConvolution(32, 64, kernel_size=3, stride=2, dimension=3),
)
print(network(tfield))
def test_network_device(self):
coords, colors, pcd = load_file("1.ply")
voxel_size = 0.02
colors = torch.from_numpy(colors)
bcoords = batched_coordinates([coords / voxel_size])
tfield = TensorField(colors, bcoords, device=0).float()
network = nn.Sequential(
MinkowskiLinear(3, 16),
MinkowskiBatchNorm(16),
MinkowskiReLU(),
MinkowskiLinear(16, 32),
MinkowskiBatchNorm(32),
MinkowskiReLU(),
MinkowskiToSparseTensor(),
MinkowskiConvolution(32, 64, kernel_size=3, stride=2, dimension=3),
).to(0)
print(network(tfield))
def slice(self):
device = "cuda"
coords, colors, pcd = load_file("1.ply")
voxel_size = 0.02
colors = torch.from_numpy(colors).float()
bcoords = batched_coordinates([coords / voxel_size], dtype=torch.float32)
tfield = TensorField(colors, bcoords, device=device)
network = nn.Sequential(
MinkowskiLinear(3, 16),
MinkowskiBatchNorm(16),
MinkowskiReLU(),
MinkowskiLinear(16, 32),
MinkowskiBatchNorm(32),
MinkowskiReLU(),
MinkowskiToSparseTensor(),
MinkowskiConvolution(32, 64, kernel_size=3, stride=2, dimension=3),
MinkowskiConvolutionTranspose(64, 32, kernel_size=3, stride=2, dimension=3),
).to(device)
otensor = network(tfield)
ofield = otensor.slice(tfield)
self.assertEqual(len(tfield), len(ofield))
self.assertEqual(ofield.F.size(1), otensor.F.size(1))
ofield = otensor.cat_slice(tfield)
self.assertEqual(len(tfield), len(ofield))
self.assertEqual(ofield.F.size(1), (otensor.F.size(1) + tfield.F.size(1)))
def slice_no_duplicate(self):
coords, colors, pcd = load_file("1.ply")
voxel_size = 0.02
# Extract unique coords
coords, colors = sparse_quantize(coords / voxel_size, colors)
bcoords = batched_coordinates([coords], dtype=torch.float32)
colors = torch.from_numpy(colors).float()
tfield = TensorField(colors, bcoords)
network = nn.Sequential(
MinkowskiLinear(3, 16),
MinkowskiBatchNorm(16),
MinkowskiReLU(),
MinkowskiLinear(16, 32),
MinkowskiBatchNorm(32),
MinkowskiReLU(),
MinkowskiToSparseTensor(),
MinkowskiConvolution(32, 64, kernel_size=3, stride=2, dimension=3),
MinkowskiConvolutionTranspose(64, 32, kernel_size=3, stride=2, dimension=3),
)
otensor = network(tfield)
ofield = otensor.slice(tfield)
self.assertEqual(len(tfield), len(ofield))
self.assertEqual(ofield.F.size(1), otensor.F.size(1))
ofield = otensor.cat_slice(tfield)
self.assertEqual(len(tfield), len(ofield))
self.assertEqual(ofield.F.size(1), (otensor.F.size(1) + tfield.F.size(1)))
def stride_slice(self):
coords, colors, pcd = load_file("1.ply")
voxel_size = 0.02
colors = torch.from_numpy(colors).float()
bcoords = batched_coordinates([coords / voxel_size], dtype=torch.float32)
tfield = TensorField(colors, bcoords)
network = nn.Sequential(
MinkowskiToSparseTensor(),
MinkowskiConvolution(3, 8, kernel_size=3, stride=4, dimension=3),
MinkowskiReLU(),
MinkowskiConvolution(8, 16, kernel_size=3, stride=4, dimension=3),
)
otensor = network(tfield)
ofield = otensor.slice(tfield)
self.assertTrue(len(ofield) == len(tfield))
def field_to_sparse(self):
coords, colors, pcd = load_file("1.ply")
voxel_size = 0.02
colors = torch.from_numpy(colors).float()
bcoords = batched_coordinates([coords / voxel_size], dtype=torch.float32)
tfield = TensorField(colors, bcoords)
network = nn.Sequential(
MinkowskiToSparseTensor(),
MinkowskiConvolution(3, 8, kernel_size=3, stride=4, dimension=3),
MinkowskiReLU(),
MinkowskiConvolution(8, 16, kernel_size=3, stride=4, dimension=3),
)
otensor = network(tfield)
otensor.F.sum().backward()
field_to_sparse = tfield.sparse(coordinate_map_key=otensor.coordinate_map_key)
self.assertTrue(len(field_to_sparse.F) == len(otensor))
class TestTensorFieldSplat(unittest.TestCase):
def setUp(self):
coords, colors, pcd = load_file("1.ply")
voxel_size = 0.02
colors = torch.from_numpy(colors).float()
bcoords = batched_coordinates([coords / voxel_size], dtype=torch.float32)
self.tensor_field = TensorField(coordinates=bcoords, features=colors)
def test_splat(self):
self.tensor_field.splat()
def test_small(self):
coords = torch.FloatTensor([[0, 0.1], [0, 1.1]])
feats = torch.FloatTensor([[1], [2]])
tfield = TensorField(coordinates=coords, features=feats)
tensor = tfield.splat()
print(tfield)
print(tensor)
print(tensor.interpolate(tfield))
def test_small2(self):
coords = torch.FloatTensor([[0, 0.1, 0.1], [0, 1.1, 1.1]])
feats = torch.FloatTensor([[1], [2]])
tfield = TensorField(coordinates=coords, features=feats)
tensor = tfield.splat()
print(tfield)
print(tensor)
        print(tensor.interpolate(tfield))
| MinkowskiEngine-master | tests/python/tensor_field.py
# Copyright (c) Chris Choy ([email protected]).
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of
# this software and associated documentation files (the "Software"), to deal in
# the Software without restriction, including without limitation the rights to
# use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
# of the Software, and to permit persons to whom the Software is furnished to do
# so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
# Please cite "4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural
# Networks", CVPR'19 (https://arxiv.org/abs/1904.08755) if you use any part
# of the code.
import unittest
import torch
import numpy as np
import MinkowskiEngine as ME
class CoordinateManagerTestCase(unittest.TestCase):
def test_coordinate_manager(self):
coordinates = torch.IntTensor(
[[0, 1], [0, 1], [0, 2], [0, 2], [1, 0], [1, 0], [1, 1]]
)
manager = ME.CoordinateManager(
D=1, coordinate_map_type=ME.CoordinateMapType.CPU
)
key, (unique_map, inverse_map) = manager.insert_and_map(coordinates, [1])
# mapping and inverse mapping should recover the original coordinates
self.assertTrue(
torch.all(coordinates[unique_map.long()][inverse_map.long()] == coordinates)
)
# copied coordinates should retrieve the original coordinates
retrieved_coordinates = manager.get_coordinates(key)
self.assertTrue(
torch.all(coordinates == retrieved_coordinates[inverse_map.long()])
)
# Create a strided map
stride_key = manager.stride(key, [4])
strided_coords = manager.get_coordinates(stride_key)
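        # At tensor stride 4, coordinates 1 and 2 of batch 0 and coordinates 0 and 1 of
        # batch 1 all fall into the same stride-4 cell, leaving one strided coordinate
        # per batch, hence the expected length of 2.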
self.assertTrue(len(strided_coords) == 2)
# # Create a transposed stride map
# transposed_key = cm.transposed_stride(stride_key, [2], [3], [1])
# print("Transposed Stride: ", cm.get_coords(transposed_key))
# print(cm)
# # Create a transposed stride map
# transposed_key = cm.transposed_stride(
# stride_key, [2], [3], [1], force_creation=True
# )
# print("Forced Transposed Stride: ", cm.get_coords(transposed_key))
# print(cm)
# # Create a reduction map
# key = cm.reduce()
# print("Reduction: ", cm.get_coords(key))
# print(cm)
# print("Reduction mapping: ", cm.get_row_indices_per_batch(stride_key))
# print(cm)
def test_stride(self):
coordinates = torch.IntTensor(
[[0, 1], [0, 1], [0, 2], [0, 2], [1, 0], [1, 0], [1, 1]]
)
manager = ME.CoordinateManager(
D=1, coordinate_map_type=ME.CoordinateMapType.CPU
)
key, (unique_map, inverse_map) = manager.insert_and_map(coordinates, [1])
# Create a strided map
stride_key = manager.stride(key, [4])
print(manager.get_coordinates(key))
print(manager.get_coordinates(stride_key))
print(
manager.kernel_map(
key,
stride_key,
[4],
[4],
[1],
ME.RegionType.HYPER_CUBE,
torch.IntTensor(),
False,
True,
)
)
# print(manager.stride_map(key, stride_key))
def test_kernel_map(self):
coordinates = torch.IntTensor(
[[0, 1], [0, 1], [0, 2], [0, 2], [1, 0], [1, 0], [1, 1]]
)
manager = ME.CoordinateManager(
D=1, coordinate_map_type=ME.CoordinateMapType.CPU
)
key, (unique_map, inverse_map) = manager.insert_and_map(coordinates, [1])
# Create a strided map
stride_key = manager.stride(key, [4])
print(manager.get_coordinates(key))
print(manager.get_coordinates(stride_key))
print(
manager.kernel_map(
key,
stride_key,
[4],
[4],
[1],
ME.RegionType.HYPER_CUBE,
torch.IntTensor(),
False,
False,
)
)
# print(manager.stride_map(key, stride_key))
def test_stride_cuda(self):
coordinates = torch.IntTensor(
[[0, 1], [0, 1], [0, 2], [0, 2], [1, 0], [1, 0], [1, 1]]
).cuda()
manager = ME.CoordinateManager(
D=1, coordinate_map_type=ME.CoordinateMapType.CUDA
)
key, (unique_map, inverse_map) = manager.insert_and_map(coordinates, [1])
# Create a strided map
stride_key = manager.stride(key, [4])
print(manager.get_coordinates(key))
print(manager.get_coordinates(stride_key))
# print(
# manager.kernel_map(
# key,
# stride_key,
# [4],
# [4],
# [1],
# ME.RegionType.HYPER_CUBE,
# torch.IntTensor(),
# False,
# True,
# )
# )
print(manager.stride_map(key, stride_key))
print(
manager.kernel_map(
key,
stride_key,
[4],
[4],
[1],
ME.RegionType.HYPER_CUBE,
torch.IntTensor(),
False,
False,
)
)
def test_negative_coords(self):
coords = torch.IntTensor(
[[0, -3], [0, -2], [0, -1], [0, 0], [0, 1], [0, 2], [0, 3]]
)
# Initialize map
manager = ME.CoordinateManager(
D=1, coordinate_map_type=ME.CoordinateMapType.CPU
)
key, (unique_map, inverse_map) = manager.insert_and_map(coords, [1])
# Create a strided map
stride_key = manager.stride(key, [2])
strided_coords = manager.get_coordinates(stride_key).numpy().tolist()
self.assertTrue(len(strided_coords) == 4)
self.assertTrue([0, -4] in strided_coords)
self.assertTrue([0, -2] in strided_coords)
self.assertTrue([0, 2] in strided_coords)
def test_origin_map(self):
manager = ME.CoordinateManager(
D=1, coordinate_map_type=ME.CoordinateMapType.CPU
)
coords = torch.IntTensor(
[[0, -3], [0, -2], [0, -1], [0, 0], [1, 1], [1, 2], [1, 3]]
)
# key with batch_size 2
key, (unique_map, inverse_map) = manager.insert_and_map(coords, [1])
batch_indices, origin_map = manager.origin_map(key)
print(origin_map)
# self.assertTrue(set(origin_map[0].numpy()) == set([0, 1, 2, 3]))
key = manager.origin()
batch_coordinates = manager.get_coordinates(key)
print(batch_coordinates)
self.assertTrue(len(batch_coordinates) == 2)
if not ME.is_cuda_available():
return
manager = ME.CoordinateManager(
D=1,
coordinate_map_type=ME.CoordinateMapType.CUDA,
allocator_type=ME.GPUMemoryAllocatorType.PYTORCH,
)
key, (unique_map, inverse_map) = manager.insert_and_map(coords.to(0), [1])
origin_map = manager.origin_map(key)
print(origin_map)
key = manager.origin()
self.assertTrue(manager.number_of_unique_batch_indices() == 2)
batch_coordinates = manager.get_coordinates(key)
print(batch_coordinates)
self.assertTrue(len(batch_coordinates) == 2)
def test_gpu_allocator(self):
if not ME.is_cuda_available():
return
# Set the global GPU memory manager backend. By default PYTORCH.
ME.set_gpu_allocator(ME.GPUMemoryAllocatorType.PYTORCH)
ME.set_gpu_allocator(ME.GPUMemoryAllocatorType.CUDA)
# Create a coords man with the specified GPU memory manager backend.
# No effect with CPU_ONLY build
manager = ME.CoordinateManager(
D=1,
coordinate_map_type=ME.CoordinateMapType.CPU,
allocator_type=ME.GPUMemoryAllocatorType.CUDA,
)
def test_unique(self):
coordinates = torch.IntTensor([[0, 0], [0, 0], [0, 1], [0, 2]])
unique_map, inverse_map = ME.utils.unique_coordinate_map(coordinates)
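        # [0, 0] appears twice in the input, so only 3 unique coordinates should remain.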
self.assertTrue(len(unique_map) == 3)
| MinkowskiEngine-master | tests/python/coordinate_manager.py |
# Copyright (c) 2020 NVIDIA CORPORATION.
# Copyright (c) 2018-2020 Chris Choy ([email protected]).
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of
# this software and associated documentation files (the "Software"), to deal in
# the Software without restriction, including without limitation the rights to
# use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
# of the Software, and to permit persons to whom the Software is furnished to do
# so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
# Please cite "4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural
# Networks", CVPR'19 (https://arxiv.org/abs/1904.08755) if you use any part
# of the code.
import torch
import unittest
import MinkowskiEngine as ME
class TestUtility(unittest.TestCase):
def test(self):
self.assertTrue(ME.is_cuda_available() == torch.cuda.is_available())
if ME.is_cuda_available():
print(ME.cuda_version())
print(ME.get_gpu_memory_info())
| MinkowskiEngine-master | tests/python/utility_functions.py |
# Copyright (c) Chris Choy ([email protected]).
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of
# this software and associated documentation files (the "Software"), to deal in
# the Software without restriction, including without limitation the rights to
# use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
# of the Software, and to permit persons to whom the Software is furnished to do
# so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
# Please cite "4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural
# Networks", CVPR'19 (https://arxiv.org/abs/1904.08755) if you use any part
# of the code.
import os
import argparse
import numpy as np
from urllib.request import urlretrieve
try:
import open3d as o3d
except ImportError:
raise ImportError("Please install open3d with `pip install open3d`.")
import torch
import MinkowskiEngine as ME
from MinkowskiCommon import convert_to_int_list
import examples.minkunet as UNets
from tests.python.common import data_loader, load_file, batched_coordinates
from examples.common import Timer
# Check if the weights and file exist and download
if not os.path.isfile("weights.pth"):
print("Downloading weights and a room ply file...")
urlretrieve(
"http://cvgl.stanford.edu/data2/minkowskiengine/weights.pth", "weights.pth"
)
urlretrieve("http://cvgl.stanford.edu/data2/minkowskiengine/1.ply", "1.ply")
parser = argparse.ArgumentParser()
parser.add_argument("--file_name", type=str, default="1.ply")
parser.add_argument("--weights", type=str, default="weights.pth")
parser.add_argument("--use_cpu", action="store_true")
parser.add_argument("--backward", action="store_true")
parser.add_argument("--max_batch", type=int, default=12)
def quantize(coordinates):
D = coordinates.size(1) - 1
coordinate_manager = ME.CoordinateManager(
D=D, coordinate_map_type=ME.CoordinateMapType.CPU
)
coordinate_map_key = ME.CoordinateMapKey(convert_to_int_list(1, D), "")
key, (unique_map, inverse_map) = coordinate_manager.insert_and_map(
coordinates, *coordinate_map_key.get_key()
)
return unique_map, inverse_map
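# quantize() deduplicates voxelized coordinates: unique_map selects one representative
# row per occupied voxel and inverse_map sends every original point back to its voxel.
# load_file() below only needs the unique map to keep one coordinate/feature pair per voxel.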
def load_file(file_name, voxel_size):
pcd = o3d.io.read_point_cloud(file_name)
coords = torch.from_numpy(np.array(pcd.points))
feats = torch.from_numpy(np.array(pcd.colors)).float()
quantized_coords = torch.floor(coords / voxel_size).int()
inds, inverse_inds = quantize(quantized_coords)
return quantized_coords[inds], feats[inds], pcd
def forward(coords, colors, model):
# Measure time
timer = Timer()
for i in range(5):
# Feed-forward pass and get the prediction
timer.tic()
sinput = ME.SparseTensor(
features=colors,
coordinates=coords,
device=device,
allocator_type=ME.GPUMemoryAllocatorType.PYTORCH,
)
logits = model(sinput)
timer.toc()
return timer.min_time, len(logits)
def train(coords, colors, model):
# Measure time
timer = Timer()
for i in range(5):
# Feed-forward pass and get the prediction
timer.tic()
sinput = ME.SparseTensor(
colors,
coords,
device=device,
allocator_type=ME.GPUMemoryAllocatorType.PYTORCH,
)
logits = model(sinput)
logits.F.sum().backward()
timer.toc()
return timer.min_time, len(logits)
def test_network(coords, feats, model, batch_sizes, forward_only=True):
for batch_size in batch_sizes:
bcoords = batched_coordinates([coords for i in range(batch_size)])
bfeats = torch.cat([feats for i in range(batch_size)], 0)
if forward_only:
with torch.no_grad():
time, length = forward(bcoords, bfeats, model)
else:
time, length = train(bcoords, bfeats, model)
print(f"{net.__name__}\t{voxel_size}\t{batch_size}\t{length}\t{time}")
torch.cuda.empty_cache()
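# Each line printed by test_network is tab-separated: network class name, voxel size,
# batch size, number of output points, and the best (minimum) time in seconds over 5 runs.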
if __name__ == "__main__":
config = parser.parse_args()
device = torch.device(
"cuda" if (torch.cuda.is_available() and not config.use_cpu) else "cpu"
)
print(f"Using {device}")
print(f"Using backward {config.backward}")
# Define a model and load the weights
batch_sizes = [i for i in range(2, config.max_batch + 1, 2)]
batch_sizes = [1, *batch_sizes]
for net in [UNets.MinkUNet14, UNets.MinkUNet18, UNets.MinkUNet34, UNets.MinkUNet50]:
model = net(3, 20).to(device)
model.eval()
for voxel_size in [0.02]:
print(voxel_size)
coords, feats, _ = load_file(config.file_name, voxel_size)
test_network(coords, feats, model, batch_sizes, not config.backward)
torch.cuda.empty_cache()
del model
| MinkowskiEngine-master | tests/python/network_speed.py |
# Copyright (c) 2020 NVIDIA CORPORATION.
# Copyright (c) 2018-2020 Chris Choy ([email protected]).
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of
# this software and associated documentation files (the "Software"), to deal in
# the Software without restriction, including without limitation the rights to
# use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
# of the Software, and to permit persons to whom the Software is furnished to do
# so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
# Please cite "4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural
# Networks", CVPR'19 (https://arxiv.org/abs/1904.08755) if you use any part
# of the code.
import torch
import unittest
import time
import numpy as np
import MinkowskiEngineBackend._C as _C
from MinkowskiEngine import (
SparseTensor,
MinkowskiAlgorithm,
MinkowskiConvolution,
MinkowskiConvolutionFunction,
MinkowskiConvolutionTranspose,
MinkowskiConvolutionTransposeFunction,
MinkowskiGenerativeConvolutionTranspose,
MinkowskiChannelwiseConvolution,
KernelGenerator,
)
from MinkowskiEngine.utils import batched_coordinates
from tests.python.common import data_loader, load_file
from utils.gradcheck import gradcheck
LEAK_TEST_ITER = 100000
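# Number of repeated forward/backward passes used at the end of TestConvolution.test to
# surface memory leaks; reduce it for a quicker local run.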
class TestConvolution(unittest.TestCase):
def test_expansion(self):
print(f"{self.__class__.__name__}: test_expansion")
in_channels, out_channels, D = 2, 2, 2
coords, feats, labels = data_loader(in_channels)
feats = feats.double()
feats.requires_grad_()
# Initialize context
conv = MinkowskiConvolution(
in_channels,
out_channels,
kernel_size=3,
stride=2,
bias=False,
expand_coordinates=True,
dimension=D,
).double()
input = SparseTensor(
feats,
coordinates=coords,
minkowski_algorithm=MinkowskiAlgorithm.SPEED_OPTIMIZED,
)
print(input)
output = conv(input)
print(output)
if not torch.cuda.is_available():
return
input = SparseTensor(
feats,
coordinates=coords,
minkowski_algorithm=MinkowskiAlgorithm.SPEED_OPTIMIZED,
device="cuda",
)
conv = conv.to("cuda")
print(input)
output = conv(input)
print(output)
def test_kernel_map(self):
print(f"{self.__class__.__name__}: test_gpu")
if not torch.cuda.is_available():
return
in_channels, out_channels, D = 2, 2, 2
coords, feats, labels = data_loader(in_channels)
feats = feats.double()
feats.requires_grad_()
# Initialize context
conv1 = MinkowskiConvolution(
in_channels, out_channels, kernel_size=2, stride=2, bias=True, dimension=D
).double()
conv2 = MinkowskiConvolution(
in_channels, out_channels, kernel_size=3, stride=2, bias=True, dimension=D
).double()
device = torch.device("cuda")
input = SparseTensor(
feats,
coordinates=coords,
device=device,
minkowski_algorithm=MinkowskiAlgorithm.SPEED_OPTIMIZED,
)
print(input)
conv1 = conv1.to(device)
conv2 = conv2.to(device)
output = conv2(conv1(input))
print(output)
def test_gpu(self):
print(f"{self.__class__.__name__}: test_gpu")
if not torch.cuda.is_available():
return
in_channels, out_channels, D = 2, 3, 2
coords, feats, labels = data_loader(in_channels)
feats = feats.double()
feats.requires_grad_()
# Initialize context
conv = MinkowskiConvolution(
in_channels, out_channels, kernel_size=3, stride=2, bias=True, dimension=D
)
print(conv)
input = SparseTensor(feats, coordinates=coords)
conv = conv.double()
output = conv(input)
print(output)
device = torch.device("cuda")
input = SparseTensor(feats.to(device), coordinates=coords.to(device))
conv = conv.to(device)
output = conv(input)
print(output)
# Check backward
fn = MinkowskiConvolutionFunction()
grad = output.F.clone().zero_()
grad[0] = 1
output.F.backward(grad)
self.assertTrue(
gradcheck(
fn,
(
input.F,
conv.kernel,
conv.kernel_generator,
conv.convolution_mode,
input.coordinate_map_key,
None,
input.coordinate_manager,
),
)
)
def test(self):
print(f"{self.__class__.__name__}: test")
in_channels, out_channels, D = 2, 3, 2
coords, feats, labels = data_loader(in_channels)
feats = feats.double()
feats.requires_grad_()
input = SparseTensor(feats, coordinates=coords)
# Initialize context
conv = MinkowskiConvolution(
in_channels, out_channels, kernel_size=3, stride=2, bias=True, dimension=D
)
conv = conv.double()
output = conv(input)
print(output)
self.assertEqual(input.coordinate_map_key.get_tensor_stride(), [1, 1])
self.assertEqual(output.coordinate_map_key.get_tensor_stride(), [2, 2])
if torch.cuda.is_available():
input_gpu = SparseTensor(feats, coordinates=coords, device="cuda")
conv_gpu = conv.cuda()
output_gpu = conv_gpu(input_gpu)
self.assertTrue(torch.allclose(output_gpu.F.var(0).cpu(), output.F.var(0)))
self.assertTrue(
torch.allclose(output_gpu.F.mean(0).cpu(), output.F.mean(0))
)
# kernel_map = input.coords_man.kernel_map(
# 1, 2, stride=2, kernel_size=3)
# print(kernel_map)
# Check backward
fn = MinkowskiConvolutionFunction()
conv = conv.cpu()
self.assertTrue(
gradcheck(
fn,
(
input.F,
conv.kernel,
conv.kernel_generator,
conv.convolution_mode,
input.coordinate_map_key,
output.coordinate_map_key,
input.coordinate_manager,
),
)
)
for i in range(LEAK_TEST_ITER):
input = SparseTensor(feats, coordinates=coords)
conv(input).F.sum().backward()
if i % 1000 == 0:
print(i)
def test_analytic(self):
print(f"{self.__class__.__name__}: test")
in_channels, out_channels, D = 2, 2, 1
coords = torch.IntTensor([[0, 0], [0, 1], [0, 2]])
feats = torch.FloatTensor([[0, 1], [1, 0], [1, 1]])
input = SparseTensor(feats, coordinates=coords)
# Initialize context
conv = MinkowskiConvolution(
in_channels, out_channels, kernel_size=2, stride=2, bias=False, dimension=D
)
conv.kernel[:] = torch.FloatTensor([[[1, 2], [2, 1]], [[0, 1], [1, 0]]])
output = conv(input)
print(output)
conv = MinkowskiConvolution(
in_channels, out_channels, kernel_size=2, stride=1, bias=False, dimension=D
)
conv.kernel[:] = torch.FloatTensor([[[1, 2], [2, 1]], [[0, 1], [1, 0]]])
output = conv(input)
print(output)
class TestConvolutionMode(unittest.TestCase):
def test_gpu(self):
print(f"{self.__class__.__name__}: test_gpu")
if not torch.cuda.is_available():
return
in_channels, out_channels, D = 3, 2, 2
coords, feats, labels = data_loader(in_channels, batch_size=20)
feats = feats.double()
feats.requires_grad_()
device = torch.device("cuda")
conv = (
MinkowskiConvolution(
in_channels,
out_channels,
kernel_size=2,
stride=1,
bias=False,
dimension=D,
)
.to(device)
.double()
)
# Initialize context
for mode in [_C.ConvolutionMode.DIRECT_GEMM, _C.ConvolutionMode.COPY_GEMM]:
conv.convolution_mode = mode
input = SparseTensor(feats, coordinates=coords, device=device)
print(mode, input.F.numel(), len(input), input)
output = conv(input)
print(output)
# Check backward
fn = MinkowskiConvolutionFunction()
grad = output.F.clone().zero_()
grad[0] = 1
output.F.backward(grad)
self.assertTrue(
gradcheck(
fn,
(
input.F,
conv.kernel,
conv.kernel_generator,
conv.convolution_mode,
input.coordinate_map_key,
None,
input.coordinate_manager,
),
)
)
class TestConvolutionTranspose(unittest.TestCase):
def test_gpu(self):
print(f"{self.__class__.__name__}: test_gpu")
if not torch.cuda.is_available():
return
device = torch.device("cuda")
in_channels, out_channels, D = 2, 3, 2
coords, feats, labels = data_loader(in_channels)
feats = feats.double()
feats.requires_grad_()
input = SparseTensor(feats.to(device), coordinates=coords.to(device))
# Initialize context
conv = (
MinkowskiConvolution(
in_channels,
out_channels,
kernel_size=3,
stride=2,
bias=True,
dimension=D,
)
.double()
.to(device)
)
conv_tr = (
MinkowskiConvolutionTranspose(
out_channels,
in_channels,
kernel_size=3,
stride=2,
bias=True,
dimension=D,
)
.double()
.to(device)
)
tr_input = conv(input)
print(tr_input)
output = conv_tr(tr_input)
print(output)
# Check backward
fn = MinkowskiConvolutionTransposeFunction()
self.assertTrue(
gradcheck(
fn,
(
tr_input.F,
conv_tr.kernel,
conv_tr.kernel_generator,
conv_tr.convolution_mode,
tr_input.coordinate_map_key,
output.coordinate_map_key,
tr_input.coordinate_manager,
),
)
)
def test(self):
print(f"{self.__class__.__name__}: test")
in_channels, out_channels, D = 2, 3, 2
coords, feats, labels = data_loader(in_channels)
feats = feats.double()
feats.requires_grad_()
input = SparseTensor(feats, coordinates=coords)
# Initialize context
conv = MinkowskiConvolution(
in_channels, out_channels, kernel_size=3, stride=2, bias=True, dimension=D
).double()
conv_tr = MinkowskiConvolutionTranspose(
out_channels, in_channels, kernel_size=2, stride=2, bias=True, dimension=D
).double()
print("Initial input: ", input)
input = conv(input)
print("Conv output: ", input)
output = conv_tr(input)
print("Conv tr output: ", output)
# Check backward
fn = MinkowskiConvolutionTransposeFunction()
self.assertTrue(
gradcheck(
fn,
(
input.F,
conv_tr.kernel,
conv_tr.kernel_generator,
conv_tr.convolution_mode,
input.coordinate_map_key,
output.coordinate_map_key,
input.coordinate_manager,
),
)
)
def test_analytic(self):
print(f"{self.__class__.__name__}: test")
in_channels, out_channels, D = 2, 2, 2
coords = torch.IntTensor([[0, 0, 0], [0, 1, 1], [0, 2, 1]])
feats = torch.FloatTensor([[0, 1], [1, 0], [1, 1]])
input = SparseTensor(feats, coordinates=coords)
# Initialize context
conv = MinkowskiConvolution(
in_channels, out_channels, kernel_size=2, stride=2, bias=False, dimension=D
)
conv.kernel[:] = torch.FloatTensor(
[[[1, 2], [2, 1]], [[0, 1], [1, 0]], [[0, 1], [1, 1]], [[1, 1], [1, 0]]]
)
output = conv(input)
print(output)
conv_tr = MinkowskiConvolutionTranspose(
in_channels, out_channels, kernel_size=2, stride=2, bias=False, dimension=D
)
conv_tr.kernel[:] = torch.FloatTensor(
[[[1, 2], [2, 1]], [[0, 1], [1, 0]], [[0, 1], [1, 1]], [[1, 1], [1, 0]]]
)
output_tr = conv_tr(output)
print(output_tr)
def test_analytic_odd(self):
print(f"{self.__class__.__name__}: test")
in_channels, out_channels, D = 2, 2, 2
coords = torch.IntTensor([[0, 0, 0], [0, 1, 1], [0, 2, 1]])
feats = torch.FloatTensor([[0, 1], [1, 0], [1, 1]])
input = SparseTensor(feats, coordinates=coords)
# Initialize context
conv = MinkowskiConvolution(
in_channels, out_channels, kernel_size=3, stride=2, bias=False, dimension=D
)
conv.kernel[:] = torch.FloatTensor(
[
[[1, 2], [2, 1]],
[[0, 1], [1, 0]],
[[0, 1], [1, 1]],
[[1, 1], [1, 0]],
[[1, 1], [1, 0]],
[[2, 1], [1, 0.5]],
[[1, 1], [1, 0.1]],
[[1, 1], [1, 0.7]],
[[1, 0.3], [1, 0.5]],
]
)
output = conv(input)
print(output)
conv_tr = MinkowskiConvolutionTranspose(
in_channels, out_channels, kernel_size=3, stride=2, bias=False, dimension=D
)
conv_tr.kernel[:] = torch.FloatTensor(
[
[[1, 2], [2, 1]],
[[0, 1], [1, 0]],
[[0, 1], [1, 1]],
[[1, 1], [1, 0]],
[[1, 1], [1, 0]],
[[2, 1], [1, 0.5]],
[[1, 1], [1, 0.1]],
[[1, 1], [1, 0.7]],
[[1, 0.3], [1, 0.5]],
]
)
output_tr = conv_tr(output)
print(output_tr)
class TestGenerativeConvolutionTranspose(unittest.TestCase):
def test_gpu(self):
print(f"{self.__class__.__name__}: test_gpu")
if not torch.cuda.is_available():
return
device = torch.device("cuda")
in_channels, out_channels, D = 2, 3, 2
coords, feats, labels = data_loader(in_channels)
feats = feats.double()
feats.requires_grad_()
input = SparseTensor(feats.to(device), coordinates=coords.to(device))
# Initialize context
conv = (
MinkowskiConvolution(
in_channels,
out_channels,
kernel_size=3,
stride=2,
bias=True,
dimension=D,
)
.double()
.to(device)
)
conv_tr = (
MinkowskiGenerativeConvolutionTranspose(
out_channels,
in_channels,
kernel_size=3,
stride=2,
bias=True,
dimension=D,
)
.double()
.to(device)
)
tr_input = conv(input)
print(tr_input)
output = conv_tr(tr_input)
print(output)
# Check backward
fn = MinkowskiConvolutionTransposeFunction()
self.assertTrue(
gradcheck(
fn,
(
tr_input.F,
conv_tr.kernel,
conv_tr.kernel_generator,
conv_tr.convolution_mode,
tr_input.coordinate_map_key,
output.coordinate_map_key,
tr_input.coordinate_manager,
),
)
)
def test(self):
print(f"{self.__class__.__name__}: test")
in_channels, out_channels, D = 2, 3, 2
coords, feats, labels = data_loader(in_channels)
feats = feats.double()
feats.requires_grad_()
input = SparseTensor(feats, coordinates=coords)
# Initialize context
conv = MinkowskiConvolution(
in_channels, out_channels, kernel_size=3, stride=2, bias=True, dimension=D
).double()
conv_tr = MinkowskiGenerativeConvolutionTranspose(
out_channels, in_channels, kernel_size=3, stride=2, bias=True, dimension=D
).double()
print("Initial input: ", input)
input = conv(input)
print("Conv output: ", input)
output = conv_tr(input)
print("Conv tr output: ", output)
# Check backward
fn = MinkowskiConvolutionTransposeFunction()
self.assertTrue(
gradcheck(
fn,
(
input.F,
conv_tr.kernel,
conv_tr.kernel_generator,
conv_tr.convolution_mode,
input.coordinate_map_key,
output.coordinate_map_key,
input.coordinate_manager,
),
)
)
class TestChannelwiseConvolution(unittest.TestCase):
def test(self):
print(f"{self.__class__.__name__}: test")
in_channels, out_channels, D = 2, 3, 2
coords, feats, labels = data_loader(in_channels)
feats = feats.double()
feats.requires_grad_()
input = SparseTensor(feats, coordinates=coords)
# Initialize context
conv = MinkowskiChannelwiseConvolution(
in_channels, kernel_size=3, stride=2, bias=True, dimension=D
)
conv = conv.double()
output = conv(input)
print(output)
self.assertEqual(input.coordinate_map_key.get_tensor_stride(), [1, 1])
self.assertEqual(output.coordinate_map_key.get_tensor_stride(), [2, 2])
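# Benchmark-style tests: sweep channel counts, batch sizes, and voxel sizes, compare
# DIRECT_GEMM vs COPY_GEMM convolution modes, and record which mode is faster.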
class TestPCD(unittest.TestCase):
def test_forward(self):
coords, colors, pcd = load_file("1.ply")
device = "cuda"
X = []
Y = []
W = []
for IC in [3, 8, 16, 24, 32, 48, 64, 96, 128]:
for OC in [3, 8, 16, 24, 32, 48, 64, 96, 128, 192, 256]:
for batch_size in [1, 5, 10, 15, 20]:
for voxel_size in [0.2, 0.1, 0.075, 0.05, 0.025]:
min_times = []
for mode in [
_C.ConvolutionMode.DIRECT_GEMM,
_C.ConvolutionMode.COPY_GEMM,
]:
min_time = 100000
dcoords = torch.from_numpy(
np.floor(coords / voxel_size)
).int()
bcoords = batched_coordinates(
[dcoords for i in range(batch_size)]
)
in_feats = torch.rand(len(bcoords), IC).to(0)
sinput = SparseTensor(
in_feats, coordinates=bcoords, device=device
)
conv = MinkowskiConvolution(
in_channels=IC,
out_channels=OC,
kernel_size=3,
stride=2,
convolution_mode=mode,
dimension=3,
).to(device)
# Time only the forward pass here; backward timing is measured separately in
# test_backward. Re-running backward on a single graph would also raise an error.
for i in range(10):
stime = time.time()
soutput = conv(sinput)
min_time = min(time.time() - stime, min_time)
min_times.append(min_time)
X.append(
[
IC,
OC,
len(sinput),
len(soutput),
]
)
Y.append(np.argmin(min_times))
W.append(np.abs(min_times[0] - min_times[1]))
print(X[-1], Y[-1], W[-1])
import pickle as pkl
with open("forward-speed.pkl", "wb") as f:
pkl.dump([X, Y, W], f)
def test_backward(self):
coords, colors, pcd = load_file("1.ply")
device = "cuda"
X = []
Y = []
W = []
for IC in [8, 16, 24, 32, 48, 64, 96, 128]:
for OC in [8, 16, 24, 32, 48, 64, 96, 128, 192, 256]:
for batch_size in [1, 5, 10, 15, 20]:
for voxel_size in [0.2, 0.1, 0.075, 0.05, 0.025]:
min_times = []
for mode in [
_C.ConvolutionMode.DIRECT_GEMM,
_C.ConvolutionMode.COPY_GEMM,
]:
min_time = 100000
dcoords = torch.from_numpy(
np.floor(coords / voxel_size)
).int()
bcoords = batched_coordinates(
[dcoords for i in range(batch_size)]
)
in_feats = torch.rand(len(bcoords), IC).to(0)
sinput = SparseTensor(
in_feats, coordinates=bcoords, device=device
)
conv = MinkowskiConvolution(
in_channels=IC,
out_channels=OC,
kernel_size=3,
stride=2,
convolution_mode=mode,
dimension=3,
).to(device)
soutput = conv(sinput)
loss = soutput.F.sum()
for i in range(5):
stime = time.time()
loss.backward(retain_graph=True)  # retain the graph so backward can be re-timed
min_time = min(time.time() - stime, min_time)
min_times.append(min_time)
X.append(
[
IC,
OC,
len(sinput),
len(soutput),
]
)
Y.append(np.argmin(min_times))
W.append(np.abs(min_times[0] - min_times[1]))
print(X[-1], Y[-1], W[-1])
import pickle as pkl
with open("backward-speed.pkl", "wb") as f:
pkl.dump([X, Y, W], f)
| MinkowskiEngine-master | tests/python/convolution.py |
# Copyright (c) 2020 NVIDIA CORPORATION.
# Copyright (c) 2018-2020 Chris Choy ([email protected]).
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of
# this software and associated documentation files (the "Software"), to deal in
# the Software without restriction, including without limitation the rights to
# use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
# of the Software, and to permit persons to whom the Software is furnished to do
# so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
# Please cite "4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural
# Networks", CVPR'19 (https://arxiv.org/abs/1904.08755) if you use any part
# of the code.
import torch
import unittest
from MinkowskiEngine import (
SparseTensor,
MinkowskiConvolution,
MinkowskiInterpolationFunction,
MinkowskiInterpolation,
)
from utils.gradcheck import gradcheck
from tests.python.common import data_loader
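# Large iteration count so that a per-call memory leak in interpolation shows up as
# steadily growing memory usage.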
LEAK_TEST_ITER = 10000000
class TestInterpolation(unittest.TestCase):
def test(self):
in_channels, D = 2, 2
coords, feats, labels = data_loader(in_channels, batch_size=2)
feats = feats.double()
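# Interpolation targets: each row is (batch index, x, y) in continuous coordinates.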
tfield = torch.Tensor(
[
[0, 0.1, 2.7],
[0, 0.3, 2],
[1, 1.5, 2.5],
]
).double()
feats.requires_grad_()
input = SparseTensor(feats, coordinates=coords)
interp = MinkowskiInterpolation(return_kernel_map=True, return_weights=False)
output, (in_map, out_map) = interp(input, tfield)
print(input)
print(output)
# Check backward
output.sum().backward()
fn = MinkowskiInterpolationFunction()
self.assertTrue(
gradcheck(
fn,
(
input.F,
tfield,
input.coordinate_map_key,
input._manager,
),
)
)
for i in range(LEAK_TEST_ITER):
input = SparseTensor(feats, coordinates=coords)
tfield = torch.DoubleTensor(
[
[0, 0.1, 2.7],
[0, 0.3, 2],
[1, 1.5, 2.5],
],
)
output, _ = interp(input, tfield)
output.sum().backward()
def test_gpu(self):
in_channels, D = 2, 2
coords, feats, labels = data_loader(in_channels, batch_size=2)
feats = feats.double()
tfield = torch.cuda.DoubleTensor(
[
[0, 0.1, 2.7],
[0, 0.3, 2],
[1, 1.5, 2.5],
],
)
feats.requires_grad_()
input = SparseTensor(feats, coordinates=coords, device="cuda")
interp = MinkowskiInterpolation()
output = interp(input, tfield)
print(input)
print(output)
output.sum().backward()
# Check backward
fn = MinkowskiInterpolationFunction()
self.assertTrue(
gradcheck(
fn,
(
input.F,
tfield,
input.coordinate_map_key,
input._manager,
),
)
)
for i in range(LEAK_TEST_ITER):
input = SparseTensor(feats, coordinates=coords, device="cuda")
tfield = torch.cuda.DoubleTensor(
[
[0, 0.1, 2.7],
[0, 0.3, 2],
[1, 1.5, 2.5],
],
)
output = interp(input, tfield)
output.sum().backward()
def test_zero(self):
# Issue #383 https://github.com/NVIDIA/MinkowskiEngine/issues/383
#
# create point and features, all with batch 0
pc = torch.randint(-10, 10, size=(32, 4), dtype=torch.float32, device='cuda')
pc[:, 0] = 0
feat = torch.randn(32, 3, dtype=torch.float32, device='cuda', requires_grad=True)
# feature to interpolate
x = SparseTensor(feat, pc, device='cuda')
interp = MinkowskiInterpolation()
# samples with original coordinates, OK for now
samples = pc
y = interp(x, samples)
print(y.shape, y.stride())
torch.sum(y).backward()
# samples with all zeros, shape is inconsistent and backward gives error
samples = torch.zeros_like(pc)
samples[:, 0] = 0
y = interp(x, samples)
print(y.shape, y.stride())
torch.sum(y).backward()
def test_strided_tensor(self):
in_channels, D = 2, 2
tfield = torch.Tensor(
[
[0, 0.1, 2.7],
[0, 0.3, 2],
[1, 1.5, 2.5],
]
)
coords = torch.IntTensor([[0, 0, 2], [0, 0, 4], [0, 2, 4]])
feats = torch.rand(len(coords), 1)
input = SparseTensor(feats, coordinates=coords, tensor_stride=2)
interp = MinkowskiInterpolation()
output = interp(input, tfield)
print(input)
print(output)
| MinkowskiEngine-master | tests/python/interpolation.py |
# Copyright (c) 2021 NVIDIA CORPORATION.
# Copyright (c) 2018-2020 Chris Choy ([email protected]).
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of
# this software and associated documentation files (the "Software"), to deal in
# the Software without restriction, including without limitation the rights to
# use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
# of the Software, and to permit persons to whom the Software is furnished to do
# so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
# Please cite "4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural
# Networks", CVPR'19 (https://arxiv.org/abs/1904.08755) if you use any part
# of the code.
import torch
import unittest
from MinkowskiEngine import MinkowskiDirectMaxPoolingFunction
from utils.gradcheck import gradcheck
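# MinkowskiDirectMaxPoolingFunction reduces rows of in_feat selected by in_map into
# out_nrows output rows indexed by out_map, taking the maximum per output row.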
class TestCase(unittest.TestCase):
def test(self):
if not torch.cuda.is_available():
return
pool = MinkowskiDirectMaxPoolingFunction()
in_map = torch.randint(0, 5, (10,)).int()
out_map = torch.randint(0, 3, (10,)).int()
in_feat = torch.rand(5, 16).double()
in_feat.requires_grad_()
out_nrows = 3
out_feat = pool.apply(in_map, out_map, in_feat, out_nrows)
print(out_feat)
out_feat.sum().backward()
self.assertTrue(
gradcheck(
pool,
(in_map, out_map, in_feat, out_nrows),
)
)
if not torch.cuda.is_available():
return
in_map = in_map.cuda()
out_map = out_map.cuda()
in_feat = in_feat.cuda()
out_feat = pool.apply(in_map, out_map, in_feat, out_nrows)
print(out_feat)
self.assertTrue(
gradcheck(
pool,
(in_map, out_map, in_feat, out_nrows),
)
)
def test_long(self):
if not torch.cuda.is_available():
return
pool = MinkowskiDirectMaxPoolingFunction()
in_map = torch.randint(0, 5, (10,))
out_map = torch.randint(0, 3, (10,))
in_feat = torch.rand(5, 16).double()
in_feat.requires_grad_()
out_nrows = 3
out_feat = pool.apply(in_map, out_map, in_feat, out_nrows)
print(out_feat)
out_feat.sum().backward()
self.assertTrue(
gradcheck(
pool,
(in_map, out_map, in_feat, out_nrows),
)
)
if not torch.cuda.is_available():
return
in_map = in_map.cuda()
out_map = out_map.cuda()
in_feat = in_feat.cuda()
out_feat = pool.apply(in_map, out_map, in_feat, out_nrows)
print(out_feat)
self.assertTrue(
gradcheck(
pool,
(in_map, out_map, in_feat, out_nrows),
)
)
| MinkowskiEngine-master | tests/python/direct_pool.py |
# Copyright (c) Chris Choy ([email protected]).
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of
# this software and associated documentation files (the "Software"), to deal in
# the Software without restriction, including without limitation the rights to
# use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
# of the Software, and to permit persons to whom the Software is furnished to do
# so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
# Please cite "4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural
# Networks", CVPR'19 (https://arxiv.org/abs/1904.08755) if you use any part
# of the code.
import os
import argparse
import numpy as np
from urllib.request import urlretrieve
try:
import open3d as o3d
except ImportError:
raise ImportError('Please install open3d with `pip install open3d`.')
import torch
import MinkowskiEngine as ME
from examples.common import Timer
# Check if the weights and file exist and download
if not os.path.isfile('1.ply'):
print('Downloading a room ply file...')
urlretrieve("http://cvgl.stanford.edu/data2/minkowskiengine/1.ply", '1.ply')
parser = argparse.ArgumentParser()
parser.add_argument('--file_name', type=str, default='1.ply')
parser.add_argument('--voxel_size', type=float, default=0.02)
parser.add_argument('--batch_size', type=int, default=2)
parser.add_argument('--max_kernel_size', type=int, default=7)
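# Load a point cloud, voxelize it at the given resolution, and drop duplicate voxels.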
def load_file(file_name, voxel_size):
pcd = o3d.io.read_point_cloud(file_name)
coords = np.array(pcd.points)
feats = np.array(pcd.colors)
quantized_coords = np.floor(coords / voxel_size)
unique_coords, unique_feats = ME.utils.sparse_quantize(quantized_coords, feats)
return unique_coords, unique_feats, pcd
def generate_input_sparse_tensor(file_name, voxel_size=0.05, batch_size=1):
# Create a batch, this process is done in a data loader during training in parallel.
batch = [
load_file(file_name, voxel_size),
] * batch_size
coordinates_, features_, pcds = list(zip(*batch))
coordinates, features = ME.utils.sparse_collate(coordinates_, features_)
# Normalize features and create a sparse tensor
return features, coordinates
if __name__ == '__main__':
config = parser.parse_args()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Define a model and load the weights
feats = [3, 8, 16, 32, 64, 128]
features, coordinates = generate_input_sparse_tensor(
config.file_name,
voxel_size=config.voxel_size,
batch_size=config.batch_size)
pool = ME.MinkowskiGlobalAvgPooling()
# Measure time
print('Forward')
for feat in feats:
timer = Timer()
features = torch.rand(len(coordinates), feat).to(device)
# Rebuild the sparse tensor each iteration and time only the pooling forward pass
for i in range(20):
sinput = ME.SparseTensor(features, coordinates=coordinates, device=device)
timer.tic()
soutput = pool(sinput)
timer.toc()
print(
f'{timer.min_time:.12f} for feature size: {feat} with {len(sinput)} voxel'
)
print('Backward')
for feat in feats:
timer = Timer()
sinput._F = torch.rand(len(sinput), feat).to(device).requires_grad_()
soutput = pool(sinput)
loss = soutput.F.sum()
# Time only the backward pass; retain the graph so it can be replayed each iteration
for i in range(20):
timer.tic()
loss.backward(retain_graph=True)
timer.toc()
print(
f'{timer.min_time:.12f} for feature size {feat} with {len(sinput)} voxel'
)
| MinkowskiEngine-master | tests/python/global.py |
# Copyright (c) Chris Choy ([email protected]).
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of
# this software and associated documentation files (the "Software"), to deal in
# the Software without restriction, including without limitation the rights to
# use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
# of the Software, and to permit persons to whom the Software is furnished to do
# so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
# Please cite "4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural
# Networks", CVPR'19 (https://arxiv.org/abs/1904.08755) if you use any part
# of the code.
import unittest
import torch
import torch.nn as nn
import MinkowskiEngine as ME
from MinkowskiEngine import (
SparseTensor,
MinkowskiConvolution,
MinkowskiConvolutionTranspose,
MinkowskiBatchNorm,
MinkowskiReLU,
)
from MinkowskiOps import (
MinkowskiToSparseTensor,
to_sparse,
dense_coordinates,
MinkowskiToDenseTensor,
)
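# SparseTensor.dense() returns the dense BxCxD1x...xDN tensor together with the
# minimum coordinate and tensor stride used for the mapping.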
class TestDense(unittest.TestCase):
def test(self):
print(f"{self.__class__.__name__}: test_dense")
in_channels, out_channels, D = 2, 3, 2
coords1 = torch.IntTensor([[0, 0], [0, 1], [1, 1]])
feats1 = torch.DoubleTensor([[1, 2], [3, 4], [5, 6]])
coords2 = torch.IntTensor([[1, 1], [1, 2], [2, 1]])
feats2 = torch.DoubleTensor([[7, 8], [9, 10], [11, 12]])
coords, feats = ME.utils.sparse_collate([coords1, coords2], [feats1, feats2])
input = SparseTensor(feats, coords)
input.requires_grad_()
dinput, min_coord, tensor_stride = input.dense()
self.assertTrue(dinput[0, 0, 0, 1] == 3)
self.assertTrue(dinput[0, 1, 0, 1] == 4)
self.assertTrue(dinput[0, 0, 1, 1] == 5)
self.assertTrue(dinput[0, 1, 1, 1] == 6)
self.assertTrue(dinput[1, 0, 1, 1] == 7)
self.assertTrue(dinput[1, 1, 1, 1] == 8)
self.assertTrue(dinput[1, 0, 2, 1] == 11)
self.assertTrue(dinput[1, 1, 2, 1] == 12)
# Initialize context
conv = MinkowskiConvolution(
in_channels, out_channels, kernel_size=3, stride=2, bias=True, dimension=D,
)
conv = conv.double()
output = conv(input)
print(input.C, output.C)
# Convert to a dense tensor
dense_output, min_coord, tensor_stride = output.dense()
print(dense_output.shape)
print(dense_output)
print(min_coord)
print(tensor_stride)
dense_output, min_coord, tensor_stride = output.dense(
min_coordinate=torch.IntTensor([-2, -2])
)
print(dense_output)
print(min_coord)
print(tensor_stride)
print(feats.grad)
loss = dense_output.sum()
loss.backward()
print(feats.grad)
def test_empty(self):
x = torch.zeros(4, 1, 34, 34)
to_dense = ME.MinkowskiToDenseTensor(x.shape)
# Convert to sparse data
sparse_data = ME.to_sparse(x)
dense_data = to_dense(sparse_data)
self.assertEqual(dense_data.shape, x.shape)
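# Dense-to-sparse conversion: to_sparse flattens the spatial dimensions into
# coordinates and keeps the channel dimension as features.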
class TestDenseToSparse(unittest.TestCase):
def test(self):
dense_tensor = torch.rand(3, 4, 5, 6)
sparse_tensor = to_sparse(dense_tensor)
self.assertEqual(len(sparse_tensor), 3 * 5 * 6)
self.assertEqual(sparse_tensor.F.size(1), 4)
def test_format(self):
dense_tensor = torch.rand(3, 4, 5, 6)
sparse_tensor = to_sparse(dense_tensor, format="BXXC")
self.assertEqual(len(sparse_tensor), 3 * 4 * 5)
self.assertEqual(sparse_tensor.F.size(1), 6)
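# End-to-end check: dense input -> sparse -> 4D conv / conv transpose -> dense,
# reusing cached coordinates since the input shape is fixed.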
def test_network(self):
dense_tensor = torch.rand(3, 4, 11, 11, 11, 11) # BxCxD1xD2x....xDN
dense_tensor.requires_grad = True
# Since the shape is fixed, cache the coordinates for faster inference
coordinates = dense_coordinates(dense_tensor.shape)
network = nn.Sequential(
# Add layers that can be applied on a regular pytorch tensor
nn.ReLU(),
MinkowskiToSparseTensor(remove_zeros=False, coordinates=coordinates),
MinkowskiConvolution(4, 5, stride=2, kernel_size=3, dimension=4),
MinkowskiBatchNorm(5),
MinkowskiReLU(),
MinkowskiConvolutionTranspose(5, 6, stride=2, kernel_size=3, dimension=4),
MinkowskiToDenseTensor(
dense_tensor.shape
), # must have the same tensor stride.
)
for i in range(5):
print(f"Iteration: {i}")
output = network(dense_tensor)
output.sum().backward()
assert dense_tensor.grad is not None
| MinkowskiEngine-master | tests/python/dense.py |
# Copyright (c) Chris Choy ([email protected]).
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of
# this software and associated documentation files (the "Software"), to deal in
# the Software without restriction, including without limitation the rights to
# use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
# of the Software, and to permit persons to whom the Software is furnished to do
# so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#
# Please cite "4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural
# Networks", CVPR'19 (https://arxiv.org/abs/1904.08755) if you use any part
# of the code.
import torch
import unittest
import numpy as np
from MinkowskiEngine.utils import sparse_quantize
import MinkowskiEngineBackend._C as MEB
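# sparse_quantize deduplicates coordinates; when labels are given, colliding points
# with conflicting labels receive ignore_label.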
class TestQuantization(unittest.TestCase):
def test(self):
N = 16575
ignore_label = 255
coords = np.random.rand(N, 3) * 100
feats = np.random.rand(N, 4)
labels = np.floor(np.random.rand(N) * 3)
labels = labels.astype(np.int32)
# Make duplicates
coords[:3] = 0
labels[:3] = 2
quantized_coords, quantized_feats, quantized_labels = sparse_quantize(
coords.astype(np.int32), feats, labels, ignore_label
)
print(quantized_labels)
def test_device(self):
N = 16575
coords = np.random.rand(N, 3) * 100
# Make duplicates
coords[:3] = 0
unique_map = sparse_quantize(
coords.astype(np.int32), return_maps_only=True, device="cpu"
)
print(len(unique_map))
unique_map = sparse_quantize(
coords.astype(np.int32), return_maps_only=True, device="cuda"
)
print(len(unique_map))
def test_mapping(self):
N = 16575
coords = (np.random.rand(N, 3) * 100).astype(np.int32)
mapping, inverse_mapping = MEB.quantize_np(coords)
print("N unique:", len(mapping), "N:", N)
self.assertTrue((coords == coords[mapping][inverse_mapping]).all())
self.assertTrue((coords == coords[mapping[inverse_mapping]]).all())
coords = torch.from_numpy(coords)
mapping, inverse_mapping = MEB.quantize_th(coords)
print("N unique:", len(mapping), "N:", N)
self.assertTrue((coords == coords[mapping[inverse_mapping]]).all())
unique_coords, index, reverse_index = sparse_quantize(
coords, return_index=True, return_inverse=True
)
self.assertTrue((coords == coords[index[reverse_index]]).all())
def test_label_np(self):
N = 16575
coords = (np.random.rand(N, 3) * 100).astype(np.int32)
labels = np.floor(np.random.rand(N) * 3).astype(np.int32)
# Make duplicates
coords[:3] = 0
labels[:3] = 2
mapping, inverse_mapping, colabel = MEB.quantize_label_np(coords, labels, -1)
self.assertTrue(np.sum(np.abs(coords[mapping][inverse_mapping] - coords)) == 0)
self.assertTrue(np.sum(colabel < 0) > 3)
def test_collision(self):
coords = np.array([[0, 0], [0, 0], [0, 0], [0, 1]], dtype=np.int32)
labels = np.array([0, 1, 2, 3], dtype=np.int32)
unique_coords, colabels = sparse_quantize(
coords, labels=labels, ignore_label=255
)
print(unique_coords)
print(colabels)
self.assertTrue(len(unique_coords) == 2)
self.assertTrue(np.array([0, 0]) in unique_coords)
self.assertTrue(np.array([0, 1]) in unique_coords)
self.assertTrue(len(colabels) == 2)
self.assertTrue(255 in colabels)
def test_quantization_size(self):
coords = torch.randn((1000, 3), dtype=torch.float)
feats = torch.randn((1000, 10), dtype=torch.float)
res = sparse_quantize(coords, feats, quantization_size=0.1)
print(res[0].shape, res[1].shape)
res = sparse_quantize(coords.numpy(), feats.numpy(), quantization_size=0.1)
print(res[0].shape, res[1].shape)
if __name__ == "__main__":
unittest.main()
| MinkowskiEngine-master | tests/python/quantization.py |