repo (stringclasses, 856 values) | pull_number (int64, 3 to 127k) | instance_id (stringlengths 12 to 58) | issue_numbers (sequencelengths 1 to 5) | base_commit (stringlengths 40 to 40) | patch (stringlengths 67 to 1.54M) | test_patch (stringlengths 0 to 107M) | problem_statement (stringlengths 3 to 307k) | hints_text (stringlengths 0 to 908k) | created_at (timestamp[s]) |
---|---|---|---|---|---|---|---|---|---|
pulp/pulpcore | 4,620 | pulp__pulpcore-4620 | [
"4591"
] | 603b1f2080b774e4def485ad2ab2bf392d4242b2 | diff --git a/pulpcore/app/models/repository.py b/pulpcore/app/models/repository.py
--- a/pulpcore/app/models/repository.py
+++ b/pulpcore/app/models/repository.py
@@ -850,7 +850,9 @@ def add_content(self, content):
raise ResourceImmutableError(self)
repo_content = []
- to_add = set(content.exclude(pk__in=self.content).values_list("pk", flat=True))
+ to_add = set(content.values_list("pk", flat=True)) - set(
+ self.content.values_list("pk", flat=True)
+ )
# Normalize representation if content has already been removed in this version and
# is re-added: Undo removal by setting version_removed to None.
| Regression on performance improvements in rv.get_content()
**Version**
foreman-3.5.3, katello-4.7.6, python39-pulpcore-3.21.17+, and postgresql-12.15
**Describe the bug**
Users are experiencing a slowdown in their installations.
**To Reproduce**
Sync the content.
**Additional context**
https://community.theforeman.org/t/slow-content-proxy-sync/35162/26
https://github.com/pulp/pulpcore/blob/bc9ca511dc13fbae7c49b19f1ebe8b527c67484a/pulpcore/app/models/repository.py#L853
https://github.com/pulp/pulpcore/pull/4275
The Postgres version might be one of the culprits. All the experiments in the attached PR were performed on Postgres 13.
| The reason for believing Postgresql might be the culprit is that we cannot reproduce thus far on Postgresql 13, and the Postgresql 13 changelog contains an entry which near perfectly matches this situation.
> Allow [hash aggregation](https://www.postgresql.org/docs/13/runtime-config-query.html#GUC-ENABLE-HASHAGG) to use disk storage for large aggregation result sets (Jeff Davis)
> Previously, hash aggregation was avoided if it was expected to use more than [work_mem](https://www.postgresql.org/docs/13/runtime-config-resource.html#GUC-WORK-MEM) memory. Now, a hash aggregation plan can be chosen despite that. The hash table will be spilled to disk if it exceeds work_mem times [hash_mem_multiplier](https://www.postgresql.org/docs/13/runtime-config-resource.html#GUC-HASH-MEM-MULTIPLIER).
> This behavior is normally preferable to the old behavior, in which once hash aggregation had been chosen, the hash table would be kept in memory no matter how large it got — which could be very large if the planner had misestimated. If necessary, behavior similar to that can be obtained by increasing hash_mem_multiplier.
https://www.postgresql.org/docs/release/13.0/
At minimum we could probably recommend PG13+ in our docs.
Should we recommend the increase of the `hash_mem_multiplier` for pulp installations, or at least for katello installations?
We shouldn't recommend anything as a workaround until we've confirmed that it works for a user or have verified it ourselves. We need to reproduce the issue. | 2023-10-27T16:32:23 |
|
pulp/pulpcore | 4,622 | pulp__pulpcore-4622 | [
"4383"
] | 2ce2cac9d67b281f846649e4a046dae345259814 | diff --git a/pulpcore/app/models/fields.py b/pulpcore/app/models/fields.py
--- a/pulpcore/app/models/fields.py
+++ b/pulpcore/app/models/fields.py
@@ -11,7 +11,6 @@
from django.db.models.fields import Field, TextField
from django.utils.encoding import force_bytes, force_str
from pulpcore.app.files import TemporaryDownloadedFile
-from pulpcore.app.loggers import deprecation_logger
_logger = logging.getLogger(__name__)
@@ -141,15 +140,7 @@ def decrypt(self, value):
return [self.decrypt(v) for v in value]
dec_value = force_str(_fernet().decrypt(force_bytes(value)))
- try:
- return json.loads(dec_value, cls=self.decoder)
- except json.JSONDecodeError:
- deprecation_logger.info(
- "Failed to decode json in an EncryptedJSONField. Falling back to eval. "
- "Please run pulpcore-manager rotate-db-key to repair."
- "This is deprecated and will be removed in pulpcore 3.40."
- )
- return eval(dec_value)
+ return json.loads(dec_value, cls=self.decoder)
def get_prep_value(self, value):
if value is not None:
| Remove eval fallback from EncryptedJSONField
#4359
| 2023-10-30T09:35:02 |
||
pulp/pulpcore | 4,627 | pulp__pulpcore-4627 | [
"4607"
] | 2e8d45e7af68f57f853bf0af91eec8af19a0acac | diff --git a/pulpcore/app/util.py b/pulpcore/app/util.py
--- a/pulpcore/app/util.py
+++ b/pulpcore/app/util.py
@@ -180,15 +180,21 @@ def get_view_name_for_model(model_obj, view_action):
def batch_qs(qs, batch_size=1000):
- """
- Returns a queryset batch in the given queryset.
-
- Usage:
- # Make sure to order your querset
- article_qs = Article.objects.order_by('id')
- for qs in batch_qs(article_qs):
- for article in qs:
- print article.body
+ """Returns a queryset batch from the given queryset.
+
+ Make sure to order the queryset.
+
+ Args:
+ qs: The queryset we want to iterate over in batches.
+ batch_size: Defaults to 1000.
+
+ Example:
+ To iterate over a queryset while retrieving records from the DB in batches, use::
+
+ article_qs = Article.objects.order_by('id')
+ for qs in batch_qs(article_qs):
+ for article in qs:
+ print article.body
"""
total = qs.count()
for start in range(0, total, batch_size):
diff --git a/pulpcore/plugin/util.py b/pulpcore/plugin/util.py
--- a/pulpcore/plugin/util.py
+++ b/pulpcore/plugin/util.py
@@ -13,6 +13,7 @@
)
from pulpcore.app.util import ( # noqa: F401
+ batch_qs,
extract_pk,
get_artifact_url,
get_url,
| Expose batch_qs() to the plugin API
If there are no concerns, I would like to have `batch_qs` from https://github.com/pulp/pulpcore/blob/main/pulpcore/app/util.py#L182
Made available to plugins here: https://github.com/pulp/pulpcore/blob/main/pulpcore/plugin/util.py#L15
The reason, is that I need this helper function to fix https://github.com/pulp/pulp_deb/issues/921 and I would prefer not to duplicate it.
See also: https://discourse.pulpproject.org/t/exposing-new-things-to-the-plugin-api/1021
Note: I am fine with duplicating the pulpcore function for any pulp_deb backports, and switching to the pulpcore version with whatever pulpcore release first exposes it. So there is no need to backport the pulpcore change anywhere.
| Ok, so my request may have been premature, the draft PR I added would allow me to reuse more pulpcore code than exposing `batch_qs` would. However, it needs to be analyzed carefully to ensure it is performance neutral. If not, I am back to my plan of writing my own version of pulpcore's `remove_duplicates` in pulp_deb, thus making use of `batch_qs`.
The approach from my draft PR turned out to be a complete dead end, so I am back to needing `batch_qs` exposed.
In fact I now have a working (draft PR) in pulp_deb that uses it: https://github.com/pulp/pulp_deb/pull/922 | 2023-10-31T15:07:41 |
|
pulp/pulpcore | 4,636 | pulp__pulpcore-4636 | [
"4635"
] | 60b07a97ef724f893975b8b437a462c47432193a | diff --git a/pulpcore/content/handler.py b/pulpcore/content/handler.py
--- a/pulpcore/content/handler.py
+++ b/pulpcore/content/handler.py
@@ -573,11 +573,22 @@ async def _match_and_stream(self, path, request):
if not ends_in_slash:
rel_path = f"{rel_path}/"
+ headers = self.response_headers(original_rel_path, distro)
+
content_handler_result = await sync_to_async(distro.content_handler)(original_rel_path)
if content_handler_result is not None:
- return content_handler_result
-
- headers = self.response_headers(original_rel_path, distro)
+ if isinstance(content_handler_result, ContentArtifact):
+ if content_handler_result.artifact:
+ return await self._serve_content_artifact(
+ content_handler_result, headers, request
+ )
+ else:
+ return await self._stream_content_artifact(
+ request, StreamResponse(headers=headers), content_handler_result
+ )
+ else:
+ # the result is a response so just return it
+ return content_handler_result
repository = distro.repository
publication = distro.publication
| Generate a Response for artifacts returned from a content handler
The problem when defining a `content_handler` to serve an artifact is trying to generate the response which has to account for things like object storage. The content `Handler` class has helpful methods such as `_serve_content_artifact` and `_stream_content_artifact` but they are methods and they require `self`. This change would make it easier for plugin writers to return a `ContentArtifact` so that the plugin writer does not have to generate the response.
| 2023-11-01T19:06:11 |
||
pulp/pulpcore | 4,641 | pulp__pulpcore-4641 | [
"4633"
] | 2e8d45e7af68f57f853bf0af91eec8af19a0acac | diff --git a/pulp_file/app/__init__.py b/pulp_file/app/__init__.py
--- a/pulp_file/app/__init__.py
+++ b/pulp_file/app/__init__.py
@@ -8,6 +8,6 @@ class PulpFilePluginAppConfig(PulpPluginAppConfig):
name = "pulp_file.app"
label = "file"
- version = "3.40.0.dev"
+ version = "3.41.0.dev"
python_package_name = "pulp_file" # TODO Add python_module_name
domain_compatible = True
| pulp_file version is set to 3.40.0.dev
**Version**
pulpcore 3.40.0
**Describe the bug**
Status API reports pulp_file version as 3.40.0.dev
| 2023-11-02T07:38:06 |
||
pulp/pulpcore | 4,643 | pulp__pulpcore-4643 | [
"4648"
] | e3b23df510cf1299e0482cbf7bfdea4822fcb8b0 | diff --git a/pulpcore/app/serializers/base.py b/pulpcore/app/serializers/base.py
--- a/pulpcore/app/serializers/base.py
+++ b/pulpcore/app/serializers/base.py
@@ -358,6 +358,7 @@ class GetOrCreateSerializerMixin:
@classmethod
def get_or_create(cls, natural_key, default_values=None):
+ result = None
try:
result = cls.Meta.model.objects.get(**natural_key)
except ObjectDoesNotExist:
@@ -368,8 +369,21 @@ def get_or_create(cls, natural_key, default_values=None):
serializer = cls(data=data)
try:
serializer.is_valid(raise_exception=True)
- result = serializer.create(serializer.validated_data)
- except (IntegrityError, serializers.ValidationError):
+ except serializers.ValidationError as e:
+ returned_codes = e.get_codes().values()
+ error_codes = set(["required", "invalid"])
+ all_codes = []
+ for c in returned_codes:
+ all_codes.extend(c)
+ if error_codes.intersection(all_codes):
+ # validation failed because fields are invalid, missing
+ raise e
+ # recover from a race condition, where another thread just created the object
+ # validation failed with 400 'unique' error code only
+ result = cls.Meta.model.objects.get(**natural_key)
+ try:
+ result = result or serializer.create(serializer.validated_data)
+ except IntegrityError:
# recover from a race condition, where another thread just created the object
result = cls.Meta.model.objects.get(**natural_key)
return result
| serializer. get_or_create can still raise objectdoesnot exist
**Version**
Please provide the versions of the pulpcore and plugin packages in use, and how they are installed. If you are using Pulp via Katello, please provide the Katello version.
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
**Expected behavior**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other context about the problem here. Please provide links to any previous discussions via Discourse or Bugzilla.
| 2023-11-02T12:26:55 |
||
pulp/pulpcore | 4,652 | pulp__pulpcore-4652 | [
"4651"
] | c72929ec86b671d3fe8ed0711ac4bbec8633847f | diff --git a/pulpcore/tasking/tasks.py b/pulpcore/tasking/tasks.py
--- a/pulpcore/tasking/tasks.py
+++ b/pulpcore/tasking/tasks.py
@@ -11,7 +11,6 @@
from django.db.models import Model
from django_guid import get_guid
from pulpcore.app.apps import MODULE_PLUGIN_VERSIONS
-from pulpcore.app.loggers import deprecation_logger
from pulpcore.app.models import Task
from pulpcore.app.util import current_task, get_domain, get_url
from pulpcore.constants import TASK_FINAL_STATES, TASK_INCOMPLETE_STATES, TASK_STATES
@@ -129,17 +128,7 @@ def dispatch(
function_name = func
if versions is None:
- try:
- versions = MODULE_PLUGIN_VERSIONS[function_name.split(".", maxsplit=1)[0]]
- except KeyError:
- deprecation_logger.warn(
- _(
- "Using functions outside of pulp components as tasks is not supported and will "
- "result in runtime errors with pulpcore>=3.40."
- )
- )
- # The best we can do now...
- versions = MODULE_PLUGIN_VERSIONS["pulpcore"]
+ versions = MODULE_PLUGIN_VERSIONS[function_name.split(".", maxsplit=1)[0]]
if exclusive_resources is None:
exclusive_resources = []
| Remove the ability to call non-pulp-related functions as task targets
| 2023-11-03T04:35:30 |
||
pulp/pulpcore | 4,668 | pulp__pulpcore-4668 | [
"3364"
] | aa4ef50ac7c1ed693331496b7800c1b1e4327dbf | diff --git a/pulpcore/app/models/content.py b/pulpcore/app/models/content.py
--- a/pulpcore/app/models/content.py
+++ b/pulpcore/app/models/content.py
@@ -166,7 +166,7 @@ def delete(self, *args, **kwargs):
transaction.on_commit(partial(self.file.delete, save=False))
-class ArtifactManager(BulkCreateManager):
+class ArtifactQuerySet(BulkTouchQuerySet):
def orphaned(self, orphan_protection_time):
"""Returns set of orphaned artifacts that are ready to be cleaned up."""
domain_pk = get_domain_pk()
@@ -226,7 +226,7 @@ def storage_path(self, name):
timestamp_of_interest = models.DateTimeField(auto_now=True)
pulp_domain = models.ForeignKey("Domain", default=get_domain_pk, on_delete=models.PROTECT)
- objects = ArtifactManager.from_queryset(BulkTouchQuerySet)()
+ objects = BulkCreateManager.from_queryset(ArtifactQuerySet)()
# All available digest fields ordered by algorithm strength.
DIGEST_FIELDS = _DIGEST_FIELDS
@@ -485,7 +485,7 @@ def init_and_validate(file, expected_digests=None, expected_size=None):
return PulpTemporaryFile(file=file)
-class ContentManager(BulkCreateManager):
+class ContentQuerySet(BulkTouchQuerySet):
def orphaned(self, orphan_protection_time, content_pks=None):
"""Returns set of orphaned content that is ready to be cleaned up."""
expiration = now() - datetime.timedelta(minutes=orphan_protection_time)
@@ -504,7 +504,7 @@ def orphaned(self, orphan_protection_time, content_pks=None):
)
-ContentManager = ContentManager.from_queryset(BulkTouchQuerySet)
+ContentManager = BulkCreateManager.from_queryset(ContentQuerySet)
class Content(MasterModel, QueryMixin):
diff --git a/pulpcore/app/viewsets/content.py b/pulpcore/app/viewsets/content.py
--- a/pulpcore/app/viewsets/content.py
+++ b/pulpcore/app/viewsets/content.py
@@ -1,6 +1,8 @@
from gettext import gettext as _
+from django.conf import settings
from django.db import models
+from django_filters import NumberFilter
from rest_framework import mixins, permissions, status
from rest_framework.response import Response
@@ -22,6 +24,17 @@
)
+class OrphanedFilter(NumberFilter):
+ def filter(self, qs, value):
+ if value is not None:
+ if value < 0:
+ time = settings.ORPHAN_PROTECTION_TIME
+ else:
+ time = value
+ qs = qs.orphaned(int(time))
+ return qs
+
+
class ArtifactFilter(BaseFilterSet):
"""
Artifact filter Plugin content filters should:
@@ -33,6 +46,9 @@ class ArtifactFilter(BaseFilterSet):
"""
repository_version = ArtifactRepositoryVersionFilter()
+ orphaned_for = OrphanedFilter(
+ help_text="Minutes Artifacts have been orphaned for. -1 uses ORPHAN_PROTECTION_TIME."
+ )
class Meta:
model = Artifact
@@ -90,11 +106,17 @@ class ContentFilter(BaseFilterSet):
Return Content which was added in this repository version.
repository_version_removed:
Return Content which was removed from this repository version.
+ orphaned_for:
+ Return Content which has been orphaned for a given number of minutes;
+ -1 uses ORPHAN_PROTECTION_TIME value.
"""
repository_version = ContentRepositoryVersionFilter()
repository_version_added = ContentAddedRepositoryVersionFilter()
repository_version_removed = ContentRemovedRepositoryVersionFilter()
+ orphaned_for = OrphanedFilter(
+ help_text="Minutes Content has been orphaned for. -1 uses ORPHAN_PROTECTION_TIME."
+ )
class BaseContentViewSet(NamedModelViewSet):
| diff --git a/pulpcore/tests/functional/api/using_plugin/test_orphans.py b/pulpcore/tests/functional/api/using_plugin/test_orphans.py
--- a/pulpcore/tests/functional/api/using_plugin/test_orphans.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_orphans.py
@@ -15,6 +15,63 @@ def orphans_api_client(pulpcore_client):
return OrphansApi(pulpcore_client)
+def test_content_orphan_filter(
+ file_content_unit_with_name_factory,
+ file_content_api_client,
+ file_repository_factory,
+ file_repository_api_client,
+ monitor_task,
+):
+ content_unit = file_content_unit_with_name_factory("1.iso")
+
+ # test orphan_for with different values
+ content_units = file_content_api_client.list(
+ orphaned_for="0", pulp_href__in=[content_unit.pulp_href]
+ )
+ assert content_units.count == 1
+ content_units = file_content_api_client.list(
+ orphaned_for="100", pulp_href__in=[content_unit.pulp_href]
+ )
+ assert content_units.count == 0
+
+ # add our content unit to a repo
+ repo = file_repository_factory()
+ body = {"add_content_units": [content_unit.pulp_href]}
+ task = file_repository_api_client.modify(repo.pulp_href, body).task
+ monitor_task(task)
+ content_units = file_content_api_client.list(
+ orphaned_for="0", pulp_href__in=[content_unit.pulp_href]
+ )
+ assert content_units.count == 0
+
+
+def test_artifact_orphan_filter(
+ random_artifact,
+ artifacts_api_client,
+ file_content_api_client,
+ monitor_task,
+):
+ # test orphan_for with different values
+ artifacts = artifacts_api_client.list(
+ orphaned_for="0", pulp_href__in=[random_artifact.pulp_href]
+ )
+ assert artifacts.count == 1
+ artifacts = artifacts_api_client.list(
+ orphaned_for="100", pulp_href__in=[random_artifact.pulp_href]
+ )
+ assert artifacts.count == 0
+
+ # create a content unit with the artifact
+ task = file_content_api_client.create(
+ artifact=random_artifact.pulp_href, relative_path="1.iso"
+ ).task
+ monitor_task(task)
+ artifacts = artifacts_api_client.list(
+ orphaned_for="0", pulp_href__in=[random_artifact.pulp_href]
+ )
+ assert artifacts.count == 0
+
+
def test_orphans_delete(
random_artifact,
file_random_content_unit,
| As a user, I can view orphans that will be cleaned up during orphan cleanup
It would be nice to be able to view orphan content in Pulp before running orphan cleanup. Perhaps two options:
1. An orphan list endpoint that lists orphans that would be cleaned up during orphan cleanup
2. A "dry run" option to orphan cleanup that would list orphans that would be cleaned up during orphan cleanup
| Orphan clean up endpoint is opened only to the admin, meaning that only admin would see orphans too.
One potential concern, such an operation would be slow and intensive, possibly too much so for an API call unless it ran periodically and returned cached results. | 2023-11-06T16:24:26 |
pulp/pulpcore | 4,670 | pulp__pulpcore-4670 | [
"4648"
] | a326f4f79b6cbfa3bc63f3dc8703ab07241f0a2a | diff --git a/pulpcore/app/serializers/base.py b/pulpcore/app/serializers/base.py
--- a/pulpcore/app/serializers/base.py
+++ b/pulpcore/app/serializers/base.py
@@ -358,6 +358,7 @@ class GetOrCreateSerializerMixin:
@classmethod
def get_or_create(cls, natural_key, default_values=None):
+ result = None
try:
result = cls.Meta.model.objects.get(**natural_key)
except ObjectDoesNotExist:
@@ -368,8 +369,21 @@ def get_or_create(cls, natural_key, default_values=None):
serializer = cls(data=data)
try:
serializer.is_valid(raise_exception=True)
- result = serializer.create(serializer.validated_data)
- except (IntegrityError, serializers.ValidationError):
+ except serializers.ValidationError as e:
+ returned_codes = e.get_codes().values()
+ error_codes = set(["required", "invalid"])
+ all_codes = []
+ for c in returned_codes:
+ all_codes.extend(c)
+ if error_codes.intersection(all_codes):
+ # validation failed because fields are invalid, missing
+ raise e
+ # recover from a race condition, where another thread just created the object
+ # validation failed with 400 'unique' error code only
+ result = cls.Meta.model.objects.get(**natural_key)
+ try:
+ result = result or serializer.create(serializer.validated_data)
+ except IntegrityError:
# recover from a race condition, where another thread just created the object
result = cls.Meta.model.objects.get(**natural_key)
return result
| serializer. get_or_create can still raise objectdoesnot exist
**Version**
Please provide the versions of the pulpcore and plugin packages in use, and how they are installed. If you are using Pulp via Katello, please provide the Katello version.
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
**Expected behavior**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other context about the problem here. Please provide links to any previous discussions via Discourse or Bugzilla.
| 2023-11-07T12:55:39 |
||
pulp/pulpcore | 4,682 | pulp__pulpcore-4682 | [
"4679"
] | 417a8d18ec1bfbf7ddb86ac1b13f1807bb658508 | diff --git a/pulpcore/app/entrypoint.py b/pulpcore/app/entrypoint.py
--- a/pulpcore/app/entrypoint.py
+++ b/pulpcore/app/entrypoint.py
@@ -118,6 +118,7 @@ def load(self):
@click.option("--limit-request-fields", type=int)
@click.option("--limit-request-field-size", type=int)
@click.option("--max-requests", type=int)
[email protected]("--max-requests-jitter", type=int)
@click.option("--access-logfile", "accesslog")
@click.option(
"--access-logformat",
diff --git a/pulpcore/content/entrypoint.py b/pulpcore/content/entrypoint.py
--- a/pulpcore/content/entrypoint.py
+++ b/pulpcore/content/entrypoint.py
@@ -33,6 +33,7 @@ def load(self):
@click.option("--limit-request-fields", type=int)
@click.option("--limit-request-field-size", type=int)
@click.option("--max-requests", type=int)
[email protected]("--max-requests-jitter", type=int)
@click.option("--access-logfile", "accesslog")
@click.option("--access-logformat", "access_log_format")
@click.option("--error-logfile", "--log-file", "errorlog")
| app entrypoint no longer supports --max-requests-jitter
**Version**
3.39
**Describe the bug**
--max-requests-jitter is not recognized
**To Reproduce**
Run the pulpcore-api entrypoint with --max-requests-jitter
**Expected behavior**
Accepts the argument
**Additional context**
Requested for Katello.
| 2023-11-08T00:30:30 |
||
pulp/pulpcore | 4,684 | pulp__pulpcore-4684 | [
"4681"
] | 3837f6b3431244efe00fea1b53c0e6e8f2b580b8 | diff --git a/pulpcore/download/file.py b/pulpcore/download/file.py
--- a/pulpcore/download/file.py
+++ b/pulpcore/download/file.py
@@ -57,7 +57,7 @@ async def _run(self, extra_data=None):
break # the reading is done
await self.handle_data(chunk)
return DownloadResult(
- path=self._path,
+ path=self.path,
artifact_attributes=self.artifact_attributes,
url=self.url,
headers=None,
| diff --git a/pulpcore/tests/functional/api/pulp_file/test_sync.py b/pulpcore/tests/functional/api/pulp_file/test_sync.py
--- a/pulpcore/tests/functional/api/pulp_file/test_sync.py
+++ b/pulpcore/tests/functional/api/pulp_file/test_sync.py
@@ -1,4 +1,6 @@
"""Tests that sync file plugin repositories."""
+
+import os
import uuid
import pytest
@@ -22,17 +24,20 @@ def test_sync_file_protocol_handler(
):
"""Test syncing from a file repository with the file:// protocol handler"""
wget_recursive_download_on_host(urljoin(fixtures_cfg.remote_fixtures_origin, "file/"), "/tmp")
-
remote_kwargs = {
"url": "file:///tmp/file/PULP_MANIFEST",
"policy": "immediate",
"name": str(uuid.uuid4()),
}
remote = gen_object_with_cleanup(file_remote_api_client, remote_kwargs)
+ files = set(os.listdir("/tmp/file/"))
body = RepositorySyncURL(remote=remote.pulp_href)
monitor_task(file_repository_api_client.sync(file_repo.pulp_href, body).task)
+ # test that all the files are still present
+ assert set(os.listdir("/tmp/file/")) == files
+
file_repo = file_repository_api_client.read(file_repo.pulp_href)
assert file_repo.latest_version_href.endswith("/versions/1/")
| file:// sync deletes files from directory
**Version**
Pulpcore 3.39
**Describe the bug**
When syncing file:// repositories, files are disappearing after the sync.
**To Reproduce**
1) Copy these two repositories to the FS:
- https://github.com/Katello/katello/tree/master/test/fixtures/test_repos/file1
- https://github.com/Katello/katello/tree/master/test/fixtures/test_repos/file2
2) Sync one, then the other
3) See that some files disappeared.
- In my case, file2 lost every file except PULP_MANIFEST
**Expected behavior**
No files disappear.
**Additional context**
This also occurred with RPM content type files.
| Likely related to https://github.com/pulp/pulpcore/pull/4428
A repeat of https://pulp.plan.io/issues/9146 ? | 2023-11-08T03:39:17 |
pulp/pulpcore | 4,686 | pulp__pulpcore-4686 | [
"4679"
] | d51133fd7f760411abf620327211f6e2f35e7b5a | diff --git a/pulpcore/app/entrypoint.py b/pulpcore/app/entrypoint.py
--- a/pulpcore/app/entrypoint.py
+++ b/pulpcore/app/entrypoint.py
@@ -118,6 +118,7 @@ def load(self):
@click.option("--limit-request-fields", type=int)
@click.option("--limit-request-field-size", type=int)
@click.option("--max-requests", type=int)
[email protected]("--max-requests-jitter", type=int)
@click.option("--access-logfile", "accesslog")
@click.option(
"--access-logformat",
diff --git a/pulpcore/content/entrypoint.py b/pulpcore/content/entrypoint.py
--- a/pulpcore/content/entrypoint.py
+++ b/pulpcore/content/entrypoint.py
@@ -33,6 +33,7 @@ def load(self):
@click.option("--limit-request-fields", type=int)
@click.option("--limit-request-field-size", type=int)
@click.option("--max-requests", type=int)
[email protected]("--max-requests-jitter", type=int)
@click.option("--access-logfile", "accesslog")
@click.option("--access-logformat", "access_log_format")
@click.option("--error-logfile", "--log-file", "errorlog")
| app entrypoint no longer supports --max-requests-jitter
**Version**
3.39
**Describe the bug**
--max-requests-jitter is not recognized
**To Reproduce**
Run the pulpcore-api entrypoint with --max-requests-jitter
**Expected behavior**
Accepts the argument
**Additional context**
Requested for Katello.
| 2023-11-08T14:23:19 |
||
pulp/pulpcore | 4,687 | pulp__pulpcore-4687 | [
"4679"
] | 51878cdb8ff07dcd0ca1793d677d836ffcc48ebc | diff --git a/pulpcore/app/entrypoint.py b/pulpcore/app/entrypoint.py
--- a/pulpcore/app/entrypoint.py
+++ b/pulpcore/app/entrypoint.py
@@ -118,6 +118,7 @@ def load(self):
@click.option("--limit-request-fields", type=int)
@click.option("--limit-request-field-size", type=int)
@click.option("--max-requests", type=int)
[email protected]("--max-requests-jitter", type=int)
@click.option("--access-logfile", "accesslog")
@click.option(
"--access-logformat",
diff --git a/pulpcore/content/entrypoint.py b/pulpcore/content/entrypoint.py
--- a/pulpcore/content/entrypoint.py
+++ b/pulpcore/content/entrypoint.py
@@ -33,6 +33,7 @@ def load(self):
@click.option("--limit-request-fields", type=int)
@click.option("--limit-request-field-size", type=int)
@click.option("--max-requests", type=int)
[email protected]("--max-requests-jitter", type=int)
@click.option("--access-logfile", "accesslog")
@click.option("--access-logformat", "access_log_format")
@click.option("--error-logfile", "--log-file", "errorlog")
| app entrypoint no longer supports --max-requests-jitter
**Version**
3.39
**Describe the bug**
--max-requests-jitter is not recognized
**To Reproduce**
Run the pulpcore-api entrypoint with --max-requests-jitter
**Expected behavior**
Accepts the argument
**Additional context**
Requested for Katello.
| 2023-11-08T14:23:33 |
||
pulp/pulpcore | 4,711 | pulp__pulpcore-4711 | [
"4681"
] | a2f8c7a4e3f3dfc8c11b6133af1fd0e7856dd850 | diff --git a/pulpcore/download/file.py b/pulpcore/download/file.py
--- a/pulpcore/download/file.py
+++ b/pulpcore/download/file.py
@@ -57,7 +57,7 @@ async def _run(self, extra_data=None):
break # the reading is done
await self.handle_data(chunk)
return DownloadResult(
- path=self._path,
+ path=self.path,
artifact_attributes=self.artifact_attributes,
url=self.url,
headers=None,
| diff --git a/.github/workflows/test.yml b/.github/workflows/test.yml
new file mode 100644
--- /dev/null
+++ b/.github/workflows/test.yml
@@ -0,0 +1,129 @@
+# WARNING: DO NOT EDIT!
+#
+# This file was generated by plugin_template, and is managed by it. Please use
+# './plugin-template --github pulpcore' to update this file.
+#
+# For more info visit https://github.com/pulp/plugin_template
+
+---
+name: Test
+on:
+ workflow_call:
+
+jobs:
+ test:
+ runs-on: ubuntu-latest
+ strategy:
+ fail-fast: false
+ matrix:
+ env:
+ - TEST: pulp
+ - TEST: docs
+ - TEST: azure
+ - TEST: s3
+ - TEST: lowerbounds
+ outputs:
+ deprecations-pulp: ${{ steps.deprecations.outputs.deprecations-pulp }}
+ deprecations-azure: ${{ steps.deprecations.outputs.deprecations-azure }}
+ deprecations-s3: ${{ steps.deprecations.outputs.deprecations-s3 }}
+ deprecations-lowerbounds: ${{ steps.deprecations.outputs.deprecations-lowerbounds }}
+
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ fetch-depth: 1
+
+ - uses: actions/setup-python@v4
+ with:
+ python-version: "3.8"
+
+ - uses: actions/download-artifact@v3
+ with:
+ name: plugin_package
+ path: dist/
+
+ - name: Install httpie
+ run: |
+ echo ::group::HTTPIE
+ pip install httpie
+ echo ::endgroup::
+ echo "HTTPIE_CONFIG_DIR=$GITHUB_WORKSPACE/.ci/assets/httpie/" >> $GITHUB_ENV
+
+ - name: Set environment variables
+ run: |
+ echo "TEST=${{ matrix.env.TEST }}" >> $GITHUB_ENV
+
+ - name: Before Install
+ run: .github/workflows/scripts/before_install.sh
+ shell: bash
+ env:
+ PY_COLORS: '1'
+ ANSIBLE_FORCE_COLOR: '1'
+ GITHUB_PULL_REQUEST: ${{ github.event.number }}
+ GITHUB_PULL_REQUEST_BODY: ${{ github.event.pull_request.body }}
+ GITHUB_BRANCH: ${{ github.head_ref }}
+ GITHUB_REPO_SLUG: ${{ github.repository }}
+ GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+ GITHUB_CONTEXT: ${{ github.event.pull_request.commits_url }}
+
+ - name: Install
+ run: .github/workflows/scripts/install.sh
+ shell: bash
+ env:
+ PY_COLORS: '1'
+ ANSIBLE_FORCE_COLOR: '1'
+ GITHUB_PULL_REQUEST: ${{ github.event.number }}
+ GITHUB_PULL_REQUEST_BODY: ${{ github.event.pull_request.body }}
+ GITHUB_BRANCH: ${{ github.head_ref }}
+ GITHUB_REPO_SLUG: ${{ github.repository }}
+ GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+ GITHUB_CONTEXT: ${{ github.event.pull_request.commits_url }}
+
+ - name: Before Script
+ run: .github/workflows/scripts/before_script.sh
+ shell: bash
+ env:
+ PY_COLORS: '1'
+ ANSIBLE_FORCE_COLOR: '1'
+ GITHUB_PULL_REQUEST: ${{ github.event.number }}
+ GITHUB_PULL_REQUEST_BODY: ${{ github.event.pull_request.body }}
+ GITHUB_BRANCH: ${{ github.head_ref }}
+ GITHUB_REPO_SLUG: ${{ github.repository }}
+ GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+ GITHUB_CONTEXT: ${{ github.event.pull_request.commits_url }}
+ REDIS_DISABLED: ${{ contains('s3', matrix.env.TEST) }}
+
+ - name: Setting secrets
+ if: github.event_name != 'pull_request'
+ run: python3 .github/workflows/scripts/secrets.py "$SECRETS_CONTEXT"
+ env:
+ SECRETS_CONTEXT: ${{ toJson(secrets) }}
+
+ - name: Script
+ run: .github/workflows/scripts/script.sh
+ shell: bash
+ env:
+ PY_COLORS: '1'
+ ANSIBLE_FORCE_COLOR: '1'
+ GITHUB_PULL_REQUEST: ${{ github.event.number }}
+ GITHUB_PULL_REQUEST_BODY: ${{ github.event.pull_request.body }}
+ GITHUB_BRANCH: ${{ github.head_ref }}
+ GITHUB_REPO_SLUG: ${{ github.repository }}
+ GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+ GITHUB_CONTEXT: ${{ github.event.pull_request.commits_url }}
+
+ - name: Extract Deprecations from Logs
+ id: deprecations
+ run: echo deprecations-${{ matrix.env.TEST }}=$(docker logs pulp 2>&1 | grep -i pulpcore.deprecation | base64 -w 0) >> $GITHUB_OUTPUT
+
+ - name: Logs
+ if: always()
+ run: |
+ echo "Need to debug? Please check: https://github.com/marketplace/actions/debugging-with-tmate"
+ http --timeout 30 --check-status --pretty format --print hb "https://pulp${PULP_API_ROOT}api/v3/status/" || true
+ docker images || true
+ docker ps -a || true
+ docker logs pulp || true
+ docker exec pulp ls -latr /etc/yum.repos.d/ || true
+ docker exec pulp cat /etc/yum.repos.d/* || true
+ docker exec pulp bash -c "pip3 list && pip3 install pipdeptree && pipdeptree"
| file:// sync deletes files from directory
**Version**
Pulpcore 3.39
**Describe the bug**
When syncing file:// repositories, files are disappearing after the sync.
**To Reproduce**
1) Copy these two repositories to the FS:
- https://github.com/Katello/katello/tree/master/test/fixtures/test_repos/file1
- https://github.com/Katello/katello/tree/master/test/fixtures/test_repos/file2
2) Sync one, then the other
3) See that some files disappeared.
- In my case, file2 lost every file except PULP_MANIFEST
**Expected behavior**
No files disappear.
**Additional context**
This also occurred with RPM content type files.
| Likely related to https://github.com/pulp/pulpcore/pull/4428
A repeat of https://pulp.plan.io/issues/9146 ? | 2023-11-15T04:19:17 |
pulp/pulpcore | 4,712 | pulp__pulpcore-4712 | [
"4681"
] | 79bde3a0004d6e6d844dd6951fb23ee2d945d01c | diff --git a/pulpcore/download/file.py b/pulpcore/download/file.py
--- a/pulpcore/download/file.py
+++ b/pulpcore/download/file.py
@@ -57,7 +57,7 @@ async def _run(self, extra_data=None):
break # the reading is done
await self.handle_data(chunk)
return DownloadResult(
- path=self._path,
+ path=self.path,
artifact_attributes=self.artifact_attributes,
url=self.url,
headers=None,
| file:// sync deletes files from directory
**Version**
Pulpcore 3.39
**Describe the bug**
When syncing file:// repositories, files are disappearing after the sync.
**To Reproduce**
1) Copy these two repositories to the FS:
- https://github.com/Katello/katello/tree/master/test/fixtures/test_repos/file1
- https://github.com/Katello/katello/tree/master/test/fixtures/test_repos/file2
2) Sync one, then the other
3) See that some files disappeared.
- In my case, file2 lost every file except PULP_MANIFEST
**Expected behavior**
No files disappear.
**Additional context**
This also occurred with RPM content type files.
| Likely related to https://github.com/pulp/pulpcore/pull/4428
A repeat of https://pulp.plan.io/issues/9146 ? | 2023-11-15T04:20:02 |
|
pulp/pulpcore | 4,713 | pulp__pulpcore-4713 | [
"4681"
] | 0ef0c503362ae0ac145e66a476f3438cfb6245e9 | diff --git a/pulpcore/download/file.py b/pulpcore/download/file.py
--- a/pulpcore/download/file.py
+++ b/pulpcore/download/file.py
@@ -57,7 +57,7 @@ async def _run(self, extra_data=None):
break # the reading is done
await self.handle_data(chunk)
return DownloadResult(
- path=self._path,
+ path=self.path,
artifact_attributes=self.artifact_attributes,
url=self.url,
headers=None,
| file:// sync deletes files from directory
**Version**
Pulpcore 3.39
**Describe the bug**
When syncing file:// repositories, files are disappearing after the sync.
**To Reproduce**
1) Copy these two repositories to the FS:
- https://github.com/Katello/katello/tree/master/test/fixtures/test_repos/file1
- https://github.com/Katello/katello/tree/master/test/fixtures/test_repos/file2
2) Sync one, then the other
3) See that some files disappeared.
- In my case, file2 lost every file except PULP_MANIFEST
**Expected behavior**
No files disappear.
**Additional context**
This also occurred with RPM content type files.
| Likely related to https://github.com/pulp/pulpcore/pull/4428
A repeat of https://pulp.plan.io/issues/9146 ? | 2023-11-15T04:22:18 |
|
pulp/pulpcore | 4,721 | pulp__pulpcore-4721 | [
"4681"
] | 1c36af33b5fd9ba0fc46956ee8549ff4e6c12786 | diff --git a/pulpcore/download/file.py b/pulpcore/download/file.py
--- a/pulpcore/download/file.py
+++ b/pulpcore/download/file.py
@@ -57,7 +57,7 @@ async def _run(self, extra_data=None):
break # the reading is done
await self.handle_data(chunk)
return DownloadResult(
- path=self._path,
+ path=self.path,
artifact_attributes=self.artifact_attributes,
url=self.url,
headers=None,
| file:// sync deletes files from directory
**Version**
Pulpcore 3.39
**Describe the bug**
When syncing file:// repositories, files are disappearing after the sync.
**To Reproduce**
1) Copy these two repositories to the FS:
- https://github.com/Katello/katello/tree/master/test/fixtures/test_repos/file1
- https://github.com/Katello/katello/tree/master/test/fixtures/test_repos/file2
2) Sync one, then the other
3) See that some files disappeared.
- In my case, file2 lost every file except PULP_MANIFEST
**Expected behavior**
No files disappear.
**Additional context**
This also occurred with RPM content type files.
| Likely related to https://github.com/pulp/pulpcore/pull/4428
A repeat of https://pulp.plan.io/issues/9146 ? | 2023-11-15T14:35:59 |
|
pulp/pulpcore | 4,722 | pulp__pulpcore-4722 | [
"4681"
] | 537acbef9ca790d90a8ac9877c2b0d69c6db2424 | diff --git a/pulpcore/download/file.py b/pulpcore/download/file.py
--- a/pulpcore/download/file.py
+++ b/pulpcore/download/file.py
@@ -57,7 +57,7 @@ async def _run(self, extra_data=None):
break # the reading is done
await self.handle_data(chunk)
return DownloadResult(
- path=self._path,
+ path=self.path,
artifact_attributes=self.artifact_attributes,
url=self.url,
headers=None,
| diff --git a/.github/workflows/test.yml b/.github/workflows/test.yml
new file mode 100644
--- /dev/null
+++ b/.github/workflows/test.yml
@@ -0,0 +1,129 @@
+# WARNING: DO NOT EDIT!
+#
+# This file was generated by plugin_template, and is managed by it. Please use
+# './plugin-template --github pulpcore' to update this file.
+#
+# For more info visit https://github.com/pulp/plugin_template
+
+---
+name: Test
+on:
+ workflow_call:
+
+jobs:
+ test:
+ runs-on: ubuntu-latest
+ strategy:
+ fail-fast: false
+ matrix:
+ env:
+ - TEST: pulp
+ - TEST: docs
+ - TEST: azure
+ - TEST: s3
+ - TEST: lowerbounds
+ outputs:
+ deprecations-pulp: ${{ steps.deprecations.outputs.deprecations-pulp }}
+ deprecations-azure: ${{ steps.deprecations.outputs.deprecations-azure }}
+ deprecations-s3: ${{ steps.deprecations.outputs.deprecations-s3 }}
+ deprecations-lowerbounds: ${{ steps.deprecations.outputs.deprecations-lowerbounds }}
+
+ steps:
+ - uses: actions/checkout@v4
+ with:
+ fetch-depth: 1
+
+ - uses: actions/setup-python@v4
+ with:
+ python-version: "3.8"
+
+ - uses: actions/download-artifact@v3
+ with:
+ name: plugin_package
+ path: dist/
+
+ - name: Install httpie
+ run: |
+ echo ::group::HTTPIE
+ pip install httpie
+ echo ::endgroup::
+ echo "HTTPIE_CONFIG_DIR=$GITHUB_WORKSPACE/.ci/assets/httpie/" >> $GITHUB_ENV
+
+ - name: Set environment variables
+ run: |
+ echo "TEST=${{ matrix.env.TEST }}" >> $GITHUB_ENV
+
+ - name: Before Install
+ run: .github/workflows/scripts/before_install.sh
+ shell: bash
+ env:
+ PY_COLORS: '1'
+ ANSIBLE_FORCE_COLOR: '1'
+ GITHUB_PULL_REQUEST: ${{ github.event.number }}
+ GITHUB_PULL_REQUEST_BODY: ${{ github.event.pull_request.body }}
+ GITHUB_BRANCH: ${{ github.head_ref }}
+ GITHUB_REPO_SLUG: ${{ github.repository }}
+ GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+ GITHUB_CONTEXT: ${{ github.event.pull_request.commits_url }}
+
+ - name: Install
+ run: .github/workflows/scripts/install.sh
+ shell: bash
+ env:
+ PY_COLORS: '1'
+ ANSIBLE_FORCE_COLOR: '1'
+ GITHUB_PULL_REQUEST: ${{ github.event.number }}
+ GITHUB_PULL_REQUEST_BODY: ${{ github.event.pull_request.body }}
+ GITHUB_BRANCH: ${{ github.head_ref }}
+ GITHUB_REPO_SLUG: ${{ github.repository }}
+ GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+ GITHUB_CONTEXT: ${{ github.event.pull_request.commits_url }}
+
+ - name: Before Script
+ run: .github/workflows/scripts/before_script.sh
+ shell: bash
+ env:
+ PY_COLORS: '1'
+ ANSIBLE_FORCE_COLOR: '1'
+ GITHUB_PULL_REQUEST: ${{ github.event.number }}
+ GITHUB_PULL_REQUEST_BODY: ${{ github.event.pull_request.body }}
+ GITHUB_BRANCH: ${{ github.head_ref }}
+ GITHUB_REPO_SLUG: ${{ github.repository }}
+ GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+ GITHUB_CONTEXT: ${{ github.event.pull_request.commits_url }}
+ REDIS_DISABLED: ${{ contains('s3', matrix.env.TEST) }}
+
+ - name: Setting secrets
+ if: github.event_name != 'pull_request'
+ run: python3 .github/workflows/scripts/secrets.py "$SECRETS_CONTEXT"
+ env:
+ SECRETS_CONTEXT: ${{ toJson(secrets) }}
+
+ - name: Script
+ run: .github/workflows/scripts/script.sh
+ shell: bash
+ env:
+ PY_COLORS: '1'
+ ANSIBLE_FORCE_COLOR: '1'
+ GITHUB_PULL_REQUEST: ${{ github.event.number }}
+ GITHUB_PULL_REQUEST_BODY: ${{ github.event.pull_request.body }}
+ GITHUB_BRANCH: ${{ github.head_ref }}
+ GITHUB_REPO_SLUG: ${{ github.repository }}
+ GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+ GITHUB_CONTEXT: ${{ github.event.pull_request.commits_url }}
+
+ - name: Extract Deprecations from Logs
+ id: deprecations
+ run: echo deprecations-${{ matrix.env.TEST }}=$(docker logs pulp 2>&1 | grep -i pulpcore.deprecation | base64 -w 0) >> $GITHUB_OUTPUT
+
+ - name: Logs
+ if: always()
+ run: |
+ echo "Need to debug? Please check: https://github.com/marketplace/actions/debugging-with-tmate"
+ http --timeout 30 --check-status --pretty format --print hb "https://pulp${PULP_API_ROOT}api/v3/status/" || true
+ docker images || true
+ docker ps -a || true
+ docker logs pulp || true
+ docker exec pulp ls -latr /etc/yum.repos.d/ || true
+ docker exec pulp cat /etc/yum.repos.d/* || true
+ docker exec pulp bash -c "pip3 list && pip3 install pipdeptree && pipdeptree"
| file:// sync deletes files from directory
**Version**
Pulpcore 3.39
**Describe the bug**
When syncing file:// repositories, files are disappearing after the sync.
**To Reproduce**
1) Copy these two repositories to the FS:
- https://github.com/Katello/katello/tree/master/test/fixtures/test_repos/file1
- https://github.com/Katello/katello/tree/master/test/fixtures/test_repos/file2
2) Sync one, then the other
3) See that some files disappeared.
- In my case, file2 lost every file except PULP_MANIFEST
**Expected behavior**
No files disappear.
**Additional context**
This also occurred with RPM content type files.
| Likely related to https://github.com/pulp/pulpcore/pull/4428
A repeat of https://pulp.plan.io/issues/9146 ? | 2023-11-15T15:06:30 |
pulp/pulpcore | 4,723 | pulp__pulpcore-4723 | [
"4724"
] | 9cab9f40768ee8022317709e741be398d9d468d0 | diff --git a/pulp_file/app/__init__.py b/pulp_file/app/__init__.py
--- a/pulp_file/app/__init__.py
+++ b/pulp_file/app/__init__.py
@@ -9,5 +9,5 @@ class PulpFilePluginAppConfig(PulpPluginAppConfig):
name = "pulp_file.app"
label = "file"
version = "3.42.0.dev"
- python_package_name = "pulp_file" # TODO Add python_module_name
+ python_package_name = "pulp-file" # TODO Add python_module_name
domain_compatible = True
| pulp file python package reporting wrongly
Starting with pulpcore 3.40 the pulp_file plugins python package started reporting as pulp_file instead of pulp-file.
| 2023-11-15T15:21:28 |
||
pulp/pulpcore | 4,727 | pulp__pulpcore-4727 | [
"4724"
] | e322692e0cbc3dcda8a7a7e09d626059cd1d6033 | diff --git a/pulp_file/app/__init__.py b/pulp_file/app/__init__.py
--- a/pulp_file/app/__init__.py
+++ b/pulp_file/app/__init__.py
@@ -9,5 +9,5 @@ class PulpFilePluginAppConfig(PulpPluginAppConfig):
name = "pulp_file.app"
label = "file"
version = "3.41.1.dev"
- python_package_name = "pulp_file" # TODO Add python_module_name
+ python_package_name = "pulp-file" # TODO Add python_module_name
domain_compatible = True
| pulp file python package reporting wrongly
Starting with pulpcore 3.40 the pulp_file plugins python package started reporting as pulp_file instead of pulp-file.
| 2023-11-16T10:43:20 |
||
pulp/pulpcore | 4,745 | pulp__pulpcore-4745 | [
"3829"
] | bb4926ba23bfef71c21fabcbee9c68acc07ee8e9 | diff --git a/pulpcore/content/__init__.py b/pulpcore/content/__init__.py
--- a/pulpcore/content/__init__.py
+++ b/pulpcore/content/__init__.py
@@ -8,6 +8,8 @@
from asgiref.sync import sync_to_async
from aiohttp import web
+from .instrumentation import middleware as instrumentation
+
import django
@@ -29,7 +31,7 @@
log = logging.getLogger(__name__)
-app = web.Application(middlewares=[authenticate])
+app = web.Application(middlewares=[authenticate, instrumentation])
CONTENT_MODULE_NAME = "content"
diff --git a/pulpcore/content/instrumentation.py b/pulpcore/content/instrumentation.py
new file mode 100644
--- /dev/null
+++ b/pulpcore/content/instrumentation.py
@@ -0,0 +1,267 @@
+# Copyright 2020, OpenTelemetry Authors
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# TODO: This is a copy of https://github.com/open-telemetry/opentelemetry-python-contrib/pull/1800;
+# it can be removed once the following issues will be resolved:
+# 1. https://github.com/pypi/support/issues/3353
+# 2. https://github.com/open-telemetry/opentelemetry-python-contrib/issues/2053
+
+import urllib
+from aiohttp import web
+from multidict import CIMultiDictProxy
+from timeit import default_timer
+from typing import Tuple, Dict, List, Union
+
+from opentelemetry import context, trace, metrics
+from opentelemetry.context import _SUPPRESS_HTTP_INSTRUMENTATION_KEY
+from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
+from opentelemetry.instrumentation.utils import http_status_to_status_code
+from opentelemetry.propagators.textmap import Getter
+from opentelemetry.propagate import extract
+from opentelemetry.semconv.trace import SpanAttributes
+from opentelemetry.semconv.metrics import MetricInstruments
+from opentelemetry.trace.status import Status, StatusCode
+from opentelemetry.util.http import get_excluded_urls
+from opentelemetry.util.http import remove_url_credentials
+
+_duration_attrs = [
+ SpanAttributes.HTTP_METHOD,
+ SpanAttributes.HTTP_HOST,
+ SpanAttributes.HTTP_SCHEME,
+ SpanAttributes.HTTP_STATUS_CODE,
+ SpanAttributes.HTTP_FLAVOR,
+ SpanAttributes.HTTP_SERVER_NAME,
+ SpanAttributes.NET_HOST_NAME,
+ SpanAttributes.NET_HOST_PORT,
+ SpanAttributes.HTTP_ROUTE,
+]
+
+_active_requests_count_attrs = [
+ SpanAttributes.HTTP_METHOD,
+ SpanAttributes.HTTP_HOST,
+ SpanAttributes.HTTP_SCHEME,
+ SpanAttributes.HTTP_FLAVOR,
+ SpanAttributes.HTTP_SERVER_NAME,
+]
+
+__version__ = "0.42b0.dev"
+_instruments = ("aiohttp ~= 3.0",)
+
+tracer = trace.get_tracer(__name__)
+meter = metrics.get_meter(__name__, __version__)
+_excluded_urls = get_excluded_urls("AIOHTTP_SERVER")
+
+
+def _parse_duration_attrs(req_attrs):
+ duration_attrs = {}
+ for attr_key in _duration_attrs:
+ if req_attrs.get(attr_key) is not None:
+ duration_attrs[attr_key] = req_attrs[attr_key]
+ return duration_attrs
+
+
+def _parse_active_request_count_attrs(req_attrs):
+ active_requests_count_attrs = {}
+ for attr_key in _active_requests_count_attrs:
+ if req_attrs.get(attr_key) is not None:
+ active_requests_count_attrs[attr_key] = req_attrs[attr_key]
+ return active_requests_count_attrs
+
+
+def get_default_span_details(request: web.Request) -> Tuple[str, dict]:
+ """Default implementation for get_default_span_details
+ Args:
+ request: the request object itself.
+ Returns:
+ a tuple of the span name, and any attributes to attach to the span.
+ """
+ span_name = request.path.strip() or f"HTTP {request.method}"
+ return span_name, {}
+
+
+def _get_view_func(request: web.Request) -> str:
+ """Returns the name of the request handler.
+ Args:
+ request: the request object itself.
+ Returns:
+ a string containing the name of the handler function
+ """
+ try:
+ return request.match_info.handler.__name__
+ except AttributeError:
+ return "unknown"
+
+
+def collect_request_attributes(request: web.Request) -> Dict:
+ """Collects HTTP request attributes from the ASGI scope and returns a
+ dictionary to be used as span creation attributes."""
+
+ server_host, port, http_url = (
+ request.url.host,
+ request.url.port,
+ str(request.url),
+ )
+ query_string = request.query_string
+ if query_string and http_url:
+ if isinstance(query_string, bytes):
+ query_string = query_string.decode("utf8")
+ http_url += "?" + urllib.parse.unquote(query_string)
+
+ result = {
+ SpanAttributes.HTTP_SCHEME: request.scheme,
+ SpanAttributes.HTTP_HOST: server_host,
+ SpanAttributes.NET_HOST_PORT: port,
+ SpanAttributes.HTTP_ROUTE: _get_view_func(request),
+ SpanAttributes.HTTP_FLAVOR: f"{request.version.major}.{request.version.minor}",
+ SpanAttributes.HTTP_TARGET: request.path,
+ SpanAttributes.HTTP_URL: remove_url_credentials(http_url),
+ }
+
+ http_method = request.method
+ if http_method:
+ result[SpanAttributes.HTTP_METHOD] = http_method
+
+ http_host_value_list = [request.host] if type(request.host) is not list else request.host
+ if http_host_value_list:
+ result[SpanAttributes.HTTP_SERVER_NAME] = ",".join(http_host_value_list)
+ http_user_agent = request.headers.get("user-agent")
+ if http_user_agent:
+ result[SpanAttributes.HTTP_USER_AGENT] = http_user_agent
+
+ # remove None values
+ result = {k: v for k, v in result.items() if v is not None}
+
+ return result
+
+
+def set_status_code(span, status_code: int) -> None:
+ """Adds HTTP response attributes to span using the status_code argument."""
+
+ try:
+ status_code = int(status_code)
+ except ValueError:
+ span.set_status(
+ Status(
+ StatusCode.ERROR,
+ "Non-integer HTTP status: " + repr(status_code),
+ )
+ )
+ else:
+ span.set_attribute(SpanAttributes.HTTP_STATUS_CODE, status_code)
+ span.set_status(Status(http_status_to_status_code(status_code, server_span=True)))
+
+
+class AiohttpGetter(Getter):
+ """Extract current trace from headers"""
+
+ def get(self, carrier, key: str) -> Union[List, None]:
+ """Getter implementation to retrieve an HTTP header value from the ASGI
+ scope.
+
+ Args:
+ carrier: ASGI scope object
+ key: header name in scope
+ Returns:
+ A list of all header values matching the key, or None if the key
+ does not match any header.
+ """
+ headers: CIMultiDictProxy = carrier.headers
+ if not headers:
+ return None
+ return headers.getall(key, None)
+
+ def keys(self, carrier: Dict) -> List:
+ return list(carrier.keys())
+
+
+getter = AiohttpGetter()
+
+
[email protected]
+async def middleware(request, handler):
+ """Middleware for aiohttp implementing tracing logic"""
+ if (
+ context.get_value("suppress_instrumentation")
+ or context.get_value(_SUPPRESS_HTTP_INSTRUMENTATION_KEY)
+ or _excluded_urls.url_disabled(request.url.path)
+ ):
+ return await handler(request)
+
+ span_name, additional_attributes = get_default_span_details(request)
+
+ req_attrs = collect_request_attributes(request)
+ duration_attrs = _parse_duration_attrs(req_attrs)
+ active_requests_count_attrs = _parse_active_request_count_attrs(req_attrs)
+
+ duration_histogram = meter.create_histogram(
+ name=MetricInstruments.HTTP_SERVER_DURATION,
+ unit="ms",
+ description="measures the duration of the inbound HTTP request",
+ )
+
+ active_requests_counter = meter.create_up_down_counter(
+ name=MetricInstruments.HTTP_SERVER_ACTIVE_REQUESTS,
+ unit="requests",
+ description="measures the number of concurrent HTTP requests those are currently in flight",
+ )
+
+ with tracer.start_as_current_span(
+ span_name,
+ context=extract(request, getter=getter),
+ kind=trace.SpanKind.SERVER,
+ ) as span:
+ attributes = collect_request_attributes(request)
+ attributes.update(additional_attributes)
+ span.set_attributes(attributes)
+ start = default_timer()
+ active_requests_counter.add(1, active_requests_count_attrs)
+ try:
+ resp = await handler(request)
+ set_status_code(span, resp.status)
+ except web.HTTPException as ex:
+ set_status_code(span, ex.status_code)
+ raise
+ finally:
+ duration = max((default_timer() - start) * 1000, 0)
+ duration_histogram.record(duration, duration_attrs)
+ active_requests_counter.add(-1, active_requests_count_attrs)
+ return resp
+
+
+class _InstrumentedApplication(web.Application):
+ """Insert tracing middleware"""
+
+ def __init__(self, *args, **kwargs):
+ middlewares = kwargs.pop("middlewares", [])
+ middlewares.insert(0, middleware)
+ kwargs["middlewares"] = middlewares
+ super().__init__(*args, **kwargs)
+
+
+class AioHttpServerInstrumentor(BaseInstrumentor):
+ # pylint: disable=protected-access,attribute-defined-outside-init
+ """An instrumentor for aiohttp.web.Application
+
+ See `BaseInstrumentor`
+ """
+
+ def _instrument(self, **kwargs):
+ self._original_app = web.Application
+ setattr(web, "Application", _InstrumentedApplication)
+
+ def _uninstrument(self, **kwargs):
+ setattr(web, "Application", self._original_app)
+
+ def instrumentation_dependencies(self):
+ return _instruments
| diff --git a/pulpcore/tests/functional/__init__.py b/pulpcore/tests/functional/__init__.py
--- a/pulpcore/tests/functional/__init__.py
+++ b/pulpcore/tests/functional/__init__.py
@@ -448,6 +448,11 @@ async def _send_request():
return _received_otel_span
[email protected]
+def test_path():
+ return os.getenv("PYTEST_CURRENT_TEST").split()[0]
+
+
# Webserver Fixtures
diff --git a/pulpcore/tests/functional/api/pulp_file/test_telemetry_collection.py b/pulpcore/tests/functional/api/pulp_file/test_telemetry_collection.py
new file mode 100644
--- /dev/null
+++ b/pulpcore/tests/functional/api/pulp_file/test_telemetry_collection.py
@@ -0,0 +1,92 @@
+import requests
+import uuid
+
+from urllib.parse import urljoin, urlparse
+from django.conf import settings
+
+from pulpcore.client.pulp_file import FileFileDistribution, RepositoryAddRemoveContent
+
+
+def test_get_requests(
+ file_distribution_api_client,
+ file_repository_api_client,
+ file_repo_with_auto_publish,
+ file_content_unit_with_name_factory,
+ gen_object_with_cleanup,
+ monitor_task,
+ received_otel_span,
+ test_path,
+):
+ """Test if content-app correctly returns mime-types based on filenames."""
+ content_units = [
+ file_content_unit_with_name_factory("otel_test_file1.tar.gz"),
+ file_content_unit_with_name_factory("otel_test_file2.xml.gz"),
+ file_content_unit_with_name_factory("otel_test_file3.txt"),
+ ]
+ units_to_add = list(map(lambda f: f.pulp_href, content_units))
+ data = RepositoryAddRemoveContent(add_content_units=units_to_add)
+ monitor_task(
+ file_repository_api_client.modify(file_repo_with_auto_publish.pulp_href, data).task
+ )
+
+ data = FileFileDistribution(
+ name=str(uuid.uuid4()),
+ base_path=str(uuid.uuid4()),
+ repository=file_repo_with_auto_publish.pulp_href,
+ )
+ distribution = gen_object_with_cleanup(file_distribution_api_client, data)
+
+ for content_unit in content_units:
+ url = urljoin(distribution.base_url, content_unit.relative_path)
+ content_path = urlparse(url).path
+
+ s = requests.Session()
+ s.headers = {"User-Agent": test_path}
+
+ if (
+ settings.DEFAULT_FILE_STORAGE != "pulpcore.app.models.storage.FileSystem"
+ and settings.REDIRECT_TO_OBJECT_STORAGE
+ ):
+ status_code = 302
+ else:
+ status_code = 200
+
+ s.get(url, allow_redirects=False)
+ assert received_otel_span(
+ {
+ "http.method": "GET",
+ "http.target": content_path,
+ "http.status_code": status_code,
+ "http.user_agent": test_path,
+ }
+ )
+
+ s.get(url + "fail")
+ assert received_otel_span(
+ {
+ "http.method": "GET",
+ "http.target": content_path + "fail",
+ "http.status_code": 404,
+ "http.user_agent": test_path,
+ }
+ )
+
+ s.post(url, data={})
+ assert received_otel_span(
+ {
+ "http.method": "POST",
+ "http.target": content_path,
+ "http.status_code": 405,
+ "http.user_agent": test_path,
+ }
+ )
+
+ s.head(url, allow_redirects=False)
+ assert received_otel_span(
+ {
+ "http.method": "HEAD",
+ "http.target": content_path,
+ "http.status_code": status_code,
+ "http.user_agent": test_path,
+ }
+ )
diff --git a/pulpcore/tests/functional/api/test_status.py b/pulpcore/tests/functional/api/test_status.py
--- a/pulpcore/tests/functional/api/test_status.py
+++ b/pulpcore/tests/functional/api/test_status.py
@@ -1,5 +1,4 @@
"""Test the status page."""
-import os
import pytest
from django.conf import settings
@@ -58,11 +57,6 @@
}
[email protected]
-def test_path():
- return os.getenv("PYTEST_CURRENT_TEST").split()[0]
-
-
@pytest.mark.parallel
def test_get_authenticated(test_path, status_api_client, received_otel_span):
"""GET the status path with valid credentials.
| Add Open Telemetry support to the pulp-content app
## Feature
Adds open telemetry metrics support to the `pulp-content` app. For example, it should report response codes and latency of all requests the content app handles.
## Implementation
### Option A: Use the aiohttp-server telemetry support from upstream
This would include use picking up the dependency `opentelemetry-instrumentation-aiohttp-server` and enabling it in pulpcore [like this](https://github.com/pulp/pulpcore/pull/3828/files#diff-7a529f651b0c47d7f7d8b44adaafc9a5fadaa03aa1e455fec23b8ee1c68274e9R35). Also we would add some tests for it.
To pursue this option the [upstream PR](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/1800) needs to be merged and released to PyPI.
### Option B: Vendor the upstream not-merged-not-released aiohttp-server telemetry support
This involves us temporarily vendoring the [not-yet-merged, not-yet-released upstream code](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/1800) into Pulp. Since we're bringing the code in ourselves, it doesn't involve picking up a new dependency. It would still need to be enabled in pulpcore [like this](https://github.com/pulp/pulpcore/pull/3828/files#diff-7a529f651b0c47d7f7d8b44adaafc9a5fadaa03aa1e455fec23b8ee1c68274e9R35), and we would still need to add tests for it just like Option A.
This is only a stopgap; Option A is really the only long-term option.
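For a concrete picture of the goal, here is a hand-rolled sketch (explicitly *not* the upstream instrumentation package, whose exact API may differ) of the kind of spans and attributes we want, using only the stable OpenTelemetry tracing API and a plain aiohttp middleware:

```python
from aiohttp import web
from opentelemetry import trace

tracer = trace.get_tracer("pulp-content-sketch")


@web.middleware
async def otel_middleware(request, handler):
    # One SERVER span per request, carrying the attributes we want reported.
    with tracer.start_as_current_span(
        f"{request.method} {request.path}", kind=trace.SpanKind.SERVER
    ) as span:
        span.set_attribute("http.method", request.method)
        span.set_attribute("http.target", request.path)
        response = await handler(request)
        span.set_attribute("http.status_code", response.status)
        return response


app = web.Application(middlewares=[otel_middleware])
```

(Without an SDK `TracerProvider`/exporter configured, the spans above are no-ops, so the sketch runs but exports nothing.)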
| 2023-11-20T17:11:32 |
|
pulp/pulpcore | 4,750 | pulp__pulpcore-4750 | [
"2705"
] | 6ad7976d03b8674f9f6446e6ff7ee5899fb5c427 | diff --git a/pulpcore/app/models/repository.py b/pulpcore/app/models/repository.py
--- a/pulpcore/app/models/repository.py
+++ b/pulpcore/app/models/repository.py
@@ -26,7 +26,7 @@
get_domain_pk,
cache_key,
)
-from pulpcore.constants import ALL_KNOWN_CONTENT_CHECKSUMS
+from pulpcore.constants import ALL_KNOWN_CONTENT_CHECKSUMS, PROTECTED_REPO_VERSION_MESSAGE
from pulpcore.download.factory import DownloaderFactory
from pulpcore.exceptions import ResourceImmutableError
@@ -309,6 +309,42 @@ def artifacts_for_version(version):
"""
return Artifact.objects.filter(content__pk__in=version.content)
+ def protected_versions(self):
+ """
+ Return repository versions that are protected.
+
+ A protected version is one that is being served by a distro directly or via publication.
+
+ Returns:
+ django.db.models.QuerySet: Repo versions which are protected.
+ """
+ from .publication import Distribution, Publication
+
+ # find all repo versions set on a distribution
+ qs = self.versions.filter(pk__in=Distribution.objects.values_list("repository_version_id"))
+
+ # find all repo versions with publications set on a distribution
+ qs |= self.versions.filter(
+ publication__pk__in=Distribution.objects.values_list("publication_id")
+ )
+
+ if distro := Distribution.objects.filter(repository=self.pk).first():
+ if distro.detail_model().SERVE_FROM_PUBLICATION:
+ # if the distro serves publications, protect the latest published repo version
+ version = self.versions.filter(
+ pk__in=Publication.objects.filter(complete=True).values_list(
+ "repository_version_id"
+ )
+ ).last()
+ else:
+ # if the distro does not serve publications, use the latest repo version
+ version = self.latest_version()
+
+ if version:
+ qs |= self.versions.filter(pk=version.pk)
+
+ return qs.distinct()
+
@hook(AFTER_UPDATE, when="retain_repo_versions", has_changed=True)
def _cleanup_old_versions_hook(self):
# Do not attempt to clean up anything, while there is a transaction involving repo versions
@@ -325,10 +361,9 @@ def cleanup_old_versions(self):
_("Attempt to cleanup old versions, while a new version is in flight.")
)
if self.retain_repo_versions:
- # Consider only completed versions for cleanup
- for version in self.versions.complete().order_by("-number")[
- self.retain_repo_versions :
- ]:
+ # Consider only completed versions that aren't protected for cleanup
+ versions = self.versions.complete().exclude(pk__in=self.protected_versions())
+ for version in versions.order_by("-number")[self.retain_repo_versions :]:
_logger.info(
"Deleting repository version {} due to version retention limit.".format(version)
)
@@ -1062,6 +1097,12 @@ def _squash(self, repo_relations, next_version):
# Update next version's counts as they have been modified
next_version._compute_counts()
+ @hook(BEFORE_DELETE)
+ def check_protected(self):
+ """Check if a repo version is protected before trying to delete it."""
+ if self in self.repository.protected_versions():
+ raise Exception(PROTECTED_REPO_VERSION_MESSAGE)
+
def delete(self, **kwargs):
"""
Deletes a RepositoryVersion
diff --git a/pulpcore/app/viewsets/repository.py b/pulpcore/app/viewsets/repository.py
--- a/pulpcore/app/viewsets/repository.py
+++ b/pulpcore/app/viewsets/repository.py
@@ -10,6 +10,7 @@
from rest_framework.viewsets import GenericViewSet
from urllib.parse import urlparse
+from pulpcore.constants import PROTECTED_REPO_VERSION_MESSAGE
from pulpcore.filters import BaseFilterSet
from pulpcore.app import tasks
from pulpcore.app.models import (
@@ -296,6 +297,9 @@ def destroy(self, request, repository_pk, number):
"""
version = self.get_object()
+ if version in version.repository.protected_versions():
+ raise serializers.ValidationError(PROTECTED_REPO_VERSION_MESSAGE)
+
task = dispatch(
tasks.repository.delete_version,
exclusive_resources=[version.repository],
diff --git a/pulpcore/constants.py b/pulpcore/constants.py
--- a/pulpcore/constants.py
+++ b/pulpcore/constants.py
@@ -1,3 +1,4 @@
+from gettext import gettext as _
from pathlib import Path
from types import SimpleNamespace
@@ -104,3 +105,9 @@
"storages.backends.azure_storage.AzureStorage": AZURE_RESPONSE_HEADER_MAP,
"storages.backends.gcloud.GoogleCloudStorage": GCS_RESPONSE_HEADER_MAP,
}
+
+# Message users receive when attempting to delete a protected repo version
+PROTECTED_REPO_VERSION_MESSAGE = _(
+ "The repository version cannot be deleted because it (or its publications) are currently being "
+ "used to distribute content. Please update the necessary distributions first."
+)
| diff --git a/pulpcore/tests/functional/api/using_plugin/test_repo_versions.py b/pulpcore/tests/functional/api/using_plugin/test_repo_versions.py
--- a/pulpcore/tests/functional/api/using_plugin/test_repo_versions.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_repo_versions.py
@@ -34,6 +34,28 @@ def file_9_contents(
return content_units
[email protected]
+def file_repository_content(
+ file_remote_ssl_factory,
+ file_repository_factory,
+ file_repository_api_client,
+ file_content_api_client,
+ basic_manifest_path,
+ monitor_task,
+):
+ """Create some content that was synced into a repo on-demand."""
+ remote = file_remote_ssl_factory(manifest_path=basic_manifest_path, policy="on_demand")
+ base_repo = file_repository_factory()
+ task = file_repository_api_client.sync(base_repo.pulp_href, {"remote": remote.pulp_href}).task
+ monitor_task(task)
+ base_repo = file_repository_api_client.read(base_repo.pulp_href)
+ assert base_repo.latest_version_href[-2] == "1"
+ contents = file_content_api_client.list(repository_version=base_repo.latest_version_href)
+ assert contents.count == 3
+
+ return contents
+
+
@pytest.mark.parallel
def test_add_remove_content(
file_repository_api_client,
@@ -626,23 +648,17 @@ def test_filter_artifacts(
@pytest.mark.parallel
-def test_delete_repo_version_resources(
+def test_delete_repo_version_publication(
file_repository_api_client,
file_repository_version_api_client,
file_repository_factory,
file_remote_ssl_factory,
- file_distribution_factory,
basic_manifest_path,
file_publication_api_client,
- file_distribution_api_client,
gen_object_with_cleanup,
monitor_task,
):
- """Test whether removing a repository version affects related resources.
-
- Test whether removing a repository version will remove a related Publication.
- Test whether removing a repository version a Distribution will not be removed.
- """
+ """Test that removing a repo version will delete its publication."""
file_repo = file_repository_factory()
remote = file_remote_ssl_factory(manifest_path=basic_manifest_path, policy="on_demand")
task = file_repository_api_client.sync(file_repo.pulp_href, {"remote": remote.pulp_href}).task
@@ -654,9 +670,6 @@ def test_delete_repo_version_resources(
publication = gen_object_with_cleanup(file_publication_api_client, pub_body)
assert publication.repository_version == repo.latest_version_href
- distribution = file_distribution_factory(publication=publication.pulp_href)
- assert distribution.publication == publication.pulp_href
-
# delete repo version used to create publication
file_repository_version_api_client.delete(repo.latest_version_href)
@@ -665,8 +678,53 @@ def test_delete_repo_version_resources(
assert e.value.status == 404
- updated_distribution = file_distribution_api_client.read(distribution.pulp_href)
- assert updated_distribution.publication is None
+
[email protected]
+def test_delete_protected_repo_version(
+ file_repository_api_client,
+ file_repository_version_api_client,
+ file_repository_factory,
+ file_remote_ssl_factory,
+ file_distribution_factory,
+ basic_manifest_path,
+ file_publication_api_client,
+ file_distribution_api_client,
+ gen_object_with_cleanup,
+ monitor_task,
+):
+ """Test that removing a repo version fails if its publication is distributed."""
+ file_repo = file_repository_factory()
+ remote = file_remote_ssl_factory(manifest_path=basic_manifest_path, policy="on_demand")
+ task = file_repository_api_client.sync(file_repo.pulp_href, {"remote": remote.pulp_href}).task
+ monitor_task(task)
+ repo = file_repository_api_client.read(file_repo.pulp_href)
+ assert repo.latest_version_href[-2] == "1"
+
+ pub_body = {"repository": repo.pulp_href}
+ publication = gen_object_with_cleanup(file_publication_api_client, pub_body)
+ assert publication.repository_version == repo.latest_version_href
+
+ distribution = file_distribution_factory(publication=publication.pulp_href)
+ assert distribution.publication == publication.pulp_href
+
+ # deleting a protected repo version fails
+ with pytest.raises(ApiException) as e:
+ file_repository_version_api_client.delete(repo.latest_version_href)
+ assert e.value.status == 400
+ assert "The repository version cannot be deleted" in e.value.body
+
+ # unset the publication for the distribution
+ task = file_distribution_api_client.partial_update(
+ distribution.pulp_href, {"publication": ""}
+ ).task
+ monitor_task(task)
+
+ # and then delete the repo version
+ task = file_repository_version_api_client.delete(repo.latest_version_href).task
+ monitor_task(task)
+ with pytest.raises(ApiException) as e:
+ file_repository_version_api_client.read(repo.latest_version_href)
+ assert e.value.status == 404
@pytest.mark.parallel
@@ -735,6 +793,7 @@ def test_clear_all_units_repo_version(
def test_repo_version_retention(
file_repository_api_client,
file_repository_version_api_client,
+ file_repository_content,
file_content_api_client,
file_publication_api_client,
file_repository_factory,
@@ -745,14 +804,7 @@ def test_repo_version_retention(
):
"""Test retain_repo_versions for repositories."""
# Setup
- remote = file_remote_ssl_factory(manifest_path=basic_manifest_path, policy="on_demand")
- base_repo = file_repository_factory()
- task = file_repository_api_client.sync(base_repo.pulp_href, {"remote": remote.pulp_href}).task
- monitor_task(task)
- base_repo = file_repository_api_client.read(base_repo.pulp_href)
- assert base_repo.latest_version_href[-2] == "1"
- contents = file_content_api_client.list(repository_version=base_repo.latest_version_href)
- assert contents.count == 3
+ contents = file_repository_content
# Test repo version retention.
repo = file_repository_factory(retain_repo_versions=1)
@@ -825,6 +877,60 @@ def test_repo_version_retention(
assert len(manifest_files) == contents.count
[email protected]
+def test_repo_versions_protected_from_cleanup(
+ file_repository_api_client,
+ file_repository_version_api_client,
+ file_repository_content,
+ file_publication_api_client,
+ file_repository_factory,
+ file_distribution_factory,
+ gen_object_with_cleanup,
+ monitor_task,
+):
+ """Test that distributed repo versions are protected from retain_repo_versions."""
+
+ def _modify_and_validate(repo, content, expected_version, expected_total):
+ task = file_repository_api_client.modify(
+ repo.pulp_href, {"add_content_units": [content.pulp_href]}
+ ).task
+ monitor_task(task)
+
+ repo = file_repository_api_client.read(repo.pulp_href)
+ assert repo.latest_version_href[-2] == expected_version
+
+ versions = file_repository_version_api_client.list(repo.pulp_href)
+ assert versions.count == expected_total
+
+ return repo
+
+ # Setup
+ contents = file_repository_content
+ repo = file_repository_factory(retain_repo_versions=1)
+
+ # Publish and distribute version 0
+ publication = gen_object_with_cleanup(
+ file_publication_api_client, {"repository_version": repo.latest_version_href}
+ )
+ file_distribution_factory(publication=publication.pulp_href)
+
+ # Version 0 is protected since it's distributed
+ repo = _modify_and_validate(repo, contents.results[0], "1", 2)
+
+ # Create a new publication and distribution which protects version 1 from deletion
+ file_distribution_factory(repository=repo.pulp_href)
+ publication = gen_object_with_cleanup(
+ file_publication_api_client, {"repository_version": repo.latest_version_href}
+ )
+ file_distribution_factory(publication=publication.pulp_href)
+
+ # Create version 2 and there should be 3 versions now (2 protected)
+ repo = _modify_and_validate(repo, contents.results[1], "2", 3)
+
+ # Version 2 will be removed since we're creating version 3 and it's not protected
+ _modify_and_validate(repo, contents.results[2], "3", 3)
+
+
@pytest.mark.parallel
def test_content_in_repository_version_view(
file_repository_api_client,
| Protect (published) repository versions from deletion
Repository versions may be automatically deleted if the retain-repo-versions setting is used. If such a repository version is published (e.g. rpm publication + distribution), the published rpm repository will stop working. This is unexpected (at least to me).
Please add a setting to protect a repository version from being deleted (either automatically or manually). From my point of view, published repository versions should be protected automatically.
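For illustration, roughly how this plays out against the REST API today (pulp_file used as the example plugin; host, hrefs and credentials are placeholders):

```python
import requests

auth = ("admin", "password")
repo_href = "https://pulp.example.com/pulp/api/v3/repositories/file/file/<uuid>/"

# Keep only one repository version around.
requests.patch(repo_href, json={"retain_repo_versions": 1}, auth=auth)

# Version 1 is published and a distribution serves that publication.
# Later, any content change creates version 2 ...
requests.post(
    repo_href + "modify/",
    json={"add_content_units": ["<content unit href>"]},
    auth=auth,
)

# ... and version 1, the one actually being served, is cleaned up together with
# its publication, so clients of the distribution suddenly get 404s.
```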
Discussion:
https://discourse.pulpproject.org/t/should-publications-keep-certain-repository-versions-from-being-removed/448
| > If a certain repository version is published (e.g. rpm publication + distribution), the published rpm repository will stop working.
I think of these repo versions as ***distributed*** repo versions. Most users will probably want to clean up/delete published repo versions (that aren't distributed) and their publications. Or at least that's what we'd want. I'd probably suggest the title of this issue be updated based on this distinction.
We really want to start using `retain_repo_versions` but this issue prevents us from doing so. I almost think this is maybe more of a bug than a feature.
First, the current behavior means that `retain_repo_versions` is unsafe and thus unusable. Even if you set it to a high number such as 100, it could still potentially delete a distributed repo version. A user could come along and add 101 packages into a repo one by one, thus deleting any distributed repo versions. Ideally this wouldn't happen, but we've had users do things like this.
The second thing I'd say is that this is unexpected for most users. The discourse post seems to indicate as much.
A setting would work for us but I'd suggest fixing it for everyone since as I stated, I think `retain_repo_versions` is unusable.
Also, our preference would be for `retain_repo_versions` to continue to also clean up publications, so long as those publications aren't being distributed. Otherwise, we need some way to clean up non-distributed publications in bulk.
We discussed this at [PulpCon](https://hackmd.io/@pulp/pulpcon_2023) and I'll try to summarize the consensus we reached:
- Having the automatic retain_repo_versions code clean up distributed repo versions (ie repo versions that are published and are being distributed by a distribution) is undesirable and unexpected.
- Solution is to add code to exclude distributed repo versions from consideration when cleaning up repo versions. Only repo versions that are not distributed are cleaned up.
- Number of repo versions for a repo may exceed retain_repo_versions
- Possible future enhancement: apply this check also when repo versions are manually deleted via the API (with a force delete flag?) | 2023-11-21T15:12:50 |
pulp/pulpcore | 4,767 | pulp__pulpcore-4767 | [
"4766"
] | 6b2f38b01e8e56cded0c7b5b60474b55ebd7afcc | diff --git a/pulpcore/app/models/base.py b/pulpcore/app/models/base.py
--- a/pulpcore/app/models/base.py
+++ b/pulpcore/app/models/base.py
@@ -148,16 +148,20 @@ def get_pulp_type(cls):
def get_model_for_pulp_type(cls, pulp_type):
return cls._pulp_model_map[pulp_type]
- def save(self, *args, **kwargs):
+ @property
+ def detail_model(self):
+ return self._pulp_model_map[self.pulp_type]
+
+ def __init__(self, *args, **kwargs):
# instances of "detail" models that subclass MasterModel are exposed
# on instances of MasterModel by the string stored in that model's TYPE attr.
# Storing this pulp_type in a column on the MasterModel next to makes it trivial
# to filter for specific detail model types across master's relations.
# Prepend the TYPE defined on a detail model with a django app label.
# If a plugin sets the type field themselves, it's used as-is.
+ super().__init__(*args, **kwargs)
if not self.pulp_type:
self.pulp_type = self.get_pulp_type()
- return super().save(*args, **kwargs)
def cast(self):
"""Return the "Detail" model instance of this master-detail object.
diff --git a/pulpcore/app/viewsets/custom_filters.py b/pulpcore/app/viewsets/custom_filters.py
--- a/pulpcore/app/viewsets/custom_filters.py
+++ b/pulpcore/app/viewsets/custom_filters.py
@@ -355,12 +355,12 @@ def filter(self, qs, value):
for dist in qs.exclude(publication=None).values("publication__repository_version", "pk"):
versions_distributions[dist["publication__repository_version"]].append(dist["pk"])
- for dist in qs.exclude(repository_version=None).values("repository_version", "pk"):
- if not dist.cast().SERVE_FROM_PUBLICATION:
+ for dist in qs.exclude(repository_version=None).values("repository_version", "pulp_type"):
+ if not dist.detail_model.SERVE_FROM_PUBLICATION:
versions_distributions[dist["repository_version"]].append(dist["pk"])
for dist in qs.exclude(repository=None).prefetch_related("repository__versions"):
- if dist.cast().SERVE_FROM_PUBLICATION:
+ if dist.detail_model.SERVE_FROM_PUBLICATION:
versions = dist.repository.versions.values_list("pk", flat=True)
publications = Publication.objects.filter(
repository_version__in=versions, complete=True
| Race in DistributionWithContentFilter can lead to 500
The only possible explanation I can find is that cast() is messing up the related prefetched models, while actually not being needed.
All we need is the class of the distribution, not the detail instance.
``` python
for dist in qs.exclude(repository=None).prefetch_related("repository__versions"):
if dist.cast().SERVE_FROM_PUBLICATION:
versions = dist.repository.versions.values_list("pk", flat=True)
....
```
```
pulp [c19d4bd0-3b4c-4d3a-87fb-9cd2b5c2edd2]: django.request:ERROR: Internal Server Error: /pulp/api/v3/distributions/file/file/
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.8/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/rest_framework/viewsets.py", line 125, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
raise exc
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/rest_framework/mixins.py", line 38, in list
queryset = self.filter_queryset(self.get_queryset())
File "/usr/local/lib/python3.8/site-packages/rest_framework/generics.py", line 150, in filter_queryset
queryset = backend().filter_queryset(self.request, queryset, self)
File "/usr/local/lib/python3.8/site-packages/django_filters/rest_framework/backends.py", line 74, in filter_queryset
return filterset.qs
File "/usr/local/lib/python3.8/site-packages/django_filters/filterset.py", line 230, in qs
qs = self.filter_queryset(qs)
File "/usr/local/lib/python3.8/site-packages/django_filters/filterset.py", line 213, in filter_queryset
queryset = self.filters[name].filter(queryset, value)
File "/usr/local/lib/python3.8/site-packages/pulpcore/app/viewsets/custom_filters.py", line 424, in filter
versions = dist.repository.versions.values_list("pk", flat=True)
AttributeError: 'NoneType' object has no attribute 'versions'
```
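Roughly the idea of the fix (mirroring the loop quoted above and the `detail_model`/`get_model_for_pulp_type` helpers in the diff): resolve the detail *class* from `pulp_type` instead of casting the instance, since only a class attribute is needed and the prefetched relations stay intact:

```python
from pulpcore.app.models import Distribution

for dist in qs.exclude(repository=None).prefetch_related("repository__versions"):
    # No cast(): look up the detail class via the master/detail type map.
    detail_cls = Distribution.get_model_for_pulp_type(dist.pulp_type)
    if detail_cls.SERVE_FROM_PUBLICATION:
        versions = dist.repository.versions.values_list("pk", flat=True)
```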
| 2023-11-23T12:28:35 |
||
pulp/pulpcore | 4,792 | pulp__pulpcore-4792 | [
"4777"
] | 7a8ad5a4bb1a7a3aa126c625982d29ba4f6f89ce | diff --git a/pulpcore/app/serializers/exporter.py b/pulpcore/app/serializers/exporter.py
--- a/pulpcore/app/serializers/exporter.py
+++ b/pulpcore/app/serializers/exporter.py
@@ -1,6 +1,6 @@
import os
-import re
from gettext import gettext as _
+import re
from rest_framework import serializers
from rest_framework.validators import UniqueValidator
@@ -19,6 +19,16 @@
from pulpcore.constants import FS_EXPORT_CHOICES, FS_EXPORT_METHODS
+def parse_human_readable_file_size(size: str):
+ # based on https://stackoverflow.com/a/42865957/2002471
+ units = {"B": 1, "KB": 2**10, "MB": 2**20, "GB": 2**30, "TB": 2**40}
+ size = size.upper()
+ if not re.match(r" ", size):
+ size = re.sub(r"([KMGT]?B)", r" \1", size)
+ number, unit = [string.strip() for string in size.split()]
+ return int(float(number) * units[unit])
+
+
class ExporterSerializer(ModelSerializer):
"""
Base serializer for Exporters.
@@ -208,23 +218,13 @@ def validate(self, data):
)
return super().validate(data)
- @staticmethod
- def _parse_size(size):
+ def validate_chunk_size(self, chunk_size):
try:
- # based on https://stackoverflow.com/a/42865957/2002471
- units = {"B": 1, "KB": 2**10, "MB": 2**20, "GB": 2**30, "TB": 2**40}
- size = size.upper()
- if not re.match(r" ", size):
- size = re.sub(r"([KMGT]?B)", r" \1", size)
- number, unit = [string.strip() for string in size.split()]
- return int(float(number) * units[unit])
+ the_size = parse_human_readable_file_size(chunk_size)
except ValueError:
raise serializers.ValidationError(
- _("chunk_size '{}' is not valid (valid units are B/KB/MB/GB/TB)").format(size)
+ _("chunk_size '{}' is not valid (valid units are B/KB/MB/GB/TB)").format(chunk_size)
)
-
- def validate_chunk_size(self, chunk_size):
- the_size = self._parse_size(chunk_size)
if the_size <= 0:
raise serializers.ValidationError(
_("Chunk size {} is not greater than zero!").format(the_size)
diff --git a/pulpcore/app/tasks/export.py b/pulpcore/app/tasks/export.py
--- a/pulpcore/app/tasks/export.py
+++ b/pulpcore/app/tasks/export.py
@@ -422,12 +422,10 @@ def pulp_export(exporter_pk, params):
os.remove(pathname)
raise
# compute the hashes
- global_hash = hasher()
paths = sorted([str(Path(p)) for p in glob(tarfile_fp + ".*")])
for a_file in paths:
- a_hash = compute_file_hash(a_file, hasher=hasher(), cumulative_hash=global_hash)
+ a_hash = compute_file_hash(a_file, hasher=hasher())
rslts[a_file] = a_hash
- tarfile_hash = global_hash.hexdigest()
else:
# write into the file
@@ -450,23 +448,20 @@ def pulp_export(exporter_pk, params):
# write outputfile/hash info to a file 'next to' the output file(s)
output_file_info_path = tarfile_fp.replace(".tar", "-toc.json")
with open(output_file_info_path, "w") as outfile:
- if the_export.validated_chunk_size:
- chunk_size = the_export.validated_chunk_size
- else:
- chunk_size = 0
- chunk_toc = {
+ table_of_contents = {
"meta": {
- "chunk_size": chunk_size,
- "file": os.path.basename(tarfile_fp),
- "global_hash": tarfile_hash,
"checksum_type": checksum_type,
},
"files": {},
}
+
+ if the_export.validated_chunk_size:
+ table_of_contents["meta"]["chunk_size"] = the_export.validated_chunk_size
+
# Build a toc with just filenames (not the path on the exporter-machine)
for a_path in rslts.keys():
- chunk_toc["files"][os.path.basename(a_path)] = rslts[a_path]
- json.dump(chunk_toc, outfile)
+ table_of_contents["files"][os.path.basename(a_path)] = rslts[a_path]
+ json.dump(table_of_contents, outfile)
# store toc info
toc_hash = compute_file_hash(output_file_info_path)
diff --git a/pulpcore/app/tasks/importer.py b/pulpcore/app/tasks/importer.py
--- a/pulpcore/app/tasks/importer.py
+++ b/pulpcore/app/tasks/importer.py
@@ -76,12 +76,17 @@ def __init__(self, toc_path):
raise ValidationError(_("Missing 'files' or 'meta' keys in table-of-contents!"))
toc_dir = os.path.dirname(toc_path)
- self.chunk_size = int(self.toc["meta"]["chunk_size"])
# sorting-by-filename is REALLY IMPORTANT here
# keys are of the form <base-export-name>.00..<base-export-name>.NN,
# and must be reassembled IN ORDER
self.chunk_names = sorted(self.toc["files"].keys())
self.chunk_paths = [os.path.join(toc_dir, chunk_name) for chunk_name in self.chunk_names]
+ self.chunk_size = int(self.toc["meta"].get("chunk_size", 0))
+ if not self.chunk_size:
+ assert (
+ len(self.toc["files"]) == 1
+ ), "chunk_size must exist and be non-zero if more than one chunk exists"
+ self.chunk_size = os.path.getsize(self.chunk_paths[0])
def __enter__(self):
assert not hasattr(self, "chunks"), "ChunkedFile is not reentrant."
| Division by zero during Pulp import
**Version**
3.36.0+
**Describe the bug**
During a Katello test run, the following exception was encountered
```
{"traceback"=>" File \"/usr/lib/python3.11/site-packages/pulpcore/tasking/tasks.py\", line 61, in _execute_task\n result = func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/lib/python3.11/site-packages/pulpcore/app/tasks/importer.py\", line 453, in pulp_import\n with tarfile.open(path, \"r\", fileobj=fp) as tar:\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/lib64/python3.11/tarfile.py\", line 1815, in open\n fileobj.seek(saved_pos)\n File \"/usr/lib/python3.11/site-packages/pulpcore/app/tasks/importer.py\", line 132, in seek\n self.chunk = target // self.chunk_size\n ~~~~~~~^^~~~~~~~~~~~~~~~~\n", "description"=>"integer division or modulo by zero"} (Katello::Errors::Pulp3Error)
```
I'm not 100% certain, but I suspect the cause here is that if chunks aren't used during the export, chunk_size is set to 0:
https://github.com/pulp/pulpcore/blob/main/pulpcore/app/tasks/export.py#L456C31-L456C31
and ChunkedFile reads that value here:
https://github.com/pulp/pulpcore/blob/main/pulpcore/app/tasks/importer.py#L79
and we're using ChunkedFile even in the non-chunked case so long as a TOC file was provided.
https://github.com/pulp/pulpcore/blob/main/pulpcore/app/tasks/importer.py#L335-L336
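A minimal paraphrase of the failing path (values simplified, but the shapes match the code linked above):

```python
# What a non-chunked export writes into the "meta" section of the -toc.json:
toc_meta = {"chunk_size": 0, "file": "export.tar", "checksum_type": "sha256"}

chunk_size = int(toc_meta["chunk_size"])  # 0, because no chunking was requested

# ChunkedFile.seek() then divides by that value:
target = 10240
chunk = target // chunk_size  # ZeroDivisionError: integer division or modulo by zero
```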
**To Reproduce**
I would expect it to be reproducible if performing a non-chunked export, and providing a TOC path for the import.
**Expected behavior**
We should never see an integer division by zero error.
**Additional context**
| We'll also want to expand our tests to cover this
Does this mean that we can provide a toc file that makes a non-chunked import look like one with a single chunk, only the specified chunk-size is wrong? | 2023-11-29T05:42:50 |
|
pulp/pulpcore | 4,793 | pulp__pulpcore-4793 | [
"4452"
] | f5cdd59055ebfd2e27ddf6499df89f41b7fef20e | diff --git a/pulpcore/download/factory.py b/pulpcore/download/factory.py
--- a/pulpcore/download/factory.py
+++ b/pulpcore/download/factory.py
@@ -48,10 +48,6 @@ class DownloaderFactory:
http://aiohttp.readthedocs.io/en/stable/client_quickstart.html#timeouts Behaviorally, it should
allow for an active download to be arbitrarily long, while still detecting dead or closed
sessions even when TCPKeepAlive is disabled.
-
- Also for http and https urls, even though HTTP 1.1 is used, the TCP connection is setup and
- closed with each request. This is done for compatibility reasons due to various issues related
- to session continuation implementation in various servers.
"""
def __init__(self, remote, downloader_overrides=None):
@@ -102,7 +98,7 @@ def _make_aiohttp_session_from_remote(self):
Returns:
:class:`aiohttp.ClientSession`
"""
- tcp_conn_opts = {"force_close": True}
+ tcp_conn_opts = {}
sslcontext = None
if self._remote.ca_cert:
@@ -133,17 +129,17 @@ def _make_aiohttp_session_from_remote(self):
headers["User-Agent"] = f"{headers['User-Agent']}, {user_agent_header}"
headers.extend(header_dict)
- conn = aiohttp.TCPConnector(**tcp_conn_opts)
- total = self._remote.total_timeout
- sock_connect = self._remote.sock_connect_timeout
- sock_read = self._remote.sock_read_timeout
- connect = self._remote.connect_timeout
-
timeout = aiohttp.ClientTimeout(
- total=total, sock_connect=sock_connect, sock_read=sock_read, connect=connect
+ total=self._remote.total_timeout,
+ sock_connect=self._remote.sock_connect_timeout,
+ sock_read=self._remote.sock_read_timeout,
+ connect=self._remote.connect_timeout,
)
return aiohttp.ClientSession(
- connector=conn, timeout=timeout, headers=headers, requote_redirect_url=False
+ connector=aiohttp.TCPConnector(**tcp_conn_opts),
+ timeout=timeout,
+ headers=headers,
+ requote_redirect_url=False,
)
def build(self, url, **kwargs):
diff --git a/pulpcore/download/http.py b/pulpcore/download/http.py
--- a/pulpcore/download/http.py
+++ b/pulpcore/download/http.py
@@ -71,10 +71,6 @@ class HttpDownloader(BaseDownloader):
allow for an active download to be arbitrarily long, while still detecting dead or closed
sessions even when TCPKeepAlive is disabled.
- If a session is not provided, the one created will force TCP connection closure after each
- request. This is done for compatibility reasons due to various issues related to session
- continuation implementation in various servers.
-
`aiohttp.ClientSession` objects allows you to configure options that will apply to all
downloaders using that session such as auth, timeouts, headers, etc. For more info on these
options see the `aiohttp.ClientSession` docs for more information:
@@ -165,7 +161,7 @@ def __init__(
self._close_session_on_finalize = False
else:
timeout = aiohttp.ClientTimeout(total=None, sock_connect=600, sock_read=600)
- conn = aiohttp.TCPConnector({"force_close": True})
+ conn = aiohttp.TCPConnector()
self.session = aiohttp.ClientSession(
connector=conn, timeout=timeout, headers=headers, requote_redirect_url=False
)
| Pulpcore prevents the use of keep-alive and may exhaust port numbers in large syncs
**Version**
Probably all.
**Describe the bug**
Pulp sets force_close on all downloaders and all aiohttp sessions. This prevents any keep-alive functionality and can lead to an exhaustion of TCP ports for large syncs. Add some suboptimal network configuration and you end up with servers closing connections because they believe they are still in TIME_WAIT state.
**Steps to reproduce**
Unclear. But maybe a proxy is involved, multiplying the connection creation dilemma.
**Expected behavior**
Pulp should use HTTP/1.1 features when available.
**Additional context**
Creating millions of tcp connections may even be a performance bottleneck.
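For reference, the difference at the aiohttp level (illustration only, not the exact downloader factory code; the URL is a placeholder):

```python
import asyncio
import aiohttp


async def main():
    # What Pulp builds today: every request tears its TCP connection down again.
    strict = aiohttp.TCPConnector(force_close=True)
    await strict.close()

    # aiohttp's default: HTTP/1.1 keep-alive, connections pooled and reused.
    pooled = aiohttp.TCPConnector(limit=100)
    async with aiohttp.ClientSession(connector=pooled) as session:
        for url in ["https://example.org/some/file"] * 3:
            async with session.get(url) as resp:
                await resp.read()  # all three requests can share one connection


asyncio.run(main())
```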
| 2023-11-29T10:16:37 |
||
pulp/pulpcore | 4,794 | pulp__pulpcore-4794 | [
"4452"
] | 57227a892e61d329f69dab12e71eff8c1ae8b864 | diff --git a/pulpcore/download/factory.py b/pulpcore/download/factory.py
--- a/pulpcore/download/factory.py
+++ b/pulpcore/download/factory.py
@@ -48,10 +48,6 @@ class DownloaderFactory:
http://aiohttp.readthedocs.io/en/stable/client_quickstart.html#timeouts Behaviorally, it should
allow for an active download to be arbitrarily long, while still detecting dead or closed
sessions even when TCPKeepAlive is disabled.
-
- Also for http and https urls, even though HTTP 1.1 is used, the TCP connection is setup and
- closed with each request. This is done for compatibility reasons due to various issues related
- to session continuation implementation in various servers.
"""
def __init__(self, remote, downloader_overrides=None):
@@ -102,7 +98,7 @@ def _make_aiohttp_session_from_remote(self):
Returns:
:class:`aiohttp.ClientSession`
"""
- tcp_conn_opts = {"force_close": True}
+ tcp_conn_opts = {}
sslcontext = None
if self._remote.ca_cert:
@@ -133,17 +129,17 @@ def _make_aiohttp_session_from_remote(self):
headers["User-Agent"] = f"{headers['User-Agent']}, {user_agent_header}"
headers.extend(header_dict)
- conn = aiohttp.TCPConnector(**tcp_conn_opts)
- total = self._remote.total_timeout
- sock_connect = self._remote.sock_connect_timeout
- sock_read = self._remote.sock_read_timeout
- connect = self._remote.connect_timeout
-
timeout = aiohttp.ClientTimeout(
- total=total, sock_connect=sock_connect, sock_read=sock_read, connect=connect
+ total=self._remote.total_timeout,
+ sock_connect=self._remote.sock_connect_timeout,
+ sock_read=self._remote.sock_read_timeout,
+ connect=self._remote.connect_timeout,
)
return aiohttp.ClientSession(
- connector=conn, timeout=timeout, headers=headers, requote_redirect_url=False
+ connector=aiohttp.TCPConnector(**tcp_conn_opts),
+ timeout=timeout,
+ headers=headers,
+ requote_redirect_url=False,
)
def build(self, url, **kwargs):
diff --git a/pulpcore/download/http.py b/pulpcore/download/http.py
--- a/pulpcore/download/http.py
+++ b/pulpcore/download/http.py
@@ -71,10 +71,6 @@ class HttpDownloader(BaseDownloader):
allow for an active download to be arbitrarily long, while still detecting dead or closed
sessions even when TCPKeepAlive is disabled.
- If a session is not provided, the one created will force TCP connection closure after each
- request. This is done for compatibility reasons due to various issues related to session
- continuation implementation in various servers.
-
`aiohttp.ClientSession` objects allows you to configure options that will apply to all
downloaders using that session such as auth, timeouts, headers, etc. For more info on these
options see the `aiohttp.ClientSession` docs for more information:
@@ -165,7 +161,7 @@ def __init__(
self._close_session_on_finalize = False
else:
timeout = aiohttp.ClientTimeout(total=None, sock_connect=600, sock_read=600)
- conn = aiohttp.TCPConnector({"force_close": True})
+ conn = aiohttp.TCPConnector()
self.session = aiohttp.ClientSession(
connector=conn, timeout=timeout, headers=headers, requote_redirect_url=False
)
| Pulpcore prevents the use of keep-alive and may exhaust port numbers in large syncs
**Version**
Probably all.
**Describe the bug**
Pulp sets force_close on all downloaders and all aiohttp sessions. This prevents any keep-alive functionality and can lead to an exhaustion of TCP ports for large syncs. Add some suboptimal network configuration and you end up with servers closing connections because they believe they are still in TIME_WAIT state.
**Steps to reproduce**
Unclear. But maybe a proxy is involved, multiplying the connection creation dilemma.
**Expected behavior**
Pulp should use HTTP/1.1 features when available.
**Additional context**
Creating millions of tcp connections may even be a performance bottleneck.
| 2023-11-29T10:16:37 |
||
pulp/pulpcore | 4,796 | pulp__pulpcore-4796 | [
"4452"
] | dde5656d5854e353196b96b58c7385c2305a407a | diff --git a/pulpcore/download/factory.py b/pulpcore/download/factory.py
--- a/pulpcore/download/factory.py
+++ b/pulpcore/download/factory.py
@@ -48,10 +48,6 @@ class DownloaderFactory:
http://aiohttp.readthedocs.io/en/stable/client_quickstart.html#timeouts Behaviorally, it should
allow for an active download to be arbitrarily long, while still detecting dead or closed
sessions even when TCPKeepAlive is disabled.
-
- Also for http and https urls, even though HTTP 1.1 is used, the TCP connection is setup and
- closed with each request. This is done for compatibility reasons due to various issues related
- to session continuation implementation in various servers.
"""
def __init__(self, remote, downloader_overrides=None):
@@ -102,7 +98,7 @@ def _make_aiohttp_session_from_remote(self):
Returns:
:class:`aiohttp.ClientSession`
"""
- tcp_conn_opts = {"force_close": True}
+ tcp_conn_opts = {}
sslcontext = None
if self._remote.ca_cert:
@@ -133,17 +129,17 @@ def _make_aiohttp_session_from_remote(self):
headers["User-Agent"] = f"{headers['User-Agent']}, {user_agent_header}"
headers.extend(header_dict)
- conn = aiohttp.TCPConnector(**tcp_conn_opts)
- total = self._remote.total_timeout
- sock_connect = self._remote.sock_connect_timeout
- sock_read = self._remote.sock_read_timeout
- connect = self._remote.connect_timeout
-
timeout = aiohttp.ClientTimeout(
- total=total, sock_connect=sock_connect, sock_read=sock_read, connect=connect
+ total=self._remote.total_timeout,
+ sock_connect=self._remote.sock_connect_timeout,
+ sock_read=self._remote.sock_read_timeout,
+ connect=self._remote.connect_timeout,
)
return aiohttp.ClientSession(
- connector=conn, timeout=timeout, headers=headers, requote_redirect_url=False
+ connector=aiohttp.TCPConnector(**tcp_conn_opts),
+ timeout=timeout,
+ headers=headers,
+ requote_redirect_url=False,
)
def build(self, url, **kwargs):
diff --git a/pulpcore/download/http.py b/pulpcore/download/http.py
--- a/pulpcore/download/http.py
+++ b/pulpcore/download/http.py
@@ -71,10 +71,6 @@ class HttpDownloader(BaseDownloader):
allow for an active download to be arbitrarily long, while still detecting dead or closed
sessions even when TCPKeepAlive is disabled.
- If a session is not provided, the one created will force TCP connection closure after each
- request. This is done for compatibility reasons due to various issues related to session
- continuation implementation in various servers.
-
`aiohttp.ClientSession` objects allows you to configure options that will apply to all
downloaders using that session such as auth, timeouts, headers, etc. For more info on these
options see the `aiohttp.ClientSession` docs for more information:
@@ -165,7 +161,7 @@ def __init__(
self._close_session_on_finalize = False
else:
timeout = aiohttp.ClientTimeout(total=None, sock_connect=600, sock_read=600)
- conn = aiohttp.TCPConnector({"force_close": True})
+ conn = aiohttp.TCPConnector()
self.session = aiohttp.ClientSession(
connector=conn, timeout=timeout, headers=headers, requote_redirect_url=False
)
| Pulpcore prevents the use of keep-alive and may exhaust port numbers in large syncs
**Version**
Probably all.
**Describe the bug**
Pulp sets force_close on all downloaders and all aiohttp sessions. This prevents any keep-alive functionality and can lead to an exhaustion of TCP ports for large syncs. Add some suboptimal network configuration and you end up with servers closing connections because they believe they are still in TIME_WAIT state.
**Steps to reproduce**
Unclear. But maybe a proxy is involved, multiplying the connection creation dilemma.
**Expected behavior**
Pulp should use HTTP/1.1 features when available.
**Additional context**
Creating millions of tcp connections may even be a performance bottleneck.
| 2023-11-29T10:16:37 |
||
pulp/pulpcore | 4,798 | pulp__pulpcore-4798 | [
"4452"
] | 48d9567ec4a81bf81961de6e94ef89248f75b11c | diff --git a/pulpcore/download/factory.py b/pulpcore/download/factory.py
--- a/pulpcore/download/factory.py
+++ b/pulpcore/download/factory.py
@@ -48,10 +48,6 @@ class DownloaderFactory:
http://aiohttp.readthedocs.io/en/stable/client_quickstart.html#timeouts Behaviorally, it should
allow for an active download to be arbitrarily long, while still detecting dead or closed
sessions even when TCPKeepAlive is disabled.
-
- Also for http and https urls, even though HTTP 1.1 is used, the TCP connection is setup and
- closed with each request. This is done for compatibility reasons due to various issues related
- to session continuation implementation in various servers.
"""
def __init__(self, remote, downloader_overrides=None):
@@ -102,7 +98,7 @@ def _make_aiohttp_session_from_remote(self):
Returns:
:class:`aiohttp.ClientSession`
"""
- tcp_conn_opts = {"force_close": True}
+ tcp_conn_opts = {}
sslcontext = None
if self._remote.ca_cert:
@@ -133,17 +129,17 @@ def _make_aiohttp_session_from_remote(self):
headers["User-Agent"] = f"{headers['User-Agent']}, {user_agent_header}"
headers.extend(header_dict)
- conn = aiohttp.TCPConnector(**tcp_conn_opts)
- total = self._remote.total_timeout
- sock_connect = self._remote.sock_connect_timeout
- sock_read = self._remote.sock_read_timeout
- connect = self._remote.connect_timeout
-
timeout = aiohttp.ClientTimeout(
- total=total, sock_connect=sock_connect, sock_read=sock_read, connect=connect
+ total=self._remote.total_timeout,
+ sock_connect=self._remote.sock_connect_timeout,
+ sock_read=self._remote.sock_read_timeout,
+ connect=self._remote.connect_timeout,
)
return aiohttp.ClientSession(
- connector=conn, timeout=timeout, headers=headers, requote_redirect_url=False
+ connector=aiohttp.TCPConnector(**tcp_conn_opts),
+ timeout=timeout,
+ headers=headers,
+ requote_redirect_url=False,
)
def build(self, url, **kwargs):
diff --git a/pulpcore/download/http.py b/pulpcore/download/http.py
--- a/pulpcore/download/http.py
+++ b/pulpcore/download/http.py
@@ -71,10 +71,6 @@ class HttpDownloader(BaseDownloader):
allow for an active download to be arbitrarily long, while still detecting dead or closed
sessions even when TCPKeepAlive is disabled.
- If a session is not provided, the one created will force TCP connection closure after each
- request. This is done for compatibility reasons due to various issues related to session
- continuation implementation in various servers.
-
`aiohttp.ClientSession` objects allows you to configure options that will apply to all
downloaders using that session such as auth, timeouts, headers, etc. For more info on these
options see the `aiohttp.ClientSession` docs for more information:
@@ -165,7 +161,7 @@ def __init__(
self._close_session_on_finalize = False
else:
timeout = aiohttp.ClientTimeout(total=None, sock_connect=600, sock_read=600)
- conn = aiohttp.TCPConnector({"force_close": True})
+ conn = aiohttp.TCPConnector()
self.session = aiohttp.ClientSession(
connector=conn, timeout=timeout, headers=headers, requote_redirect_url=False
)
| Pulpcore prevents the use of keep-alive and may exhaust port numbers in large syncs
**Version**
Probably all.
**Describe the bug**
Pulp set's force_close on all downloaders and all aiohttp sessions. This prevents any keep-alive functionality and can lead to an exhaustion of tcp ports for large syncs. Some suboptimal network configuration and you end up with servers closing connections, because they believe they are still in time_wait state.
**Steps to reproduce**
Unclear. But maybe a proxy is involved, multiplying the connection creation dilemma.
**Expected behavior**
Pulp should use HTTP/1.1 features when available.
**Additional context**
Creating millions of tcp connections may even be a performance bottleneck.
| 2023-11-29T10:16:39 |
||
pulp/pulpcore | 4,836 | pulp__pulpcore-4836 | [
"4835"
] | 9e542d51c9a2b5522ec77e5869ced7f77971ef03 | diff --git a/pulpcore/app/migrations/0116_alter_remoteartifact_md5_alter_remoteartifact_sha1_and_more.py b/pulpcore/app/migrations/0116_alter_remoteartifact_md5_alter_remoteartifact_sha1_and_more.py
new file mode 100644
--- /dev/null
+++ b/pulpcore/app/migrations/0116_alter_remoteartifact_md5_alter_remoteartifact_sha1_and_more.py
@@ -0,0 +1,43 @@
+# Generated by Django 4.2.7 on 2023-12-04 15:58
+
+from django.db import migrations, models
+
+
+class Migration(migrations.Migration):
+
+ dependencies = [
+ ('core', '0115_compositecontentguard'),
+ ]
+
+ operations = [
+ migrations.AlterField(
+ model_name='remoteartifact',
+ name='md5',
+ field=models.CharField(db_index=True, max_length=32, null=True),
+ ),
+ migrations.AlterField(
+ model_name='remoteartifact',
+ name='sha1',
+ field=models.CharField(db_index=True, max_length=40, null=True),
+ ),
+ migrations.AlterField(
+ model_name='remoteartifact',
+ name='sha224',
+ field=models.CharField(db_index=True, max_length=56, null=True),
+ ),
+ migrations.AlterField(
+ model_name='remoteartifact',
+ name='sha256',
+ field=models.CharField(db_index=True, max_length=64, null=True),
+ ),
+ migrations.AlterField(
+ model_name='remoteartifact',
+ name='sha384',
+ field=models.CharField(db_index=True, max_length=96, null=True),
+ ),
+ migrations.AlterField(
+ model_name='remoteartifact',
+ name='sha512',
+ field=models.CharField(db_index=True, max_length=128, null=True),
+ ),
+ ]
diff --git a/pulpcore/app/models/content.py b/pulpcore/app/models/content.py
--- a/pulpcore/app/models/content.py
+++ b/pulpcore/app/models/content.py
@@ -713,12 +713,12 @@ class RemoteArtifact(BaseModel, QueryMixin):
url = models.TextField(validators=[validators.URLValidator])
size = models.BigIntegerField(null=True)
- md5 = models.CharField(max_length=32, null=True)
- sha1 = models.CharField(max_length=40, null=True)
- sha224 = models.CharField(max_length=56, null=True)
- sha256 = models.CharField(max_length=64, null=True)
- sha384 = models.CharField(max_length=96, null=True)
- sha512 = models.CharField(max_length=128, null=True)
+ md5 = models.CharField(max_length=32, null=True, db_index=True)
+ sha1 = models.CharField(max_length=40, null=True, db_index=True)
+ sha224 = models.CharField(max_length=56, null=True, db_index=True)
+ sha256 = models.CharField(max_length=64, null=True, db_index=True)
+ sha384 = models.CharField(max_length=96, null=True, db_index=True)
+ sha512 = models.CharField(max_length=128, null=True, db_index=True)
content_artifact = models.ForeignKey(ContentArtifact, on_delete=models.CASCADE)
remote = models.ForeignKey("Remote", on_delete=models.CASCADE)
| Missing database indices for the checksum fields in `RemoteArtifact`
**Version**
pulpcore 3.21+
**Describe the bug**
The checksum fields in the `RemoteArtifact` model are missing a `db_index` compared to the `Artifact` model.
**Expected behavior**
To improve query performance, it would be great to have an index here as well.
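For context, the queries this helps are plain equality filters on the digest columns, e.g. (hypothetical value):

```python
from pulpcore.app.models import RemoteArtifact

# Without db_index=True this kind of lookup has to scan the whole RemoteArtifact
# table, which gets slow once it holds millions of rows.
RemoteArtifact.objects.filter(sha256="0" * 64)
```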
| 2023-12-04T16:02:12 |
||
pulp/pulpcore | 4,847 | pulp__pulpcore-4847 | [
"4845"
] | 6ad7976d03b8674f9f6446e6ff7ee5899fb5c427 | diff --git a/pulpcore/cache/cache.py b/pulpcore/cache/cache.py
--- a/pulpcore/cache/cache.py
+++ b/pulpcore/cache/cache.py
@@ -1,5 +1,6 @@
import enum
import json
+import time
from functools import wraps
@@ -185,7 +186,10 @@ def make_response(self, key, base_key):
return None
entry = json.loads(entry)
response_type = entry.pop("type", None)
- if not response_type or response_type not in self.RESPONSE_TYPES:
+ expires = entry.pop("expires", None)
+ if (not response_type or response_type not in self.RESPONSE_TYPES) or (
+ expires and expires < time.time()
+ ):
# Bad entry, delete from cache
self.delete(key, base_key)
return None
@@ -198,6 +202,9 @@ def make_entry(self, key, base_key, handler, args, kwargs, expires=DEFAULT_EXPIR
"""Gets the response for the request and try to turn it into a cacheable entry"""
response = handler(*args, **kwargs)
entry = {"headers": dict(response.headers), "status": response.status_code}
+ if expires:
+ # Redis TTL is not sufficient: https://github.com/pulp/pulpcore/issues/4845
+ entry["expires"] = expires + time.time()
response.headers["X-PULP-CACHE"] = "MISS"
if isinstance(response, HttpResponseRedirect):
entry["redirect_to"] = str(response.headers["Location"])
@@ -366,7 +373,10 @@ async def make_response(self, key, base_key):
entry["body"] = bytes.fromhex(binary)
response_type = entry.pop("type", None)
- if not response_type or response_type not in self.RESPONSE_TYPES:
+ expires = entry.pop("expires", None)
+ if (not response_type or response_type not in self.RESPONSE_TYPES) or (
+ expires and expires < time.time()
+ ):
# Bad entry, delete from cache
self.delete(key, base_key)
return None
@@ -382,6 +392,9 @@ async def make_entry(self, key, base_key, handler, args, kwargs, expires=DEFAULT
response = e
entry = {"headers": dict(response.headers), "status": response.status}
+ if expires:
+ # Redis TTL is not sufficient: https://github.com/pulp/pulpcore/issues/4845
+ entry["expires"] = expires + time.time()
response.headers.update({"X-PULP-CACHE": "MISS"})
if isinstance(response, FileResponse):
entry["path"] = str(response._path)
| Intermittent 403s when using Redis + Azure Storage backend
**Version**
`pulpcore 3.40.4`
**The bug**
Cloud storage options often have a timeframe that a given signed URL is valid for. Let's suppose that I have set [AZURE_URL_EXPIRATION_SECS](https://django-storages.readthedocs.io/en/latest/backends/azure.html#settings) = 3600 (an hour), which I have.
Pulp's Redis cache has a [default TTL](https://docs.pulpproject.org/pulpcore/configuration/settings.html#cache-settings) of 600 seconds (ten minutes). Let's suppose that I have not changed that, which is true.
A naive developer (me) would assume that since ten minutes is much smaller than an hour there will be no issue with enabling the Redis cache and that users would never get redirected to an expired Azure Storage URL. This is false. The issue is with how pulp-content interacts with Redis.
Pulp-content caches responses (which in this case are always redirect urls to Azure Storage) in hashes with `HSET`. The hash's overall key will be related to the _whole repo_ (or more accurately the distribution, but whatever), something like `yumrepos/sherr-test`. Then individual url/response pairs will get set as key/values of that hash, where an example key might be `/yumrepos/sherr-test/repodata/repomd.xml:GET`. I assume this is so that it's easy to wipe out all data related to a repo when a new version gets published.
The Redis TTL (10 minutes) gets set on that distribution-level hash object. So one would assume that all cached responses related to that distribution will get wiped every 10 minutes, even if some of them are much younger than that. **However**, pulp-content [resets the TTL for the hash every time it caches a new response](https://github.com/pulp/pulpcore/blob/main/pulpcore/cache/cache.py#L88).
This means that as long as users are requesting _some new file_ from the repo before each reset 10 minute TTL is reached, this hash object will persist in Redis _indefinitely_. And after an hour those earlier-cached Azure Storage URLs are in fact expired, and users requesting those files will get 403s.
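A toy redis-py illustration of that refresh behavior (key names borrowed from the example above):

```python
import redis

r = redis.Redis()
repo_key = "yumrepos/sherr-test"

# First file is requested; its redirect response is cached and the TTL is set.
r.hset(repo_key, "/yumrepos/sherr-test/repodata/repomd.xml:GET", "302 -> signed URL A")
r.expire(repo_key, 600)

# ~9 minutes later a *different* file is requested and cached ...
r.hset(repo_key, "/yumrepos/sherr-test/Packages/foo.rpm:GET", "302 -> signed URL B")
r.expire(repo_key, 600)  # ... which resets the TTL for the *whole* hash

print(r.ttl(repo_key))  # back to ~600; the older entry never ages out on its own
```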
[You cannot set](https://stackoverflow.com/questions/16545321/how-to-expire-the-hset-child-key-in-redis) a TTL on individual child keys of hashes in Redis. So it seems to me that the options for fixing this are:
1. Don't use hashes, and `SET` a top-level key in Redis for every url/response. This makes invalidating the cache after a publication harder.
2. Attempt to only set the TTL on the hash one time, when it is first created. This would set the behavior back to "all info for the whole repo is wiped every 10 minutes, even if some responses are much younger", which was probably the original intention.
3. Store an additional field on each child key that pulp-content can evaluate to see if _this particular_ response is past the Redis TTL, and refresh it if so.
I don't have any strong opinions on which of those is the correct thing to do, but the current behavior of always resetting the TTL is definitely wrong.
**Additional context**
I am aware that this sounds very similar to the issue @vonsch reported in #3077. However his debugging sounds like it _might actually_ be a different issue, with `django_storages` returning about-to-expire S3 urls to a correctly-behaving `redis` cache, while in this case it's actually entirely Redis' fault, or rather pulp-content's usage of it. If they come back and say that it is in fact the same issue then I have no problem with merging this issue into that one.
| Good find! I would go with the third option. Setting the additional field in the cache entry here (https://github.com/pulp/pulpcore/blob/main/pulpcore/cache/cache.py#L200) and adding an extra check here (https://github.com/pulp/pulpcore/blob/main/pulpcore/cache/cache.py#L187-L188) should handle this. The same change will have to be mirrored across the `SyncContentCache` and `AsyncContentCache`. | 2023-12-06T19:38:34 |
|
pulp/pulpcore | 4,851 | pulp__pulpcore-4851 | [
"4751"
] | 5f6fe99851eec0fd175be01d832ce2d6aeb0119e | diff --git a/pulp_file/pytest_plugin.py b/pulp_file/pytest_plugin.py
--- a/pulp_file/pytest_plugin.py
+++ b/pulp_file/pytest_plugin.py
@@ -1,69 +1,77 @@
import os
import uuid
import subprocess
+import warnings
from collections import defaultdict
from pathlib import Path
import aiofiles
import pytest
from aiohttp import web
-from pulpcore.client.pulp_file import (
- AcsFileApi,
- ApiClient,
- ContentFilesApi,
- DistributionsFileApi,
- PublicationsFileApi,
- RemotesFileApi,
- RepositoriesFileApi,
- RepositoriesFileVersionsApi,
-)
-
-from pulpcore.tests.functional.utils import generate_iso, generate_manifest
+
+from pulpcore.tests.functional.utils import BindingsNamespace, generate_iso, generate_manifest
# Api Bindings fixtures
@pytest.fixture(scope="session")
-def file_client(_api_client_set, bindings_cfg):
- api_client = ApiClient(bindings_cfg)
- _api_client_set.add(api_client)
- yield api_client
- _api_client_set.remove(api_client)
+def file_bindings(_api_client_set, bindings_cfg):
+ """
+ A namespace providing preconfigured pulp_file api clients.
+
+ e.g. `file_bindings.RepositoriesFileApi.list()`.
+ """
+ from pulpcore.client import pulp_file as file_bindings_module
+
+ file_client = file_bindings_module.ApiClient(bindings_cfg)
+ _api_client_set.add(file_client)
+ yield BindingsNamespace(file_bindings_module, file_client)
+ _api_client_set.remove(file_client)
+
+
+# TODO Deprecate all the api_client fixtures below.
@pytest.fixture(scope="session")
-def file_acs_api_client(file_client):
- return AcsFileApi(file_client)
+def file_acs_api_client(file_bindings):
+ warnings.warn("This fixture is deprecated. Use `file_bindings` instead.", DeprecationWarning)
+ return file_bindings.AcsFileApi
@pytest.fixture(scope="session")
-def file_content_api_client(file_client):
- return ContentFilesApi(file_client)
+def file_content_api_client(file_bindings):
+ warnings.warn("This fixture is deprecated. Use `file_bindings` instead.", DeprecationWarning)
+ return file_bindings.ContentFilesApi
@pytest.fixture(scope="session")
-def file_distribution_api_client(file_client):
- return DistributionsFileApi(file_client)
+def file_distribution_api_client(file_bindings):
+ warnings.warn("This fixture is deprecated. Use `file_bindings` instead.", DeprecationWarning)
+ return file_bindings.DistributionsFileApi
@pytest.fixture(scope="session")
-def file_publication_api_client(file_client):
- return PublicationsFileApi(file_client)
+def file_publication_api_client(file_bindings):
+ warnings.warn("This fixture is deprecated. Use `file_bindings` instead.", DeprecationWarning)
+ return file_bindings.PublicationsFileApi
@pytest.fixture(scope="session")
-def file_repository_api_client(file_client):
- return RepositoriesFileApi(file_client)
+def file_repository_api_client(file_bindings):
+ warnings.warn("This fixture is deprecated. Use `file_bindings` instead.", DeprecationWarning)
+ return file_bindings.RepositoriesFileApi
@pytest.fixture(scope="session")
-def file_repository_version_api_client(file_client):
- return RepositoriesFileVersionsApi(file_client)
+def file_repository_version_api_client(file_bindings):
+ warnings.warn("This fixture is deprecated. Use `file_bindings` instead.", DeprecationWarning)
+ return file_bindings.RepositoriesFileVersionsApi
@pytest.fixture(scope="session")
-def file_remote_api_client(file_client):
- return RemotesFileApi(file_client)
+def file_remote_api_client(file_bindings):
+ warnings.warn("This fixture is deprecated. Use `file_bindings` instead.", DeprecationWarning)
+ return file_bindings.RemotesFileApi
# Factory fixtures
@@ -75,11 +83,13 @@ def file_random_content_unit(file_content_unit_with_name_factory):
@pytest.fixture
-def file_content_unit_with_name_factory(file_content_api_client, random_artifact, monitor_task):
+def file_content_unit_with_name_factory(file_bindings, random_artifact, monitor_task):
def _file_content_unit_with_name_factory(name):
artifact_attrs = {"artifact": random_artifact.pulp_href, "relative_path": name}
- return file_content_api_client.read(
- monitor_task(file_content_api_client.create(**artifact_attrs).task).created_resources[0]
+ return file_bindings.ContentFilesApi.read(
+ monitor_task(
+ file_bindings.ContentFilesApi.create(**artifact_attrs).task
+ ).created_resources[0]
)
return _file_content_unit_with_name_factory
| diff --git a/pulpcore/tests/functional/__init__.py b/pulpcore/tests/functional/__init__.py
--- a/pulpcore/tests/functional/__init__.py
+++ b/pulpcore/tests/functional/__init__.py
@@ -10,8 +10,6 @@
import uuid
import warnings
-import trustme
-import proxy
import pytest
from aiohttp import web
@@ -20,60 +18,16 @@
from packaging.version import parse as parse_version
from time import sleep
from yarl import URL
-from opentelemetry.proto.trace.v1.trace_pb2 import TracesData
from pulpcore.tests.functional.utils import (
SLEEP_TIME,
TASK_TIMEOUT,
+ BindingsNamespace,
PulpTaskError,
PulpTaskGroupError,
add_recording_route,
)
-from pulpcore.client.pulpcore import (
- Configuration,
- AccessPoliciesApi,
- ApiClient,
- ApiException,
- ArtifactsApi,
- ContentApi,
- ContentguardsApi,
- ContentguardsRbacApi,
- ContentguardsCompositeApi,
- ContentguardsContentRedirectApi,
- ContentguardsHeaderApi,
- DomainsApi,
- DistributionsApi,
- ExportersPulpApi,
- ExportersPulpExportsApi,
- ExportersFilesystemApi,
- ExportersFilesystemExportsApi,
- GroupsApi,
- GroupsRolesApi,
- GroupsUsersApi,
- ImportersPulpApi,
- ImportersPulpImportsApi,
- ImportersPulpImportCheckApi,
- OrphansCleanupApi,
- PublicationsApi,
- RemotesApi,
- RepairApi,
- RepositoriesApi,
- RepositoryVersionsApi,
- RepositoriesReclaimSpaceApi,
- RolesApi,
- SigningServicesApi,
- StatusApi,
- TaskGroupsApi,
- TasksApi,
- TaskSchedulesApi,
- UploadsApi,
- UpstreamPulpsApi,
- UsersApi,
- UsersRolesApi,
- WorkersApi,
-)
-
from .gpg_ascii_armor_signing_service import (
_ascii_armored_detached_signing_service_name,
ascii_armored_detached_signing_service,
@@ -95,19 +49,6 @@ def __init__(self, awaitable):
self.awaitable = awaitable
-def get_bindings_config():
- api_protocol = os.environ.get("API_PROTOCOL", "https")
- api_host = os.environ.get("API_HOST", "pulp")
- api_port = os.environ.get("API_PORT", "443")
- configuration = Configuration(
- host=f"{api_protocol}://{api_host}:{api_port}",
- username=os.environ.get("ADMIN_USERNAME", "admin"),
- password=os.environ.get("ADMIN_PASSWORD", "password"),
- )
- configuration.safe_chars_for_path_param = "/"
- return configuration
-
-
def pytest_configure(config):
config.addinivalue_line(
"markers",
@@ -139,12 +80,23 @@ class FixturesConfig:
return FixturesConfig()
-# API Clients
+# API Bindings fixtures
@pytest.fixture(scope="session")
def bindings_cfg():
- return get_bindings_config()
+ from pulpcore.client.pulpcore import Configuration
+
+ api_protocol = os.environ.get("API_PROTOCOL", "https")
+ api_host = os.environ.get("API_HOST", "pulp")
+ api_port = os.environ.get("API_PORT", "443")
+ configuration = Configuration(
+ host=f"{api_protocol}://{api_host}:{api_port}",
+ username=os.environ.get("ADMIN_USERNAME", "admin"),
+ password=os.environ.get("ADMIN_PASSWORD", "password"),
+ )
+ configuration.safe_chars_for_path_param = "/"
+ return configuration
@pytest.fixture(scope="session")
@@ -169,202 +121,334 @@ def _patch_cid_user_agent(_api_client_set, cid, monkeypatch):
@pytest.fixture(scope="session")
-def pulpcore_client(_api_client_set, bindings_cfg):
- api_client = ApiClient(bindings_cfg)
- _api_client_set.add(api_client)
- yield api_client
- _api_client_set.remove(api_client)
+def pulpcore_bindings(_api_client_set, bindings_cfg):
+ """
+ A namespace providing preconfigured pulpcore api clients.
+
+ e.g. `pulpcore_bindings.WorkersApi.list()`.
+ """
+ from pulpcore.client import pulpcore as pulpcore_bindings_module
+
+ pulpcore_client = pulpcore_bindings_module.ApiClient(bindings_cfg)
+ _api_client_set.add(pulpcore_client)
+ yield BindingsNamespace(pulpcore_bindings_module, pulpcore_client)
+ _api_client_set.remove(pulpcore_client)
+
+
+# TODO Deprecate all the api_client fixtures below.
@pytest.fixture(scope="session")
-def access_policies_api_client(pulpcore_client):
- return AccessPoliciesApi(pulpcore_client)
+def pulpcore_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings.client` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.client
@pytest.fixture(scope="session")
-def tasks_api_client(pulpcore_client):
- return TasksApi(pulpcore_client)
+def access_policies_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.AccessPoliciesApi
@pytest.fixture(scope="session")
-def task_groups_api_client(pulpcore_client):
- return TaskGroupsApi(pulpcore_client)
+def tasks_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.TasksApi
@pytest.fixture(scope="session")
-def workers_api_client(pulpcore_client):
- return WorkersApi(pulpcore_client)
+def task_groups_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.TaskGroupsApi
@pytest.fixture(scope="session")
-def artifacts_api_client(pulpcore_client):
- return ArtifactsApi(pulpcore_client)
+def workers_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.WorkersApi
@pytest.fixture(scope="session")
-def uploads_api_client(pulpcore_client):
- return UploadsApi(pulpcore_client)
+def artifacts_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.ArtifactsApi
@pytest.fixture(scope="session")
-def task_schedules_api_client(pulpcore_client):
- return TaskSchedulesApi(pulpcore_client)
+def uploads_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.UploadsApi
@pytest.fixture(scope="session")
-def status_api_client(pulpcore_client):
- return StatusApi(pulpcore_client)
+def task_schedules_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.TaskSchedulesApi
@pytest.fixture(scope="session")
-def groups_api_client(pulpcore_client):
- return GroupsApi(pulpcore_client)
+def status_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.StatusApi
@pytest.fixture(scope="session")
-def groups_users_api_client(pulpcore_client):
- return GroupsUsersApi(pulpcore_client)
+def groups_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.GroupsApi
@pytest.fixture(scope="session")
-def groups_roles_api_client(pulpcore_client):
- return GroupsRolesApi(pulpcore_client)
+def groups_users_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.GroupsUsersApi
+
+
+@pytest.fixture(scope="session")

+def groups_roles_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.GroupsRolesApi
@pytest.fixture(scope="session")
-def users_api_client(pulpcore_client):
- return UsersApi(pulpcore_client)
+def users_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.UsersApi
@pytest.fixture(scope="session")
-def users_roles_api_client(pulpcore_client):
- return UsersRolesApi(pulpcore_client)
+def users_roles_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.UsersRolesApi
@pytest.fixture(scope="session")
-def roles_api_client(pulpcore_client):
+def roles_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
"Provies the pulp core Roles API client object."
- return RolesApi(pulpcore_client)
+ return pulpcore_bindings.RolesApi
@pytest.fixture(scope="session")
-def content_api_client(pulpcore_client):
- return ContentApi(pulpcore_client)
+def content_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.ContentApi
@pytest.fixture(scope="session")
-def domains_api_client(pulpcore_client):
- return DomainsApi(pulpcore_client)
+def domains_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.DomainsApi
@pytest.fixture(scope="session")
-def distributions_api_client(pulpcore_client):
- return DistributionsApi(pulpcore_client)
+def distributions_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.DistributionsApi
@pytest.fixture(scope="session")
-def remotes_api_client(pulpcore_client):
- return RemotesApi(pulpcore_client)
+def remotes_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.RemotesApi
@pytest.fixture(scope="session")
-def repositories_api_client(pulpcore_client):
- return RepositoriesApi(pulpcore_client)
+def repositories_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.RepositoriesApi
@pytest.fixture(scope="session")
-def repository_versions_api_client(pulpcore_client):
- return RepositoryVersionsApi(pulpcore_client)
+def repository_versions_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.RepositoryVersionsApi
@pytest.fixture(scope="session")
-def publications_api_client(pulpcore_client):
- return PublicationsApi(pulpcore_client)
+def publications_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.PublicationsApi
@pytest.fixture(scope="session")
-def exporters_pulp_api_client(pulpcore_client):
- return ExportersPulpApi(pulpcore_client)
+def exporters_pulp_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.ExportersPulpApi
@pytest.fixture(scope="session")
-def exporters_pulp_exports_api_client(pulpcore_client):
- return ExportersPulpExportsApi(pulpcore_client)
+def exporters_pulp_exports_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.ExportersPulpExportsApi
@pytest.fixture(scope="session")
-def exporters_filesystem_api_client(pulpcore_client):
- return ExportersFilesystemApi(pulpcore_client)
+def exporters_filesystem_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.ExportersFilesystemApi
@pytest.fixture(scope="session")
-def exporters_filesystem_exports_api_client(pulpcore_client):
- return ExportersFilesystemExportsApi(pulpcore_client)
+def exporters_filesystem_exports_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.ExportersFilesystemExportsApi
@pytest.fixture(scope="session")
-def importers_pulp_api_client(pulpcore_client):
- return ImportersPulpApi(pulpcore_client)
+def importers_pulp_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.ImportersPulpApi
@pytest.fixture(scope="session")
-def importers_pulp_imports_api_client(pulpcore_client):
- return ImportersPulpImportsApi(pulpcore_client)
+def importers_pulp_imports_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.ImportersPulpImportsApi
@pytest.fixture(scope="session")
-def importers_pulp_imports_check_api_client(pulpcore_client):
- return ImportersPulpImportCheckApi(pulpcore_client)
+def importers_pulp_imports_check_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.ImportersPulpImportCheckApi
@pytest.fixture(scope="session")
-def signing_service_api_client(pulpcore_client):
- return SigningServicesApi(pulpcore_client)
+def signing_service_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.SigningServicesApi
@pytest.fixture(scope="session")
-def content_guards_api_client(pulpcore_client):
- return ContentguardsApi(pulpcore_client)
+def content_guards_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.ContentguardsApi
@pytest.fixture(scope="session")
-def rbac_contentguard_api_client(pulpcore_client):
- return ContentguardsRbacApi(pulpcore_client)
+def rbac_contentguard_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.ContentguardsRbacApi
@pytest.fixture(scope="session")
-def redirect_contentguard_api_client(pulpcore_client):
- return ContentguardsContentRedirectApi(pulpcore_client)
+def redirect_contentguard_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.ContentguardsContentRedirectApi
@pytest.fixture(scope="session")
-def header_contentguard_api_client(pulpcore_client):
- return ContentguardsHeaderApi(pulpcore_client)
+def header_contentguard_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.ContentguardsHeaderApi
@pytest.fixture(scope="session")
-def composite_contentguard_api_client(pulpcore_client):
- return ContentguardsCompositeApi(pulpcore_client)
+def composite_contentguard_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.ContentguardsCompositeApi
@pytest.fixture(scope="session")
-def orphans_cleanup_api_client(pulpcore_client):
- return OrphansCleanupApi(pulpcore_client)
+def orphans_cleanup_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.OrphansCleanupApi
@pytest.fixture(scope="session")
-def repositories_reclaim_space_api_client(pulpcore_client):
- return RepositoriesReclaimSpaceApi(pulpcore_client)
+def repositories_reclaim_space_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.RepositoriesReclaimSpaceApi
@pytest.fixture(scope="session")
-def repair_api_client(pulpcore_client):
- return RepairApi(pulpcore_client)
+def repair_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.RepairApi
@pytest.fixture(scope="session")
-def upstream_pulp_api_client(pulpcore_client):
- return UpstreamPulpsApi(pulpcore_client)
+def upstream_pulp_api_client(pulpcore_bindings):
+ warnings.warn(
+ "This fixture is deprecated. Use `pulpcore_bindings` instead.", DeprecationWarning
+ )
+ return pulpcore_bindings.UpstreamPulpsApi
# Threaded local fixture servers
@@ -514,8 +598,15 @@ def _gen_fixture_server(fixtures_root, ssl_ctx):
# Proxy Fixtures
+@pytest.fixture(scope="session")
+def _proxy_module():
+ import proxy
+
+ return proxy
+
+
@pytest.fixture
-def http_proxy(fixtures_cfg, unused_port):
+def http_proxy(_proxy_module, fixtures_cfg, unused_port):
host = fixtures_cfg.aiohttp_fixtures_origin
port = unused_port()
proxypy_args = [
@@ -529,12 +620,12 @@ def http_proxy(fixtures_cfg, unused_port):
proxy_data = ProxyData(host=host, port=port)
- with proxy.Proxy(input_args=proxypy_args):
+ with _proxy_module.Proxy(input_args=proxypy_args):
yield proxy_data
@pytest.fixture
-def http_proxy_with_auth(fixtures_cfg, unused_port):
+def http_proxy_with_auth(_proxy_module, fixtures_cfg, unused_port):
host = fixtures_cfg.aiohttp_fixtures_origin
port = unused_port()
@@ -554,12 +645,12 @@ def http_proxy_with_auth(fixtures_cfg, unused_port):
proxy_data = ProxyData(host=host, port=port, username=username, password=password)
- with proxy.Proxy(input_args=proxypy_args):
+ with _proxy_module.Proxy(input_args=proxypy_args):
yield proxy_data
@pytest.fixture
-def https_proxy(fixtures_cfg, unused_port, proxy_tls_certificate_pem_path):
+def https_proxy(_proxy_module, fixtures_cfg, unused_port, proxy_tls_certificate_pem_path):
host = fixtures_cfg.aiohttp_fixtures_origin
port = unused_port()
@@ -578,7 +669,7 @@ def https_proxy(fixtures_cfg, unused_port, proxy_tls_certificate_pem_path):
proxy_data = ProxyData(host=host, port=port, ssl=True) # TODO update me
- with proxy.Proxy(input_args=proxypy_args):
+ with _proxy_module.Proxy(input_args=proxypy_args):
yield proxy_data
@@ -610,8 +701,15 @@ def __init__(self, *, host, port, username=None, password=None, ssl=False):
@pytest.fixture(scope="session")
-def tls_certificate_authority():
- return trustme.CA()
+def _trustme_module():
+ import trustme
+
+ return trustme
+
+
+@pytest.fixture(scope="session")
+def tls_certificate_authority(_trustme_module):
+ return _trustme_module.CA()
@pytest.fixture
@@ -630,8 +728,8 @@ def tls_certificate(fixtures_cfg, tls_certificate_authority):
@pytest.fixture(scope="session")
-def proxy_tls_certificate_authority():
- return trustme.CA()
+def proxy_tls_certificate_authority(_trustme_module):
+ return _trustme_module.CA()
@pytest.fixture
@@ -651,8 +749,8 @@ def proxy_tls_certificate_pem_path(proxy_tls_certificate):
@pytest.fixture(scope="session")
-def client_tls_certificate_authority():
- return trustme.CA()
+def client_tls_certificate_authority(_trustme_module):
+ return _trustme_module.CA()
@pytest.fixture
@@ -839,7 +937,7 @@ def delete_orphans_pre(request, orphans_cleanup_api_client, monitor_task):
@pytest.fixture(scope="session")
-def monitor_task(tasks_api_client, pulp_domain_enabled):
+def monitor_task(pulpcore_bindings, pulp_domain_enabled):
"""
Wait for a task to reach a final state.
@@ -852,8 +950,8 @@ def _monitor_task(task_href, timeout=TASK_TIMEOUT):
task_timeout = int(timeout / SLEEP_TIME)
for dummy in range(task_timeout):
try:
- task = tasks_api_client.read(task_href)
- except ApiException as e:
+ task = pulpcore_bindings.TasksApi.read(task_href)
+ except pulpcore_bindings.ApiException as e:
if pulp_domain_enabled and e.status == 404:
# Task's domain has been deleted, nothing to show anymore
return {}
@@ -874,7 +972,7 @@ def _monitor_task(task_href, timeout=TASK_TIMEOUT):
@pytest.fixture(scope="session")
-def monitor_task_group(task_groups_api_client):
+def monitor_task_group(pulpcore_bindings):
"""
Wait for a task group to reach a final state.
@@ -886,7 +984,7 @@ def monitor_task_group(task_groups_api_client):
def _monitor_task_group(task_group_href, timeout=TASK_TIMEOUT):
task_timeout = int(timeout / SLEEP_TIME)
for dummy in range(task_timeout):
- task_group = task_groups_api_client.read(task_group_href)
+ task_group = pulpcore_bindings.TaskGroupsApi.read(task_group_href)
if (task_group.waiting + task_group.running + task_group.canceling) == 0:
break
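The `_proxy_module` and `_trustme_module` fixtures introduced above defer their imports from module scope into session fixtures, presumably so that a missing optional package only affects the tests that actually request a proxy or a TLS certificate authority rather than breaking collection of the whole suite. A minimal sketch of the same idea, using `pytest.importorskip` as one possible variation:

    import pytest

    @pytest.fixture(scope="session")
    def _trustme_module_sketch():
        # Hypothetical fixture name; mirrors the shape of _trustme_module
        # above, but skips dependent tests instead of erroring if the
        # optional trustme package is absent.
        return pytest.importorskip("trustme")

    def test_certificate_authority_sketch(_trustme_module_sketch):
        ca = _trustme_module_sketch.CA()
        assert ca.cert_pem.bytes().startswith(b"-----BEGIN CERTIFICATE-----")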
diff --git a/pulpcore/tests/functional/api/pulp_file/test_acs.py b/pulpcore/tests/functional/api/pulp_file/test_acs.py
--- a/pulpcore/tests/functional/api/pulp_file/test_acs.py
+++ b/pulpcore/tests/functional/api/pulp_file/test_acs.py
@@ -91,7 +91,7 @@ def test_acs_validation_and_update(
@pytest.mark.parallel
def test_acs_sync(
file_repo,
- file_repository_api_client,
+ file_bindings,
file_acs_api_client,
basic_manifest_path,
gen_object_with_cleanup,
@@ -122,7 +122,9 @@ def test_acs_sync(
# Sync the repository
repository_sync_data = RepositorySyncURL(remote=main_remote.pulp_href)
- monitor_task(file_repository_api_client.sync(file_repo.pulp_href, repository_sync_data).task)
+ monitor_task(
+ file_bindings.RepositoriesFileApi.sync(file_repo.pulp_href, repository_sync_data).task
+ )
# Assert that only the PULP_MANIFEST was downloaded from the main remote
assert len(main_server.requests_record) == 1
@@ -143,7 +145,7 @@ def test_acs_sync(
@pytest.mark.parallel
def test_acs_sync_with_paths(
file_repo,
- file_repository_api_client,
+ file_bindings,
file_acs_api_client,
basic_manifest_path,
large_manifest_path,
@@ -180,7 +182,9 @@ def test_acs_sync_with_paths(
# Sync the repository
repository_sync_data = RepositorySyncURL(remote=main_remote.pulp_href)
- monitor_task(file_repository_api_client.sync(file_repo.pulp_href, repository_sync_data).task)
+ monitor_task(
+ file_bindings.RepositoriesFileApi.sync(file_repo.pulp_href, repository_sync_data).task
+ )
# Assert that only the PULP_MANIFEST was downloaded from the main remote
assert len(main_server.requests_record) == 1
@@ -202,7 +206,7 @@ def test_acs_sync_with_paths(
@pytest.mark.parallel
def test_serving_acs_content(
file_repo,
- file_repository_api_client,
+ file_bindings,
file_acs_api_client,
file_distribution_factory,
basic_manifest_path,
@@ -235,14 +239,16 @@ def test_serving_acs_content(
# Turn on auto-publish on the repository
monitor_task(
- file_repository_api_client.partial_update(
+ file_bindings.RepositoriesFileApi.partial_update(
file_repo.pulp_href, {"autopublish": True, "remote": main_remote.pulp_href}
).task
)
# Sync the repository
repository_sync_data = RepositorySyncURL(remote=main_remote.pulp_href)
- monitor_task(file_repository_api_client.sync(file_repo.pulp_href, repository_sync_data).task)
+ monitor_task(
+ file_bindings.RepositoriesFileApi.sync(file_repo.pulp_href, repository_sync_data).task
+ )
# Assert that only the PULP_MANIFEST was downloaded from the main remote
assert len(main_server.requests_record) == 1
diff --git a/pulpcore/tests/functional/api/pulp_file/test_auto_publish.py b/pulpcore/tests/functional/api/pulp_file/test_auto_publish.py
--- a/pulpcore/tests/functional/api/pulp_file/test_auto_publish.py
+++ b/pulpcore/tests/functional/api/pulp_file/test_auto_publish.py
@@ -17,7 +17,7 @@ def file_repo_with_auto_publish(file_repository_factory):
def test_auto_publish_and_distribution(
file_repo_with_auto_publish,
file_remote_ssl_factory,
- file_repository_api_client,
+ file_bindings,
file_publication_api_client,
basic_manifest_path,
gen_object_with_cleanup,
@@ -28,7 +28,7 @@ def test_auto_publish_and_distribution(
):
"""Tests auto-publish and auto-distribution"""
remote = file_remote_ssl_factory(manifest_path=basic_manifest_path, policy="on_demand")
- repo = file_repository_api_client.read(file_repo_with_auto_publish.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(file_repo_with_auto_publish.pulp_href)
distribution = gen_object_with_cleanup(
file_distribution_api_client,
{"name": "foo", "base_path": "bar/foo", "repository": repo.pulp_href},
@@ -46,8 +46,8 @@ def test_auto_publish_and_distribution(
# Sync from the remote
body = RepositorySyncURL(remote=remote.pulp_href)
- monitor_task(file_repository_api_client.sync(repo.pulp_href, body).task)
- repo = file_repository_api_client.read(repo.pulp_href)
+ monitor_task(file_bindings.RepositoriesFileApi.sync(repo.pulp_href, body).task)
+ repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
# Assert that a new repository version was created and a publication was created
assert repo.latest_version_href.endswith("/versions/1/")
@@ -69,11 +69,11 @@ def test_auto_publish_and_distribution(
# Add a new content unit to the repository and assert that a publication gets created and the
# new content unit is in it
monitor_task(
- file_repository_api_client.modify(
+ file_bindings.RepositoriesFileApi.modify(
repo.pulp_href, {"add_content_units": [file_random_content_unit.pulp_href]}
).task
)
- repo = file_repository_api_client.read(repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
files_in_second_publication = get_files_in_manifest(
"{}{}".format(distribution.base_url, publication.manifest)
)
diff --git a/pulpcore/tests/functional/api/pulp_file/test_bad_sync.py b/pulpcore/tests/functional/api/pulp_file/test_bad_sync.py
--- a/pulpcore/tests/functional/api/pulp_file/test_bad_sync.py
+++ b/pulpcore/tests/functional/api/pulp_file/test_bad_sync.py
@@ -9,7 +9,7 @@
@pytest.fixture
def perform_sync(
file_repo,
- file_repository_api_client,
+ file_bindings,
file_remote_api_client,
gen_object_with_cleanup,
monitor_task,
@@ -23,7 +23,7 @@ def _perform_sync(url, policy="immediate"):
remote = gen_object_with_cleanup(file_remote_api_client, remote_data)
body = RepositorySyncURL(remote=remote.pulp_href)
- monitor_task(file_repository_api_client.sync(file_repo.pulp_href, body).task)
+ monitor_task(file_bindings.RepositoriesFileApi.sync(file_repo.pulp_href, body).task)
return file_repo
yield _perform_sync
diff --git a/pulpcore/tests/functional/api/pulp_file/test_crud_content_unit.py b/pulpcore/tests/functional/api/pulp_file/test_crud_content_unit.py
--- a/pulpcore/tests/functional/api/pulp_file/test_crud_content_unit.py
+++ b/pulpcore/tests/functional/api/pulp_file/test_crud_content_unit.py
@@ -57,7 +57,7 @@ def test_same_sha256_same_relative_path_no_repo(
def test_same_sha256_same_relative_path_repo_specified(
random_artifact,
file_content_api_client,
- file_repository_api_client,
+ file_bindings,
file_repository_version_api_client,
gen_user,
file_repository_factory,
@@ -81,7 +81,7 @@ def test_same_sha256_same_relative_path_repo_specified(
content1 = file_content_api_client.read(monitor_task(response1.task).created_resources[1])
content2 = file_content_api_client.read(monitor_task(response2.task).created_resources[0])
assert content1.pulp_href == content2.pulp_href
- repo1 = file_repository_api_client.read(repo1.pulp_href)
+ repo1 = file_bindings.RepositoriesFileApi.read(repo1.pulp_href)
assert repo1.latest_version_href.endswith("/versions/1/")
version = file_repository_version_api_client.read(repo1.latest_version_href)
@@ -94,7 +94,7 @@ def test_same_sha256_same_relative_path_repo_specified(
content3 = file_content_api_client.read(monitor_task(ctask3).created_resources[1])
assert content3.pulp_href == content1.pulp_href
- repo2 = file_repository_api_client.read(repo2.pulp_href)
+ repo2 = file_bindings.RepositoriesFileApi.read(repo2.pulp_href)
assert repo2.latest_version_href.endswith("/versions/1/")
version = file_repository_version_api_client.read(repo2.latest_version_href)
@@ -124,7 +124,7 @@ def test_second_content_unit_with_same_rel_path_replaces_the_first(
file_content_api_client,
gen_object_with_cleanup,
file_repository_version_api_client,
- file_repository_api_client,
+ file_bindings,
):
latest_repo_version = file_repository_version_api_client.read(file_repo.latest_version_href)
assert latest_repo_version.number == 0
@@ -136,7 +136,7 @@ def test_second_content_unit_with_same_rel_path_replaces_the_first(
}
gen_object_with_cleanup(file_content_api_client, **artifact_attrs)
- file_repo = file_repository_api_client.read(file_repo.pulp_href)
+ file_repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
latest_repo_version = file_repository_version_api_client.read(file_repo.latest_version_href)
assert latest_repo_version.content_summary.present["file.file"]["count"] == 1
assert latest_repo_version.number == 1
@@ -144,7 +144,7 @@ def test_second_content_unit_with_same_rel_path_replaces_the_first(
artifact_attrs["artifact"] = random_artifact_factory().pulp_href
gen_object_with_cleanup(file_content_api_client, **artifact_attrs)
- file_repo = file_repository_api_client.read(file_repo.pulp_href)
+ file_repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
latest_repo_version = file_repository_version_api_client.read(file_repo.latest_version_href)
assert latest_repo_version.content_summary.present["file.file"]["count"] == 1
assert latest_repo_version.number == 2
@@ -157,7 +157,7 @@ def test_cannot_create_repo_version_with_two_relative_paths_the_same(
file_content_api_client,
gen_object_with_cleanup,
file_repository_version_api_client,
- file_repository_api_client,
+ file_bindings,
monitor_task,
):
latest_repo_version = file_repository_version_api_client.read(file_repo.latest_version_href)
@@ -178,22 +178,22 @@ def test_cannot_create_repo_version_with_two_relative_paths_the_same(
data = {"add_content_units": [first_content_unit.pulp_href, second_content_unit.pulp_href]}
with pytest.raises(PulpTaskError):
- response = file_repository_api_client.modify(file_repo.pulp_href, data)
+ response = file_bindings.RepositoriesFileApi.modify(file_repo.pulp_href, data)
monitor_task(response.task)
@pytest.mark.parallel
-def test_bad_inputs_to_modify_endpoint(file_repo, file_repository_api_client, needs_pulp_plugin):
+def test_bad_inputs_to_modify_endpoint(file_repo, file_bindings, needs_pulp_plugin):
needs_pulp_plugin("core", min="3.23.0.dev")
with pytest.raises(ApiException):
- file_repository_api_client.modify(file_repo.pulp_href, [{}])
+ file_bindings.RepositoriesFileApi.modify(file_repo.pulp_href, [{}])
with pytest.raises(ApiException):
- file_repository_api_client.modify(file_repo.pulp_href, {"a": "b"})
+ file_bindings.RepositoriesFileApi.modify(file_repo.pulp_href, {"a": "b"})
with pytest.raises(ApiException):
- file_repository_api_client.modify(file_repo.pulp_href, ["/content/"])
+ file_bindings.RepositoriesFileApi.modify(file_repo.pulp_href, ["/content/"])
@pytest.mark.parallel
diff --git a/pulpcore/tests/functional/api/pulp_file/test_domains.py b/pulpcore/tests/functional/api/pulp_file/test_domains.py
--- a/pulpcore/tests/functional/api/pulp_file/test_domains.py
+++ b/pulpcore/tests/functional/api/pulp_file/test_domains.py
@@ -16,7 +16,7 @@
def test_object_creation(
domains_api_client,
gen_object_with_cleanup,
- file_repository_api_client,
+ file_bindings,
file_remote_factory,
basic_manifest_path,
):
@@ -30,22 +30,24 @@ def test_object_creation(
domain_name = domain.name
repo_body = {"name": str(uuid.uuid4())}
- repo = gen_object_with_cleanup(file_repository_api_client, repo_body, pulp_domain=domain_name)
+ repo = gen_object_with_cleanup(
+ file_bindings.RepositoriesFileApi, repo_body, pulp_domain=domain_name
+ )
assert f"{domain_name}/api/v3/" in repo.pulp_href
- repos = file_repository_api_client.list(pulp_domain=domain_name)
+ repos = file_bindings.RepositoriesFileApi.list(pulp_domain=domain_name)
assert repos.count == 1
assert repo.pulp_href == repos.results[0].pulp_href
# Will list repos on default domain
- default_repos = file_repository_api_client.list(name=repo.name)
+ default_repos = file_bindings.RepositoriesFileApi.list(name=repo.name)
assert default_repos.count == 0
# Try to create an object w/ cross domain relations
default_remote = file_remote_factory(manifest_path=basic_manifest_path, policy="immediate")
with pytest.raises(ApiException) as e:
repo_body = {"name": str(uuid.uuid4()), "remote": default_remote.pulp_href}
- file_repository_api_client.create(repo_body, pulp_domain=domain.name)
+ file_bindings.RepositoriesFileApi.create(repo_body, pulp_domain=domain.name)
assert e.value.status == 400
# What key should this error be under? non-field-errors seems wrong
assert json.loads(e.value.body) == {
@@ -54,7 +56,7 @@ def test_object_creation(
with pytest.raises(ApiException) as e:
sync_body = {"remote": default_remote.pulp_href}
- file_repository_api_client.sync(repo.pulp_href, sync_body)
+ file_bindings.RepositoriesFileApi.sync(repo.pulp_href, sync_body)
assert e.value.status == 400
assert json.loads(e.value.body) == {
"non_field_errors": [f"Objects must all be apart of the {domain_name} domain."]
@@ -180,7 +182,7 @@ def test_content_upload(
@pytest.mark.parallel
def test_content_promotion(
domains_api_client,
- file_repository_api_client,
+ file_bindings,
file_repository_version_api_client,
basic_manifest_path,
file_remote_factory,
@@ -203,13 +205,13 @@ def test_content_promotion(
manifest_path=basic_manifest_path, policy="immediate", pulp_domain=domain.name
)
repo_body = {"name": str(uuid.uuid4()), "remote": remote.pulp_href}
- repo = file_repository_api_client.create(repo_body, pulp_domain=domain.name)
+ repo = file_bindings.RepositoriesFileApi.create(repo_body, pulp_domain=domain.name)
- task = file_repository_api_client.sync(repo.pulp_href, {}).task
+ task = file_bindings.RepositoriesFileApi.sync(repo.pulp_href, {}).task
response = monitor_task(task)
assert len(response.created_resources) == 1
- repo = file_repository_api_client.read(repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
assert repo.latest_version_href[-2] == "1"
# Publish task
@@ -244,7 +246,7 @@ def test_content_promotion(
assert results.error is None
# Cleanup to delete the domain
- task = file_repository_api_client.delete(repo.pulp_href).task
+ task = file_bindings.RepositoriesFileApi.delete(repo.pulp_href).task
monitor_task(task)
body = {"orphan_protection_time": 0}
task = orphans_cleanup_api_client.cleanup(body, pulp_domain=domain.name).task
@@ -252,9 +254,7 @@ def test_content_promotion(
@pytest.mark.parallel
-def test_domain_rbac(
- domains_api_client, gen_user, gen_object_with_cleanup, file_repository_api_client
-):
+def test_domain_rbac(domains_api_client, gen_user, gen_object_with_cleanup, file_bindings):
"""Test domain level-roles."""
body = {
"name": str(uuid.uuid4()),
@@ -269,30 +269,32 @@ def test_domain_rbac(
user_b = gen_user(username="b", domain_roles=[(file_creator, domain.pulp_href)])
# Create two repos in different domains w/ admin user
- gen_object_with_cleanup(file_repository_api_client, {"name": str(uuid.uuid4())})
+ gen_object_with_cleanup(file_bindings.RepositoriesFileApi, {"name": str(uuid.uuid4())})
gen_object_with_cleanup(
- file_repository_api_client, {"name": str(uuid.uuid4())}, pulp_domain=domain.name
+ file_bindings.RepositoriesFileApi, {"name": str(uuid.uuid4())}, pulp_domain=domain.name
)
with user_b:
repo = gen_object_with_cleanup(
- file_repository_api_client, {"name": str(uuid.uuid4())}, pulp_domain=domain.name
+ file_bindings.RepositoriesFileApi, {"name": str(uuid.uuid4())}, pulp_domain=domain.name
)
- repos = file_repository_api_client.list(pulp_domain=domain.name)
+ repos = file_bindings.RepositoriesFileApi.list(pulp_domain=domain.name)
assert repos.count == 1
assert repos.results[0].pulp_href == repo.pulp_href
# Try to create a repository in default domain
with pytest.raises(ApiException) as e:
- file_repository_api_client.create({"name": str(uuid.uuid4())})
+ file_bindings.RepositoriesFileApi.create({"name": str(uuid.uuid4())})
assert e.value.status == 403
with user_a:
- repos = file_repository_api_client.list(pulp_domain=domain.name)
+ repos = file_bindings.RepositoriesFileApi.list(pulp_domain=domain.name)
assert repos.count == 2
# Try to read repos in the default domain
- repos = file_repository_api_client.list()
+ repos = file_bindings.RepositoriesFileApi.list()
assert repos.count == 0
# Try to create a repo
with pytest.raises(ApiException) as e:
- file_repository_api_client.create({"name": str(uuid.uuid4())}, pulp_domain=domain.name)
+ file_bindings.RepositoriesFileApi.create(
+ {"name": str(uuid.uuid4())}, pulp_domain=domain.name
+ )
assert e.value.status == 403
diff --git a/pulpcore/tests/functional/api/pulp_file/test_download_policies.py b/pulpcore/tests/functional/api/pulp_file/test_download_policies.py
--- a/pulpcore/tests/functional/api/pulp_file/test_download_policies.py
+++ b/pulpcore/tests/functional/api/pulp_file/test_download_policies.py
@@ -45,7 +45,7 @@ def test_download_policy(
file_repo,
file_remote_ssl_factory,
file_remote_api_client,
- file_repository_api_client,
+ file_bindings,
file_repository_version_api_client,
file_publication_api_client,
file_distribution_api_client,
@@ -64,7 +64,7 @@ def test_download_policy(
remote = file_remote_ssl_factory(
manifest_path=range_header_manifest_path, policy=download_policy
)
- file_repo = file_repository_api_client.read(file_repo.pulp_href)
+ file_repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
assert file_repo.latest_version_href.endswith("/versions/0/")
# Check what content and artifacts are in the fixture repository
@@ -72,8 +72,8 @@ def test_download_policy(
# Sync from the remote and assert that a new repository version is created
body = RepositorySyncURL(remote=remote.pulp_href)
- monitor_task(file_repository_api_client.sync(file_repo.pulp_href, body).task)
- file_repo = file_repository_api_client.read(file_repo.pulp_href)
+ monitor_task(file_bindings.RepositoriesFileApi.sync(file_repo.pulp_href, body).task)
+ file_repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
assert file_repo.latest_version_href.endswith("/versions/1/")
version = file_repository_version_api_client.read(file_repo.latest_version_href)
@@ -82,8 +82,8 @@ def test_download_policy(
# Sync again and assert that nothing changes
latest_version_href = file_repo.latest_version_href
- monitor_task(file_repository_api_client.sync(file_repo.pulp_href, body).task)
- file_repo = file_repository_api_client.read(file_repo.pulp_href)
+ monitor_task(file_bindings.RepositoriesFileApi.sync(file_repo.pulp_href, body).task)
+ file_repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
assert latest_version_href == file_repo.latest_version_href
version = file_repository_version_api_client.read(file_repo.latest_version_href)
@@ -234,6 +234,6 @@ def test_download_policy(
assert remote.policy == "immediate"
# Sync from the remote and assert that artifacts are downloaded
- monitor_task(file_repository_api_client.sync(file_repo.pulp_href, body).task)
+ monitor_task(file_bindings.RepositoriesFileApi.sync(file_repo.pulp_href, body).task)
for f in expected_files:
assert len(artifacts_api_client.list(sha256=f[1]).results) == 1
diff --git a/pulpcore/tests/functional/api/pulp_file/test_labels.py b/pulpcore/tests/functional/api/pulp_file/test_labels.py
--- a/pulpcore/tests/functional/api/pulp_file/test_labels.py
+++ b/pulpcore/tests/functional/api/pulp_file/test_labels.py
@@ -15,7 +15,7 @@ def test_create_repo_with_labels(file_repository_factory):
@pytest.mark.parallel
-def test_set_unset_all_labels(file_repo, file_repository_api_client, monitor_task):
+def test_set_unset_all_labels(file_repo, file_bindings, monitor_task):
"""Set and unset labels from a repository."""
assert file_repo.pulp_labels == {}
@@ -23,23 +23,25 @@ def test_set_unset_all_labels(file_repo, file_repository_api_client, monitor_tas
# Set some labels
labels = {"key_a": "label_a"}
monitor_task(
- file_repository_api_client.partial_update(file_repo.pulp_href, {"pulp_labels": labels}).task
+ file_bindings.RepositoriesFileApi.partial_update(
+ file_repo.pulp_href, {"pulp_labels": labels}
+ ).task
)
- file_repo = file_repository_api_client.read(file_repo.pulp_href)
+ file_repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
assert file_repo.pulp_labels == labels
# Unset all labels
monitor_task(
- file_repository_api_client.partial_update(file_repo.pulp_href, {"pulp_labels": {}}).task
+ file_bindings.RepositoriesFileApi.partial_update(
+ file_repo.pulp_href, {"pulp_labels": {}}
+ ).task
)
- file_repo = file_repository_api_client.read(file_repo.pulp_href)
+ file_repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
assert file_repo.pulp_labels == {}
@pytest.mark.parallel
-def test_add_remove_label_keys(
- file_repo, file_repository_api_client, file_repository_factory, monitor_task
-):
+def test_add_remove_label_keys(file_repo, file_bindings, file_repository_factory, monitor_task):
"""Add and Remove labels by key."""
# Set some initial labels
@@ -49,26 +51,28 @@ def test_add_remove_label_keys(
# Add a new key
labels["key_b"] = "label_b"
monitor_task(
- file_repository_api_client.partial_update(file_repo.pulp_href, {"pulp_labels": labels}).task
+ file_bindings.RepositoriesFileApi.partial_update(
+ file_repo.pulp_href, {"pulp_labels": labels}
+ ).task
)
- file_repo = file_repository_api_client.read(file_repo.pulp_href)
+ file_repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
assert file_repo.pulp_labels == labels
# Remove the original key
del labels["key_a"]
monitor_task(
- file_repository_api_client.partial_update(file_repo.pulp_href, {"pulp_labels": labels}).task
+ file_bindings.RepositoriesFileApi.partial_update(
+ file_repo.pulp_href, {"pulp_labels": labels}
+ ).task
)
- file_repo = file_repository_api_client.read(file_repo.pulp_href)
+ file_repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
assert file_repo.pulp_labels == labels
@pytest.mark.parallel
-def test_update_existing_label_value(
- file_repository_api_client, file_repository_factory, monitor_task
-):
+def test_update_existing_label_value(file_bindings, file_repository_factory, monitor_task):
"""Update an existing label."""
# Set some initial labels
@@ -78,15 +82,17 @@ def test_update_existing_label_value(
# Modify the value of an existing key
labels["key_a"] = "label_b"
monitor_task(
- file_repository_api_client.partial_update(file_repo.pulp_href, {"pulp_labels": labels}).task
+ file_bindings.RepositoriesFileApi.partial_update(
+ file_repo.pulp_href, {"pulp_labels": labels}
+ ).task
)
- file_repo = file_repository_api_client.read(file_repo.pulp_href)
+ file_repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
assert file_repo.pulp_labels == labels
@pytest.mark.parallel
-def test_model_partial_update(file_repository_factory, file_repository_api_client, monitor_task):
+def test_model_partial_update(file_repository_factory, file_bindings, monitor_task):
"""Test that labels aren't unset accidentally with PATCH calls of other fields."""
# Set some initial labels
@@ -95,10 +101,12 @@ def test_model_partial_update(file_repository_factory, file_repository_api_clien
# Update the name only
monitor_task(
- file_repository_api_client.partial_update(file_repo.pulp_href, {"name": str(uuid4())}).task
+ file_bindings.RepositoriesFileApi.partial_update(
+ file_repo.pulp_href, {"name": str(uuid4())}
+ ).task
)
- file_repo = file_repository_api_client.read(file_repo.pulp_href)
+ file_repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
assert file_repo.pulp_labels == labels
@@ -128,7 +136,7 @@ def test_invalid_labels(file_repository_factory):
@pytest.mark.parallel
-def test_label_select(file_repository_factory, file_repository_api_client):
+def test_label_select(file_repository_factory, file_bindings):
"""Test lots of select types."""
key1 = str(uuid4()).replace("-", "") # We can only have alphanumerics
key2 = str(uuid4()).replace("-", "") # We can only have alphanumerics
@@ -141,59 +149,63 @@ def test_label_select(file_repository_factory, file_repository_api_client):
file_repository_factory(name=str(uuid4()), pulp_labels={})
- results = file_repository_api_client.list(pulp_label_select=f"{key1}=production").results
+ results = file_bindings.RepositoriesFileApi.list(pulp_label_select=f"{key1}=production").results
assert len(results) == 1
- results = file_repository_api_client.list(pulp_label_select=f"{key1}!=production").results
+ results = file_bindings.RepositoriesFileApi.list(
+ pulp_label_select=f"{key1}!=production"
+ ).results
assert len(results) == 1
- results = file_repository_api_client.list(pulp_label_select=key1).results
+ results = file_bindings.RepositoriesFileApi.list(pulp_label_select=key1).results
assert len(results) == 2
- results = file_repository_api_client.list(pulp_label_select=f"{key1}~prod").results
+ results = file_bindings.RepositoriesFileApi.list(pulp_label_select=f"{key1}~prod").results
assert len(results) == 1
- results = file_repository_api_client.list(
+ results = file_bindings.RepositoriesFileApi.list(
pulp_label_select=f"{key1}=production,{key2}=true"
).results
assert len(results) == 1
- results = file_repository_api_client.list(
+ results = file_bindings.RepositoriesFileApi.list(
pulp_label_select=f"{key1}=production,{key2}!=false"
).results
assert len(results) == 1
- results = file_repository_api_client.list(pulp_label_select=f"!{key1},{key2}=false").results
+ results = file_bindings.RepositoriesFileApi.list(
+ pulp_label_select=f"!{key1},{key2}=false"
+ ).results
assert len(results) == 0
@pytest.mark.parallel
-def test_empty_blank_filter(file_repository_factory, file_repository_api_client):
+def test_empty_blank_filter(file_repository_factory, file_bindings):
"""Test filtering values with a blank string."""
key = str(uuid4()).replace("-", "") # We can only have alphanumerics
labels = {key: ""}
file_repository_factory(name=str(uuid4()), pulp_labels=labels)
- results = file_repository_api_client.list(pulp_label_select=f"{key}=").results
+ results = file_bindings.RepositoriesFileApi.list(pulp_label_select=f"{key}=").results
assert len(results) == 1
- results = file_repository_api_client.list(pulp_label_select=f"{key}~").results
+ results = file_bindings.RepositoriesFileApi.list(pulp_label_select=f"{key}~").results
assert len(results) == 1
@pytest.mark.parallel
-def test_invalid_label_select(file_repository_api_client):
+def test_invalid_label_select(file_bindings):
"""Test removing all labels."""
with pytest.raises(ApiException) as e_info:
- file_repository_api_client.list(pulp_label_select="").results
+ file_bindings.RepositoriesFileApi.list(pulp_label_select="").results
assert e_info.value.status == 400
with pytest.raises(ApiException) as e_info:
- file_repository_api_client.list(pulp_label_select="!environment=production").results
+ file_bindings.RepositoriesFileApi.list(pulp_label_select="!environment=production").results
assert e_info.value.status == 400
with pytest.raises(ApiException) as e_info:
- file_repository_api_client.list(pulp_label_select="=bad filter").results
+ file_bindings.RepositoriesFileApi.list(pulp_label_select="=bad filter").results
assert e_info.value.status == 400
diff --git a/pulpcore/tests/functional/api/pulp_file/test_mime_types.py b/pulpcore/tests/functional/api/pulp_file/test_mime_types.py
--- a/pulpcore/tests/functional/api/pulp_file/test_mime_types.py
+++ b/pulpcore/tests/functional/api/pulp_file/test_mime_types.py
@@ -11,7 +11,7 @@
@pytest.mark.parallel
def test_content_types(
file_distribution_api_client,
- file_repository_api_client,
+ file_bindings,
file_repo_with_auto_publish,
file_content_unit_with_name_factory,
gen_object_with_cleanup,
@@ -38,7 +38,7 @@ def test_content_types(
units_to_add = list(map(lambda f: f.pulp_href, files.values()))
data = RepositoryAddRemoveContent(add_content_units=units_to_add)
monitor_task(
- file_repository_api_client.modify(file_repo_with_auto_publish.pulp_href, data).task
+ file_bindings.RepositoriesFileApi.modify(file_repo_with_auto_publish.pulp_href, data).task
)
data = FileFileDistribution(
diff --git a/pulpcore/tests/functional/api/pulp_file/test_publish.py b/pulpcore/tests/functional/api/pulp_file/test_publish.py
--- a/pulpcore/tests/functional/api/pulp_file/test_publish.py
+++ b/pulpcore/tests/functional/api/pulp_file/test_publish.py
@@ -16,7 +16,7 @@
def test_crd_publications(
file_repo,
file_remote_ssl_factory,
- file_repository_api_client,
+ file_bindings,
file_publication_api_client,
basic_manifest_path,
gen_object_with_cleanup,
@@ -29,19 +29,19 @@ def test_crd_publications(
# Sync from the remote
initial_repo_version = file_repo.latest_version_href
body = RepositorySyncURL(remote=remote.pulp_href)
- monitor_task(file_repository_api_client.sync(file_repo.pulp_href, body).task)
- first_repo_version_href = file_repository_api_client.read(
+ monitor_task(file_bindings.RepositoriesFileApi.sync(file_repo.pulp_href, body).task)
+ first_repo_version_href = file_bindings.RepositoriesFileApi.read(
file_repo.pulp_href
).latest_version_href
assert first_repo_version_href.endswith("/versions/1/")
# Add a new content unit to the repository and assert that a new repository version is created
monitor_task(
- file_repository_api_client.modify(
+ file_bindings.RepositoriesFileApi.modify(
file_repo.pulp_href, {"add_content_units": [file_random_content_unit.pulp_href]}
).task
)
- file_repo = file_repository_api_client.read(file_repo.pulp_href)
+ file_repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
assert file_repo.latest_version_href.endswith("/versions/2/")
# Create a Publication using a repository and assert that its repository_version is the latest
@@ -72,7 +72,7 @@ def test_crd_publications(
publication = file_publication_api_client.read(publication.pulp_href)
# Read a publication by its href providing specific field list.
- config = file_repository_api_client.api_client.configuration
+ config = file_bindings.RepositoriesFileApi.api_client.configuration
auth = BasicAuth(login=config.username, password=config.password)
full_href = urljoin(config.host, publication.pulp_href)
for fields in [
diff --git a/pulpcore/tests/functional/api/pulp_file/test_pulp_export.py b/pulpcore/tests/functional/api/pulp_file/test_pulp_export.py
--- a/pulpcore/tests/functional/api/pulp_file/test_pulp_export.py
+++ b/pulpcore/tests/functional/api/pulp_file/test_pulp_export.py
@@ -76,7 +76,7 @@ def _pulp_export_factory(exporter, body=None):
@pytest.fixture
def three_synced_repositories(
- file_repository_api_client,
+ file_bindings,
file_repository_factory,
file_remote_factory,
write_3_iso_file_fixture_data_factory,
@@ -90,12 +90,12 @@ def three_synced_repositories(
]
repositories = [file_repository_factory(remote=remote.pulp_href) for remote in remotes]
sync_tasks = [
- file_repository_api_client.sync(repository.pulp_href, {}).task
+ file_bindings.RepositoriesFileApi.sync(repository.pulp_href, {}).task
for repository in repositories
]
[monitor_task(task) for task in sync_tasks]
repositories = [
- file_repository_api_client.read(repository.pulp_href) for repository in repositories
+ file_bindings.RepositoriesFileApi.read(repository.pulp_href) for repository in repositories
]
return repositories
diff --git a/pulpcore/tests/functional/api/pulp_file/test_rbac.py b/pulpcore/tests/functional/api/pulp_file/test_rbac.py
--- a/pulpcore/tests/functional/api/pulp_file/test_rbac.py
+++ b/pulpcore/tests/functional/api/pulp_file/test_rbac.py
@@ -40,78 +40,99 @@ def _try_action(user, client, action, outcome, *args, **kwargs):
return _try_action
-def test_basic_actions(gen_users, file_repository_api_client, try_action, file_repo):
+def test_basic_actions(gen_users, file_bindings, try_action, file_repo):
"""Test list, read, create, update and delete apis."""
alice, bob, charlie = gen_users("filerepository")
- a_list = try_action(alice, file_repository_api_client, "list", 200)
+ a_list = try_action(alice, file_bindings.RepositoriesFileApi, "list", 200)
assert a_list.count >= 1
- b_list = try_action(bob, file_repository_api_client, "list", 200)
- c_list = try_action(charlie, file_repository_api_client, "list", 200)
+ b_list = try_action(bob, file_bindings.RepositoriesFileApi, "list", 200)
+ c_list = try_action(charlie, file_bindings.RepositoriesFileApi, "list", 200)
assert (b_list.count, c_list.count) == (0, 0)
# Create testing
- try_action(alice, file_repository_api_client, "create", 403, {"name": str(uuid.uuid4())})
- repo = try_action(bob, file_repository_api_client, "create", 201, {"name": str(uuid.uuid4())})
- try_action(charlie, file_repository_api_client, "create", 403, {"name": str(uuid.uuid4())})
+ try_action(alice, file_bindings.RepositoriesFileApi, "create", 403, {"name": str(uuid.uuid4())})
+ repo = try_action(
+ bob, file_bindings.RepositoriesFileApi, "create", 201, {"name": str(uuid.uuid4())}
+ )
+ try_action(
+ charlie, file_bindings.RepositoriesFileApi, "create", 403, {"name": str(uuid.uuid4())}
+ )
# View testing
- try_action(alice, file_repository_api_client, "read", 200, repo.pulp_href)
- try_action(bob, file_repository_api_client, "read", 200, repo.pulp_href)
- try_action(charlie, file_repository_api_client, "read", 404, repo.pulp_href)
+ try_action(alice, file_bindings.RepositoriesFileApi, "read", 200, repo.pulp_href)
+ try_action(bob, file_bindings.RepositoriesFileApi, "read", 200, repo.pulp_href)
+ try_action(charlie, file_bindings.RepositoriesFileApi, "read", 404, repo.pulp_href)
# Update testing
update_args = [repo.pulp_href, {"name": str(uuid.uuid4())}]
- try_action(alice, file_repository_api_client, "partial_update", 403, *update_args)
- try_action(bob, file_repository_api_client, "partial_update", 202, *update_args)
- try_action(charlie, file_repository_api_client, "partial_update", 404, *update_args)
+ try_action(alice, file_bindings.RepositoriesFileApi, "partial_update", 403, *update_args)
+ try_action(bob, file_bindings.RepositoriesFileApi, "partial_update", 202, *update_args)
+ try_action(charlie, file_bindings.RepositoriesFileApi, "partial_update", 404, *update_args)
# Delete testing
- try_action(alice, file_repository_api_client, "delete", 403, repo.pulp_href)
- try_action(charlie, file_repository_api_client, "delete", 404, repo.pulp_href)
- try_action(bob, file_repository_api_client, "delete", 202, repo.pulp_href)
+ try_action(alice, file_bindings.RepositoriesFileApi, "delete", 403, repo.pulp_href)
+ try_action(charlie, file_bindings.RepositoriesFileApi, "delete", 404, repo.pulp_href)
+ try_action(bob, file_bindings.RepositoriesFileApi, "delete", 202, repo.pulp_href)
@pytest.mark.parallel
-def test_role_management(
- gen_users, file_repository_api_client, file_repository_factory, try_action
-):
+def test_role_management(gen_users, file_bindings, file_repository_factory, try_action):
"""Check that role management apis."""
alice, bob, charlie = gen_users("filerepository")
with bob:
href = file_repository_factory().pulp_href
# Permission check testing
- aperm_response = try_action(alice, file_repository_api_client, "my_permissions", 200, href)
+ aperm_response = try_action(
+ alice, file_bindings.RepositoriesFileApi, "my_permissions", 200, href
+ )
assert aperm_response.permissions == []
- bperm_response = try_action(bob, file_repository_api_client, "my_permissions", 200, href)
+ bperm_response = try_action(bob, file_bindings.RepositoriesFileApi, "my_permissions", 200, href)
assert len(bperm_response.permissions) > 0
- try_action(charlie, file_repository_api_client, "my_permissions", 404, href)
+ try_action(charlie, file_bindings.RepositoriesFileApi, "my_permissions", 404, href)
# Add "viewer" role testing
nested_role = {"users": [charlie.username], "role": "file.filerepository_viewer"}
- try_action(alice, file_repository_api_client, "add_role", 403, href, nested_role=nested_role)
- try_action(charlie, file_repository_api_client, "add_role", 404, href, nested_role=nested_role)
- try_action(bob, file_repository_api_client, "add_role", 201, href, nested_role=nested_role)
+ try_action(
+ alice, file_bindings.RepositoriesFileApi, "add_role", 403, href, nested_role=nested_role
+ )
+ try_action(
+ charlie, file_bindings.RepositoriesFileApi, "add_role", 404, href, nested_role=nested_role
+ )
+ try_action(
+ bob, file_bindings.RepositoriesFileApi, "add_role", 201, href, nested_role=nested_role
+ )
# Permission check testing again
- cperm_response = try_action(charlie, file_repository_api_client, "my_permissions", 200, href)
+ cperm_response = try_action(
+ charlie, file_bindings.RepositoriesFileApi, "my_permissions", 200, href
+ )
assert len(cperm_response.permissions) == 1
# Remove "viewer" role testing
- try_action(alice, file_repository_api_client, "remove_role", 403, href, nested_role=nested_role)
try_action(
- charlie, file_repository_api_client, "remove_role", 403, href, nested_role=nested_role
+ alice, file_bindings.RepositoriesFileApi, "remove_role", 403, href, nested_role=nested_role
+ )
+ try_action(
+ charlie,
+ file_bindings.RepositoriesFileApi,
+ "remove_role",
+ 403,
+ href,
+ nested_role=nested_role,
+ )
+ try_action(
+ bob, file_bindings.RepositoriesFileApi, "remove_role", 201, href, nested_role=nested_role
)
- try_action(bob, file_repository_api_client, "remove_role", 201, href, nested_role=nested_role)
# Permission check testing one more time
- try_action(charlie, file_repository_api_client, "my_permissions", 404, href)
+ try_action(charlie, file_bindings.RepositoriesFileApi, "my_permissions", 404, href)
def test_content_apis(
gen_users,
file_content_api_client,
- file_repository_api_client,
+ file_bindings,
file_repository_factory,
file_remote_factory,
file_fixture_server,
@@ -131,7 +152,9 @@ def test_content_apis(
alice, bob, charlie = gen_users(["filerepository"])
repo = file_repository_factory()
remote = file_remote_factory(manifest_path=basic_manifest_path, policy="on_demand")
- monitor_task(file_repository_api_client.sync(repo.pulp_href, {"remote": remote.pulp_href}).task)
+ monitor_task(
+ file_bindings.RepositoriesFileApi.sync(repo.pulp_href, {"remote": remote.pulp_href}).task
+ )
aresponse = try_action(alice, file_content_api_client, "list", 200)
bresponse = try_action(bob, file_content_api_client, "list", 200)
@@ -141,7 +164,7 @@ def test_content_apis(
assert bresponse.count == cresponse.count == 0
nested_role = {"users": [charlie.username], "role": "file.filerepository_viewer"}
- file_repository_api_client.add_role(repo.pulp_href, nested_role)
+ file_bindings.RepositoriesFileApi.add_role(repo.pulp_href, nested_role)
cresponse = try_action(charlie, file_content_api_client, "list", 200)
assert cresponse.count > bresponse.count
@@ -154,14 +177,14 @@ def test_content_apis(
try_action(charlie, file_content_api_client, "create", 403, "1.iso", **body)
nested_role = {"users": [charlie.username], "role": "file.filerepository_owner"}
- file_repository_api_client.add_role(repo.pulp_href, nested_role)
+ file_bindings.RepositoriesFileApi.add_role(repo.pulp_href, nested_role)
try_action(charlie, file_content_api_client, "create", 202, "1.iso", **body)
@pytest.mark.parallel
def test_repository_apis(
gen_users,
- file_repository_api_client,
+ file_bindings,
file_repository_factory,
file_remote_factory,
file_remote_api_client,
@@ -175,13 +198,13 @@ def test_repository_apis(
repo = file_repository_factory()
bob_remote = file_remote_factory(manifest_path=basic_manifest_path, policy="on_demand")
body = {"remote": bob_remote.pulp_href}
- try_action(alice, file_repository_api_client, "sync", 403, repo.pulp_href, body)
- try_action(bob, file_repository_api_client, "sync", 202, repo.pulp_href, body)
- try_action(charlie, file_repository_api_client, "sync", 404, repo.pulp_href, body)
+ try_action(alice, file_bindings.RepositoriesFileApi, "sync", 403, repo.pulp_href, body)
+ try_action(bob, file_bindings.RepositoriesFileApi, "sync", 202, repo.pulp_href, body)
+ try_action(charlie, file_bindings.RepositoriesFileApi, "sync", 404, repo.pulp_href, body)
# Modify tests
- try_action(alice, file_repository_api_client, "modify", 403, repo.pulp_href, {})
- try_action(bob, file_repository_api_client, "modify", 202, repo.pulp_href, {})
- try_action(charlie, file_repository_api_client, "modify", 404, repo.pulp_href, {})
+ try_action(alice, file_bindings.RepositoriesFileApi, "modify", 403, repo.pulp_href, {})
+ try_action(bob, file_bindings.RepositoriesFileApi, "modify", 202, repo.pulp_href, {})
+ try_action(charlie, file_bindings.RepositoriesFileApi, "modify", 404, repo.pulp_href, {})
@pytest.mark.parallel
diff --git a/pulpcore/tests/functional/api/pulp_file/test_remote_settings.py b/pulpcore/tests/functional/api/pulp_file/test_remote_settings.py
--- a/pulpcore/tests/functional/api/pulp_file/test_remote_settings.py
+++ b/pulpcore/tests/functional/api/pulp_file/test_remote_settings.py
@@ -8,10 +8,10 @@
def _run_basic_sync_and_assert(
- remote, file_repo, file_repository_api_client, file_content_api_client, monitor_task
+ remote, file_repo, file_bindings, file_content_api_client, monitor_task
):
body = RepositorySyncURL(remote=remote.pulp_href)
- monitor_task(file_repository_api_client.sync(file_repo.pulp_href, body).task)
+ monitor_task(file_bindings.RepositoriesFileApi.sync(file_repo.pulp_href, body).task)
# Check content is present, but no artifacts are there
content_response = file_content_api_client.list(
@@ -29,7 +29,7 @@ def _run_basic_sync_and_assert(
def test_http_sync_no_ssl(
file_remote_factory,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
basic_manifest_path,
monitor_task,
@@ -42,7 +42,7 @@ def test_http_sync_no_ssl(
_run_basic_sync_and_assert(
remote_on_demand,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
monitor_task,
)
@@ -52,7 +52,7 @@ def test_http_sync_no_ssl(
def test_http_sync_ssl_tls_validation_off(
file_remote_ssl_factory,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
basic_manifest_path,
monitor_task,
@@ -67,7 +67,7 @@ def test_http_sync_ssl_tls_validation_off(
_run_basic_sync_and_assert(
remote_on_demand,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
monitor_task,
)
@@ -77,7 +77,7 @@ def test_http_sync_ssl_tls_validation_off(
def test_http_sync_ssl_tls_validation_on(
file_remote_ssl_factory,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
basic_manifest_path,
monitor_task,
@@ -92,7 +92,7 @@ def test_http_sync_ssl_tls_validation_on(
_run_basic_sync_and_assert(
remote_on_demand,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
monitor_task,
)
@@ -102,7 +102,7 @@ def test_http_sync_ssl_tls_validation_on(
def test_http_sync_ssl_tls_validation_defaults_to_on(
file_remote_ssl_factory,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
basic_manifest_path,
monitor_task,
@@ -118,7 +118,7 @@ def test_http_sync_ssl_tls_validation_defaults_to_on(
_run_basic_sync_and_assert(
remote_on_demand,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
monitor_task,
)
@@ -128,7 +128,7 @@ def test_http_sync_ssl_tls_validation_defaults_to_on(
def test_http_sync_ssl_with_client_cert_req(
file_remote_client_cert_req_factory,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
basic_manifest_path,
monitor_task,
@@ -143,7 +143,7 @@ def test_http_sync_ssl_with_client_cert_req(
_run_basic_sync_and_assert(
remote_on_demand,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
monitor_task,
)
@@ -153,7 +153,7 @@ def test_http_sync_ssl_with_client_cert_req(
def test_ondemand_to_immediate_sync(
file_remote_ssl_factory,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
basic_manifest_path,
monitor_task,
@@ -168,7 +168,7 @@ def test_ondemand_to_immediate_sync(
_run_basic_sync_and_assert(
remote_on_demand,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
monitor_task,
)
@@ -180,7 +180,7 @@ def test_ondemand_to_immediate_sync(
_run_basic_sync_and_assert(
remote_immediate,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
monitor_task,
)
@@ -192,7 +192,7 @@ def test_header_for_sync(
tls_certificate_authority_cert,
file_remote_api_client,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
gen_object_with_cleanup,
basic_manifest_path,
@@ -220,7 +220,7 @@ def test_header_for_sync(
_run_basic_sync_and_assert(
remote_on_demand,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
monitor_task,
)
diff --git a/pulpcore/tests/functional/api/pulp_file/test_sync.py b/pulpcore/tests/functional/api/pulp_file/test_sync.py
--- a/pulpcore/tests/functional/api/pulp_file/test_sync.py
+++ b/pulpcore/tests/functional/api/pulp_file/test_sync.py
@@ -14,7 +14,7 @@
def test_sync_file_protocol_handler(
file_repo,
- file_repository_api_client,
+ file_bindings,
file_repository_version_api_client,
file_remote_api_client,
gen_object_with_cleanup,
@@ -33,12 +33,12 @@ def test_sync_file_protocol_handler(
files = set(os.listdir("/tmp/file/"))
body = RepositorySyncURL(remote=remote.pulp_href)
- monitor_task(file_repository_api_client.sync(file_repo.pulp_href, body).task)
+ monitor_task(file_bindings.RepositoriesFileApi.sync(file_repo.pulp_href, body).task)
# test that all the files are still present
assert set(os.listdir("/tmp/file/")) == files
- file_repo = file_repository_api_client.read(file_repo.pulp_href)
+ file_repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
assert file_repo.latest_version_href.endswith("/versions/1/")
version = file_repository_version_api_client.read(file_repo.latest_version_href)
@@ -50,7 +50,7 @@ def test_sync_file_protocol_handler(
def test_mirrored_sync(
file_repo,
file_remote_ssl_factory,
- file_repository_api_client,
+ file_bindings,
basic_manifest_path,
monitor_task,
):
@@ -58,7 +58,9 @@ def test_mirrored_sync(
remote = file_remote_ssl_factory(manifest_path=basic_manifest_path, policy="on_demand")
repository_sync_data = RepositorySyncURL(remote=remote.pulp_href, mirror=True)
- sync_response = file_repository_api_client.sync(file_repo.pulp_href, repository_sync_data)
+ sync_response = file_bindings.RepositoriesFileApi.sync(
+ file_repo.pulp_href, repository_sync_data
+ )
task = monitor_task(sync_response.task)
# Check that all the appropriate resources were created
@@ -72,7 +74,7 @@ def test_invalid_url(
file_repo,
gen_object_with_cleanup,
file_remote_api_client,
- file_repository_api_client,
+ file_bindings,
monitor_task,
):
"""Sync a repository using a remote url that does not exist."""
@@ -85,18 +87,18 @@ def test_invalid_url(
body = RepositorySyncURL(remote=remote.pulp_href)
with pytest.raises(PulpTaskError):
- monitor_task(file_repository_api_client.sync(file_repo.pulp_href, body).task)
+ monitor_task(file_bindings.RepositoriesFileApi.sync(file_repo.pulp_href, body).task)
@pytest.mark.parallel
def test_invalid_file(
- file_repo, file_repository_api_client, invalid_manifest_path, file_remote_factory, monitor_task
+ file_repo, file_bindings, invalid_manifest_path, file_remote_factory, monitor_task
):
"""Sync a repository using an invalid file repository."""
remote = file_remote_factory(manifest_path=invalid_manifest_path, policy="immediate")
body = RepositorySyncURL(remote=remote.pulp_href)
with pytest.raises(PulpTaskError):
- monitor_task(file_repository_api_client.sync(file_repo.pulp_href, body).task)
+ monitor_task(file_bindings.RepositoriesFileApi.sync(file_repo.pulp_href, body).task)
@pytest.mark.parallel
@@ -104,7 +106,7 @@ def test_duplicate_file_sync(
file_repo,
file_remote_factory,
duplicate_filename_paths,
- file_repository_api_client,
+ file_bindings,
file_repository_version_api_client,
monitor_task,
):
@@ -112,8 +114,8 @@ def test_duplicate_file_sync(
remote2 = file_remote_factory(manifest_path=duplicate_filename_paths[1], policy="on_demand")
body = RepositorySyncURL(remote=remote.pulp_href)
- monitor_task(file_repository_api_client.sync(file_repo.pulp_href, body).task)
- file_repo = file_repository_api_client.read(file_repo.pulp_href)
+ monitor_task(file_bindings.RepositoriesFileApi.sync(file_repo.pulp_href, body).task)
+ file_repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
version = file_repository_version_api_client.read(file_repo.latest_version_href)
assert version.content_summary.present["file.file"]["count"] == 3
@@ -121,8 +123,8 @@ def test_duplicate_file_sync(
assert file_repo.latest_version_href.endswith("/1/")
body = RepositorySyncURL(remote=remote2.pulp_href)
- monitor_task(file_repository_api_client.sync(file_repo.pulp_href, body).task)
- file_repo = file_repository_api_client.read(file_repo.pulp_href)
+ monitor_task(file_bindings.RepositoriesFileApi.sync(file_repo.pulp_href, body).task)
+ file_repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
version = file_repository_version_api_client.read(file_repo.latest_version_href)
assert version.content_summary.present["file.file"]["count"] == 3
@@ -135,7 +137,7 @@ def test_filepath_includes_commas(
file_repo,
file_remote_factory,
manifest_path_with_commas,
- file_repository_api_client,
+ file_bindings,
file_repository_version_api_client,
monitor_task,
):
@@ -143,8 +145,8 @@ def test_filepath_includes_commas(
remote = file_remote_factory(manifest_path=manifest_path_with_commas, policy="on_demand")
body = RepositorySyncURL(remote=remote.pulp_href)
- monitor_task(file_repository_api_client.sync(file_repo.pulp_href, body).task)
- file_repo = file_repository_api_client.read(file_repo.pulp_href)
+ monitor_task(file_bindings.RepositoriesFileApi.sync(file_repo.pulp_href, body).task)
+ file_repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
version = file_repository_version_api_client.read(file_repo.latest_version_href)
assert version.content_summary.present["file.file"]["count"] == 3
diff --git a/pulpcore/tests/functional/api/pulp_file/test_telemetry_collection.py b/pulpcore/tests/functional/api/pulp_file/test_telemetry_collection.py
--- a/pulpcore/tests/functional/api/pulp_file/test_telemetry_collection.py
+++ b/pulpcore/tests/functional/api/pulp_file/test_telemetry_collection.py
@@ -9,7 +9,7 @@
def test_get_requests(
file_distribution_api_client,
- file_repository_api_client,
+ file_bindings,
file_repo_with_auto_publish,
file_content_unit_with_name_factory,
gen_object_with_cleanup,
@@ -26,7 +26,7 @@ def test_get_requests(
units_to_add = list(map(lambda f: f.pulp_href, content_units))
data = RepositoryAddRemoveContent(add_content_units=units_to_add)
monitor_task(
- file_repository_api_client.modify(file_repo_with_auto_publish.pulp_href, data).task
+ file_bindings.RepositoriesFileApi.modify(file_repo_with_auto_publish.pulp_href, data).task
)
data = FileFileDistribution(
diff --git a/pulpcore/tests/functional/api/test_access_policy.py b/pulpcore/tests/functional/api/test_access_policy.py
--- a/pulpcore/tests/functional/api/test_access_policy.py
+++ b/pulpcore/tests/functional/api/test_access_policy.py
@@ -4,87 +4,87 @@
@pytest.mark.parallel
-def test_access_policy_cannot_be_created(access_policies_api_client):
+def test_access_policy_cannot_be_created(pulpcore_bindings):
"""Test that only plugin writers can ship a new AccessPolicy."""
- assert not hasattr(access_policies_api_client, "create")
+ assert not hasattr(pulpcore_bindings.AccessPoliciesApi, "create")
@pytest.mark.parallel
-def test_access_policy_default_policies(access_policies_api_client):
+def test_access_policy_default_policies(pulpcore_bindings):
"""Test that the default policies from pulpcore are installed."""
- groups_response = access_policies_api_client.list(viewset_name="groups")
+ groups_response = pulpcore_bindings.AccessPoliciesApi.list(viewset_name="groups")
assert groups_response.count == 1
- groups_users_response = access_policies_api_client.list(viewset_name="groups/users")
+ groups_users_response = pulpcore_bindings.AccessPoliciesApi.list(viewset_name="groups/users")
assert groups_users_response.count == 1
- tasks_response = access_policies_api_client.list(viewset_name="tasks")
+ tasks_response = pulpcore_bindings.AccessPoliciesApi.list(viewset_name="tasks")
assert tasks_response.count == 1
-def test_statements_attr_can_be_modified(access_policies_api_client):
+def test_statements_attr_can_be_modified(pulpcore_bindings):
"""Test that `AccessPolicy.statements` can be modified"""
- tasks_response = access_policies_api_client.list(viewset_name="tasks")
+ tasks_response = pulpcore_bindings.AccessPoliciesApi.list(viewset_name="tasks")
tasks_href = tasks_response.results[0].pulp_href
- task_access_policy = access_policies_api_client.read(tasks_href)
+ task_access_policy = pulpcore_bindings.AccessPoliciesApi.read(tasks_href)
original_statements = task_access_policy.statements
assert not task_access_policy.customized
assert original_statements != []
- access_policies_api_client.partial_update(tasks_href, {"statements": []})
- task_access_policy = access_policies_api_client.read(tasks_href)
+ pulpcore_bindings.AccessPoliciesApi.partial_update(tasks_href, {"statements": []})
+ task_access_policy = pulpcore_bindings.AccessPoliciesApi.read(tasks_href)
assert task_access_policy.customized
assert task_access_policy.statements == []
- access_policies_api_client.reset(tasks_href)
- task_access_policy = access_policies_api_client.read(tasks_href)
+ pulpcore_bindings.AccessPoliciesApi.reset(tasks_href)
+ task_access_policy = pulpcore_bindings.AccessPoliciesApi.read(tasks_href)
assert not task_access_policy.customized
assert task_access_policy.statements == original_statements
-def test_creation_hooks_attr_can_be_modified(access_policies_api_client):
+def test_creation_hooks_attr_can_be_modified(pulpcore_bindings):
"""Test that `AccessPolicy.creation_hooks` can be modified"""
- groups_response = access_policies_api_client.list(viewset_name="groups")
+ groups_response = pulpcore_bindings.AccessPoliciesApi.list(viewset_name="groups")
groups_href = groups_response.results[0].pulp_href
- groups_access_policy = access_policies_api_client.read(groups_href)
+ groups_access_policy = pulpcore_bindings.AccessPoliciesApi.read(groups_href)
original_creation_hooks = groups_access_policy.creation_hooks
assert not groups_access_policy.customized
assert original_creation_hooks != []
- access_policies_api_client.partial_update(groups_href, {"creation_hooks": []})
- groups_access_policy = access_policies_api_client.read(groups_href)
+ pulpcore_bindings.AccessPoliciesApi.partial_update(groups_href, {"creation_hooks": []})
+ groups_access_policy = pulpcore_bindings.AccessPoliciesApi.read(groups_href)
assert groups_access_policy.customized
assert groups_access_policy.creation_hooks == []
- access_policies_api_client.reset(groups_href)
- groups_access_policy = access_policies_api_client.read(groups_href)
+ pulpcore_bindings.AccessPoliciesApi.reset(groups_href)
+ groups_access_policy = pulpcore_bindings.AccessPoliciesApi.read(groups_href)
assert not groups_access_policy.customized
assert groups_access_policy.creation_hooks == original_creation_hooks
@pytest.mark.parallel
-def test_customized_is_read_only(access_policies_api_client):
+def test_customized_is_read_only(pulpcore_bindings):
"""Test that the `AccessPolicy.customized` attribute is read only"""
- tasks_response = access_policies_api_client.list(viewset_name="tasks")
+ tasks_response = pulpcore_bindings.AccessPoliciesApi.list(viewset_name="tasks")
tasks_href = tasks_response.results[0].pulp_href
- task_access_policy = access_policies_api_client.read(tasks_href)
+ task_access_policy = pulpcore_bindings.AccessPoliciesApi.read(tasks_href)
- response = access_policies_api_client.partial_update(
+ response = pulpcore_bindings.AccessPoliciesApi.partial_update(
tasks_href, {"customized": not task_access_policy.customized}
)
assert response.customized == task_access_policy.customized
@pytest.mark.parallel
-def test_viewset_name_is_read_only(access_policies_api_client):
+def test_viewset_name_is_read_only(pulpcore_bindings):
"""Test that the `AccessPolicy.viewset_name` attribute is read only"""
- tasks_response = access_policies_api_client.list(viewset_name="tasks")
+ tasks_response = pulpcore_bindings.AccessPoliciesApi.list(viewset_name="tasks")
tasks_href = tasks_response.results[0].pulp_href
- task_access_policy = access_policies_api_client.read(tasks_href)
+ task_access_policy = pulpcore_bindings.AccessPoliciesApi.read(tasks_href)
- response = access_policies_api_client.partial_update(
+ response = pulpcore_bindings.AccessPoliciesApi.partial_update(
tasks_href, {"viewset_name": "not-a-real-name"}
)
assert response.viewset_name == task_access_policy.viewset_name
diff --git a/pulpcore/tests/functional/api/test_api_docs.py b/pulpcore/tests/functional/api/test_api_docs.py
--- a/pulpcore/tests/functional/api/test_api_docs.py
+++ b/pulpcore/tests/functional/api/test_api_docs.py
@@ -9,33 +9,33 @@ def pulp_docs_url(pulp_api_v3_url):
@pytest.mark.parallel
-def test_valid_credentials(pulpcore_client, pulp_docs_url):
+def test_valid_credentials(pulpcore_bindings, pulp_docs_url):
"""Get API documentation with valid credentials.
Assert the API documentation is returned.
"""
- response = pulpcore_client.request("GET", pulp_docs_url)
+ response = pulpcore_bindings.client.request("GET", pulp_docs_url)
assert response.status == 200
@pytest.mark.parallel
-def test_no_credentials(pulpcore_client, pulp_docs_url, anonymous_user):
+def test_no_credentials(pulpcore_bindings, pulp_docs_url, anonymous_user):
"""Get API documentation with no credentials.
Assert the API documentation is returned.
"""
with anonymous_user:
- response = pulpcore_client.request("GET", pulp_docs_url)
+ response = pulpcore_bindings.client.request("GET", pulp_docs_url)
assert response.status == 200
@pytest.mark.parallel
-def test_http_method(pulpcore_client, pulp_docs_url):
+def test_http_method(pulpcore_bindings, pulp_docs_url):
"""Get API documentation with an HTTP method other than GET.
Assert an error is returned.
"""
with pytest.raises(ApiException) as e:
- pulpcore_client.request("POST", pulp_docs_url)
+ pulpcore_bindings.client.request("POST", pulp_docs_url)
assert e.value.status == 405
diff --git a/pulpcore/tests/functional/api/test_correlation_id.py b/pulpcore/tests/functional/api/test_correlation_id.py
--- a/pulpcore/tests/functional/api/test_correlation_id.py
+++ b/pulpcore/tests/functional/api/test_correlation_id.py
@@ -1,7 +1,7 @@
-def test_correlation_id(cid, tasks_api_client, orphans_cleanup_api_client, monitor_task):
+def test_correlation_id(cid, pulpcore_bindings, monitor_task):
"""Test that a correlation can be passed as a header and logged."""
- response, status, headers = orphans_cleanup_api_client.cleanup_with_http_info({})
+ response, status, headers = pulpcore_bindings.OrphansCleanupApi.cleanup_with_http_info({})
monitor_task(response.task)
- task = tasks_api_client.read(response.task)
+ task = pulpcore_bindings.TasksApi.read(response.task)
assert headers["Correlation-ID"] == cid
assert task.logging_cid == cid
diff --git a/pulpcore/tests/functional/api/test_crud_domains.py b/pulpcore/tests/functional/api/test_crud_domains.py
--- a/pulpcore/tests/functional/api/test_crud_domains.py
+++ b/pulpcore/tests/functional/api/test_crud_domains.py
@@ -128,7 +128,7 @@ def test_active_domain_deletion(domains_api_client, rbac_contentguard_api_client
@pytest.mark.parallel
def test_orphan_domain_deletion(
domains_api_client,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
gen_object_with_cleanup,
monitor_task,
@@ -145,7 +145,7 @@ def test_orphan_domain_deletion(
domain = gen_object_with_cleanup(domains_api_client, body)
repository = gen_object_with_cleanup(
- file_repository_api_client, {"name": str(uuid.uuid4())}, pulp_domain=domain.name
+ file_bindings.RepositoriesFileApi, {"name": str(uuid.uuid4())}, pulp_domain=domain.name
)
new_file = tmp_path / "new_file"
new_file.write_text("Test file")
@@ -166,7 +166,7 @@ def test_orphan_domain_deletion(
assert e.value.task.state == "failed"
# Delete the repository
- file_repository_api_client.delete(repository.pulp_href)
+ file_bindings.RepositoriesFileApi.delete(repository.pulp_href)
# Now succeed in deleting the domain
response = domains_api_client.delete(domain.pulp_href)
diff --git a/pulpcore/tests/functional/api/test_replication.py b/pulpcore/tests/functional/api/test_replication.py
--- a/pulpcore/tests/functional/api/test_replication.py
+++ b/pulpcore/tests/functional/api/test_replication.py
@@ -11,7 +11,7 @@
def test_replication(
domain_factory,
bindings_cfg,
- upstream_pulp_api_client,
+ pulpcore_bindings,
monitor_task_group,
pulp_settings,
gen_object_with_cleanup,
@@ -35,10 +35,10 @@ def test_replication(
"password": bindings_cfg.password,
}
upstream_pulp = gen_object_with_cleanup(
- upstream_pulp_api_client, upstream_pulp_body, pulp_domain=non_default_domain.name
+ pulpcore_bindings.UpstreamPulpsApi, upstream_pulp_body, pulp_domain=non_default_domain.name
)
# Run the replicate task and assert that all tasks successfully complete.
- response = upstream_pulp_api_client.replicate(upstream_pulp.pulp_href)
+ response = pulpcore_bindings.UpstreamPulpsApi.replicate(upstream_pulp.pulp_href)
task_group = monitor_task_group(response.task_group)
for task in task_group.tasks:
assert task.state == "completed"
@@ -48,11 +48,10 @@ def test_replication(
def test_replication_with_wrong_ca_cert(
domain_factory,
bindings_cfg,
- upstream_pulp_api_client,
+ pulpcore_bindings,
monitor_task_group,
pulp_settings,
gen_object_with_cleanup,
- tasks_api_client,
):
# This test assures that setting ca_cert on an Upstream Pulp causes that CA bundle to be used
# to verify the certificate presented by the Upstream Pulp's REST API. The replication tasks
@@ -103,21 +102,23 @@ def test_replication_with_wrong_ca_cert(
""",
}
upstream_pulp = gen_object_with_cleanup(
- upstream_pulp_api_client, upstream_pulp_body, pulp_domain=non_default_domain.name
+ pulpcore_bindings.UpstreamPulpsApi, upstream_pulp_body, pulp_domain=non_default_domain.name
)
# Run the replicate task and assert that it fails with SSLError
with pytest.raises(PulpTaskGroupError) as e:
- response = upstream_pulp_api_client.replicate(upstream_pulp.pulp_href)
+ response = pulpcore_bindings.UpstreamPulpsApi.replicate(upstream_pulp.pulp_href)
monitor_task_group(response.task_group)
- task = tasks_api_client.read(e.value.task_group.tasks[0].pulp_href)
+ task = pulpcore_bindings.TasksApi.read(e.value.task_group.tasks[0].pulp_href)
assert "SSLError" in task.error["description"]
# Update Upstream Pulp with tls_validation=False
- upstream_pulp_api_client.partial_update(upstream_pulp.pulp_href, {"tls_validation": False})
+ pulpcore_bindings.UpstreamPulpsApi.partial_update(
+ upstream_pulp.pulp_href, {"tls_validation": False}
+ )
# Run the replicate task again and assert that all tasks successfully complete.
- response = upstream_pulp_api_client.replicate(upstream_pulp.pulp_href)
+ response = pulpcore_bindings.UpstreamPulpsApi.replicate(upstream_pulp.pulp_href)
task_group = monitor_task_group(response.task_group)
for task in task_group.tasks:
assert task.state == "completed"
@@ -166,7 +167,7 @@ def test_replicate_rbac(
try_action,
domain_factory,
bindings_cfg,
- upstream_pulp_api_client,
+ pulpcore_bindings,
pulp_settings,
gen_object_with_cleanup,
):
@@ -185,23 +186,29 @@ def test_replicate_rbac(
"pulp_label_select": str(uuid.uuid4()),
}
upstream_pulp = gen_object_with_cleanup(
- upstream_pulp_api_client, upstream_pulp_body, pulp_domain=non_default_domain.name
+ pulpcore_bindings.UpstreamPulpsApi,
+ upstream_pulp_body,
+ pulp_domain=non_default_domain.name,
)
# Assert that Alice (upstream pulp viewer) gets a 403
- try_action(alice, upstream_pulp_api_client, "replicate", 403, upstream_pulp.pulp_href)
+ try_action(alice, pulpcore_bindings.UpstreamPulpsApi, "replicate", 403, upstream_pulp.pulp_href)
# Assert that Bob (upstream pulp owner) gets a 202
- try_action(bob, upstream_pulp_api_client, "replicate", 202, upstream_pulp.pulp_href)
+ try_action(bob, pulpcore_bindings.UpstreamPulpsApi, "replicate", 202, upstream_pulp.pulp_href)
# Assert that Charlie (no role) get a 404
- try_action(charlie, upstream_pulp_api_client, "replicate", 404, upstream_pulp.pulp_href)
+ try_action(
+ charlie, pulpcore_bindings.UpstreamPulpsApi, "replicate", 404, upstream_pulp.pulp_href
+ )
# Assert that Dean can run replication
- try_action(dean, upstream_pulp_api_client, "replicate", 202, upstream_pulp.pulp_href)
+ try_action(dean, pulpcore_bindings.UpstreamPulpsApi, "replicate", 202, upstream_pulp.pulp_href)
# Assert that Dean can view the upstream pulp
- try_action(dean, upstream_pulp_api_client, "read", 200, upstream_pulp.pulp_href)
+ try_action(dean, pulpcore_bindings.UpstreamPulpsApi, "read", 200, upstream_pulp.pulp_href)
# Assert that Dean can't update the upstream pulp
- try_action(dean, upstream_pulp_api_client, "partial_update", 403, upstream_pulp.pulp_href, {})
+ try_action(
+ dean, pulpcore_bindings.UpstreamPulpsApi, "partial_update", 403, upstream_pulp.pulp_href, {}
+ )
diff --git a/pulpcore/tests/functional/api/test_repos.py b/pulpcore/tests/functional/api/test_repos.py
--- a/pulpcore/tests/functional/api/test_repos.py
+++ b/pulpcore/tests/functional/api/test_repos.py
@@ -11,7 +11,7 @@
@pytest.mark.parallel
def test_repository_content_filters(
file_content_api_client,
- file_repository_api_client,
+ file_bindings,
file_repository_factory,
file_remote_factory,
gen_object_with_cleanup,
@@ -24,57 +24,57 @@ def test_repository_content_filters(
repo_manifest_path = write_3_iso_file_fixture_data_factory(str(uuid4()))
remote = file_remote_factory(manifest_path=repo_manifest_path, policy="on_demand")
body = RepositorySyncURL(remote=remote.pulp_href)
- task_response = file_repository_api_client.sync(repo.pulp_href, body).task
+ task_response = file_bindings.RepositoriesFileApi.sync(repo.pulp_href, body).task
version_href = monitor_task(task_response).created_resources[0]
content = file_content_api_client.list(repository_version_added=version_href).results[0]
- repo = file_repository_api_client.read(repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
# filter repo by the content
- results = file_repository_api_client.list(with_content=content.pulp_href).results
+ results = file_bindings.RepositoriesFileApi.list(with_content=content.pulp_href).results
assert results == [repo]
- results = file_repository_api_client.list(latest_with_content=content.pulp_href).results
+ results = file_bindings.RepositoriesFileApi.list(latest_with_content=content.pulp_href).results
assert results == [repo]
# remove the content
- response = file_repository_api_client.modify(
+ response = file_bindings.RepositoriesFileApi.modify(
repo.pulp_href,
{"remove_content_units": [content.pulp_href]},
)
monitor_task(response.task)
- repo = file_repository_api_client.read(repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
# the repo still has the content unit
- results = file_repository_api_client.list(with_content=content.pulp_href).results
+ results = file_bindings.RepositoriesFileApi.list(with_content=content.pulp_href).results
assert results == [repo]
# but not in its latest version anymore
- results = file_repository_api_client.list(latest_with_content=content.pulp_href).results
+ results = file_bindings.RepositoriesFileApi.list(latest_with_content=content.pulp_href).results
assert results == []
@pytest.mark.parallel
-def test_repository_name_regex_filters(file_repository_factory, file_repository_api_client):
+def test_repository_name_regex_filters(file_repository_factory, file_bindings):
"""Test repository's name regex filters."""
uuid = uuid4()
repo = file_repository_factory(name=f"{uuid}-regex-test-repo")
pattern = f"^{uuid}-regex-test.*$"
- results = file_repository_api_client.list(name__regex=pattern).results
+ results = file_bindings.RepositoriesFileApi.list(name__regex=pattern).results
assert results == [repo]
# upper case pattern
- results = file_repository_api_client.list(name__regex=pattern.upper()).results
+ results = file_bindings.RepositoriesFileApi.list(name__regex=pattern.upper()).results
assert repo not in results
# upper case pattern with iregex
- results = file_repository_api_client.list(name__iregex=pattern.upper()).results
+ results = file_bindings.RepositoriesFileApi.list(name__iregex=pattern.upper()).results
assert results == [repo]
@pytest.mark.parallel
def test_repo_size(
file_repo,
- file_repository_api_client,
+ file_bindings,
file_remote_factory,
basic_manifest_path,
random_artifact_factory,
@@ -84,8 +84,8 @@ def test_repo_size(
# Sync repository with on_demand
remote = file_remote_factory(manifest_path=basic_manifest_path, policy="on_demand")
body = {"remote": remote.pulp_href}
- monitor_task(file_repository_api_client.sync(file_repo.pulp_href, body).task)
- file_repo = file_repository_api_client.read(file_repo.pulp_href)
+ monitor_task(file_bindings.RepositoriesFileApi.sync(file_repo.pulp_href, body).task)
+ file_repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
cmd = (
"pulpcore-manager",
@@ -114,7 +114,7 @@ def test_repo_size(
# Resync with immediate
remote = file_remote_factory(manifest_path=basic_manifest_path, policy="immediate")
body = {"remote": remote.pulp_href}
- monitor_task(file_repository_api_client.sync(file_repo.pulp_href, body).task)
+ monitor_task(file_bindings.RepositoriesFileApi.sync(file_repo.pulp_href, body).task)
run = subprocess.run(cmd, capture_output=True, check=True)
out = json.loads(run.stdout)
diff --git a/pulpcore/tests/functional/api/test_status.py b/pulpcore/tests/functional/api/test_status.py
--- a/pulpcore/tests/functional/api/test_status.py
+++ b/pulpcore/tests/functional/api/test_status.py
@@ -99,7 +99,7 @@ def test_post_authenticated(
test_path,
pulp_api_v3_path,
status_api_client,
- pulpcore_client,
+ pulpcore_bindings,
pulp_api_v3_url,
received_otel_span,
):
@@ -114,7 +114,7 @@ def test_post_authenticated(
# Try anyway to POST to /status/
status_url = f"{pulp_api_v3_url}status/"
with pytest.raises(ApiException) as e:
- pulpcore_client.request("POST", status_url, headers={"User-Agent": test_path})
+ pulpcore_bindings.client.request("POST", status_url, headers={"User-Agent": test_path})
assert e.value.status == 405
assert received_otel_span(
@@ -130,7 +130,7 @@ def test_post_authenticated(
@pytest.mark.parallel
def test_storage_per_domain(
status_api_client,
- pulpcore_client,
+ pulpcore_bindings,
pulp_api_v3_url,
domain_factory,
random_artifact_factory,
@@ -139,13 +139,13 @@ def test_storage_per_domain(
domain = domain_factory()
# Status endpoint is not exposed at domain url in API spec to prevent duplicates, call manually
status_url = f"{pulp_api_v3_url}status/".replace("default", domain.name)
- status_response = pulpcore_client.request("GET", status_url)
- domain_status = pulpcore_client.deserialize(status_response, "StatusResponse")
+ status_response = pulpcore_bindings.client.request("GET", status_url)
+ domain_status = pulpcore_bindings.client.deserialize(status_response, "StatusResponse")
assert domain_status.storage.used == 0
random_artifact_factory(size=1, pulp_domain=domain.name)
- status_response = pulpcore_client.request("GET", status_url)
- domain_status = pulpcore_client.deserialize(status_response, "StatusResponse")
+ status_response = pulpcore_bindings.client.request("GET", status_url)
+ domain_status = pulpcore_bindings.client.deserialize(status_response, "StatusResponse")
assert domain_status.storage.used == 1
diff --git a/pulpcore/tests/functional/api/test_tasking.py b/pulpcore/tests/functional/api/test_tasking.py
--- a/pulpcore/tests/functional/api/test_tasking.py
+++ b/pulpcore/tests/functional/api/test_tasking.py
@@ -13,10 +13,10 @@
@pytest.fixture(scope="session")
-def dispatch_task(pulpcore_client):
+def dispatch_task(pulpcore_bindings):
def _dispatch_task(*args, **kwargs):
- cid = pulpcore_client.default_headers.get("Correlation-ID") or str(uuid4())
- username = pulpcore_client.configuration.username
+ cid = pulpcore_bindings.client.default_headers.get("Correlation-ID") or str(uuid4())
+ username = pulpcore_bindings.client.configuration.username
commands = (
"from django_guid import set_guid; "
"from pulpcore.tasking.tasks import dispatch; "
@@ -81,7 +81,7 @@ def test_multi_resource_locking(dispatch_task, monitor_task):
@pytest.mark.parallel
-def test_delete_cancel_waiting_task(dispatch_task, tasks_api_client):
+def test_delete_cancel_waiting_task(dispatch_task, pulpcore_bindings):
# Queue one task after a long running one
resource = str(uuid4())
blocking_task_href = dispatch_task(
@@ -91,18 +91,18 @@ def test_delete_cancel_waiting_task(dispatch_task, tasks_api_client):
"pulpcore.app.tasks.test.sleep", args=(0,), exclusive_resources=[resource]
)
- task = tasks_api_client.read(task_href)
+ task = pulpcore_bindings.TasksApi.read(task_href)
assert task.state == "waiting"
# Try to delete first
with pytest.raises(ApiException) as ctx:
- tasks_api_client.delete(task_href)
+ pulpcore_bindings.TasksApi.delete(task_href)
assert ctx.value.status == 409
# Now cancel the task
- task = tasks_api_client.tasks_cancel(task_href, {"state": "canceled"})
+ task = pulpcore_bindings.TasksApi.tasks_cancel(task_href, {"state": "canceled"})
# cancel the blocking task
- tasks_api_client.tasks_cancel(blocking_task_href, {"state": "canceled"})
+ pulpcore_bindings.TasksApi.tasks_cancel(blocking_task_href, {"state": "canceled"})
if task.state == "canceling":
assert task.started_at is None
@@ -112,7 +112,7 @@ def test_delete_cancel_waiting_task(dispatch_task, tasks_api_client):
if task.state != "canceling":
break
time.sleep(1)
- task = tasks_api_client.read(task_href)
+ task = pulpcore_bindings.TasksApi.read(task_href)
assert task.state == "canceled"
assert task.started_at is None
@@ -120,11 +120,11 @@ def test_delete_cancel_waiting_task(dispatch_task, tasks_api_client):
@pytest.mark.parallel
-def test_delete_cancel_running_task(dispatch_task, tasks_api_client):
+def test_delete_cancel_running_task(dispatch_task, pulpcore_bindings):
task_href = dispatch_task("pulpcore.app.tasks.test.sleep", args=(600,))
for i in range(10):
- task = tasks_api_client.read(task_href)
+ task = pulpcore_bindings.TasksApi.read(task_href)
if task.state == "running":
break
time.sleep(1)
@@ -133,11 +133,11 @@ def test_delete_cancel_running_task(dispatch_task, tasks_api_client):
# Try to delete first
with pytest.raises(ApiException) as ctx:
- tasks_api_client.delete(task_href)
+ pulpcore_bindings.TasksApi.delete(task_href)
assert ctx.value.status == 409
# Now cancel the task
- task = tasks_api_client.tasks_cancel(task_href, {"state": "canceled"})
+ task = pulpcore_bindings.TasksApi.tasks_cancel(task_href, {"state": "canceled"})
if task.state == "canceling":
assert task.started_at is not None
@@ -147,7 +147,7 @@ def test_delete_cancel_running_task(dispatch_task, tasks_api_client):
if task.state != "canceling":
break
time.sleep(1)
- task = tasks_api_client.read(task_href)
+ task = pulpcore_bindings.TasksApi.read(task_href)
assert task.state == "canceled"
assert task.started_at is not None
@@ -155,24 +155,24 @@ def test_delete_cancel_running_task(dispatch_task, tasks_api_client):
@pytest.mark.parallel
-def test_cancel_delete_finished_task(tasks_api_client, dispatch_task, monitor_task):
+def test_cancel_delete_finished_task(pulpcore_bindings, dispatch_task, monitor_task):
task_href = dispatch_task("pulpcore.app.tasks.test.sleep", args=(0,))
monitor_task(task_href)
# Try to cancel first
with pytest.raises(ApiException) as ctx:
- tasks_api_client.tasks_cancel(task_href, {"state": "canceled"})
+ pulpcore_bindings.TasksApi.tasks_cancel(task_href, {"state": "canceled"})
assert ctx.value.status == 409
# Now delete the task
- tasks_api_client.delete(task_href)
+ pulpcore_bindings.TasksApi.delete(task_href)
@pytest.mark.parallel
-def test_cancel_nonexistent_task(pulp_api_v3_path, tasks_api_client):
+def test_cancel_nonexistent_task(pulp_api_v3_path, pulpcore_bindings):
task_href = f"{pulp_api_v3_path}tasks/{uuid4()}/"
with pytest.raises(ApiException) as ctx:
- tasks_api_client.tasks_cancel(task_href, {"state": "canceled"})
+ pulpcore_bindings.TasksApi.tasks_cancel(task_href, {"state": "canceled"})
assert ctx.value.status == 404
@@ -225,71 +225,71 @@ def test_retrieve_task_with_minimal_fields(task, bindings_cfg):
@pytest.mark.parallel
-def test_retrieve_task_using_invalid_worker(tasks_api_client):
+def test_retrieve_task_using_invalid_worker(pulpcore_bindings):
"""Expects to raise an exception when using invalid worker value as filter."""
with pytest.raises(ApiException) as ctx:
- tasks_api_client.list(worker=str(uuid4()))
+ pulpcore_bindings.TasksApi.list(worker=str(uuid4()))
assert ctx.value.status == 400
@pytest.mark.parallel
-def test_retrieve_task_using_valid_worker(task, tasks_api_client):
+def test_retrieve_task_using_valid_worker(task, pulpcore_bindings):
"""Expects to retrieve a task using a valid worker URI as filter."""
- response = tasks_api_client.list(worker=task.worker)
+ response = pulpcore_bindings.TasksApi.list(worker=task.worker)
assert response.results and response.count
@pytest.mark.parallel
-def test_retrieve_task_using_invalid_date(tasks_api_client):
+def test_retrieve_task_using_invalid_date(pulpcore_bindings):
"""Expects to raise an exception when using invalid dates as filters"""
with pytest.raises(ApiException) as ctx:
- tasks_api_client.list(finished_at=str(uuid4()), started_at=str(uuid4()))
+ pulpcore_bindings.TasksApi.list(finished_at=str(uuid4()), started_at=str(uuid4()))
assert ctx.value.status == 400
@pytest.mark.parallel
-def test_retrieve_task_using_valid_date(task, tasks_api_client):
+def test_retrieve_task_using_valid_date(task, pulpcore_bindings):
"""Expects to retrieve a task using a valid date."""
- response = tasks_api_client.list(started_at=task.started_at)
+ response = pulpcore_bindings.TasksApi.list(started_at=task.started_at)
assert response.results and response.count
@pytest.mark.parallel
-def test_search_task_by_name(task, tasks_api_client):
+def test_search_task_by_name(task, pulpcore_bindings):
task_name = task.name
- search_results = tasks_api_client.list(name=task.name).results
+ search_results = pulpcore_bindings.TasksApi.list(name=task.name).results
assert search_results
assert all([task.name == task_name for task in search_results])
@pytest.mark.parallel
-def test_search_task_using_an_invalid_name(tasks_api_client):
+def test_search_task_using_an_invalid_name(pulpcore_bindings):
"""Expect to return an empty results array when searching using an invalid
task name.
"""
- search_results = tasks_api_client.list(name=str(uuid4()))
+ search_results = pulpcore_bindings.TasksApi.list(name=str(uuid4()))
assert not search_results.results and not search_results.count
@pytest.mark.parallel
-def test_filter_tasks_using_worker__in_filter(tasks_api_client, dispatch_task, monitor_task):
+def test_filter_tasks_using_worker__in_filter(pulpcore_bindings, dispatch_task, monitor_task):
task1_href = dispatch_task("pulpcore.app.tasks.test.sleep", args=(0,))
task2_href = dispatch_task("pulpcore.app.tasks.test.sleep", args=(0,))
task1 = monitor_task(task1_href)
task2 = monitor_task(task2_href)
- search_results = tasks_api_client.list(worker__in=(task1.worker, task2.worker))
+ search_results = pulpcore_bindings.TasksApi.list(worker__in=(task1.worker, task2.worker))
tasks_hrefs = [task.pulp_href for task in search_results.results]
@@ -297,22 +297,22 @@ def test_filter_tasks_using_worker__in_filter(tasks_api_client, dispatch_task, m
assert task2_href in tasks_hrefs
-def test_cancel_gooey_task(tasks_api_client, dispatch_task, monitor_task):
+def test_cancel_gooey_task(pulpcore_bindings, dispatch_task, monitor_task):
task_href = dispatch_task("pulpcore.app.tasks.test.gooey_task", args=(60,))
for i in range(10):
- task = tasks_api_client.read(task_href)
+ task = pulpcore_bindings.TasksApi.read(task_href)
if task.state == "running":
break
time.sleep(1)
- task = tasks_api_client.tasks_cancel(task_href, {"state": "canceled"})
+ task = pulpcore_bindings.TasksApi.tasks_cancel(task_href, {"state": "canceled"})
if task.state == "canceling":
for i in range(30):
if task.state != "canceling":
break
time.sleep(1)
- task = tasks_api_client.read(task_href)
+ task = pulpcore_bindings.TasksApi.read(task_href)
assert task.state == "canceled"
@@ -339,12 +339,12 @@ def test_task_created_by(dispatch_task, monitor_task, gen_user, anonymous_user):
@pytest.mark.parallel
-def test_task_version_prevent_pickup(dispatch_task, tasks_api_client):
+def test_task_version_prevent_pickup(dispatch_task, pulpcore_bindings):
task1 = dispatch_task("pulpcore.app.tasks.test.sleep", args=(0,), versions={"core": "4.0.0"})
task2 = dispatch_task("pulpcore.app.tasks.test.sleep", args=(0,), versions={"catdog": "0.0.0"})
time.sleep(5)
for task_href in [task1, task2]:
- task = tasks_api_client.read(task_href)
+ task = pulpcore_bindings.TasksApi.read(task_href)
assert task.state == "waiting"
- tasks_api_client.tasks_cancel(task_href, {"state": "canceled"})
+ pulpcore_bindings.TasksApi.tasks_cancel(task_href, {"state": "canceled"})
diff --git a/pulpcore/tests/functional/api/test_upload.py b/pulpcore/tests/functional/api/test_upload.py
--- a/pulpcore/tests/functional/api/test_upload.py
+++ b/pulpcore/tests/functional/api/test_upload.py
@@ -44,7 +44,6 @@ def pulpcore_upload_chunks(
uploads_api_client,
artifacts_api_client,
gen_object_with_cleanup,
- tasks_api_client,
monitor_task,
):
"""Upload file in chunks."""
diff --git a/pulpcore/tests/functional/api/using_plugin/test_content_access.py b/pulpcore/tests/functional/api/using_plugin/test_content_access.py
--- a/pulpcore/tests/functional/api/using_plugin/test_content_access.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_content_access.py
@@ -18,7 +18,7 @@ def test_file_remote_on_demand(
file_fixtures_root,
file_repo_with_auto_publish,
file_remote_api_client,
- file_repository_api_client,
+ file_bindings,
gen_object_with_cleanup,
monitor_task,
):
@@ -32,8 +32,10 @@ def test_file_remote_on_demand(
remote = gen_object_with_cleanup(file_remote_api_client, kwargs)
# Sync from the remote
body = RepositorySyncURL(remote=remote.pulp_href)
- monitor_task(file_repository_api_client.sync(file_repo_with_auto_publish.pulp_href, body).task)
- repo = file_repository_api_client.read(file_repo_with_auto_publish.pulp_href)
+ monitor_task(
+ file_bindings.RepositoriesFileApi.sync(file_repo_with_auto_publish.pulp_href, body).task
+ )
+ repo = file_bindings.RepositoriesFileApi.read(file_repo_with_auto_publish.pulp_href)
# Create a distribution from the publication
distribution = file_distribution_factory(repository=repo.pulp_href)
# attempt to download_file() a file
diff --git a/pulpcore/tests/functional/api/using_plugin/test_content_cache.py b/pulpcore/tests/functional/api/using_plugin/test_content_cache.py
--- a/pulpcore/tests/functional/api/using_plugin/test_content_cache.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_content_cache.py
@@ -17,7 +17,7 @@ def test_full_workflow(
file_repo_with_auto_publish,
basic_manifest_path,
file_remote_factory,
- file_repository_api_client,
+ file_bindings,
file_publication_api_client,
file_distribution_api_client,
file_content_api_client,
@@ -40,8 +40,10 @@ def _check_cache(url):
# Sync from the remote and assert that a new repository version is created
remote = file_remote_factory(manifest_path=basic_manifest_path, policy="immediate")
body = RepositorySyncURL(remote=remote.pulp_href)
- monitor_task(file_repository_api_client.sync(file_repo_with_auto_publish.pulp_href, body).task)
- repo = file_repository_api_client.read(file_repo_with_auto_publish.pulp_href)
+ monitor_task(
+ file_bindings.RepositoriesFileApi.sync(file_repo_with_auto_publish.pulp_href, body).task
+ )
+ repo = file_bindings.RepositoriesFileApi.read(file_repo_with_auto_publish.pulp_href)
assert repo.latest_version_href.endswith("/versions/1/")
body = FileFilePublication(repository=repo.pulp_href)
@@ -85,7 +87,7 @@ def _check_cache(url):
relative_path="1.iso", repository_version=repo.latest_version_href
).results[0]
body = RepositoryAddRemoveContent(remove_content_units=[cfile.pulp_href])
- response = monitor_task(file_repository_api_client.modify(repo.pulp_href, body).task)
+ response = monitor_task(file_bindings.RepositoriesFileApi.modify(repo.pulp_href, body).task)
pub3 = file_publication_api_client.read(response.created_resources[1])
files = ["", "", "PULP_MANIFEST", "PULP_MANIFEST", "2.iso", "2.iso"]
for i, file in enumerate(files):
@@ -119,7 +121,7 @@ def _check_cache(url):
assert (200, "HIT" if i % 2 == 1 else "MISS") == _check_cache(url), file
# Tests that deleting a repository invalidates the cache
- monitor_task(file_repository_api_client.delete(repo.pulp_href).task)
+ monitor_task(file_bindings.RepositoriesFileApi.delete(repo.pulp_href).task)
files = ["", "PULP_MANIFEST", "2.iso"]
for file in files:
url = urljoin(distro.base_url, file)
diff --git a/pulpcore/tests/functional/api/using_plugin/test_content_delivery.py b/pulpcore/tests/functional/api/using_plugin/test_content_delivery.py
--- a/pulpcore/tests/functional/api/using_plugin/test_content_delivery.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_content_delivery.py
@@ -16,7 +16,7 @@ def test_delete_remote_on_demand(
file_repo_with_auto_publish,
file_remote_ssl_factory,
file_remote_api_client,
- file_repository_api_client,
+ file_bindings,
basic_manifest_path,
monitor_task,
file_distribution_factory,
@@ -26,8 +26,10 @@ def test_delete_remote_on_demand(
# Sync from the remote
body = RepositorySyncURL(remote=remote.pulp_href)
- monitor_task(file_repository_api_client.sync(file_repo_with_auto_publish.pulp_href, body).task)
- repo = file_repository_api_client.read(file_repo_with_auto_publish.pulp_href)
+ monitor_task(
+ file_bindings.RepositoriesFileApi.sync(file_repo_with_auto_publish.pulp_href, body).task
+ )
+ repo = file_bindings.RepositoriesFileApi.read(file_repo_with_auto_publish.pulp_href)
# Create a distribution pointing to the repository
distribution = file_distribution_factory(repository=repo.pulp_href)
@@ -45,7 +47,7 @@ def test_delete_remote_on_demand(
# Recreate the remote and sync into the repository using it
remote = file_remote_ssl_factory(manifest_path=basic_manifest_path, policy="on_demand")
body = RepositorySyncURL(remote=remote.pulp_href)
- monitor_task(file_repository_api_client.sync(repo.pulp_href, body).task)
+ monitor_task(file_bindings.RepositoriesFileApi.sync(repo.pulp_href, body).task)
# Assert that files can now be downloaded from the distribution
content_unit_url = urljoin(distribution.base_url, expected_file_list[0][0])
@@ -59,7 +61,7 @@ def test_delete_remote_on_demand(
def test_remote_artifact_url_update(
file_repo_with_auto_publish,
file_remote_ssl_factory,
- file_repository_api_client,
+ file_bindings,
basic_manifest_path,
basic_manifest_only_path,
monitor_task,
@@ -70,8 +72,10 @@ def test_remote_artifact_url_update(
# Sync from the remote
body = RepositorySyncURL(remote=remote.pulp_href)
- monitor_task(file_repository_api_client.sync(file_repo_with_auto_publish.pulp_href, body).task)
- repo = file_repository_api_client.read(file_repo_with_auto_publish.pulp_href)
+ monitor_task(
+ file_bindings.RepositoriesFileApi.sync(file_repo_with_auto_publish.pulp_href, body).task
+ )
+ repo = file_bindings.RepositoriesFileApi.read(file_repo_with_auto_publish.pulp_href)
# Create a distribution from the publication
distribution = file_distribution_factory(repository=repo.pulp_href)
@@ -90,7 +94,9 @@ def test_remote_artifact_url_update(
# Sync from the remote and assert that content can now be downloaded
body = RepositorySyncURL(remote=remote2.pulp_href)
- monitor_task(file_repository_api_client.sync(file_repo_with_auto_publish.pulp_href, body).task)
+ monitor_task(
+ file_bindings.RepositoriesFileApi.sync(file_repo_with_auto_publish.pulp_href, body).task
+ )
content_unit_url = urljoin(distribution.base_url, expected_file_list[0][0])
downloaded_file = download_file(content_unit_url)
actual_checksum = hashlib.sha256(downloaded_file.body).hexdigest()
diff --git a/pulpcore/tests/functional/api/using_plugin/test_content_promotion.py b/pulpcore/tests/functional/api/using_plugin/test_content_promotion.py
--- a/pulpcore/tests/functional/api/using_plugin/test_content_promotion.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_content_promotion.py
@@ -12,7 +12,7 @@
def test_content_promotion(
file_repo_with_auto_publish,
file_remote_ssl_factory,
- file_repository_api_client,
+ file_bindings,
file_publication_api_client,
file_distribution_factory,
basic_manifest_path,
@@ -20,7 +20,7 @@ def test_content_promotion(
):
# Create a repository, publication, and 2 distributions
remote = file_remote_ssl_factory(manifest_path=basic_manifest_path, policy="on_demand")
- file_repo = file_repository_api_client.read(file_repo_with_auto_publish.pulp_href)
+ file_repo = file_bindings.RepositoriesFileApi.read(file_repo_with_auto_publish.pulp_href)
# Check what content and artifacts are in the fixture repository
expected_files = get_files_in_manifest(remote.url)
@@ -28,7 +28,7 @@ def test_content_promotion(
# Sync from the remote and assert that a new repository version is created
body = RepositorySyncURL(remote=remote.pulp_href)
created = monitor_task(
- file_repository_api_client.sync(file_repo.pulp_href, body).task
+ file_bindings.RepositoriesFileApi.sync(file_repo.pulp_href, body).task
).created_resources
pub = file_publication_api_client.read(created[1])
diff --git a/pulpcore/tests/functional/api/using_plugin/test_contentguard.py b/pulpcore/tests/functional/api/using_plugin/test_contentguard.py
--- a/pulpcore/tests/functional/api/using_plugin/test_contentguard.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_contentguard.py
@@ -12,7 +12,7 @@
@pytest.mark.parallel
def test_rbac_content_guard_full_workflow(
- rbac_contentguard_api_client,
+ pulpcore_bindings,
groups_api_client,
groups_users_api_client,
file_distribution_api_client,
@@ -55,7 +55,9 @@ def _assert_access(authorized_users):
# Check that RBAC ContentGuard can be created and assigned to a distribution
with creator_user:
- guard = gen_object_with_cleanup(rbac_contentguard_api_client, {"name": distro.name})
+ guard = gen_object_with_cleanup(
+ pulpcore_bindings.ContentguardsRbacApi, {"name": distro.name}
+ )
body = PatchedfileFileDistribution(content_guard=guard.pulp_href)
monitor_task(file_distribution_api_client.partial_update(distro.pulp_href, body).task)
distro = file_distribution_api_client.read(distro.pulp_href)
@@ -70,29 +72,29 @@ def _assert_access(authorized_users):
"role": "core.rbaccontentguard_downloader",
}
with creator_user:
- rbac_contentguard_api_client.add_role(distro.content_guard, body)
+ pulpcore_bindings.ContentguardsRbacApi.add_role(distro.content_guard, body)
_assert_access([creator_user, user_b, user_a, pulp_admin_user])
# Use the /remove/ endpoint to remove users permission to access distribution
with creator_user:
- rbac_contentguard_api_client.remove_role(distro.content_guard, body)
+ pulpcore_bindings.ContentguardsRbacApi.remove_role(distro.content_guard, body)
_assert_access([creator_user, pulp_admin_user])
# Use the /add/ endpoint to add group
body = {"groups": [group.name], "role": "core.rbaccontentguard_downloader"}
with creator_user:
- rbac_contentguard_api_client.add_role(distro.content_guard, body)
+ pulpcore_bindings.ContentguardsRbacApi.add_role(distro.content_guard, body)
_assert_access([creator_user, user_b, user_a, pulp_admin_user])
# Use the /remove/ endpoint to remove group
with creator_user:
- rbac_contentguard_api_client.remove_role(distro.content_guard, body)
+ pulpcore_bindings.ContentguardsRbacApi.remove_role(distro.content_guard, body)
_assert_access([creator_user, pulp_admin_user])
@pytest.mark.parallel
def test_header_contentguard_workflow(
- header_contentguard_api_client,
+ pulpcore_bindings,
gen_user,
file_distribution_factory,
gen_object_with_cleanup,
@@ -109,7 +111,7 @@ def test_header_contentguard_workflow(
with creator_user:
guard = gen_object_with_cleanup(
- header_contentguard_api_client,
+ pulpcore_bindings.ContentguardsHeaderApi,
{"name": distro.name, "header_name": "x-header", "header_value": "123456"},
)
body = PatchedfileFileDistribution(content_guard=guard.pulp_href)
@@ -139,7 +141,7 @@ def test_header_contentguard_workflow(
with creator_user:
guard = gen_object_with_cleanup(
- header_contentguard_api_client,
+ pulpcore_bindings.ContentguardsHeaderApi,
{
"name": distro.name,
"header_name": header_name,
@@ -165,9 +167,7 @@ def test_header_contentguard_workflow(
def test_composite_contentguard_crud(
- composite_contentguard_api_client,
- redirect_contentguard_api_client,
- header_contentguard_api_client,
+ pulpcore_bindings,
gen_user,
gen_object_with_cleanup,
):
@@ -183,44 +183,44 @@ def test_composite_contentguard_crud(
with creator_user:
# Create RedirectContentGuard, HeaderContentGuard
hcg = gen_object_with_cleanup(
- header_contentguard_api_client,
+ pulpcore_bindings.ContentguardsHeaderApi,
{"name": str(uuid.uuid4()), "header_name": "x-header", "header_value": "123456"},
)
rcg = gen_object_with_cleanup(
- redirect_contentguard_api_client,
+ pulpcore_bindings.ContentguardsContentRedirectApi,
{"name": str(uuid.uuid4()), "description": "test_composite_contentguard_crud"},
)
# Create CCG1, no guards; evaluate
ccg1 = gen_object_with_cleanup(
- composite_contentguard_api_client,
+ pulpcore_bindings.ContentguardsCompositeApi,
{"name": str(uuid.uuid4()), "description": "test_composite_contentguard_crud"},
)
assert not ccg1.guards
- ccg1 = composite_contentguard_api_client.read(ccg1.pulp_href)
+ ccg1 = pulpcore_bindings.ContentguardsCompositeApi.read(ccg1.pulp_href)
assert not ccg1.guards
# Update CCG1, RCG, evaluate, expect 1
body = PatchedCompositeContentGuard(guards=[rcg.pulp_href])
- ccg1 = composite_contentguard_api_client.partial_update(ccg1.pulp_href, body)
+ ccg1 = pulpcore_bindings.ContentguardsCompositeApi.partial_update(ccg1.pulp_href, body)
assert ccg1.guards
assert len(ccg1.guards) == 1
# Update CCG1, HCG, evaluate, expect 1
body = PatchedCompositeContentGuard(guards=[hcg.pulp_href])
- ccg1 = composite_contentguard_api_client.partial_update(ccg1.pulp_href, body)
+ ccg1 = pulpcore_bindings.ContentguardsCompositeApi.partial_update(ccg1.pulp_href, body)
assert ccg1.guards
assert len(ccg1.guards) == 1
# Update CCG1, [RCG, HCG], evaluate, expect 2
body = PatchedCompositeContentGuard(guards=[rcg.pulp_href, hcg.pulp_href])
- ccg1 = composite_contentguard_api_client.partial_update(ccg1.pulp_href, body)
+ ccg1 = pulpcore_bindings.ContentguardsCompositeApi.partial_update(ccg1.pulp_href, body)
assert ccg1.guards
assert len(ccg1.guards) == 2
# Create CCG2, [RCG, HCG], evaluate
ccg2 = gen_object_with_cleanup(
- composite_contentguard_api_client,
+ pulpcore_bindings.ContentguardsCompositeApi,
{
"name": str(uuid.uuid4()),
"description": "test_composite_contentguard_crud",
@@ -231,20 +231,18 @@ def test_composite_contentguard_crud(
assert len(ccg2.guards) == 2
# List CCGs, expect 2
- list_response = composite_contentguard_api_client.list()
+ list_response = pulpcore_bindings.ContentguardsCompositeApi.list()
assert list_response.count == 2
# Delete CCG1
- composite_contentguard_api_client.delete(ccg1.pulp_href)
+ pulpcore_bindings.ContentguardsCompositeApi.delete(ccg1.pulp_href)
# List CCG, expect 1
- list_response = composite_contentguard_api_client.list()
+ list_response = pulpcore_bindings.ContentguardsCompositeApi.list()
assert list_response.count == 1
def test_composite_contentguard_permissions(
- composite_contentguard_api_client,
- redirect_contentguard_api_client,
- header_contentguard_api_client,
+ pulpcore_bindings,
gen_user,
gen_object_with_cleanup,
monitor_task,
@@ -265,17 +263,17 @@ def test_composite_contentguard_permissions(
with creator_user:
# Create RedirectContentGuard, HeaderContentGuard
hcg = gen_object_with_cleanup(
- header_contentguard_api_client,
+ pulpcore_bindings.ContentguardsHeaderApi,
{"name": str(uuid.uuid4()), "header_name": "x-header", "header_value": "123456"},
)
rcg = gen_object_with_cleanup(
- redirect_contentguard_api_client,
+ pulpcore_bindings.ContentguardsContentRedirectApi,
{"name": str(uuid.uuid4()), "description": "test_composite_contentguard_permissions"},
)
# Create CCG1, no guards
ccg1 = gen_object_with_cleanup(
- composite_contentguard_api_client,
+ pulpcore_bindings.ContentguardsCompositeApi,
{"name": str(uuid.uuid4()), "description": "test_composite_contentguard_permissions"},
)
@@ -297,7 +295,7 @@ def test_composite_contentguard_permissions(
# update CCG with RCG
body = PatchedCompositeContentGuard(guards=[rcg.pulp_href])
- ccg1 = composite_contentguard_api_client.partial_update(ccg1.pulp_href, body)
+ ccg1 = pulpcore_bindings.ContentguardsCompositeApi.partial_update(ccg1.pulp_href, body)
# attempt dist-access, expect 403 (1 guard, forbids)
response = get_from_url(distro.base_url)
@@ -305,7 +303,7 @@ def test_composite_contentguard_permissions(
# Create HeaderContentGuard, update CCG with [RCG, HCG]
body = PatchedCompositeContentGuard(guards=[rcg.pulp_href, hcg.pulp_href])
- composite_contentguard_api_client.partial_update(ccg1.pulp_href, body)
+ pulpcore_bindings.ContentguardsCompositeApi.partial_update(ccg1.pulp_href, body)
# attempt dist-access, expect 403 (2 guards, both forbid)
response = get_from_url(distro.base_url)
diff --git a/pulpcore/tests/functional/api/using_plugin/test_crud_repos.py b/pulpcore/tests/functional/api/using_plugin/test_crud_repos.py
--- a/pulpcore/tests/functional/api/using_plugin/test_crud_repos.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_crud_repos.py
@@ -14,7 +14,7 @@
@pytest.mark.parallel
def test_crud_repo_full_workflow(
- file_repository_api_client,
+ file_bindings,
file_repository_factory,
file_remote_factory,
basic_manifest_path,
@@ -25,7 +25,7 @@ def test_crud_repo_full_workflow(
# Try to create another with the same name
with pytest.raises(ApiException) as e:
- file_repository_api_client.create({"name": repo.name})
+ file_bindings.RepositoriesFileApi.create({"name": repo.name})
assert e.value.status == 400
error_body = json.loads(e.value.body)
@@ -33,13 +33,13 @@ def test_crud_repo_full_workflow(
assert "This field must be unique." in error_body["name"]
# Test reading the repository
- read_repo = file_repository_api_client.read(repo.pulp_href).to_dict()
+ read_repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href).to_dict()
for key, val in repo.to_dict().items():
assert key in read_repo
assert getattr(repo, key) == read_repo[key]
# Read a repository by its href providing specific field list.
- config = file_repository_api_client.api_client.configuration
+ config = file_bindings.RepositoriesFileApi.api_client.configuration
auth = BasicAuth(login=config.username, password=config.password)
full_href = urljoin(config.host, repo.pulp_href)
for fields in [
@@ -57,25 +57,27 @@ def test_crud_repo_full_workflow(
assert "name" not in response_fields
# Read the repository by its name.
- page = file_repository_api_client.list(name=repo.name)
+ page = file_bindings.RepositoriesFileApi.list(name=repo.name)
assert len(page.results) == 1
for key, val in repo.to_dict().items():
assert getattr(page.results[0], key) == val
# Ensure name is displayed when listing repositories.
- for read_repo in file_repository_api_client.list().results:
+ for read_repo in file_bindings.RepositoriesFileApi.list().results:
assert read_repo.name is not None
def _do_update_attr(attr, partial=False):
"""Update a repository attribute."""
body = {} if partial else repo.to_dict()
- function = getattr(file_repository_api_client, "partial_update" if partial else "update")
+ function = getattr(
+ file_bindings.RepositoriesFileApi, "partial_update" if partial else "update"
+ )
string = str(uuid4())
body[attr] = string
response = function(repo.pulp_href, body)
monitor_task(response.task)
# verify the update
- read_repo = file_repository_api_client.read(repo.pulp_href)
+ read_repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
assert string == getattr(read_repo, attr)
# Update a repository's name using HTTP PUT.
@@ -95,33 +97,33 @@ def _do_update_attr(attr, partial=False):
# verify that syncing with no remote raises an error
with pytest.raises(ApiException):
- file_repository_api_client.sync(repo.pulp_href, {})
+ file_bindings.RepositoriesFileApi.sync(repo.pulp_href, {})
# test setting the remote on the repo
- response = file_repository_api_client.partial_update(
+ response = file_bindings.RepositoriesFileApi.partial_update(
repo.pulp_href, {"remote": remote.pulp_href}
)
monitor_task(response.task)
# test syncing without a remote
- response = file_repository_api_client.sync(repo.pulp_href, {})
+ response = file_bindings.RepositoriesFileApi.sync(repo.pulp_href, {})
monitor_task(response.task)
- read_repo = file_repository_api_client.read(repo.pulp_href)
+ read_repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
assert read_repo.latest_version_href == f"{repo.pulp_href}versions/1/"
# Delete a repository.
- response = file_repository_api_client.delete(repo.pulp_href)
+ response = file_bindings.RepositoriesFileApi.delete(repo.pulp_href)
monitor_task(response.task)
# verify the delete
with pytest.raises(ApiException):
- file_repository_api_client.read(repo.pulp_href)
+ file_bindings.RepositoriesFileApi.read(repo.pulp_href)
# Attempt to create repository passing extraneous invalid parameter.
# Assert response returns an error 400 including ["Unexpected field"].
with pytest.raises(ApiException) as e:
- file_repository_api_client.create({"name": str(uuid4()), "foo": "bar"})
+ file_bindings.RepositoriesFileApi.create({"name": str(uuid4()), "foo": "bar"})
assert e.value.status == 400
error_body = json.loads(e.value.body)
@@ -327,7 +329,7 @@ def raise_for_invalid_request(remote_attrs):
@pytest.mark.parallel
def test_repository_remote_filter(
- file_repository_api_client,
+ file_bindings,
file_repository_factory,
gen_object_with_cleanup,
file_remote_factory,
@@ -346,30 +348,30 @@ def test_repository_remote_filter(
name_in = [repo1.name, repo2.name, repo3.name, repo4.name]
# Check that name__in filter is working
- response = file_repository_api_client.list(name__in=name_in)
+ response = file_bindings.RepositoriesFileApi.list(name__in=name_in)
assert response.count == 4
# Test that supplying a specific remote only returns repositories with that remote
- response = file_repository_api_client.list(remote=remote1.pulp_href)
+ response = file_bindings.RepositoriesFileApi.list(remote=remote1.pulp_href)
assert response.count == 1
assert response.results[0].pulp_href == repo2.pulp_href
- response = file_repository_api_client.list(remote=remote2.pulp_href)
+ response = file_bindings.RepositoriesFileApi.list(remote=remote2.pulp_href)
assert response.count == 2
assert {r.pulp_href for r in response.results} == {repo3.pulp_href, repo4.pulp_href}
- response = file_repository_api_client.list(remote=remote3.pulp_href)
+ response = file_bindings.RepositoriesFileApi.list(remote=remote3.pulp_href)
assert response.count == 0
# Test that supplying 'null' will only show repositories without a remote
- response = file_repository_api_client.list(remote="null", name__in=name_in)
+ response = file_bindings.RepositoriesFileApi.list(remote="null", name__in=name_in)
assert response.count == 1
assert response.results[0].pulp_href == repo1.pulp_href
# Test that supplying a base URI of a remote will show all repositories with similar remotes
# Using a constant here would be nice, but our URIs are dependent on the machine's settings
BASE_URI = remote1.pulp_href[:-37]
- response = file_repository_api_client.list(remote=BASE_URI, name__in=name_in)
+ response = file_bindings.RepositoriesFileApi.list(remote=BASE_URI, name__in=name_in)
assert response.count == 3
assert {r.pulp_href for r in response.results} == {
repo2.pulp_href,
diff --git a/pulpcore/tests/functional/api/using_plugin/test_distributions.py b/pulpcore/tests/functional/api/using_plugin/test_distributions.py
--- a/pulpcore/tests/functional/api/using_plugin/test_distributions.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_distributions.py
@@ -16,7 +16,7 @@ def test_crud_publication_distribution(
file_content_api_client,
file_repo,
file_remote_ssl_factory,
- file_repository_api_client,
+ file_bindings,
file_repository_version_api_client,
file_publication_api_client,
basic_manifest_path,
@@ -28,17 +28,17 @@ def test_crud_publication_distribution(
# Create a remote and sync from it to create the first repository version
remote = file_remote_ssl_factory(manifest_path=basic_manifest_path, policy="on_demand")
body = RepositorySyncURL(remote=remote.pulp_href)
- monitor_task(file_repository_api_client.sync(file_repo.pulp_href, body).task)
+ monitor_task(file_bindings.RepositoriesFileApi.sync(file_repo.pulp_href, body).task)
# Remove content to create two more repository versions
- first_repo_version_href = file_repository_api_client.read(
+ first_repo_version_href = file_bindings.RepositoriesFileApi.read(
file_repo.pulp_href
).latest_version_href
v1_content = file_content_api_client.list(repository_version=first_repo_version_href).results
for i in range(2):
monitor_task(
- file_repository_api_client.modify(
+ file_bindings.RepositoriesFileApi.modify(
file_repo.pulp_href, {"remove_content_units": [v1_content[i].pulp_href]}
).task
)
@@ -165,7 +165,7 @@ def test_distribution_filtering(
file_distribution_api_client,
file_remote_factory,
file_random_content_unit,
- file_repository_api_client,
+ file_bindings,
file_repository_factory,
file_publication_api_client,
gen_object_with_cleanup,
@@ -179,7 +179,7 @@ def generate_repo_with_content():
repo_manifest_path = write_3_iso_file_fixture_data_factory(str(uuid4()))
remote = file_remote_factory(manifest_path=repo_manifest_path, policy="on_demand")
body = RepositorySyncURL(remote=remote.pulp_href)
- task_response = file_repository_api_client.sync(repo.pulp_href, body).task
+ task_response = file_bindings.RepositoriesFileApi.sync(repo.pulp_href, body).task
version_href = monitor_task(task_response).created_resources[0]
content = file_content_api_client.list(repository_version_added=version_href).results[0]
return repo, content
@@ -226,7 +226,7 @@ def generate_repo_with_content():
# add new content to the first repository to see whether the distribution filtering correctly
# traverses to the latest publication concerning the repository under the question that should
# contain the content
- response = file_repository_api_client.modify(
+ response = file_bindings.RepositoriesFileApi.modify(
repo1.pulp_href,
{"remove_content_units": [], "add_content_units": [content2.pulp_href]},
)
diff --git a/pulpcore/tests/functional/api/using_plugin/test_filesystemexport.py b/pulpcore/tests/functional/api/using_plugin/test_filesystemexport.py
--- a/pulpcore/tests/functional/api/using_plugin/test_filesystemexport.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_filesystemexport.py
@@ -101,7 +101,7 @@ def test_delete_exporter(exporters_filesystem_api_client, monitor_task):
@pytest.fixture
def publications(
- file_repository_api_client,
+ file_bindings,
file_repository_factory,
file_remote_factory,
file_publication_api_client,
@@ -115,7 +115,7 @@ def publications(
remote = file_remote_factory(manifest_path=basic_manifest_path, policy="immediate")
repository_sync_data = RepositorySyncURL(remote=remote.pulp_href)
- sync_response = file_repository_api_client.sync(repo.pulp_href, repository_sync_data)
+ sync_response = file_bindings.RepositoriesFileApi.sync(repo.pulp_href, repository_sync_data)
monitor_task(sync_response.task)
publication = file_publication_api_client.list(repository=repo.pulp_href).results[0]
diff --git a/pulpcore/tests/functional/api/using_plugin/test_labels.py b/pulpcore/tests/functional/api/using_plugin/test_labels.py
--- a/pulpcore/tests/functional/api/using_plugin/test_labels.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_labels.py
@@ -2,8 +2,8 @@
@pytest.fixture
-def label_access_policy(access_policies_api_client):
- orig_access_policy = access_policies_api_client.list(
+def label_access_policy(pulpcore_bindings):
+ orig_access_policy = pulpcore_bindings.AccessPoliciesApi.list(
viewset_name="repositories/file/file"
).results[0]
new_statements = orig_access_policy.statements.copy()
@@ -18,32 +18,32 @@ def label_access_policy(access_policies_api_client):
"principal": "authenticated",
}
)
- access_policies_api_client.partial_update(
+ pulpcore_bindings.AccessPoliciesApi.partial_update(
orig_access_policy.pulp_href, {"statements": new_statements}
)
yield
if orig_access_policy.customized:
- access_policies_api_client.partial_update(
+ pulpcore_bindings.AccessPoliciesApi.partial_update(
orig_access_policy.pulp_href, {"statements": orig_access_policy.statements}
)
else:
- access_policies_api_client.reset(orig_access_policy.pulp_href)
+ pulpcore_bindings.AccessPoliciesApi.reset(orig_access_policy.pulp_href)
@pytest.mark.parallel
-def test_set_label(label_access_policy, file_repository_api_client, file_repository_factory):
+def test_set_label(label_access_policy, file_bindings, file_repository_factory):
repository = file_repository_factory()
assert repository.pulp_labels == {}
- file_repository_api_client.set_label(repository.pulp_href, {"key": "a", "value": None})
- file_repository_api_client.set_label(repository.pulp_href, {"key": "b", "value": ""})
- file_repository_api_client.set_label(repository.pulp_href, {"key": "c", "value": "val1"})
- file_repository_api_client.set_label(repository.pulp_href, {"key": "d", "value": "val2"})
- file_repository_api_client.set_label(repository.pulp_href, {"key": "e", "value": "val3"})
- file_repository_api_client.set_label(repository.pulp_href, {"key": "c", "value": "val4"})
- file_repository_api_client.set_label(repository.pulp_href, {"key": "d", "value": None})
+ file_bindings.RepositoriesFileApi.set_label(repository.pulp_href, {"key": "a", "value": None})
+ file_bindings.RepositoriesFileApi.set_label(repository.pulp_href, {"key": "b", "value": ""})
+ file_bindings.RepositoriesFileApi.set_label(repository.pulp_href, {"key": "c", "value": "val1"})
+ file_bindings.RepositoriesFileApi.set_label(repository.pulp_href, {"key": "d", "value": "val2"})
+ file_bindings.RepositoriesFileApi.set_label(repository.pulp_href, {"key": "e", "value": "val3"})
+ file_bindings.RepositoriesFileApi.set_label(repository.pulp_href, {"key": "c", "value": "val4"})
+ file_bindings.RepositoriesFileApi.set_label(repository.pulp_href, {"key": "d", "value": None})
- repository = file_repository_api_client.read(repository.pulp_href)
+ repository = file_bindings.RepositoriesFileApi.read(repository.pulp_href)
assert repository.pulp_labels == {
"a": None,
"b": "",
@@ -52,9 +52,9 @@ def test_set_label(label_access_policy, file_repository_api_client, file_reposit
"e": "val3",
}
- file_repository_api_client.unset_label(repository.pulp_href, {"key": "e"})
+ file_bindings.RepositoriesFileApi.unset_label(repository.pulp_href, {"key": "e"})
- repository = file_repository_api_client.read(repository.pulp_href)
+ repository = file_bindings.RepositoriesFileApi.read(repository.pulp_href)
assert repository.pulp_labels == {
"a": None,
"b": "",
diff --git a/pulpcore/tests/functional/api/using_plugin/test_orphans.py b/pulpcore/tests/functional/api/using_plugin/test_orphans.py
--- a/pulpcore/tests/functional/api/using_plugin/test_orphans.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_orphans.py
@@ -4,23 +4,14 @@
from pulpcore.app import settings
-from pulpcore.client.pulpcore import OrphansApi
-from pulpcore.client.pulpcore.exceptions import ApiException as CoreApiException
-from pulpcore.client.pulp_file.exceptions import ApiException
-
-
[email protected]
-def orphans_api_client(pulpcore_client):
- # This API is deprecated. Use it only to test its own functionality.
- return OrphansApi(pulpcore_client)
-
def test_orphans_delete(
random_artifact,
file_random_content_unit,
artifacts_api_client,
file_content_api_client,
- orphans_api_client,
+ pulpcore_bindings,
+ file_bindings,
monitor_task,
):
# Verify that the system contains the orphan content unit and the orphan artifact.
@@ -36,11 +27,11 @@ def test_orphans_delete(
assert os.path.exists(artifact_path2) is True
# Delete orphans using deprecated API
- monitor_task(orphans_api_client.delete().task)
+ monitor_task(pulpcore_bindings.OrphansApi.delete().task)
# Assert that the content unit and artifact are gone
if settings.ORPHAN_PROTECTION_TIME == 0:
- with pytest.raises(ApiException) as exc:
+ with pytest.raises(file_bindings.ApiException) as exc:
file_content_api_client.read(file_random_content_unit.pulp_href)
assert exc.value.status == 404
if settings.DEFAULT_FILE_STORAGE == "pulpcore.app.models.storage.FileSystem":
@@ -53,11 +44,12 @@ def test_orphans_cleanup(
file_random_content_unit,
artifacts_api_client,
file_content_api_client,
- orphans_cleanup_api_client,
+ pulpcore_bindings,
+ file_bindings,
monitor_task,
):
# Cleanup orphans with a nonzero orphan_protection_time
- monitor_task(orphans_cleanup_api_client.cleanup({"orphan_protection_time": 10}).task)
+ monitor_task(pulpcore_bindings.OrphansCleanupApi.cleanup({"orphan_protection_time": 10}).task)
# Verify that the system contains the orphan content unit and the orphan artifact.
content_unit = file_content_api_client.read(file_random_content_unit.pulp_href)
@@ -72,10 +64,10 @@ def test_orphans_cleanup(
assert os.path.exists(artifact_path2) is True
# Cleanup orphans with a zero orphan_protection_time
- monitor_task(orphans_cleanup_api_client.cleanup({"orphan_protection_time": 0}).task)
+ monitor_task(pulpcore_bindings.OrphansCleanupApi.cleanup({"orphan_protection_time": 0}).task)
# Assert that the content unit and the artifact are gone
- with pytest.raises(ApiException) as exc:
+ with pytest.raises(file_bindings.ApiException) as exc:
file_content_api_client.read(file_random_content_unit.pulp_href)
assert exc.value.status == 404
if settings.DEFAULT_FILE_STORAGE == "pulpcore.app.models.storage.FileSystem":
@@ -86,28 +78,29 @@ def test_orphans_cleanup(
def test_cleanup_specific_orphans(
file_content_unit_with_name_factory,
file_content_api_client,
- orphans_cleanup_api_client,
+ pulpcore_bindings,
+ file_bindings,
monitor_task,
):
content_unit_1 = file_content_unit_with_name_factory("1.iso")
content_unit_2 = file_content_unit_with_name_factory("2.iso")
cleanup_dict = {"content_hrefs": [content_unit_1.pulp_href], "orphan_protection_time": 0}
- monitor_task(orphans_cleanup_api_client.cleanup(cleanup_dict).task)
+ monitor_task(pulpcore_bindings.OrphansCleanupApi.cleanup(cleanup_dict).task)
# Assert that content_unit_2 is gone and content_unit_1 is present
- with pytest.raises(ApiException) as exc:
+ with pytest.raises(file_bindings.ApiException) as exc:
file_content_api_client.read(content_unit_1.pulp_href)
assert exc.value.status == 404
assert file_content_api_client.read(content_unit_2.pulp_href).pulp_href
# Test whether the `content_hrefs` param raises a ValidationError with [] as the value
content_hrefs_dict = {"content_hrefs": []}
- with pytest.raises(CoreApiException) as exc:
- orphans_cleanup_api_client.cleanup(content_hrefs_dict)
+ with pytest.raises(pulpcore_bindings.ApiException) as exc:
+ pulpcore_bindings.OrphansCleanupApi.cleanup(content_hrefs_dict)
assert exc.value.status == 400
# Test whether the `content_hrefs` param raises a ValidationError with and invalid href"""
content_hrefs_dict = {"content_hrefs": ["/not/a/valid/content/href"]}
- with pytest.raises(CoreApiException) as exc:
- orphans_cleanup_api_client.cleanup(content_hrefs_dict)
+ with pytest.raises(pulpcore_bindings.ApiException) as exc:
+ pulpcore_bindings.OrphansCleanupApi.cleanup(content_hrefs_dict)
assert exc.value.status == 400
diff --git a/pulpcore/tests/functional/api/using_plugin/test_pagination.py b/pulpcore/tests/functional/api/using_plugin/test_pagination.py
--- a/pulpcore/tests/functional/api/using_plugin/test_pagination.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_pagination.py
@@ -5,7 +5,7 @@
@pytest.mark.parallel
def test_repo_version_pagination(
file_content_unit_with_name_factory,
- file_repository_api_client,
+ file_bindings,
file_repository_version_api_client,
file_repo,
monitor_task,
@@ -14,7 +14,7 @@ def test_repo_version_pagination(
for i in range(20):
content_unit = file_content_unit_with_name_factory(f"{i}.iso")
monitor_task(
- file_repository_api_client.modify(
+ file_bindings.RepositoriesFileApi.modify(
file_repo.pulp_href, {"add_content_units": [content_unit.pulp_href]}
).task
)
diff --git a/pulpcore/tests/functional/api/using_plugin/test_proxy.py b/pulpcore/tests/functional/api/using_plugin/test_proxy.py
--- a/pulpcore/tests/functional/api/using_plugin/test_proxy.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_proxy.py
@@ -7,10 +7,10 @@
def _run_basic_sync_and_assert(
- remote, file_repo, file_repository_api_client, file_content_api_client, monitor_task
+ remote, file_repo, file_bindings, file_content_api_client, monitor_task
):
body = RepositorySyncURL(remote=remote.pulp_href)
- monitor_task(file_repository_api_client.sync(file_repo.pulp_href, body).task)
+ monitor_task(file_bindings.RepositoriesFileApi.sync(file_repo.pulp_href, body).task)
# Check content is present, but no artifacts are there
content_response = file_content_api_client.list(
@@ -25,7 +25,7 @@ def _run_basic_sync_and_assert(
def test_sync_http_through_http_proxy(
file_remote_factory,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
http_proxy,
basic_manifest_path,
@@ -41,7 +41,7 @@ def test_sync_http_through_http_proxy(
_run_basic_sync_and_assert(
remote_on_demand,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
monitor_task,
)
@@ -51,7 +51,7 @@ def test_sync_http_through_http_proxy(
def test_sync_https_through_http_proxy(
file_remote_ssl_factory,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
http_proxy,
basic_manifest_path,
@@ -67,7 +67,7 @@ def test_sync_https_through_http_proxy(
_run_basic_sync_and_assert(
remote_on_demand,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
monitor_task,
)
@@ -77,7 +77,7 @@ def test_sync_https_through_http_proxy(
def test_sync_https_through_http_proxy_with_auth(
file_remote_ssl_factory,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
http_proxy_with_auth,
basic_manifest_path,
@@ -98,7 +98,7 @@ def test_sync_https_through_http_proxy_with_auth(
_run_basic_sync_and_assert(
remote_on_demand,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
monitor_task,
)
@@ -108,7 +108,7 @@ def test_sync_https_through_http_proxy_with_auth(
def test_sync_https_through_http_proxy_with_auth_but_auth_not_configured(
file_remote_ssl_factory,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
http_proxy_with_auth,
basic_manifest_path,
@@ -128,7 +128,7 @@ def test_sync_https_through_http_proxy_with_auth_but_auth_not_configured(
_run_basic_sync_and_assert(
remote_on_demand,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
monitor_task,
)
@@ -140,7 +140,7 @@ def test_sync_https_through_http_proxy_with_auth_but_auth_not_configured(
def test_sync_http_through_https_proxy(
file_remote_factory,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
https_proxy,
basic_manifest_path,
@@ -159,7 +159,7 @@ def test_sync_http_through_https_proxy(
_run_basic_sync_and_assert(
remote_on_demand,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
monitor_task,
)
diff --git a/pulpcore/tests/functional/api/using_plugin/test_pulpimport.py b/pulpcore/tests/functional/api/using_plugin/test_pulpimport.py
--- a/pulpcore/tests/functional/api/using_plugin/test_pulpimport.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_pulpimport.py
@@ -33,7 +33,7 @@
@pytest.fixture
def import_export_repositories(
- file_repository_api_client,
+ file_bindings,
file_repository_factory,
file_remote_ssl_factory,
basic_manifest_path,
@@ -47,10 +47,12 @@ def import_export_repositories(
remote = file_remote_ssl_factory(manifest_path=basic_manifest_path, policy="immediate")
repository_sync_data = RepositorySyncURL(remote=remote.pulp_href)
- sync_response = file_repository_api_client.sync(export_repo.pulp_href, repository_sync_data)
+ sync_response = file_bindings.RepositoriesFileApi.sync(
+ export_repo.pulp_href, repository_sync_data
+ )
monitor_task(sync_response.task)
- export_repo = file_repository_api_client.read(export_repo.pulp_href)
+ export_repo = file_bindings.RepositoriesFileApi.read(export_repo.pulp_href)
export_repos.append(export_repo)
import_repos.append(import_repo)
@@ -211,9 +213,7 @@ def test_importer_delete(pulp_importer_factory, importers_pulp_api_client):
@pytest.mark.parallel
-def test_import(
- pulp_importer_factory, file_repository_api_client, import_export_repositories, perform_import
-):
+def test_import(pulp_importer_factory, file_bindings, import_export_repositories, perform_import):
"""Test an import."""
import_repos, exported_repos = import_export_repositories
importer = pulp_importer_factory()
@@ -225,7 +225,7 @@ def test_import(
assert report.done == len(import_repos)
for repo in import_repos:
- repo = file_repository_api_client.read(repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
assert f"{repo.pulp_href}versions/1/" == repo.latest_version_href
@@ -248,7 +248,7 @@ def test_import_auto_repo_creation(
file_content_api_client,
file_repository_factory,
file_remote_ssl_factory,
- file_repository_api_client,
+ file_bindings,
gen_object_with_cleanup,
generate_export,
importers_pulp_api_client,
@@ -262,10 +262,12 @@ def test_import_auto_repo_creation(
remote = file_remote_ssl_factory(manifest_path=basic_manifest_path, policy="immediate")
repository_sync_data = RepositorySyncURL(remote=remote.pulp_href)
- sync_response = file_repository_api_client.sync(export_repo.pulp_href, repository_sync_data)
+ sync_response = file_bindings.RepositoriesFileApi.sync(
+ export_repo.pulp_href, repository_sync_data
+ )
monitor_task(sync_response.task)
- export_repo = file_repository_api_client.read(export_repo.pulp_href)
+ export_repo = file_bindings.RepositoriesFileApi.read(export_repo.pulp_href)
added_content_in_export_repo = file_content_api_client.list(
repository_version_added=export_repo.latest_version_href
).results
@@ -280,15 +282,15 @@ def test_import_auto_repo_creation(
export = generate_export(exporter)
# 3. delete the exported repository
- monitor_task(file_repository_api_client.delete(export_repo.pulp_href).task)
- assert len(file_repository_api_client.list(name=export_repo.name).results) == 0
+ monitor_task(file_bindings.RepositoriesFileApi.delete(export_repo.pulp_href).task)
+ assert len(file_bindings.RepositoriesFileApi.list(name=export_repo.name).results) == 0
# 4. import the exported repository without creating an import repository beforehand
importer = gen_object_with_cleanup(importers_pulp_api_client, {"name": str(uuid.uuid4())})
perform_import(importer, an_export=export, body={"create_repositories": True})
# 5. run assertions on the automatically created import repository
- repositories = file_repository_api_client.list(name=export_repo.name).results
+ repositories = file_bindings.RepositoriesFileApi.list(name=export_repo.name).results
assert len(repositories) == 1
imported_repo = repositories[0]
@@ -299,7 +301,7 @@ def test_import_auto_repo_creation(
).results
assert len(added_content_in_export_repo) == len(added_content_in_imported_repo)
- monitor_task(file_repository_api_client.delete(imported_repo.pulp_href).task)
+ monitor_task(file_bindings.RepositoriesFileApi.delete(imported_repo.pulp_href).task)
@pytest.mark.parallel
@@ -307,7 +309,7 @@ def test_double_import(
pulp_importer_factory,
importers_pulp_imports_api_client,
import_export_repositories,
- file_repository_api_client,
+ file_bindings,
perform_import,
):
"""Test two imports of our export."""
@@ -321,14 +323,14 @@ def test_double_import(
assert len(imports) == 2
for repo in import_repos:
- repo = file_repository_api_client.read(repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
# still only one version as pulp won't create a new version if nothing changed
assert f"{repo.pulp_href}versions/1/" == repo.latest_version_href
@pytest.mark.parallel
def test_chunked_import(
- pulp_importer_factory, import_export_repositories, file_repository_api_client, perform_import
+ pulp_importer_factory, import_export_repositories, file_bindings, perform_import
):
"""Test an import."""
import_repos, exported_repos = import_export_repositories
@@ -337,7 +339,7 @@ def test_chunked_import(
assert (len(import_repos) + 1) == task_group.completed
for repo in import_repos:
- repo = file_repository_api_client.read(repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
assert f"{repo.pulp_href}versions/1/" == repo.latest_version_href
# We should be able to import a second time, even though the chunks have now been reassembled.
@@ -494,7 +496,7 @@ def exported_version(
generate_export,
perform_import,
file_repo,
- file_repository_api_client,
+ file_bindings,
file_repository_version_api_client,
content_api_client,
monitor_task,
@@ -508,7 +510,7 @@ def exported_version(
results = file_list.results
for a_file in results:
href = a_file.pulp_href
- modify_response = file_repository_api_client.modify(
+ modify_response = file_bindings.RepositoriesFileApi.modify(
file_repo.pulp_href, {"add_content_units": [href]}
)
monitor_task(modify_response.task)
@@ -537,12 +539,12 @@ def exported_version(
@pytest.mark.parallel
-def test_import_not_latest_version(exported_version, file_repository_api_client):
+def test_import_not_latest_version(exported_version, file_bindings):
"""Test an import."""
import_repos, task_group = exported_version
for report in task_group.group_progress_reports:
if report.code == "import.repo.versions":
assert report.done == 1
- imported_repo = file_repository_api_client.read(import_repos[0].pulp_href)
+ imported_repo = file_bindings.RepositoriesFileApi.read(import_repos[0].pulp_href)
assert f"{imported_repo.pulp_href}versions/0/" != imported_repo.latest_version_href
diff --git a/pulpcore/tests/functional/api/using_plugin/test_purge.py b/pulpcore/tests/functional/api/using_plugin/test_purge.py
--- a/pulpcore/tests/functional/api/using_plugin/test_purge.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_purge.py
@@ -14,7 +14,7 @@
TOMORROW_STR = (datetime.now(timezone.utc) + timedelta(days=1)).strftime("%Y-%m-%dT%H:%M")
-def _task_summary(tasks_api_client):
+def _task_summary(pulpcore_bindings):
"""
Summary of number of tasks in all known task-states.
:return: tuple of (total-tasks, number of final tasks, summary)
@@ -23,7 +23,7 @@ def _task_summary(tasks_api_client):
total = 0
final_total = 0
for state in TASK_STATES.__dict__.values():
- response = tasks_api_client.list(state=state)
+ response = pulpcore_bindings.TasksApi.list(state=state)
summary[state] = response.count
total += summary[state]
final_total += summary[state] if state in TASK_FINAL_STATES else 0
@@ -49,8 +49,8 @@ def _check_delete_report(task, expected):
@pytest.fixture
def sync_results(
- file_repository_api_client,
- tasks_api_client,
+ file_bindings,
+ pulpcore_bindings,
file_remote_factory,
file_remote_ssl_factory,
file_repository_factory,
@@ -67,138 +67,138 @@ def sync_results(
bad_repo = file_repository_factory()
bad_sync_data = RepositorySyncURL(remote=bad_remote.pulp_href)
- pre_total, pre_final, pre_summary = _task_summary(tasks_api_client)
+ pre_total, pre_final, pre_summary = _task_summary(pulpcore_bindings)
# good sync
- sync_response = file_repository_api_client.sync(good_repo.pulp_href, good_sync_data)
+ sync_response = file_bindings.RepositoriesFileApi.sync(good_repo.pulp_href, good_sync_data)
task = monitor_task(sync_response.task)
assert "completed" == task.state
completed_sync_task = task
# bad sync
- sync_response = file_repository_api_client.sync(bad_repo.pulp_href, bad_sync_data)
+ sync_response = file_bindings.RepositoriesFileApi.sync(bad_repo.pulp_href, bad_sync_data)
with pytest.raises(PulpTaskError):
monitor_task(sync_response.task)
- task = tasks_api_client.read(sync_response.task)
+ task = pulpcore_bindings.TasksApi.read(sync_response.task)
assert "failed" == task.state
failed_sync_task = task
- post_total, post_final, post_summary = _task_summary(tasks_api_client)
+ post_total, post_final, post_summary = _task_summary(pulpcore_bindings)
assert post_total == (pre_total + 2)
assert post_final == (pre_final + 2)
return completed_sync_task, failed_sync_task, pre_total, pre_final, pre_summary
-def test_purge_before_time(tasks_api_client, sync_results, monitor_task):
+def test_purge_before_time(pulpcore_bindings, sync_results, monitor_task):
"""Purge that should find no tasks to delete."""
_, _, pre_total, _, _ = sync_results
dta = Purge(finished_before="1970-01-01T00:00")
- response = tasks_api_client.purge(dta)
+ response = pulpcore_bindings.TasksApi.purge(dta)
task = monitor_task(response.task)
- new_total, new_final, new_summary = _task_summary(tasks_api_client)
+ new_total, new_final, new_summary = _task_summary(pulpcore_bindings)
# Should have all tasks remaining (2 completed, 1 failed)
assert (pre_total + 3) == new_total
# Should show we report having purged no tasks
assert _purge_report_total(task) == 0
-def test_purge_defaults(tasks_api_client, sync_results, monitor_task):
+def test_purge_defaults(pulpcore_bindings, sync_results, monitor_task):
"""Purge using defaults (finished_before=30-days-ago, state=completed)"""
dta = Purge()
- response = tasks_api_client.purge(dta)
+ response = pulpcore_bindings.TasksApi.purge(dta)
monitor_task(response.task)
completed_sync_task, failed_sync_task, _, _, _ = sync_results
# default is "completed before 30 days ago" - so both sync tasks should still exist
# Make sure good sync-task still exists
- tasks_api_client.read(completed_sync_task.pulp_href)
+ pulpcore_bindings.TasksApi.read(completed_sync_task.pulp_href)
# Make sure the failed sync still exists
- tasks_api_client.read(failed_sync_task.pulp_href)
+ pulpcore_bindings.TasksApi.read(failed_sync_task.pulp_href)
-def test_purge_all(tasks_api_client, sync_results, monitor_task):
+def test_purge_all(pulpcore_bindings, sync_results, monitor_task):
"""Purge all tasks in any 'final' state."""
completed_sync_task, failed_sync_task, pre_total, pre_final, pre_summary = sync_results
states = list(TASK_FINAL_STATES)
dta = Purge(finished_before=TOMORROW_STR, states=states)
- response = tasks_api_client.purge(dta)
+ response = pulpcore_bindings.TasksApi.purge(dta)
task = monitor_task(response.task)
- new_total, new_final, new_summary = _task_summary(tasks_api_client)
+ new_total, new_final, new_summary = _task_summary(pulpcore_bindings)
assert 1 == new_final, "The purge-task should be the only final-task left"
# Make sure good sync-task is gone
with pytest.raises(ApiException):
- tasks_api_client.read(completed_sync_task.pulp_href)
+ pulpcore_bindings.TasksApi.read(completed_sync_task.pulp_href)
# Make sure failed sync-task is gone
with pytest.raises(ApiException):
- tasks_api_client.read(failed_sync_task.pulp_href)
+ pulpcore_bindings.TasksApi.read(failed_sync_task.pulp_href)
# Make sure we reported the deletions
_check_delete_report(task, pre_final + 2)
-def test_purge_leave_one(tasks_api_client, sync_results, monitor_task):
+def test_purge_leave_one(pulpcore_bindings, sync_results, monitor_task):
"""Arrange to leave one task unscathed."""
# Leave only the failed sync
completed_sync_task, failed_sync_task, pre_total, pre_final, pre_summary = sync_results
dta = Purge(finished_before=failed_sync_task.finished_at)
- response = tasks_api_client.purge(dta)
+ response = pulpcore_bindings.TasksApi.purge(dta)
task = monitor_task(response.task)
# Make sure good sync-task is gone
with pytest.raises(ApiException):
- tasks_api_client.read(completed_sync_task.pulp_href)
+ pulpcore_bindings.TasksApi.read(completed_sync_task.pulp_href)
# Make sure the failed sync still exists
- tasks_api_client.read(failed_sync_task.pulp_href)
+ pulpcore_bindings.TasksApi.read(failed_sync_task.pulp_href)
# Make sure we reported the task-deletion
_check_delete_report(task, pre_final + 1)
-def test_purge_only_failed(tasks_api_client, sync_results, monitor_task):
+def test_purge_only_failed(pulpcore_bindings, sync_results, monitor_task):
"""Purge all failed tasks only."""
dta = Purge(finished_before=TOMORROW_STR, states=["failed"])
- response = tasks_api_client.purge(dta)
+ response = pulpcore_bindings.TasksApi.purge(dta)
monitor_task(response.task)
# completed sync-task should exist
completed_sync_task, failed_sync_task, pre_total, pre_final, pre_summary = sync_results
- tasks_api_client.read(completed_sync_task.pulp_href)
+ pulpcore_bindings.TasksApi.read(completed_sync_task.pulp_href)
# failed should not exist
with pytest.raises(ApiException):
- tasks_api_client.read(failed_sync_task.pulp_href)
+ pulpcore_bindings.TasksApi.read(failed_sync_task.pulp_href)
-def test_bad_date(tasks_api_client, sync_results):
+def test_bad_date(pulpcore_bindings, sync_results):
"""What happens if you use a bad date format?"""
dta = Purge(finished_before="THISISNOTADATE")
with pytest.raises(ApiException):
- tasks_api_client.purge(dta)
+ pulpcore_bindings.TasksApi.purge(dta)
-def test_bad_state(tasks_api_client, sync_results):
+def test_bad_state(pulpcore_bindings, sync_results):
"""What happens if you specify junk for a state?"""
dta = Purge(finished_before=TOMORROW_STR, states=["BAD STATE"])
with pytest.raises(ApiException):
- tasks_api_client.purge(dta)
+ pulpcore_bindings.TasksApi.purge(dta)
-def test_not_final_state(tasks_api_client, sync_results):
+def test_not_final_state(pulpcore_bindings, sync_results):
"""What happens if you use a valid state that isn't a 'final' one?"""
dta = Purge(finished_before=TOMORROW_STR, states=["running"])
with pytest.raises(ApiException):
- tasks_api_client.purge(dta)
+ pulpcore_bindings.TasksApi.purge(dta)
def test_purge_with_different_users(
- tasks_api_client,
- file_repository_api_client,
+ pulpcore_bindings,
+ file_bindings,
file_remote_ssl_factory,
file_repository_factory,
basic_manifest_path,
@@ -224,46 +224,46 @@ def test_purge_with_different_users(
user_repo = file_repository_factory()
# Sync as admin
- sync_response = file_repository_api_client.sync(admin_repo.pulp_href, admin_sync_data)
+ sync_response = file_bindings.RepositoriesFileApi.sync(admin_repo.pulp_href, admin_sync_data)
monitor_task(sync_response.task)
# Purge as user
states = list(TASK_FINAL_STATES)
data = Purge(finished_before=TOMORROW_STR, states=states)
with user:
- response = tasks_api_client.purge(data)
+ response = pulpcore_bindings.TasksApi.purge(data)
task = monitor_task(response.task)
# Make sure sync-task (executed by admin) still exists
- tasks_api_client.read(task.pulp_href)
+ pulpcore_bindings.TasksApi.read(task.pulp_href)
# Sync as user
with user:
- sync_response = file_repository_api_client.sync(user_repo.pulp_href, user_sync_data)
+ sync_response = file_bindings.RepositoriesFileApi.sync(user_repo.pulp_href, user_sync_data)
sync_task = monitor_task(sync_response.task)
# Purge as user
states = list(TASK_FINAL_STATES)
data = Purge(finished_before=TOMORROW_STR, states=states)
with user:
- response = tasks_api_client.purge(data)
+ response = pulpcore_bindings.TasksApi.purge(data)
monitor_task(response.task)
# Make sure task DOES NOT exist
with pytest.raises(ApiException):
- tasks_api_client.read(sync_task.pulp_href)
+ pulpcore_bindings.TasksApi.read(sync_task.pulp_href)
# Sync as user
with user:
- sync_response = file_repository_api_client.sync(user_repo.pulp_href, user_sync_data)
+ sync_response = file_bindings.RepositoriesFileApi.sync(user_repo.pulp_href, user_sync_data)
monitor_task(sync_response.task)
# Purge as ADMIN
states = list(TASK_FINAL_STATES)
data = Purge(finished_before=TOMORROW_STR, states=states)
- response = tasks_api_client.purge(data)
+ response = pulpcore_bindings.TasksApi.purge(data)
monitor_task(response.task)
# Make sure task DOES NOT exist
with pytest.raises(ApiException):
- tasks_api_client.read(sync_task.pulp_href)
+ pulpcore_bindings.TasksApi.read(sync_task.pulp_href)
diff --git a/pulpcore/tests/functional/api/using_plugin/test_reclaim_disk_space.py b/pulpcore/tests/functional/api/using_plugin/test_reclaim_disk_space.py
--- a/pulpcore/tests/functional/api/using_plugin/test_reclaim_disk_space.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_reclaim_disk_space.py
@@ -10,7 +10,7 @@
@pytest.mark.parallel
def test_reclaim_immediate_content(
- file_repository_api_client,
+ file_bindings,
file_repo,
repositories_reclaim_space_api_client,
artifacts_api_client,
@@ -26,7 +26,9 @@ def test_reclaim_immediate_content(
# sync the repository with immediate policy
repository_sync_data = RepositorySyncURL(remote=remote.pulp_href)
- sync_response = file_repository_api_client.sync(file_repo.pulp_href, repository_sync_data)
+ sync_response = file_bindings.RepositoriesFileApi.sync(
+ file_repo.pulp_href, repository_sync_data
+ )
monitor_task(sync_response.task)
# reclaim disk space
@@ -43,7 +45,9 @@ def test_reclaim_immediate_content(
# sync repo again
repository_sync_data = RepositorySyncURL(remote=remote.pulp_href)
- sync_response = file_repository_api_client.sync(file_repo.pulp_href, repository_sync_data)
+ sync_response = file_bindings.RepositoriesFileApi.sync(
+ file_repo.pulp_href, repository_sync_data
+ )
monitor_task(sync_response.task)
# assert re-sync populated missing artifacts
@@ -54,7 +58,7 @@ def test_reclaim_immediate_content(
@pytest.fixture
def sync_repository_distribution(
- file_repository_api_client,
+ file_bindings,
file_distribution_factory,
file_remote_ssl_factory,
file_repo_with_auto_publish,
@@ -65,7 +69,7 @@ def _sync_repository_distribution(policy="immediate"):
remote = file_remote_ssl_factory(manifest_path=basic_manifest_path, policy=policy)
repository_sync_data = RepositorySyncURL(remote=remote.pulp_href)
- sync_response = file_repository_api_client.sync(
+ sync_response = file_bindings.RepositoriesFileApi.sync(
file_repo_with_auto_publish.pulp_href, repository_sync_data
)
monitor_task(sync_response.task)
diff --git a/pulpcore/tests/functional/api/using_plugin/test_repair.py b/pulpcore/tests/functional/api/using_plugin/test_repair.py
--- a/pulpcore/tests/functional/api/using_plugin/test_repair.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_repair.py
@@ -11,7 +11,7 @@
@pytest.fixture
def repository_with_corrupted_artifacts(
- file_repository_api_client,
+ file_bindings,
file_repo,
artifacts_api_client,
file_remote_ssl_factory,
@@ -21,8 +21,8 @@ def repository_with_corrupted_artifacts(
# STEP 1: sync content from a remote source
remote = file_remote_ssl_factory(manifest_path=basic_manifest_path, policy="immediate")
sync_data = RepositorySyncURL(remote=remote.pulp_href)
- monitor_task(file_repository_api_client.sync(file_repo.pulp_href, sync_data).task)
- repo = file_repository_api_client.read(file_repo.pulp_href)
+ monitor_task(file_bindings.RepositoriesFileApi.sync(file_repo.pulp_href, sync_data).task)
+ repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
# STEP 2: sample artifacts that will be modified on the filesystem later on
content1, content2 = sample(get_files_in_manifest(remote.url), 2)
diff --git a/pulpcore/tests/functional/api/using_plugin/test_repo_versions.py b/pulpcore/tests/functional/api/using_plugin/test_repo_versions.py
--- a/pulpcore/tests/functional/api/using_plugin/test_repo_versions.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_repo_versions.py
@@ -4,9 +4,6 @@
from tempfile import NamedTemporaryFile
from uuid import uuid4
-from pulpcore.client.pulpcore import ApiException as CoreApiException
-from pulpcore.client.pulp_file.exceptions import ApiException
-
from pulpcore.tests.functional.utils import PulpTaskError, get_files_in_manifest
@@ -38,7 +35,7 @@ def file_9_contents(
def file_repository_content(
file_remote_ssl_factory,
file_repository_factory,
- file_repository_api_client,
+ file_bindings,
file_content_api_client,
basic_manifest_path,
monitor_task,
@@ -46,9 +43,11 @@ def file_repository_content(
"""Create some content that was synced into a repo on-demand."""
remote = file_remote_ssl_factory(manifest_path=basic_manifest_path, policy="on_demand")
base_repo = file_repository_factory()
- task = file_repository_api_client.sync(base_repo.pulp_href, {"remote": remote.pulp_href}).task
+ task = file_bindings.RepositoriesFileApi.sync(
+ base_repo.pulp_href, {"remote": remote.pulp_href}
+ ).task
monitor_task(task)
- base_repo = file_repository_api_client.read(base_repo.pulp_href)
+ base_repo = file_bindings.RepositoriesFileApi.read(base_repo.pulp_href)
assert base_repo.latest_version_href[-2] == "1"
contents = file_content_api_client.list(repository_version=base_repo.latest_version_href)
assert contents.count == 3
@@ -58,7 +57,7 @@ def file_repository_content(
@pytest.mark.parallel
def test_add_remove_content(
- file_repository_api_client,
+ file_bindings,
file_repository_version_api_client,
file_content_api_client,
file_repository_factory,
@@ -83,9 +82,11 @@ def test_add_remove_content(
# Sync content into the repository
CONTENT_BASE_HREF = "/api/v3/content/file/files/"
remote = file_remote_ssl_factory(manifest_path=basic_manifest_path, policy="immediate")
- task = file_repository_api_client.sync(file_repo.pulp_href, {"remote": remote.pulp_href}).task
+ task = file_bindings.RepositoriesFileApi.sync(
+ file_repo.pulp_href, {"remote": remote.pulp_href}
+ ).task
task_report = monitor_task(task)
- repo = file_repository_api_client.read(file_repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
assert task_report.created_resources[0] == repo.latest_version_href
@@ -120,9 +121,9 @@ def test_add_remove_content(
content = choice(contents.results)
body = {"remove_content_units": [content.pulp_href]}
- task = file_repository_api_client.modify(repo.pulp_href, body).task
+ task = file_bindings.RepositoriesFileApi.modify(repo.pulp_href, body).task
monitor_task(task)
- repo = file_repository_api_client.read(repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
repo_versions = file_repository_version_api_client.list(repo.pulp_href)
assert repo_versions.count == 3
@@ -144,9 +145,9 @@ def test_add_remove_content(
# Add content to the repository
body = {"add_content_units": [content.pulp_href]}
- task = file_repository_api_client.modify(repo.pulp_href, body).task
+ task = file_bindings.RepositoriesFileApi.modify(repo.pulp_href, body).task
monitor_task(task)
- repo = file_repository_api_client.read(repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
repo_versions = file_repository_version_api_client.list(repo.pulp_href)
assert repo_versions.count == 4
@@ -161,7 +162,7 @@ def test_add_remove_content(
@pytest.mark.parallel
def test_add_remove_repo_version(
- file_repository_api_client,
+ file_bindings,
file_repository_version_api_client,
file_content_api_client,
file_repository_factory,
@@ -181,11 +182,11 @@ def test_add_remove_repo_version(
# Add versions to repository
for content in contents:
- task = file_repository_api_client.modify(
+ task = file_bindings.RepositoriesFileApi.modify(
file_repo.pulp_href, {"add_content_units": [content.pulp_href]}
).task
monitor_task(task)
- repo = file_repository_api_client.read(file_repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
assert repo.latest_version_href[-2] == "9"
# Test trying to delete version 0 with a populated repository
@@ -193,18 +194,18 @@ def test_add_remove_repo_version(
monitor_task(file_repository_version_api_client.delete(ver_zero).task)
versions = file_repository_version_api_client.list(repo.pulp_href)
assert versions.count == 9
- with pytest.raises(ApiException) as e:
+ with pytest.raises(file_bindings.ApiException) as e:
file_repository_version_api_client.read(ver_zero)
assert e.value.status == 404
# Test deleting the last repository version
last_ver = repo.latest_version_href
monitor_task(file_repository_version_api_client.delete(last_ver).task)
- repo = file_repository_api_client.read(repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
assert repo.latest_version_href[-2] == "8"
versions = file_repository_version_api_client.list(repo.pulp_href)
assert versions.count == 8
- with pytest.raises(ApiException) as e:
+ with pytest.raises(file_bindings.ApiException) as e:
file_repository_version_api_client.read(last_ver)
assert e.value.status == 404
@@ -217,7 +218,7 @@ def test_add_remove_repo_version(
monitor_task(file_repository_version_api_client.delete(middle_ver).task)
versions = file_repository_version_api_client.list(repo.pulp_href)
assert versions.count == 7
- with pytest.raises(ApiException) as e:
+ with pytest.raises(file_bindings.ApiException) as e:
file_repository_version_api_client.read(middle_ver)
assert e.value.status == 404
@@ -245,7 +246,7 @@ def test_add_remove_repo_version(
@pytest.mark.parallel
def test_squash_repo_version(
- file_repository_api_client,
+ file_bindings,
file_repository_version_api_client,
file_content_api_client,
file_repository_factory,
@@ -269,7 +270,7 @@ def test_squash_repo_version(
"""
content_units = file_9_contents
file_repo = file_repository_factory()
- response1 = file_repository_api_client.modify(
+ response1 = file_bindings.RepositoriesFileApi.modify(
file_repo.pulp_href,
{
"add_content_units": [
@@ -280,7 +281,7 @@ def test_squash_repo_version(
},
)
- response2 = file_repository_api_client.modify(
+ response2 = file_bindings.RepositoriesFileApi.modify(
file_repo.pulp_href,
{
"remove_content_units": [
@@ -296,7 +297,7 @@ def test_squash_repo_version(
},
)
- response3 = file_repository_api_client.modify(
+ response3 = file_bindings.RepositoriesFileApi.modify(
file_repo.pulp_href,
{
"remove_content_units": [
@@ -308,7 +309,7 @@ def test_squash_repo_version(
},
)
- response4 = file_repository_api_client.modify(
+ response4 = file_bindings.RepositoriesFileApi.modify(
file_repo.pulp_href,
{
"remove_content_units": [
@@ -375,9 +376,7 @@ def test_squash_repo_version(
@pytest.mark.parallel
def test_content_immutable_repo_version(
- file_repository_api_client,
- file_repository_version_api_client,
- file_client,
+ file_bindings,
file_repository_factory,
file_remote_ssl_factory,
basic_manifest_path,
@@ -389,26 +388,28 @@ def test_content_immutable_repo_version(
"""
file_repo = file_repository_factory()
remote = file_remote_ssl_factory(manifest_path=basic_manifest_path, policy="on_demand")
- task = file_repository_api_client.sync(file_repo.pulp_href, {"remote": remote.pulp_href}).task
+ task = file_bindings.RepositoriesFileApi.sync(
+ file_repo.pulp_href, {"remote": remote.pulp_href}
+ ).task
monitor_task(task)
- repo = file_repository_api_client.read(file_repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
assert repo.latest_version_href[-2] == "1"
- repo_ver_attributes = dir(file_repository_version_api_client)
+ repo_ver_attributes = dir(file_bindings.RepositoriesFileVersionsApi)
# POST assertion
for attr in repo_ver_attributes:
assert "create" not in attr
- with pytest.raises(ApiException) as e:
- file_client.call_api(repo.latest_version_href, "POST", auth_settings=["basicAuth"])
+ with pytest.raises(file_bindings.ApiException) as e:
+ file_bindings.client.call_api(repo.latest_version_href, "POST", auth_settings=["basicAuth"])
assert e.value.status == 405
body = {"base_version": f"{repo.versions_href}0/"}
# PUT assertion
for attr in repo_ver_attributes:
assert "update" not in attr
- with pytest.raises(ApiException) as e:
- file_client.call_api(
+ with pytest.raises(file_bindings.ApiException) as e:
+ file_bindings.client.call_api(
repo.latest_version_href, "PUT", body=body, auth_settings=["basicAuth"]
)
assert e.value.status == 405
@@ -416,8 +417,8 @@ def test_content_immutable_repo_version(
# PATCH assertion
for attr in repo_ver_attributes:
assert "partial_update" not in attr
- with pytest.raises(ApiException) as e:
- file_client.call_api(
+ with pytest.raises(file_bindings.ApiException) as e:
+ file_bindings.client.call_api(
repo.latest_version_href, "PATCH", body=body, auth_settings=["basicAuth"]
)
assert e.value.status == 405
@@ -425,7 +426,7 @@ def test_content_immutable_repo_version(
@pytest.mark.parallel
def test_filter_repo_version(
- file_repository_api_client,
+ file_bindings,
file_repository_version_api_client,
file_repository_factory,
monitor_task,
@@ -435,11 +436,11 @@ def test_filter_repo_version(
file_repo = file_repository_factory()
# Setup 8 content units in Pulp to populate test repository with
for content in file_9_contents.values():
- task = file_repository_api_client.modify(
+ task = file_bindings.RepositoriesFileApi.modify(
file_repo.pulp_href, {"add_content_units": [content.pulp_href]}
).task
monitor_task(task)
- repo = file_repository_api_client.read(file_repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
assert repo.latest_version_href[-2] == "9"
repo_versions = file_repository_version_api_client.list(repo.pulp_href).results
@@ -451,7 +452,7 @@ def test_filter_repo_version(
{"pulp_created__gte": criteria, "pulp_created__lte": criteria},
{"pulp_created__range": [criteria, criteria]},
):
- with pytest.raises(ApiException) as e:
+ with pytest.raises(file_bindings.ApiException) as e:
file_repository_version_api_client.list(repo.pulp_href, **params)
assert e.value.status == 400
assert "Enter a valid date/time." in e.value.body
@@ -486,7 +487,7 @@ def test_filter_repo_version(
{"number__gte": criteria, "number__lte": criteria},
{"number__range": [criteria, criteria]},
):
- with pytest.raises(ApiException) as e:
+ with pytest.raises(file_bindings.ApiException) as e:
file_repository_version_api_client.list(repo.pulp_href, **params)
assert e.value.status == 400
assert "Enter a number." in e.value.body
@@ -510,7 +511,7 @@ def test_filter_repo_version(
@pytest.mark.parallel
def test_create_repo_base_version(
- file_repository_api_client,
+ file_bindings,
file_repository_version_api_client,
file_content_api_client,
file_repository_factory,
@@ -523,29 +524,31 @@ def test_create_repo_base_version(
# Test ``base_version`` for the same repository
remote = file_remote_ssl_factory(manifest_path=basic_manifest_path, policy="on_demand")
repo = file_repository_factory()
- monitor_task(file_repository_api_client.sync(repo.pulp_href, {"remote": remote.pulp_href}).task)
- repo = file_repository_api_client.read(repo.pulp_href)
+ monitor_task(
+ file_bindings.RepositoriesFileApi.sync(repo.pulp_href, {"remote": remote.pulp_href}).task
+ )
+ repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
base_content = file_content_api_client.list(repository_version=repo.latest_version_href)
base_version = file_repository_version_api_client.read(repo.latest_version_href)
assert base_version.base_version is None
# create repo version 2
monitor_task(
- file_repository_api_client.modify(
+ file_bindings.RepositoriesFileApi.modify(
repo.pulp_href, {"add_content_units": [file_random_content_unit.pulp_href]}
).task
)
- repo = file_repository_api_client.read(repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
middle_content = file_content_api_client.list(repository_version=repo.latest_version_href)
assert middle_content.count == base_content.count + 1
# create repo version 3 from version 1
monitor_task(
- file_repository_api_client.modify(
+ file_bindings.RepositoriesFileApi.modify(
repo.pulp_href, {"base_version": base_version.pulp_href}
).task
)
- repo = file_repository_api_client.read(repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
# assert that base_version of the version 3 points to version 1
latest_version = file_repository_version_api_client.read(repo.latest_version_href)
assert latest_version.base_version == base_version.pulp_href
@@ -558,11 +561,11 @@ def test_create_repo_base_version(
repo2 = file_repository_factory()
# create a version for repo B using repo A version 1 as base_version
monitor_task(
- file_repository_api_client.modify(
+ file_bindings.RepositoriesFileApi.modify(
repo2.pulp_href, {"base_version": base_version.pulp_href}
).task
)
- repo2 = file_repository_api_client.read(repo2.pulp_href)
+ repo2 = file_bindings.RepositoriesFileApi.read(repo2.pulp_href)
latest_version2 = file_repository_version_api_client.read(repo2.latest_version_href)
# assert that base_version of repo B points to version 1 of repo A
@@ -583,8 +586,8 @@ def test_create_repo_base_version(
"add_content_units": [added_content.pulp_href],
"remove_content_units": [removed_content.pulp_href],
}
- monitor_task(file_repository_api_client.modify(repo3.pulp_href, body).task)
- repo3 = file_repository_api_client.read(repo3.pulp_href)
+ monitor_task(file_bindings.RepositoriesFileApi.modify(repo3.pulp_href, body).task)
+ repo3 = file_bindings.RepositoriesFileApi.read(repo3.pulp_href)
latest_version3 = file_repository_version_api_client.read(repo3.latest_version_href)
latest_content3 = file_content_api_client.list(repository_version=repo3.latest_version_href)
@@ -600,15 +603,18 @@ def test_create_repo_base_version(
# Exception is raised when non-existent ``base_version`` is used
nonexistant_version = f"{repo.versions_href}5/"
- with pytest.raises(ApiException) as e:
- file_repository_api_client.modify(repo.pulp_href, {"base_version": nonexistant_version})
+ with pytest.raises(file_bindings.ApiException) as e:
+ file_bindings.RepositoriesFileApi.modify(
+ repo.pulp_href, {"base_version": nonexistant_version}
+ )
assert e.value.status == 400
assert "Object does not exist." in e.value.body
@pytest.mark.parallel
def test_filter_artifacts(
- file_repository_api_client,
+ pulpcore_bindings,
+ file_bindings,
artifacts_api_client,
random_artifact_factory,
file_repository_factory,
@@ -625,11 +631,11 @@ def test_filter_artifacts(
for path in duplicate_filename_paths:
remote = file_remote_ssl_factory(manifest_path=path, policy="immediate")
- task = file_repository_api_client.sync(
+ task = file_bindings.RepositoriesFileApi.sync(
file_repo.pulp_href, {"remote": remote.pulp_href}
).task
monitor_task(task)
- repo = file_repository_api_client.read(file_repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
# Assert only three artifacts show up when filtering
# Even on second sync only three will show up since the 3 added content units have the same
# relative paths of current present content and thus ends up replacing them leaving the
@@ -640,7 +646,7 @@ def test_filter_artifacts(
# Filter by invalid repository version.
bad_version = f"{file_repo.versions_href}5/"
- with pytest.raises(CoreApiException) as e:
+ with pytest.raises(pulpcore_bindings.ApiException) as e:
artifacts_api_client.list(repository_version=bad_version)
assert e.value.status == 400
for key in ("uri", "repositoryversion", "not", "found"):
@@ -649,7 +655,7 @@ def test_filter_artifacts(
@pytest.mark.parallel
def test_delete_repo_version_publication(
- file_repository_api_client,
+ file_bindings,
file_repository_version_api_client,
file_repository_factory,
file_remote_ssl_factory,
@@ -661,9 +667,11 @@ def test_delete_repo_version_publication(
"""Test that removing a repo version will delete its publication."""
file_repo = file_repository_factory()
remote = file_remote_ssl_factory(manifest_path=basic_manifest_path, policy="on_demand")
- task = file_repository_api_client.sync(file_repo.pulp_href, {"remote": remote.pulp_href}).task
+ task = file_bindings.RepositoriesFileApi.sync(
+ file_repo.pulp_href, {"remote": remote.pulp_href}
+ ).task
monitor_task(task)
- repo = file_repository_api_client.read(file_repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
assert repo.latest_version_href[-2] == "1"
pub_body = {"repository": repo.pulp_href}
@@ -673,7 +681,7 @@ def test_delete_repo_version_publication(
# delete repo version used to create publication
file_repository_version_api_client.delete(repo.latest_version_href)
- with pytest.raises(ApiException) as e:
+ with pytest.raises(file_bindings.ApiException) as e:
file_publication_api_client.read(publication.pulp_href)
assert e.value.status == 404
@@ -681,7 +689,7 @@ def test_delete_repo_version_publication(
@pytest.mark.parallel
def test_delete_protected_repo_version(
- file_repository_api_client,
+ file_bindings,
file_repository_version_api_client,
file_repository_factory,
file_remote_ssl_factory,
@@ -695,9 +703,11 @@ def test_delete_protected_repo_version(
"""Test that removing a repo version fails if its publication is distributed."""
file_repo = file_repository_factory()
remote = file_remote_ssl_factory(manifest_path=basic_manifest_path, policy="on_demand")
- task = file_repository_api_client.sync(file_repo.pulp_href, {"remote": remote.pulp_href}).task
+ task = file_bindings.RepositoriesFileApi.sync(
+ file_repo.pulp_href, {"remote": remote.pulp_href}
+ ).task
monitor_task(task)
- repo = file_repository_api_client.read(file_repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(file_repo.pulp_href)
assert repo.latest_version_href[-2] == "1"
pub_body = {"repository": repo.pulp_href}
@@ -708,7 +718,7 @@ def test_delete_protected_repo_version(
assert distribution.publication == publication.pulp_href
# deleting a protected repo version fails
- with pytest.raises(ApiException) as e:
+ with pytest.raises(file_bindings.ApiException) as e:
file_repository_version_api_client.delete(repo.latest_version_href)
assert e.value.status == 400
assert "The repository version cannot be deleted" in e.value.body
@@ -722,14 +732,14 @@ def test_delete_protected_repo_version(
# and then delete the repo version
task = file_repository_version_api_client.delete(repo.latest_version_href).task
monitor_task(task)
- with pytest.raises(ApiException) as e:
+ with pytest.raises(file_bindings.ApiException) as e:
file_repository_version_api_client.read(repo.latest_version_href)
assert e.value.status == 404
@pytest.mark.parallel
def test_clear_all_units_repo_version(
- file_repository_api_client,
+ file_bindings,
file_repository_version_api_client,
file_content_api_client,
file_repository_factory,
@@ -742,13 +752,13 @@ def test_clear_all_units_repo_version(
# Test addition and removal of all units for a given repository version.
repo = file_repository_factory()
remote = file_remote_ssl_factory(manifest_path=basic_manifest_path, policy="on_demand")
- file_repository_api_client.sync(repo.pulp_href, {"remote": remote.pulp_href})
+ file_bindings.RepositoriesFileApi.sync(repo.pulp_href, {"remote": remote.pulp_href})
content = choice(list(file_9_contents.values()))
body = {"add_content_units": [content.pulp_href], "remove_content_units": ["*"]}
- task = file_repository_api_client.modify(repo.pulp_href, body).task
+ task = file_bindings.RepositoriesFileApi.modify(repo.pulp_href, body).task
monitor_task(task)
- repo = file_repository_api_client.read(repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
assert repo.latest_version_href[-2] == "2"
latest_version = file_repository_version_api_client.read(repo.latest_version_href)
@@ -762,16 +772,16 @@ def test_clear_all_units_repo_version(
# Test clear all units using base version.
repo = file_repository_factory()
for content in file_9_contents.values():
- task = file_repository_api_client.modify(
+ task = file_bindings.RepositoriesFileApi.modify(
repo.pulp_href, {"add_content_units": [content.pulp_href]}
).task
monitor_task(task)
- repo = file_repository_api_client.read(repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
base_version_four = f"{repo.versions_href}4/"
body = {"base_version": base_version_four, "remove_content_units": ["*"]}
- monitor_task(file_repository_api_client.modify(repo.pulp_href, body).task)
- repo = file_repository_api_client.read(repo.pulp_href)
+ monitor_task(file_bindings.RepositoriesFileApi.modify(repo.pulp_href, body).task)
+ repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
assert repo.latest_version_href[-3:-1] == "10"
latest_version = file_repository_version_api_client.read(repo.latest_version_href)
@@ -780,8 +790,8 @@ def test_clear_all_units_repo_version(
assert latest_version.content_summary.removed["file.file"]["count"] == 9
# Test http error is raised when invalid remove
- with pytest.raises(ApiException) as e:
- file_repository_api_client.modify(
+ with pytest.raises(file_bindings.ApiException) as e:
+ file_bindings.RepositoriesFileApi.modify(
repo.pulp_href, {"remove_content_units": ["*", content.pulp_href]}
)
assert e.value.status == 400
@@ -791,7 +801,7 @@ def test_clear_all_units_repo_version(
@pytest.mark.parallel
def test_repo_version_retention(
- file_repository_api_client,
+ file_bindings,
file_repository_version_api_client,
file_repository_content,
file_content_api_client,
@@ -809,11 +819,11 @@ def test_repo_version_retention(
# Test repo version retention.
repo = file_repository_factory(retain_repo_versions=1)
for content in contents.results:
- task = file_repository_api_client.modify(
+ task = file_bindings.RepositoriesFileApi.modify(
repo.pulp_href, {"add_content_units": [content.pulp_href]}
).task
monitor_task(task)
- repo = file_repository_api_client.read(repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
assert repo.latest_version_href[-2] == "3"
versions = file_repository_version_api_client.list(repo.pulp_href)
@@ -828,17 +838,17 @@ def test_repo_version_retention(
# Test repo version retention when retain_repo_versions is set.
repo = file_repository_factory()
for content in contents.results:
- task = file_repository_api_client.modify(
+ task = file_bindings.RepositoriesFileApi.modify(
repo.pulp_href, {"add_content_units": [content.pulp_href]}
).task
monitor_task(task)
- repo = file_repository_api_client.read(repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
versions = file_repository_version_api_client.list(repo.pulp_href)
assert versions.count == 4
# update retain_repo_versions to 2
- task = file_repository_api_client.partial_update(
+ task = file_bindings.RepositoriesFileApi.partial_update(
repo.pulp_href, {"retain_repo_versions": 2}
).task
monitor_task(task)
@@ -856,18 +866,18 @@ def test_repo_version_retention(
repo = file_repository_factory(**body)
publications = []
for content in contents.results:
- task = file_repository_api_client.modify(
+ task = file_bindings.RepositoriesFileApi.modify(
repo.pulp_href, {"add_content_units": [content.pulp_href]}
).task
monitor_task(task)
- repo = file_repository_api_client.read(repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
publications.append(
file_publication_api_client.list(repository_version=repo.latest_version_href).results[0]
)
# all but the last publication should be gone
for publication in publications[:-1]:
- with pytest.raises(ApiException) as ae:
+ with pytest.raises(file_bindings.ApiException) as ae:
file_publication_api_client.read(publication.pulp_href)
assert ae.value.status == 404
@@ -879,7 +889,7 @@ def test_repo_version_retention(
@pytest.mark.parallel
def test_repo_versions_protected_from_cleanup(
- file_repository_api_client,
+ file_bindings,
file_repository_version_api_client,
file_repository_content,
file_publication_api_client,
@@ -891,12 +901,12 @@ def test_repo_versions_protected_from_cleanup(
"""Test that distributed repo versions are protected from retain_repo_versions."""
def _modify_and_validate(repo, content, expected_version, expected_total):
- task = file_repository_api_client.modify(
+ task = file_bindings.RepositoriesFileApi.modify(
repo.pulp_href, {"add_content_units": [content.pulp_href]}
).task
monitor_task(task)
- repo = file_repository_api_client.read(repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
assert repo.latest_version_href[-2] == expected_version
versions = file_repository_version_api_client.list(repo.pulp_href)
@@ -933,7 +943,8 @@ def _modify_and_validate(repo, content, expected_version, expected_total):
@pytest.mark.parallel
def test_content_in_repository_version_view(
- file_repository_api_client,
+ pulpcore_bindings,
+ file_bindings,
repository_versions_api_client,
file_repository_factory,
file_random_content_unit,
@@ -945,7 +956,7 @@ def test_content_in_repository_version_view(
"/pulp/api/v3/content/file/files/c4ed74cf-a806-490d-a25f-94c3c3dd2dd7/"
)
- with pytest.raises(CoreApiException) as e:
+ with pytest.raises(pulpcore_bindings.ApiException) as e:
repository_versions_api_client.list(content=non_existant_content_href)
assert e.value.status == 400
@@ -955,9 +966,9 @@ def test_content_in_repository_version_view(
# Add content to first repo and assert repo-ver list w/ content is correct
body = {"add_content_units": [file_random_content_unit.pulp_href]}
- task = file_repository_api_client.modify(repo.pulp_href, body).task
+ task = file_bindings.RepositoriesFileApi.modify(repo.pulp_href, body).task
monitor_task(task)
- repo = file_repository_api_client.read(repo.pulp_href)
+ repo = file_bindings.RepositoriesFileApi.read(repo.pulp_href)
assert repo.latest_version_href[-2] == "1"
repo_vers = repository_versions_api_client.list(content=file_random_content_unit.pulp_href)
@@ -965,9 +976,9 @@ def test_content_in_repository_version_view(
assert repo_vers.results[0].pulp_href == repo.latest_version_href
# Add content to second repo and assert repo-ver list w/ content is larger
- task = file_repository_api_client.modify(repo2.pulp_href, body).task
+ task = file_bindings.RepositoriesFileApi.modify(repo2.pulp_href, body).task
monitor_task(task)
- repo2 = file_repository_api_client.read(repo2.pulp_href)
+ repo2 = file_bindings.RepositoriesFileApi.read(repo2.pulp_href)
assert repo2.latest_version_href[-2] == "1"
repo_vers = repository_versions_api_client.list(content=file_random_content_unit.pulp_href)
diff --git a/pulpcore/tests/functional/api/using_plugin/test_tasks.py b/pulpcore/tests/functional/api/using_plugin/test_tasks.py
--- a/pulpcore/tests/functional/api/using_plugin/test_tasks.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_tasks.py
@@ -12,9 +12,9 @@
@pytest.fixture
-def distribution(file_repo, file_distribution_api_client, gen_object_with_cleanup):
+def distribution(file_bindings, file_repo, gen_object_with_cleanup):
distribution = gen_object_with_cleanup(
- file_distribution_api_client,
+ file_bindings.DistributionsFileApi,
{"name": str(uuid4()), "base_path": str(uuid4()), "repository": file_repo.pulp_href},
)
@@ -42,8 +42,8 @@ def test_retrieve_task_with_fields_created_resources_only(
@pytest.fixture
def setup_filter_fixture(
+ file_bindings,
file_repo,
- file_repository_api_client,
file_remote_ssl_factory,
basic_manifest_path,
tasks_api_client,
@@ -52,9 +52,11 @@ def setup_filter_fixture(
remote = file_remote_ssl_factory(manifest_path=basic_manifest_path, policy="on_demand")
body = RepositorySyncURL(remote=remote.pulp_href)
- repo_sync_task = monitor_task(file_repository_api_client.sync(file_repo.pulp_href, body).task)
+ repo_sync_task = monitor_task(
+ file_bindings.RepositoriesFileApi.sync(file_repo.pulp_href, body).task
+ )
- repo_update_action = file_repository_api_client.partial_update(
+ repo_update_action = file_bindings.RepositoriesFileApi.partial_update(
file_repo.pulp_href, {"description": str(uuid4())}
)
repo_update_task = tasks_api_client.read(repo_update_action.task)
diff --git a/pulpcore/tests/functional/api/using_plugin/test_unlinking_repo.py b/pulpcore/tests/functional/api/using_plugin/test_unlinking_repo.py
--- a/pulpcore/tests/functional/api/using_plugin/test_unlinking_repo.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_unlinking_repo.py
@@ -4,7 +4,7 @@
@pytest.mark.parallel
def test_shared_remote_usage(
- file_repository_api_client,
+ file_bindings,
file_repository_factory,
file_content_api_client,
file_remote_ssl_factory,
@@ -17,13 +17,13 @@ def test_shared_remote_usage(
# Create and sync repos.
repos = [file_repository_factory() for dummy in range(4)]
sync_tasks = [
- file_repository_api_client.sync(repo.pulp_href, {"remote": remote.pulp_href}).task
+ file_bindings.RepositoriesFileApi.sync(repo.pulp_href, {"remote": remote.pulp_href}).task
for repo in repos
]
for task in sync_tasks:
monitor_task(task)
- repos = [(file_repository_api_client.read(repo.pulp_href)) for repo in repos]
+ repos = [(file_bindings.RepositoriesFileApi.read(repo.pulp_href)) for repo in repos]
# Compare contents of repositories.
contents = set()
diff --git a/pulpcore/tests/functional/gpg_ascii_armor_signing_service.py b/pulpcore/tests/functional/gpg_ascii_armor_signing_service.py
--- a/pulpcore/tests/functional/gpg_ascii_armor_signing_service.py
+++ b/pulpcore/tests/functional/gpg_ascii_armor_signing_service.py
@@ -1,16 +1,13 @@
-import asyncio
import json
-import os
-import stat
import subprocess
import uuid
-import aiohttp
+import requests
import gnupg
import pytest
-signing_script_string = r"""#!/usr/bin/env bash
+SIGNING_SCRIPT_STRING = r"""#!/usr/bin/env bash
FILE_PATH=$1
SIGNATURE_PATH="$1.asc"
@@ -33,26 +30,24 @@
@pytest.fixture(scope="session")
def signing_script_path(signing_script_temp_dir, signing_gpg_homedir_path):
- signing_script_filename = signing_script_temp_dir / "sign-metadata.sh"
- with open(signing_script_filename, "w", 0o770) as sign_metadata_file:
- sign_metadata_file.write(
- signing_script_string.replace("HOMEDIRHERE", str(signing_gpg_homedir_path))
- )
+ signing_script_file = signing_script_temp_dir / "sign-metadata.sh"
+ signing_script_file.write_text(
+ SIGNING_SCRIPT_STRING.replace("HOMEDIRHERE", str(signing_gpg_homedir_path))
+ )
- st = os.stat(signing_script_filename)
- os.chmod(signing_script_filename, st.st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
+ signing_script_file.chmod(0o755)
- return signing_script_filename
+ return signing_script_file
@pytest.fixture(scope="session")
-def signing_script_temp_dir(tmpdir_factory):
- yield tmpdir_factory.mktemp(str(uuid.uuid4()))
+def signing_script_temp_dir(tmp_path_factory):
+ return tmp_path_factory.mktemp("sigining_script_dir")
@pytest.fixture(scope="session")
-def signing_gpg_homedir_path(tmpdir_factory):
- return tmpdir_factory.mktemp(str(uuid.uuid4()))
+def signing_gpg_homedir_path(tmp_path_factory):
+ return tmp_path_factory.mktemp("gpghome")
@pytest.fixture
@@ -87,18 +82,13 @@ def _sign_with_ascii_armored_detached_signing_service(filename):
@pytest.fixture(scope="session")
def signing_gpg_metadata(signing_gpg_homedir_path):
"""A fixture that returns a GPG instance and related metadata (i.e., fingerprint, keyid)."""
- private_key_url = "https://raw.githubusercontent.com/pulp/pulp-fixtures/master/common/GPG-PRIVATE-KEY-fixture-signing" # noqa: E501
+ PRIVATE_KEY_URL = "https://raw.githubusercontent.com/pulp/pulp-fixtures/master/common/GPG-PRIVATE-KEY-fixture-signing" # noqa: E501
- async def download_key():
- async with aiohttp.ClientSession() as session:
- async with session.get(private_key_url) as response:
- return await response.text()
-
- private_key_data = asyncio.run(download_key())
+ response = requests.get(PRIVATE_KEY_URL)
+ response.raise_for_status()
gpg = gnupg.GPG(gnupghome=signing_gpg_homedir_path)
-
- gpg.import_keys(private_key_data)
+ gpg.import_keys(response.content)
fingerprint = gpg.list_keys()[0]["fingerprint"]
keyid = gpg.list_keys()[0]["keyid"]
diff --git a/pulpcore/tests/functional/utils.py b/pulpcore/tests/functional/utils.py
--- a/pulpcore/tests/functional/utils.py
+++ b/pulpcore/tests/functional/utils.py
@@ -40,6 +40,21 @@ def __init__(self, task_group):
self.task_group = task_group
+class BindingsNamespace:
+ def __init__(self, module, client):
+ self.module = module
+ self.client = client
+ self.ApiException = self.module.exceptions.ApiException
+
+ def __getattr__(self, name):
+ # __getattr__ is only consulted if nothing is found in __dict__.
+ assert name.endswith("Api")
+
+ api_object = getattr(self.module, name)(self.client)
+ self.__dict__[name] = api_object
+ return api_object
+
+
@dataclass
class MockDownload:
"""Class for representing a downloaded file."""
| Running unit tests with Pytest is pulling in functional test requirements
**Describe the bug**
I should be able to exclusively run the unit tests without issues. However, it seems we're getting caught up on a pytest plugin that gets loaded, which then pulls in the functional test code, making the separation between unit tests and functional tests moot.
```
[root@5531d25eaa6e pulp_rpm]# pytest -v -r sx --color=yes pulp_rpm/tests/unit/
Traceback (most recent call last):
File "/usr/local/bin/pytest", line 8, in <module>
sys.exit(console_main())
File "/usr/local/lib/python3.8/site-packages/_pytest/config/__init__.py", line 192, in console_main
code = main()
File "/usr/local/lib/python3.8/site-packages/_pytest/config/__init__.py", line 150, in main
config = _prepareconfig(args, plugins)
File "/usr/local/lib/python3.8/site-packages/_pytest/config/__init__.py", line 331, in _prepareconfig
config = pluginmanager.hook.pytest_cmdline_parse(
File "/usr/local/lib/python3.8/site-packages/pluggy/_hooks.py", line 493, in __call__
return self._hookexec(self.name, self._hookimpls, kwargs, firstresult)
File "/usr/local/lib/python3.8/site-packages/pluggy/_manager.py", line 115, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "/usr/local/lib/python3.8/site-packages/pluggy/_callers.py", line 130, in _multicall
teardown[0].send(outcome)
File "/usr/local/lib/python3.8/site-packages/_pytest/helpconfig.py", line 104, in pytest_cmdline_parse
config: Config = outcome.get_result()
File "/usr/local/lib/python3.8/site-packages/pluggy/_result.py", line 114, in get_result
raise exc.with_traceback(exc.__traceback__)
File "/usr/local/lib/python3.8/site-packages/pluggy/_callers.py", line 77, in _multicall
res = hook_impl.function(*args)
File "/usr/local/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1075, in pytest_cmdline_parse
self.parse(args)
File "/usr/local/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1425, in parse
self._preparse(args, addopts=addopts)
File "/usr/local/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1305, in _preparse
self.pluginmanager.load_setuptools_entrypoints("pytest11")
File "/usr/local/lib/python3.8/site-packages/pluggy/_manager.py", line 398, in load_setuptools_entrypoints
plugin = ep.load()
File "/usr/lib64/python3.8/importlib/metadata.py", line 77, in load
module = import_module(match.group('module'))
File "/usr/lib64/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 843, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/src/pulpcore/pulp_file/pytest_plugin.py", line 21, in <module>
from pulpcore.tests.functional.utils import generate_iso, generate_manifest
File "/src/pulpcore/pulpcore/tests/functional/__init__.py", line 25, in <module>
from pulpcore.tests.functional.utils import (
File "/src/pulpcore/pulpcore/tests/functional/utils.py", line 22, in <module>
from pulp_smash.pulp3.bindings import PulpTaskError, PulpTaskGroupError
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "/usr/local/lib/python3.8/site-packages/_pytest/assertion/rewrite.py", line 186, in exec_module
exec(co, module.__dict__)
File "/usr/local/lib/python3.8/site-packages/pulp_smash/pulp3/bindings.py", line 15, in <module>
cfg = get_config()
File "/usr/local/lib/python3.8/site-packages/pulp_smash/config.py", line 242, in get_config
_CONFIG = PulpSmashConfig.load()
File "/usr/local/lib/python3.8/site-packages/pulp_smash/config.py", line 553, in load
path = cls.get_load_path(xdg_subdir, config_file)
File "/usr/local/lib/python3.8/site-packages/pulp_smash/config.py", line 616, in get_load_path
raise exceptions.ConfigFileNotFoundError(
pulp_smash.exceptions.ConfigFileNotFoundError: /opt/settings/pulp_smash/settings.jsonPulp Smash is unable to find a configuration file. The following (XDG compliant) paths have been searched: , /etc/xdg/pulp_smash/settings.json
```
I think this pins it down a bit
```
File "/src/pulpcore/pulp_file/pytest_plugin.py", line 21, in <module>
from pulpcore.tests.functional.utils import generate_iso, generate_manifest
File "/src/pulpcore/pulpcore/tests/functional/__init__.py", line 25, in <module>
from pulpcore.tests.functional.utils import (
File "/src/pulpcore/pulpcore/tests/functional/utils.py", line 22, in <module>
from pulp_smash.pulp3.bindings import PulpTaskError, PulpTaskGroupError
```
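The traceback suggests the root cause is a module-level side effect: `pulp_smash.pulp3.bindings` calls `get_config()` at import time, so merely loading the `pulp_file` pytest plugin demands a pulp-smash settings file even for unit tests. A minimal sketch of the lazy-loading pattern that would avoid this is below — it is only an illustration of the idea, not the actual pulp-smash code.
```python
# Hypothetical sketch of deferring the config lookup (illustration only, not
# the real pulp-smash code): nothing is loaded until a functional test asks.
_CFG = None


def get_cfg():
    """Return the pulp-smash config, loading it lazily on first use."""
    global _CFG
    if _CFG is None:
        from pulp_smash.config import get_config  # imported here, not at module top

        _CFG = get_config()  # ConfigFileNotFoundError can only happen at call time
    return _CFG
```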
**To Reproduce**
Run `pytest -v -r sx --color=yes {project}/tests/unit/`
| 2023-12-08T16:51:19 |
|
pulp/pulpcore | 4,863 | pulp__pulpcore-4863 | [
"4777"
] | 120c08ef0881fcbae6136a2390fdc98b13129caf | diff --git a/pulpcore/app/serializers/exporter.py b/pulpcore/app/serializers/exporter.py
--- a/pulpcore/app/serializers/exporter.py
+++ b/pulpcore/app/serializers/exporter.py
@@ -1,6 +1,6 @@
import os
-import re
from gettext import gettext as _
+import re
from rest_framework import serializers
from rest_framework.validators import UniqueValidator
@@ -19,6 +19,16 @@
from pulpcore.constants import FS_EXPORT_CHOICES, FS_EXPORT_METHODS
+def parse_human_readable_file_size(size: str):
+ # based on https://stackoverflow.com/a/42865957/2002471
+ units = {"B": 1, "KB": 2**10, "MB": 2**20, "GB": 2**30, "TB": 2**40}
+ size = size.upper()
+ if not re.match(r" ", size):
+ size = re.sub(r"([KMGT]?B)", r" \1", size)
+ number, unit = [string.strip() for string in size.split()]
+ return int(float(number) * units[unit])
+
+
class ExporterSerializer(ModelSerializer):
"""
Base serializer for Exporters.
@@ -208,23 +218,13 @@ def validate(self, data):
)
return super().validate(data)
- @staticmethod
- def _parse_size(size):
+ def validate_chunk_size(self, chunk_size):
try:
- # based on https://stackoverflow.com/a/42865957/2002471
- units = {"B": 1, "KB": 2**10, "MB": 2**20, "GB": 2**30, "TB": 2**40}
- size = size.upper()
- if not re.match(r" ", size):
- size = re.sub(r"([KMGT]?B)", r" \1", size)
- number, unit = [string.strip() for string in size.split()]
- return int(float(number) * units[unit])
+ the_size = parse_human_readable_file_size(chunk_size)
except ValueError:
raise serializers.ValidationError(
- _("chunk_size '{}' is not valid (valid units are B/KB/MB/GB/TB)").format(size)
+ _("chunk_size '{}' is not valid (valid units are B/KB/MB/GB/TB)").format(chunk_size)
)
-
- def validate_chunk_size(self, chunk_size):
- the_size = self._parse_size(chunk_size)
if the_size <= 0:
raise serializers.ValidationError(
_("Chunk size {} is not greater than zero!").format(the_size)
diff --git a/pulpcore/app/tasks/export.py b/pulpcore/app/tasks/export.py
--- a/pulpcore/app/tasks/export.py
+++ b/pulpcore/app/tasks/export.py
@@ -422,12 +422,10 @@ def pulp_export(exporter_pk, params):
os.remove(pathname)
raise
# compute the hashes
- global_hash = hasher()
paths = sorted([str(Path(p)) for p in glob(tarfile_fp + ".*")])
for a_file in paths:
- a_hash = compute_file_hash(a_file, hasher=hasher(), cumulative_hash=global_hash)
+ a_hash = compute_file_hash(a_file, hasher=hasher())
rslts[a_file] = a_hash
- tarfile_hash = global_hash.hexdigest()
else:
# write into the file
@@ -450,23 +448,20 @@ def pulp_export(exporter_pk, params):
# write outputfile/hash info to a file 'next to' the output file(s)
output_file_info_path = tarfile_fp.replace(".tar", "-toc.json")
with open(output_file_info_path, "w") as outfile:
- if the_export.validated_chunk_size:
- chunk_size = the_export.validated_chunk_size
- else:
- chunk_size = 0
- chunk_toc = {
+ table_of_contents = {
"meta": {
- "chunk_size": chunk_size,
- "file": os.path.basename(tarfile_fp),
- "global_hash": tarfile_hash,
"checksum_type": checksum_type,
},
"files": {},
}
+
+ if the_export.validated_chunk_size:
+ table_of_contents["meta"]["chunk_size"] = the_export.validated_chunk_size
+
# Build a toc with just filenames (not the path on the exporter-machine)
for a_path in rslts.keys():
- chunk_toc["files"][os.path.basename(a_path)] = rslts[a_path]
- json.dump(chunk_toc, outfile)
+ table_of_contents["files"][os.path.basename(a_path)] = rslts[a_path]
+ json.dump(table_of_contents, outfile)
# store toc info
toc_hash = compute_file_hash(output_file_info_path)
diff --git a/pulpcore/app/tasks/importer.py b/pulpcore/app/tasks/importer.py
--- a/pulpcore/app/tasks/importer.py
+++ b/pulpcore/app/tasks/importer.py
@@ -76,12 +76,17 @@ def __init__(self, toc_path):
raise ValidationError(_("Missing 'files' or 'meta' keys in table-of-contents!"))
toc_dir = os.path.dirname(toc_path)
- self.chunk_size = int(self.toc["meta"]["chunk_size"])
# sorting-by-filename is REALLY IMPORTANT here
# keys are of the form <base-export-name>.00..<base-export-name>.NN,
# and must be reassembled IN ORDER
self.chunk_names = sorted(self.toc["files"].keys())
self.chunk_paths = [os.path.join(toc_dir, chunk_name) for chunk_name in self.chunk_names]
+ self.chunk_size = int(self.toc["meta"].get("chunk_size", 0))
+ if not self.chunk_size:
+ assert (
+ len(self.toc["files"]) == 1
+ ), "chunk_size must exist and be non-zero if more than one chunk exists"
+ self.chunk_size = os.path.getsize(self.chunk_paths[0])
def __enter__(self):
assert not hasattr(self, "chunks"), "ChunkedFile is not reentrant."
| Division by zero during Pulp import
**Version**
3.36.0+
**Describe the bug**
During a Katello test run, the following exception was encountered
```
{"traceback"=>" File \"/usr/lib/python3.11/site-packages/pulpcore/tasking/tasks.py\", line 61, in _execute_task\n result = func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/lib/python3.11/site-packages/pulpcore/app/tasks/importer.py\", line 453, in pulp_import\n with tarfile.open(path, \"r\", fileobj=fp) as tar:\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/lib64/python3.11/tarfile.py\", line 1815, in open\n fileobj.seek(saved_pos)\n File \"/usr/lib/python3.11/site-packages/pulpcore/app/tasks/importer.py\", line 132, in seek\n self.chunk = target // self.chunk_size\n ~~~~~~~^^~~~~~~~~~~~~~~~~\n", "description"=>"integer division or modulo by zero"} (Katello::Errors::Pulp3Error)
```
I'm not 100% certain, but I suspect the cause here is that if chunks aren't used during the export, chunk_size is set to 0:
https://github.com/pulp/pulpcore/blob/main/pulpcore/app/tasks/export.py#L456C31-L456C31
and ChunkedFile reads that value here:
https://github.com/pulp/pulpcore/blob/main/pulpcore/app/tasks/importer.py#L79
and we're using ChunkedFile even in the non-chunked case so long as a TOC file was provided.
https://github.com/pulp/pulpcore/blob/main/pulpcore/app/tasks/importer.py#L335-L336
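To make the failure concrete, here is a small illustration (plain Python, not Pulp code) of the arithmetic the importer ends up doing when a TOC from a non-chunked export is supplied:
```python
# Illustration only (not Pulp code): a TOC written by a non-chunked export
# records chunk_size as 0, and the importer divides by it while seeking.
toc_meta = {"chunk_size": 0, "file": "export.tar"}

chunk_size = int(toc_meta["chunk_size"])
target = 10240  # byte offset that tarfile asks the file wrapper to seek to

try:
    chunk = target // chunk_size  # the failing line: integer division by zero
except ZeroDivisionError:
    # A defensive importer could instead treat the single tarball as one big
    # "chunk", e.g. chunk_size = os.path.getsize(<the only file in the TOC>).
    chunk = 0
```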
**To Reproduce**
I would expect it to be reproducible if performing a non-chunked export, and providing a TOC path for the import.
**Expected behavior**
We should never see an integer division by zero error.
**Additional context**
| We'll also want to expand our tests to cover this
Does this mean that we can provide a toc file that makes a non-chunked import look like one with a single chunk, only the specified chunk-size is wrong? | 2023-12-14T13:08:50 |
|
pulp/pulpcore | 4,864 | pulp__pulpcore-4864 | [
"4777"
] | 918821e4a3b581afa686bbe7e978f5b5601ffa40 | diff --git a/pulpcore/app/serializers/exporter.py b/pulpcore/app/serializers/exporter.py
--- a/pulpcore/app/serializers/exporter.py
+++ b/pulpcore/app/serializers/exporter.py
@@ -1,6 +1,6 @@
import os
-import re
from gettext import gettext as _
+import re
from rest_framework import serializers
from rest_framework.validators import UniqueValidator
@@ -19,6 +19,16 @@
from pulpcore.constants import FS_EXPORT_CHOICES, FS_EXPORT_METHODS
+def parse_human_readable_file_size(size: str):
+ # based on https://stackoverflow.com/a/42865957/2002471
+ units = {"B": 1, "KB": 2**10, "MB": 2**20, "GB": 2**30, "TB": 2**40}
+ size = size.upper()
+ if not re.match(r" ", size):
+ size = re.sub(r"([KMGT]?B)", r" \1", size)
+ number, unit = [string.strip() for string in size.split()]
+ return int(float(number) * units[unit])
+
+
class ExporterSerializer(ModelSerializer):
"""
Base serializer for Exporters.
@@ -208,23 +218,13 @@ def validate(self, data):
)
return super().validate(data)
- @staticmethod
- def _parse_size(size):
+ def validate_chunk_size(self, chunk_size):
try:
- # based on https://stackoverflow.com/a/42865957/2002471
- units = {"B": 1, "KB": 2**10, "MB": 2**20, "GB": 2**30, "TB": 2**40}
- size = size.upper()
- if not re.match(r" ", size):
- size = re.sub(r"([KMGT]?B)", r" \1", size)
- number, unit = [string.strip() for string in size.split()]
- return int(float(number) * units[unit])
+ the_size = parse_human_readable_file_size(chunk_size)
except ValueError:
raise serializers.ValidationError(
- _("chunk_size '{}' is not valid (valid units are B/KB/MB/GB/TB)").format(size)
+ _("chunk_size '{}' is not valid (valid units are B/KB/MB/GB/TB)").format(chunk_size)
)
-
- def validate_chunk_size(self, chunk_size):
- the_size = self._parse_size(chunk_size)
if the_size <= 0:
raise serializers.ValidationError(
_("Chunk size {} is not greater than zero!").format(the_size)
diff --git a/pulpcore/app/tasks/export.py b/pulpcore/app/tasks/export.py
--- a/pulpcore/app/tasks/export.py
+++ b/pulpcore/app/tasks/export.py
@@ -422,12 +422,10 @@ def pulp_export(exporter_pk, params):
os.remove(pathname)
raise
# compute the hashes
- global_hash = hasher()
paths = sorted([str(Path(p)) for p in glob(tarfile_fp + ".*")])
for a_file in paths:
- a_hash = compute_file_hash(a_file, hasher=hasher(), cumulative_hash=global_hash)
+ a_hash = compute_file_hash(a_file, hasher=hasher())
rslts[a_file] = a_hash
- tarfile_hash = global_hash.hexdigest()
else:
# write into the file
@@ -450,23 +448,20 @@ def pulp_export(exporter_pk, params):
# write outputfile/hash info to a file 'next to' the output file(s)
output_file_info_path = tarfile_fp.replace(".tar", "-toc.json")
with open(output_file_info_path, "w") as outfile:
- if the_export.validated_chunk_size:
- chunk_size = the_export.validated_chunk_size
- else:
- chunk_size = 0
- chunk_toc = {
+ table_of_contents = {
"meta": {
- "chunk_size": chunk_size,
- "file": os.path.basename(tarfile_fp),
- "global_hash": tarfile_hash,
"checksum_type": checksum_type,
},
"files": {},
}
+
+ if the_export.validated_chunk_size:
+ table_of_contents["meta"]["chunk_size"] = the_export.validated_chunk_size
+
# Build a toc with just filenames (not the path on the exporter-machine)
for a_path in rslts.keys():
- chunk_toc["files"][os.path.basename(a_path)] = rslts[a_path]
- json.dump(chunk_toc, outfile)
+ table_of_contents["files"][os.path.basename(a_path)] = rslts[a_path]
+ json.dump(table_of_contents, outfile)
# store toc info
toc_hash = compute_file_hash(output_file_info_path)
diff --git a/pulpcore/app/tasks/importer.py b/pulpcore/app/tasks/importer.py
--- a/pulpcore/app/tasks/importer.py
+++ b/pulpcore/app/tasks/importer.py
@@ -76,12 +76,17 @@ def __init__(self, toc_path):
raise ValidationError(_("Missing 'files' or 'meta' keys in table-of-contents!"))
toc_dir = os.path.dirname(toc_path)
- self.chunk_size = int(self.toc["meta"]["chunk_size"])
# sorting-by-filename is REALLY IMPORTANT here
# keys are of the form <base-export-name>.00..<base-export-name>.NN,
# and must be reassembled IN ORDER
self.chunk_names = sorted(self.toc["files"].keys())
self.chunk_paths = [os.path.join(toc_dir, chunk_name) for chunk_name in self.chunk_names]
+ self.chunk_size = int(self.toc["meta"].get("chunk_size", 0))
+ if not self.chunk_size:
+ assert (
+ len(self.toc["files"]) == 1
+ ), "chunk_size must exist and be non-zero if more than one chunk exists"
+ self.chunk_size = os.path.getsize(self.chunk_paths[0])
def __enter__(self):
assert not hasattr(self, "chunks"), "ChunkedFile is not reentrant."
| Division by zero during Pulp import
**Version**
3.36.0+
**Describe the bug**
During a Katello test run, the following exception was encountered
```
{"traceback"=>" File \"/usr/lib/python3.11/site-packages/pulpcore/tasking/tasks.py\", line 61, in _execute_task\n result = func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/lib/python3.11/site-packages/pulpcore/app/tasks/importer.py\", line 453, in pulp_import\n with tarfile.open(path, \"r\", fileobj=fp) as tar:\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/lib64/python3.11/tarfile.py\", line 1815, in open\n fileobj.seek(saved_pos)\n File \"/usr/lib/python3.11/site-packages/pulpcore/app/tasks/importer.py\", line 132, in seek\n self.chunk = target // self.chunk_size\n ~~~~~~~^^~~~~~~~~~~~~~~~~\n", "description"=>"integer division or modulo by zero"} (Katello::Errors::Pulp3Error)
```
I'm not 100% certain, but I suspect the cause here is that if chunks aren't used during the export, chunk_size is set to 0:
https://github.com/pulp/pulpcore/blob/main/pulpcore/app/tasks/export.py#L456C31-L456C31
and ChunkedFile reads that value here:
https://github.com/pulp/pulpcore/blob/main/pulpcore/app/tasks/importer.py#L79
and we're using ChunkedFile even in the non-chunked case so long as a TOC file was provided.
https://github.com/pulp/pulpcore/blob/main/pulpcore/app/tasks/importer.py#L335-L336
**To Reproduce**
I would expect it to be reproducible if performing a non-chunked export, and providing a TOC path for the import.
**Expected behavior**
We should never see an integer division by zero error.
**Additional context**
| We'll also want to expand our tests to cover this
Does this mean that we can provide a toc file that makes a non-chunked import look like one with a single chunk, only the specified chunk-size is wrong? | 2023-12-14T13:09:02 |
|
pulp/pulpcore | 4,869 | pulp__pulpcore-4869 | [
"4845"
] | 5f6fe99851eec0fd175be01d832ce2d6aeb0119e | diff --git a/pulpcore/cache/cache.py b/pulpcore/cache/cache.py
--- a/pulpcore/cache/cache.py
+++ b/pulpcore/cache/cache.py
@@ -186,7 +186,8 @@ def make_response(self, key, base_key):
return None
entry = json.loads(entry)
response_type = entry.pop("type", None)
- expires = entry.pop("expires", None)
+ # None means "doesn't expire", unset means "already expired".
+ expires = entry.pop("expires", -1)
if (not response_type or response_type not in self.RESPONSE_TYPES) or (
expires and expires < time.time()
):
@@ -202,9 +203,12 @@ def make_entry(self, key, base_key, handler, args, kwargs, expires=DEFAULT_EXPIR
"""Gets the response for the request and try to turn it into a cacheable entry"""
response = handler(*args, **kwargs)
entry = {"headers": dict(response.headers), "status": response.status_code}
- if expires:
+ if expires is not None:
# Redis TTL is not sufficient: https://github.com/pulp/pulpcore/issues/4845
entry["expires"] = expires + time.time()
+ else:
+ # Settings allow you to set None to mean "does not expire". Persist.
+ entry["expires"] = None
response.headers["X-PULP-CACHE"] = "MISS"
if isinstance(response, HttpResponseRedirect):
entry["redirect_to"] = str(response.headers["Location"])
@@ -373,7 +377,8 @@ async def make_response(self, key, base_key):
entry["body"] = bytes.fromhex(binary)
response_type = entry.pop("type", None)
- expires = entry.pop("expires", None)
+ # None means "doesn't expire", unset means "already expired".
+ expires = entry.pop("expires", -1)
if (not response_type or response_type not in self.RESPONSE_TYPES) or (
expires and expires < time.time()
):
@@ -392,9 +397,12 @@ async def make_entry(self, key, base_key, handler, args, kwargs, expires=DEFAULT
response = e
entry = {"headers": dict(response.headers), "status": response.status}
- if expires:
+ if expires is not None:
# Redis TTL is not sufficient: https://github.com/pulp/pulpcore/issues/4845
entry["expires"] = expires + time.time()
+ else:
+ # Settings allow you to set None to mean "does not expire". Persist.
+ entry["expires"] = None
response.headers.update({"X-PULP-CACHE": "MISS"})
if isinstance(response, FileResponse):
entry["path"] = str(response._path)
| Intermittent 403s when using Redis + Azure Storage backend
**Version**
`pulpcore 3.40.4`
**The bug**
Cloud storage options often have a timeframe that a given signed URL is valid for. Let's suppose that I have set [AZURE_URL_EXPIRATION_SECS](https://django-storages.readthedocs.io/en/latest/backends/azure.html#settings) = 3600 (an hour), which I have.
Pulp's Redis cache has a [default TTL](https://docs.pulpproject.org/pulpcore/configuration/settings.html#cache-settings) of 600 seconds (ten minutes). Let's suppose that I have not changed that, which is true.
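For reference, the two knobs involved would look roughly like this in a Django settings file (a sketch using the values from this report; the setting names follow the docs linked above):
```python
# settings.py sketch -- values taken from this report, names from the linked docs.
AZURE_URL_EXPIRATION_SECS = 3600  # signed Azure Storage URLs stay valid for one hour

CACHE_ENABLED = True
CACHE_SETTINGS = {
    "EXPIRES_TTL": 600,  # pulpcore default: cached responses live for ten minutes
}
```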
A naive developer (me) would assume that since ten minutes is much smaller than an hour there will be no issue with enabling the Redis cache and that users would never get redirected to an expired Azure Storage URL. This is false. The issue is with how pulp-content interacts with Redis.
Pulp-content caches responses (which in this case are always redirect urls to Azure Storage) in hashes with `HSET`. The hash's overall key will be related to the _whole repo_ (or more accurately the distribution, but whatever), something like `yumrepos/sherr-test`. Then individual url/response pairs will get set as key/values of that hash, where an example key might be `/yumrepos/sherr-test/repodata/repomd.xml:GET`. I assume this is so that it's easy to wipe out all data related to a repo when a new version gets published.
The Redis TTL (10 minutes) gets set on that distribution-level hash object. So one would assume that all cached responses related to that distribution will get wiped every 10 minutes, even if some of them are much younger than that. **However**, pulp-content [resets the TTL for the hash every time it caches a new response](https://github.com/pulp/pulpcore/blob/main/pulpcore/cache/cache.py#L88).
This means that as long as users are requesting _some new file_ from the repo before each reset 10 minute TTL is reached, this hash object will persist in Redis _indefinitely_. And after an hour those earlier-cached Azure Storage URLs are in fact expired, and users requesting those files will get 403s.
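The pattern can be illustrated with redis-py (schematic; the key names are modeled on the description above, not copied from Pulp's code):
```python
# Schematic of the caching pattern described above, using redis-py.
import time

import redis

r = redis.Redis()


def cache_response(base_key, sub_key, payload, ttl=600):
    r.hset(base_key, sub_key, payload)
    # The hash-level TTL is (re)set on *every* insert, so as long as new files
    # keep being requested, the hash -- and every old entry inside it -- never
    # actually expires.
    r.expire(base_key, ttl)


cache_response("yumrepos/sherr-test", "/repodata/repomd.xml:GET", "signed-url-1")
time.sleep(1)
cache_response("yumrepos/sherr-test", "/other/file.rpm:GET", "signed-url-2")
# The TTL of the whole hash is back to ~600s, even though the first entry is older.
```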
[You cannot set](https://stackoverflow.com/questions/16545321/how-to-expire-the-hset-child-key-in-redis) a TTL on individual child keys of hashes in Redis. So it seems to me that the options for fixing this are:
1. Don't use hashes, and `SET` a top-level key in Redis for every url/response. This makes invalidating the cache after a publication harder.
2. Attempt to only set the TTL on the hash one time, when it is first created. This would set the behavior back to "all info for the whole repo is wiped every 10 minutes, even if some responses are much younger", which was probably the original intention.
3. Store an additional field on each child key that pulp-content can evaluate to see if _this particular_ response is past the Redis TTL, and refresh it if so.
I don't have any strong opinions on which of those is the correct thing to do, but the current behavior of always resetting the TTL is definitely wrong.
**Additional context**
I am aware that this sounds very similar to the issue @vonsch reported in #3077. However his debugging sounds like it _might actually_ be a different issue, with `django_storages` returning about-to-expire S3 urls to a correctly-behaving `redis` cache, while in this case it's actually entirely Redis' fault, or rather pulp-content's usage of it. If they come back and say that it is in fact the same issue then I have no problem with merging this issue into that one.
| Good find! I would go with the third option. Setting the additional field in the cache entry here (https://github.com/pulp/pulpcore/blob/main/pulpcore/cache/cache.py#L200) and adding an extra check here (https://github.com/pulp/pulpcore/blob/main/pulpcore/cache/cache.py#L187-L188) should handle this. The same change will have to be mirrored across the `SyncContentCache` and `AsyncContentCache`.
Reopening requested on matrix.
I have discovered an issue with the current patch that we probably should fix before release:
The current patch considers the non-existence of the `expires` key to mean "does not expire". It should _instead_ mean "already expired". Otherwise cached responses from the previous version of pulp will persist indefinitely, causing this issue to remain. | 2023-12-19T15:10:48 |
|
pulp/pulpcore | 4,951 | pulp__pulpcore-4951 | [
"3036"
] | e374977c2cbc0dba9eb2d6c17e247461ec51c55c | diff --git a/pulpcore/download/factory.py b/pulpcore/download/factory.py
--- a/pulpcore/download/factory.py
+++ b/pulpcore/download/factory.py
@@ -120,6 +120,8 @@ def _make_aiohttp_session_from_remote(self):
sslcontext.verify_mode = ssl.CERT_NONE
if sslcontext:
tcp_conn_opts["ssl_context"] = sslcontext
+ # Trust the system-known CA certs, not just the end-remote CA
+ sslcontext.load_default_certs()
headers = MultiDict({"User-Agent": DownloaderFactory.user_agent()})
if self._remote.headers is not None:
| diff --git a/pulpcore/tests/functional/api/using_plugin/test_proxy.py b/pulpcore/tests/functional/api/using_plugin/test_proxy.py
--- a/pulpcore/tests/functional/api/using_plugin/test_proxy.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_proxy.py
@@ -4,6 +4,7 @@
from pulpcore.client.pulp_file import (
RepositorySyncURL,
)
+import sys
def _run_basic_sync_and_assert(
@@ -163,3 +164,34 @@ def test_sync_http_through_https_proxy(
file_content_api_client,
monitor_task,
)
+
+
[email protected]
+def test_sync_https_through_https_proxy(
+ file_remote_ssl_factory,
+ file_repo,
+ file_bindings,
+ file_content_api_client,
+ https_proxy,
+ basic_manifest_path,
+ monitor_task,
+):
+ """
+ Test syncing http through an https proxy.
+ """
+ if not (sys.version_info.major >= 3 and sys.version_info.minor >= 11):
+ pytest.skip("HTTPS proxy only supported on python3.11+")
+ remote_on_demand = file_remote_ssl_factory(
+ manifest_path=basic_manifest_path,
+ policy="on_demand",
+ proxy_url=https_proxy.proxy_url,
+ tls_validation="false",
+ )
+
+ _run_basic_sync_and_assert(
+ remote_on_demand,
+ file_repo,
+ file_bindings,
+ file_content_api_client,
+ monitor_task,
+ )
| HTTPS Proxies not working.
**Version**
tfm-pulpcore-python3-pulpcore-3.17.7-1.el7.noarch
tfm-pulpcore-python3-aiohttp-3.8.1-2.el7.x86_64
**Describe the bug**
There was HTTPS proxy tunneling support added to the aiohttp library for 3.8.1 => https://github.com/aio-libs/aiohttp/pull/5992
However pulpcore does not yet support this, because it:
1. Lacks the necessary bindings to get this mode to run.
2. Expects the CA cert of the proxy to be concatenated to the repo remote's CA cert, instead of looking it up in the default system trust store (as it used to do in pulp2). See the sketch after this list.
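For illustration, a minimal sketch of what "trust the system store in addition to the remote's CA" looks like with Python's `ssl` module and aiohttp is shown below; the function and parameter names are made up for the example.
```python
# Sketch: build an SSL context that trusts both the remote's CA bundle and the
# system-wide trust store (where the HTTPS proxy's CA would normally live).
import ssl

import aiohttp


def make_ssl_context(remote_ca_pem=None):
    ctx = ssl.create_default_context()
    if remote_ca_pem:
        ctx.load_verify_locations(cadata=remote_ca_pem)  # the remote's own CA
    ctx.load_default_certs()  # also trust the OS trust store (proxy CA)
    return ctx


async def fetch(url, proxy, remote_ca_pem=None):
    ctx = make_ssl_context(remote_ca_pem)
    connector = aiohttp.TCPConnector(ssl=ctx)
    async with aiohttp.ClientSession(connector=connector) as session:
        # HTTPS proxy tunneling needs aiohttp >= 3.8 and Python 3.11+
        async with session.get(url, proxy=proxy) as resp:
            return await resp.read()
```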
**To Reproduce**
Steps to reproduce the behavior:
- Setup a https proxy or ping me about one
- Add the proxy's cacert to the default trust store.
- Add a repo remote with the feed and proxy urls
- Sync
**Expected behavior**
Clean Sync
**Actual behavior**
SSL error while trying to connect to the proxy.
| Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1993917
This works with Python 3.11. Closing the issue since Pulp is not in the position to update Python for users.
@dkliban Does this work *immediately* with Python 3.11 or are other patches required? Because I see Partha's PR which monkeypatches the internal flag that enables this support, but that's not all it does, it does other things too.
Will we need to adopt some or all of those other changes in order for this to work? If so then this issue ought to be reopened because we won't get this resolved for free by upgrading the Python runtime.
This should just work with Python 3.11. There are no code changes in Pulp required.
Reopening - while python/3.11 and recent aiohttp will (finally) allow this, Pulp still needs to load the system-allowed certstore to be able to trust an HTTPS proxy's CA. | 2024-01-22T21:10:08 |
pulp/pulpcore | 4,963 | pulp__pulpcore-4963 | [
"3036"
] | 24807c5f1e8af99d3d741b6004f35e681cac6bd8 | diff --git a/pulpcore/download/factory.py b/pulpcore/download/factory.py
--- a/pulpcore/download/factory.py
+++ b/pulpcore/download/factory.py
@@ -120,6 +120,8 @@ def _make_aiohttp_session_from_remote(self):
sslcontext.verify_mode = ssl.CERT_NONE
if sslcontext:
tcp_conn_opts["ssl_context"] = sslcontext
+ # Trust the system-known CA certs, not just the end-remote CA
+ sslcontext.load_default_certs()
headers = MultiDict({"User-Agent": DownloaderFactory.user_agent()})
if self._remote.headers is not None:
| HTTPS Proxies not working.
**Version**
tfm-pulpcore-python3-pulpcore-3.17.7-1.el7.noarch
tfm-pulpcore-python3-aiohttp-3.8.1-2.el7.x86_64
**Describe the bug**
There was HTTPS proxy tunneling support added to the aiohttp library for 3.8.1 => https://github.com/aio-libs/aiohttp/pull/5992
However pulpcore does not yet support this, because it:
1. Lacks the necessary bindings to get this mode to run.
2. Expects the CA cert of the proxy to be concatenated to the repo remote's CA cert, instead of looking it up in the default system trust store (as it used to do in pulp2).
**To Reproduce**
Steps to reproduce the behavior:
- Setup a https proxy or ping me about one
- Add the proxy's cacert to the default trust store.
- Add a repo remote with the feed and proxy urls
- Sync
**Expected behavior**
Clean Sync
**Actual behavior**
SSL error while trying to connect to the proxy.
| Bugzilla
https://bugzilla.redhat.com/show_bug.cgi?id=1993917
This works with Python 3.11. Closing the issue since Pulp is not in the position to update Python for users.
@dkliban Does this work *immediately* with Python 3.11 or are other patches required? Because I see Partha's PR which monkeypatches the internal flag that enables this support, but that's not all it does, it does other things too.
Will we need to adopt some or all of those other changes in order for this to work? If so then this issue ought to be reopened because we won't get this resolved for free by upgrading the Python runtime.
This should just work with Python 3.11. There are no code changes in Pulp required.
Reopening - while python/3.11 and recent aiohttp will (finally) allow this, Pulp still needs to load the system-allowed certstore to be able to trust an HTTPS proxy's CA. | 2024-01-24T15:22:32 |
|
pulp/pulpcore | 4,968 | pulp__pulpcore-4968 | [
"4967"
] | cd31fb66b1d412d8d3d79efe53e6764c80bcfb42 | diff --git a/pulpcore/cache/cache.py b/pulpcore/cache/cache.py
--- a/pulpcore/cache/cache.py
+++ b/pulpcore/cache/cache.py
@@ -383,7 +383,7 @@ async def make_response(self, key, base_key):
expires and expires < time.time()
):
# Bad entry, delete from cache
- self.delete(key, base_key)
+ await self.delete(key, base_key)
return None
response = self.RESPONSE_TYPES[response_type](**entry)
response.headers.update({"X-PULP-CACHE": "HIT"})
| Unawaited cache coroutine throws warnings
We're seeing the following error on pulpcore 3.45.0:
```
/venv/lib/python3.9/site-packages/pulpcore/cache/cache.py:386: RuntimeWarning: coroutine 'AsyncCache.delete' was never awaited
```
| 2024-01-25T15:09:47 |
||
pulp/pulpcore | 4,970 | pulp__pulpcore-4970 | [
"4967"
] | 6525bdae9c1d787e9c153459bfbc2da00af9ab96 | diff --git a/pulpcore/cache/cache.py b/pulpcore/cache/cache.py
--- a/pulpcore/cache/cache.py
+++ b/pulpcore/cache/cache.py
@@ -364,7 +364,7 @@ async def make_response(self, key, base_key):
response_type = entry.pop("type", None)
if not response_type or response_type not in self.RESPONSE_TYPES:
# Bad entry, delete from cache
- self.delete(key, base_key)
+ await self.delete(key, base_key)
return None
response = self.RESPONSE_TYPES[response_type](**entry)
response.headers.update({"X-PULP-CACHE": "HIT"})
| Unawaited cache coroutine throws warnings
We're seeing the following error on pulpcore 3.45.0:
```
/venv/lib/python3.9/site-packages/pulpcore/cache/cache.py:386: RuntimeWarning: coroutine 'AsyncCache.delete' was never awaited
```
| 2024-01-25T16:01:00 |
||
pulp/pulpcore | 4,971 | pulp__pulpcore-4971 | [
"4967"
] | c0e2580a31693dc685e62f79c217c4b6c31616a5 | diff --git a/pulpcore/cache/cache.py b/pulpcore/cache/cache.py
--- a/pulpcore/cache/cache.py
+++ b/pulpcore/cache/cache.py
@@ -366,7 +366,7 @@ async def make_response(self, key, base_key):
response_type = entry.pop("type", None)
if not response_type or response_type not in self.RESPONSE_TYPES:
# Bad entry, delete from cache
- self.delete(key, base_key)
+ await self.delete(key, base_key)
return None
response = self.RESPONSE_TYPES[response_type](**entry)
response.headers.update({"X-PULP-CACHE": "HIT"})
| Unawaited cache coroutine throws warnings
We're seeing the following error on pulpcore 3.45.0:
```
/venv/lib/python3.9/site-packages/pulpcore/cache/cache.py:386: RuntimeWarning: coroutine 'AsyncCache.delete' was never awaited
```
| 2024-01-25T16:01:13 |
||
pulp/pulpcore | 4,972 | pulp__pulpcore-4972 | [
"4967"
] | 4152871a1b52c131223e5d7db711ec75eeddf2e8 | diff --git a/pulpcore/cache/cache.py b/pulpcore/cache/cache.py
--- a/pulpcore/cache/cache.py
+++ b/pulpcore/cache/cache.py
@@ -366,7 +366,7 @@ async def make_response(self, key, base_key):
response_type = entry.pop("type", None)
if not response_type or response_type not in self.RESPONSE_TYPES:
# Bad entry, delete from cache
- self.delete(key, base_key)
+ await self.delete(key, base_key)
return None
response = self.RESPONSE_TYPES[response_type](**entry)
response.headers.update({"X-PULP-CACHE": "HIT"})
| Unawaited cache coroutine throws warnings
We're seeing the following error on pulpcore 3.45.0:
```
/venv/lib/python3.9/site-packages/pulpcore/cache/cache.py:386: RuntimeWarning: coroutine 'AsyncCache.delete' was never awaited
```
| 2024-01-25T16:01:27 |
||
pulp/pulpcore | 4,973 | pulp__pulpcore-4973 | [
"4967"
] | 02ad372b1534efbe31ece427e12cc6c08a84414c | diff --git a/pulpcore/cache/cache.py b/pulpcore/cache/cache.py
--- a/pulpcore/cache/cache.py
+++ b/pulpcore/cache/cache.py
@@ -366,7 +366,7 @@ async def make_response(self, key, base_key):
response_type = entry.pop("type", None)
if not response_type or response_type not in self.RESPONSE_TYPES:
# Bad entry, delete from cache
- self.delete(key, base_key)
+ await self.delete(key, base_key)
return None
response = self.RESPONSE_TYPES[response_type](**entry)
response.headers.update({"X-PULP-CACHE": "HIT"})
| Unawaited cache coroutine throws warnings
We're seeing the following error on pulpcore 3.45.0:
```
/venv/lib/python3.9/site-packages/pulpcore/cache/cache.py:386: RuntimeWarning: coroutine 'AsyncCache.delete' was never awaited
```
| 2024-01-25T16:01:42 |
||
pulp/pulpcore | 4,974 | pulp__pulpcore-4974 | [
"4967"
] | 37a49bd6660c80c2faa9da0d50f81eabf84bc6aa | diff --git a/pulpcore/cache/cache.py b/pulpcore/cache/cache.py
--- a/pulpcore/cache/cache.py
+++ b/pulpcore/cache/cache.py
@@ -366,7 +366,7 @@ async def make_response(self, key, base_key):
response_type = entry.pop("type", None)
if not response_type or response_type not in self.RESPONSE_TYPES:
# Bad entry, delete from cache
- self.delete(key, base_key)
+ await self.delete(key, base_key)
return None
response = self.RESPONSE_TYPES[response_type](**entry)
response.headers.update({"X-PULP-CACHE": "HIT"})
| Unawaited cache coroutine throws warnings
We're seeing the following error on pulpcore 3.45.0:
```
/venv/lib/python3.9/site-packages/pulpcore/cache/cache.py:386: RuntimeWarning: coroutine 'AsyncCache.delete' was never awaited
```
| 2024-01-25T16:01:54 |
||
pulp/pulpcore | 4,975 | pulp__pulpcore-4975 | [
"4967"
] | 4e477acc51f9f0d581b193c8a1ffe5967e21cfbc | diff --git a/pulpcore/cache/cache.py b/pulpcore/cache/cache.py
--- a/pulpcore/cache/cache.py
+++ b/pulpcore/cache/cache.py
@@ -383,7 +383,7 @@ async def make_response(self, key, base_key):
expires and expires < time.time()
):
# Bad entry, delete from cache
- self.delete(key, base_key)
+ await self.delete(key, base_key)
return None
response = self.RESPONSE_TYPES[response_type](**entry)
response.headers.update({"X-PULP-CACHE": "HIT"})
| Unawaited cache coroutine throws warnings
We're seeing the following error on pulpcore 3.45.0:
```
/venv/lib/python3.9/site-packages/pulpcore/cache/cache.py:386: RuntimeWarning: coroutine 'AsyncCache.delete' was never awaited
```
| 2024-01-25T16:02:16 |
||
pulp/pulpcore | 5,025 | pulp__pulpcore-5025 | [
"5024"
] | 229766a1b746215035ff16d3577f2a0dbf5de22b | diff --git a/pulpcore/plugin/exceptions.py b/pulpcore/plugin/exceptions.py
--- a/pulpcore/plugin/exceptions.py
+++ b/pulpcore/plugin/exceptions.py
@@ -4,5 +4,6 @@
PulpException,
SizeValidationError,
MissingDigestValidationError,
+ TimeoutException,
UnsupportedDigestValidationError,
)
| Plugin writers should have access to `TimeoutException` exception
This exception may be raised from the downloader and cannot be caught properly by the plugin writers.
Required for https://github.com/pulp/pulp_container/issues/1499.
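Once re-exported, plugin code could catch it roughly like this (a sketch; the downloader usage is illustrative, not prescriptive):
```python
from pulpcore.plugin.exceptions import TimeoutException

async def fetch(remote, url):
    downloader = remote.get_downloader(url=url)
    try:
        return await downloader.run()
    except TimeoutException:
        # e.g. skip or retry this URL instead of failing the whole task
        return None
```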
| 2024-02-03T21:32:56 |
||
pulp/pulpcore | 5,072 | pulp__pulpcore-5072 | [
"5071"
] | bfb32e6d790fe68c5da82763a9389e9607545e6d | diff --git a/pulpcore/constants.py b/pulpcore/constants.py
--- a/pulpcore/constants.py
+++ b/pulpcore/constants.py
@@ -5,6 +5,13 @@
VAR_TMP_PULP = Path("/var/tmp/pulp")
+# Special purpose advisory locks for use with the two number variant.
+# The group will be 0.
+# The numbers are randomly chosen.
+# !!! Never change these values !!!
+TASK_DISPATCH_LOCK = 21
+TASK_SCHEDULING_LOCK = 42
+
#: All valid task states.
TASK_STATES = SimpleNamespace(
diff --git a/pulpcore/tasking/tasks.py b/pulpcore/tasking/tasks.py
--- a/pulpcore/tasking/tasks.py
+++ b/pulpcore/tasking/tasks.py
@@ -5,15 +5,21 @@
import logging
import sys
import traceback
+from datetime import timedelta
from gettext import gettext as _
from django.db import connection, transaction
-from django.db.models import Model
+from django.db.models import Model, Max
from django_guid import get_guid
from pulpcore.app.apps import MODULE_PLUGIN_VERSIONS
from pulpcore.app.models import Task
from pulpcore.app.util import current_task, get_domain, get_url
-from pulpcore.constants import TASK_FINAL_STATES, TASK_INCOMPLETE_STATES, TASK_STATES
+from pulpcore.constants import (
+ TASK_FINAL_STATES,
+ TASK_INCOMPLETE_STATES,
+ TASK_STATES,
+ TASK_DISPATCH_LOCK,
+)
_logger = logging.getLogger(__name__)
@@ -148,6 +154,13 @@ def dispatch(
notify_workers = False
with contextlib.ExitStack() as stack:
with transaction.atomic():
+ # Task creation need to be serialized so that pulp_created will provide a stable order
+ # at every time. We specifically need to ensure that each task, when commited to the
+ # task table will be the newest with respect to `pulp_created`.
+ with connection.cursor() as cursor:
+ # Wait for exclusive access and release automatically after transaction.
+ cursor.execute("SELECT pg_advisory_xact_lock(%s, %s)", [0, TASK_DISPATCH_LOCK])
+ newest_created = Task.objects.aggregate(Max("pulp_created"))["pulp_created__max"]
task = Task.objects.create(
state=TASK_STATES.WAITING,
logging_cid=(get_guid()),
@@ -159,6 +172,15 @@ def dispatch(
reserved_resources_record=resources,
versions=versions,
)
+ if newest_created and task.pulp_created <= newest_created:
+ # Let this workaround not row forever into the future.
+ if newest_created - task.pulp_created > timedelta(seconds=1):
+ # Do not commit the transaction if this condition is not met.
+ # If we ever hit this, think about delegating the timestamping to PostgresQL.
+ raise RuntimeError("Clockscrew detected. Task dispatching would be dangerous.")
+ # Try to work around the smaller glitch
+ task.pulp_created = newest_created + timedelta(milliseconds=1)
+ task.save()
if immediate:
# Grab the advisory lock before the task hits the db.
stack.enter_context(task)
diff --git a/pulpcore/tasking/worker.py b/pulpcore/tasking/worker.py
--- a/pulpcore/tasking/worker.py
+++ b/pulpcore/tasking/worker.py
@@ -16,7 +16,7 @@
from django.db import connection
from django.utils import timezone
-from pulpcore.constants import TASK_STATES, TASK_INCOMPLETE_STATES
+from pulpcore.constants import TASK_STATES, TASK_INCOMPLETE_STATES, TASK_SCHEDULING_LOCK
from pulpcore.exceptions import AdvisoryLockError
from pulpcore.app.apps import pulp_plugin_configs
from pulpcore.app.models import Worker, Task, ApiAppStatus, ContentAppStatus
@@ -40,8 +40,6 @@
TASK_KILL_INTERVAL = 1
# Number of heartbeats between cleaning up worker processes (approx)
WORKER_CLEANUP_INTERVAL = 100
-# Randomly chosen
-TASK_SCHEDULING_LOCK = 42
class PulpcoreWorker:
| Possible race on task dispatch
We identified that there may be a possible race when dispatching new tasks.
When two processes add new tasks, the tasks only become visible to the workers after each transaction finishes, but the created_at fields may actually report a different order. This can lead to two workers picking up conflicting tasks concurrently.
This may be even worse on distributed systems where you can have clock skew on top.
We believe the window is already very small, so any attempt to fix it should be like a vacuum seal.
Current working idea: Wrap adding a task to the table in a transaction-based-exclusive-wait-for advisory-lock. Check that the current latest task is actually from the past and add the new task afterwards.
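A rough sketch of that working idea, mirroring what the patch above does (simplified, assuming PostgreSQL and Django's `connection`):
```python
from django.db import connection, transaction

TASK_DISPATCH_LOCK = 21  # constant introduced by the patch above

def dispatch_serialized(make_task):
    with transaction.atomic():
        with connection.cursor() as cursor:
            # Blocks until we are the only dispatcher; released automatically at commit/rollback.
            cursor.execute("SELECT pg_advisory_xact_lock(%s, %s)", [0, TASK_DISPATCH_LOCK])
        # Now look up the newest existing pulp_created and create the new task strictly after it.
        return make_task()
```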
| Wild suggestion here, perhaps not even something we can do currently:
UUIDv7 embeds a timestamp and is designed to have a monotonically increasing unique value. Theoretically we could work with the PK instead of raw timestamps (so long as no ancient tasks utilizing UUIDv4 PKs remain) to obtain a complete table ordering. Postgresql has functions for generating UUIDs (but not UUIDv7 [currently](https://commitfest.postgresql.org/47/4388/), you need an extension for it), so if [0] we were able to utilize those for sourcing the PK then we could mostly [1] eliminate clock skew as an issue, as only the database timestamp would matter
[0] we currently "expect" the PK to have been generated in Python prior to any attempts to save IIRC
[1] assuming the database isn't itself split across a blue/green deployment, and/or nobody is messing with the clocks on those systems. I'm not sure how time would be handled in that scenario.
> Wild suggestion here, perhaps not even something we can do currently:
>
> UUIDv7 embeds a timestamp and is designed to have a monotonically increasing unique value. Theoretically we could work with the PK instead of raw timestamps (so long as no ancient tasks utilizing UUIDv4 PKs remain) to obtain a complete table ordering. Postgresql has functions for generating UUIDs (but not UUIDv7 [currently](https://commitfest.postgresql.org/47/4388/), you need an extension for it), so if [0] we were able to utilize those for sourcing the PK then we could mostly [1] eliminate clock skew as an issue, as only the database timestamp would matter
>
> [0] we currently "expect" the PK to have been generated in Python prior to any attempts to save IIRC [1] assuming the database isn't itself split across a blue/green deployment, and/or nobody is messing with the clocks on those systems. I'm not sure how time would be handled in that scenario.
I'm not entirely convinced this would solve the issue, which is basically that there is a time gap between creating the task and committing the transaction it is in. And while once all tasks are committed (after say 5 s) we do have a definite order, by that time we'd hope some worker would have started on it already. | 2024-02-21T14:40:18 |
|
pulp/pulpcore | 5,088 | pulp__pulpcore-5088 | [
"5086"
] | 51c29b503c12edc9ac6072177ef6c5c20cc27cf6 | diff --git a/pulpcore/plugin/viewsets/__init__.py b/pulpcore/plugin/viewsets/__init__.py
--- a/pulpcore/plugin/viewsets/__init__.py
+++ b/pulpcore/plugin/viewsets/__init__.py
@@ -44,6 +44,7 @@
from pulpcore.filters import HyperlinkRelatedFilter
from .content import (
+ NoArtifactContentViewSet,
NoArtifactContentUploadViewSet,
SingleArtifactContentUploadViewSet,
)
diff --git a/pulpcore/plugin/viewsets/content.py b/pulpcore/plugin/viewsets/content.py
--- a/pulpcore/plugin/viewsets/content.py
+++ b/pulpcore/plugin/viewsets/content.py
@@ -29,6 +29,35 @@ def get_deferred_context(self, request):
return {}
+class NoArtifactContentViewSet(DefaultDeferredContextMixin, ContentViewSet):
+ """A ViewSet for content creation that does not require a file to be uploaded."""
+
+ @extend_schema(
+ description="Trigger an asynchronous task to create content,"
+ "optionally create new repository version.",
+ responses={202: AsyncOperationResponseSerializer},
+ )
+ def create(self, request):
+ """Create a content unit."""
+ serializer = self.get_serializer(data=request.data)
+ serializer.is_valid(raise_exception=True)
+
+ exclusive_resources = [
+ item for item in (serializer.validated_data.get(key) for key in ("repository",)) if item
+ ]
+
+ task = dispatch(
+ tasks.base.general_create,
+ exclusive_resources=exclusive_resources,
+ args=(self.queryset.model._meta.app_label, serializer.__class__.__name__),
+ kwargs={
+ "data": {k: v for k, v in request.data.items()},
+ "context": self.get_deferred_context(request),
+ },
+ )
+ return OperationPostponedResponse(task, request)
+
+
class NoArtifactContentUploadViewSet(DefaultDeferredContextMixin, ContentViewSet):
"""A ViewSet for uploads that do not require to store an uploaded content as an Artifact."""
| Enable file-less "uploads"
**Is your feature request related to a problem? Please describe.**
Right now pulpcore knows artifactless types that can be created via file upload using the `NoArtifactContentUploadViewSet` and the `NoArtifactContentUploadSerializer`, which can be combined with "retrieve behaviour" (do not throw errors if the requested content already exists/is already in the repo it should be added to) by defining a plugin-specific `retrieve` function on the serializer.
However, pulp_deb has several artifactless types that do not need an actual uploaded file as part of this process at all. All they need (for pulp_deb to be able to create them) is the set of required API parameters. Examples include the `ReleaseComponent` and `ReleaseArchitecture`. These content types should still use the `repository` parameter to create and add them to a repository in one action, along with "retrieve behaviour". Since this means creating new repository versions, this action must be performed as a task to ensure resource locks.
As far as I can tell this is currently not possible, because pulpcore does not have the right kind of `ViewSet`. I was able to get things to work with the following adjustments to the `NoArtifactContentUploadViewSet`: https://github.com/pulp/pulpcore/pull/5084
An alternative might be to split up `NoArtifactContentUploadViewSet` into `NoArtifactContentUploadViewSet` and `NoArtifactContentViewSet`, which would mirror the class structure on the serializer side, and possibly make the semantic intention more clear.
**Additional context**
- See here for the pulp_deb change that prompted this need: https://github.com/pulp/pulp_deb/pull/1018
- I am happy to implement this as soon as there is a consensus on whether to add a whole new ViewSet in the class hierarchy, or whether it is enough to adjust `NoArtifactContentUploadViewSet` to support both use cases.
- I could use help in designing a good test to cover this special use case.
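For illustration, a hypothetical plugin-side viewset built on the proposed `NoArtifactContentViewSet` might look roughly like this (class names and import paths are assumptions based on pulp_deb, not a confirmed API):
```python
from pulpcore.plugin.viewsets import NoArtifactContentViewSet

# Model and serializer imports are plugin-specific; pulp_deb-style names assumed here.
from pulp_deb.app.models import ReleaseComponent
from pulp_deb.app.serializers import ReleaseComponentSerializer


class ReleaseComponentViewSet(NoArtifactContentViewSet):
    """Create ReleaseComponents from API parameters alone; no file upload involved."""

    endpoint_name = "release_components"
    queryset = ReleaseComponent.objects.all()
    serializer_class = ReleaseComponentSerializer
```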
| I am feeling a bit of time pressure with respect to this change, because I ideally need it to propagate into Katello within a reasonable time frame. | 2024-02-28T11:55:22 |
|
pulp/pulpcore | 5,140 | pulp__pulpcore-5140 | [
"4637"
] | 1dd98508b89be25173c2a0fb451d59dc29a714b3 | diff --git a/pulp_file/pytest_plugin.py b/pulp_file/pytest_plugin.py
--- a/pulp_file/pytest_plugin.py
+++ b/pulp_file/pytest_plugin.py
@@ -308,7 +308,7 @@ def _file_remote_client_cert_req_factory(*, manifest_path, policy, **kwargs):
@pytest.fixture(scope="class")
-def file_repository_factory(file_repository_api_client, gen_object_with_cleanup):
+def file_repository_factory(file_bindings, gen_object_with_cleanup):
"""A factory to generate a File Repository with auto-deletion after the test run."""
def _file_repository_factory(pulp_domain=None, **body):
@@ -316,11 +316,26 @@ def _file_repository_factory(pulp_domain=None, **body):
if pulp_domain:
kwargs["pulp_domain"] = pulp_domain
body.setdefault("name", str(uuid.uuid4()))
- return gen_object_with_cleanup(file_repository_api_client, body, **kwargs)
+ return gen_object_with_cleanup(file_bindings.RepositoriesFileApi, body, **kwargs)
return _file_repository_factory
[email protected](scope="class")
+def file_publication_factory(file_bindings, gen_object_with_cleanup):
+ """A factory to generate a File Publication with auto-deletion after the test run."""
+
+ def _file_publication_factory(**kwargs):
+ extra_args = {}
+ if pulp_domain := kwargs.pop("pulp_domain", None):
+ extra_args["pulp_domain"] = pulp_domain
+ # XOR check on repository and repository_version
+ assert bool("repository" in kwargs) ^ bool("repository_version" in kwargs)
+ return gen_object_with_cleanup(file_bindings.PublicationsFileApi, kwargs, **extra_args)
+
+ return _file_publication_factory
+
+
@pytest.fixture
def gen_bad_response_fixture_server(gen_threaded_aiohttp_server):
"""
diff --git a/pulpcore/app/replica.py b/pulpcore/app/replica.py
--- a/pulpcore/app/replica.py
+++ b/pulpcore/app/replica.py
@@ -1,18 +1,36 @@
from django.conf import settings
from django.db.models import Model
+import logging
from pulp_glue.common.context import PulpContext
from pulpcore.tasking.tasks import dispatch
-from pulpcore.app.tasks.base import general_update, general_create, general_delete
+from pulpcore.app.tasks.base import (
+ general_update,
+ general_create,
+ general_multi_delete,
+)
from pulpcore.plugin.util import get_url, get_domain
+_logger = logging.getLogger(__name__)
-class ReplicaContext(PulpContext):
- def prompt(self, *args, **kwargs):
- pass
- def echo(self, *args, **kwargs):
- pass
+class ReplicaContext(PulpContext):
+ def __init__(self, **kwargs):
+ super().__init__(**kwargs)
+ self.out_buf = ""
+ self.err_buf = ""
+
+ def echo(self, message: str, nl: bool = True, err: bool = False) -> None:
+ if err:
+ self.err_buf += message
+ if nl:
+ _logger.warn("{}", self.err_buf)
+ self.err_buf = ""
+ else:
+ self.out_buf += message
+ if nl:
+ _logger.info("{}", self.out_buf)
+ self.out_buf = ""
class Replicator:
@@ -39,6 +57,7 @@ def __init__(self, pulp_ctx, task_group, tls_settings):
self.tls_settings = tls_settings
self.domain = get_domain()
uri = "/api/v3/distributions/"
+ # TODO check and compare this to distribution locking on the distribution viewset.
if settings.DOMAIN_ENABLED:
uri = f"/{self.domain.name}{uri}"
self.distros_uri = uri
@@ -133,49 +152,47 @@ def create_or_update_repository(self, remote):
repository.save()
return repository
+ def distribution_data(self, repository, upstream_distribution):
+ """
+ Return the fields that need to be updated/cleared on distributions for idempotence.
+ """
+ return {
+ "repository": get_url(repository),
+ "publication": None,
+ "base_path": upstream_distribution["base_path"],
+ }
+
def create_or_update_distribution(self, repository, upstream_distribution):
+ distribution_data = self.distribution_data(repository, upstream_distribution)
try:
distro = self.distribution_model_cls.objects.get(
name=upstream_distribution["name"], pulp_domain=self.domain
)
# Check that the distribution has the right repository associated
- needs_update = self.needs_update(
- {
- "repository": get_url(repository),
- "base_path": upstream_distribution["base_path"],
- },
- distro,
- )
+ needs_update = self.needs_update(distribution_data, distro)
if needs_update:
# Update the distribution
dispatch(
general_update,
task_group=self.task_group,
+ shared_resources=[repository],
exclusive_resources=[self.distros_uri],
args=(distro.pk, self.app_label, self.distribution_serializer_name),
kwargs={
- "data": {
- "name": upstream_distribution["name"],
- "base_path": upstream_distribution["base_path"],
- "repository": get_url(repository),
- },
+ "data": distribution_data,
"partial": True,
},
)
except self.distribution_model_cls.DoesNotExist:
# Dispatch a task to create the distribution
+ distribution_data["name"] = upstream_distribution["name"]
dispatch(
general_create,
task_group=self.task_group,
+ shared_resources=[repository],
exclusive_resources=[self.distros_uri],
args=(self.app_label, self.distribution_serializer_name),
- kwargs={
- "data": {
- "name": upstream_distribution["name"],
- "base_path": upstream_distribution["base_path"],
- "repository": get_url(repository),
- }
- },
+ kwargs={"data": distribution_data},
)
def sync_params(self, repository, remote):
@@ -193,35 +210,42 @@ def sync(self, repository, remote):
def remove_missing(self, names):
# Remove all distributions with names not present in the list of names
- distros_to_delete = self.distribution_model_cls.objects.filter(
- pulp_domain=self.domain
- ).exclude(name__in=names)
- for distro in distros_to_delete:
+ # Perform this in an extra task, because we hold a big lock here.
+ distribution_ids = [
+ (distribution.pk, self.app_label, self.distribution_serializer_name)
+ for distribution in self.distribution_model_cls.objects.filter(
+ pulp_domain=self.domain
+ ).exclude(name__in=names)
+ ]
+ if distribution_ids:
dispatch(
- general_delete,
+ general_multi_delete,
task_group=self.task_group,
exclusive_resources=[self.distros_uri],
- args=(distro.pk, self.app_label, self.distribution_serializer_name),
+ args=(distribution_ids,),
)
# Remove all the repositories and remotes of the missing distributions
- repos_to_delete = self.repository_model_cls.objects.filter(pulp_domain=self.domain).exclude(
- name__in=names
- )
- for repo in repos_to_delete:
- dispatch(
- general_delete,
- task_group=self.task_group,
- exclusive_resources=[repo],
- args=(repo.pk, self.app_label, self.repository_serializer_name),
+ repositories = list(
+ self.repository_model_cls.objects.filter(pulp_domain=self.domain).exclude(
+ name__in=names
)
- remotes_to_delete = self.remote_model_cls.objects.filter(pulp_domain=self.domain).exclude(
- name__in=names
)
- for remote in remotes_to_delete:
+ repository_ids = [
+ (repo.pk, self.app_label, self.repository_serializer_name) for repo in repositories
+ ]
+
+ remotes = list(
+ self.remote_model_cls.objects.filter(pulp_domain=self.domain).exclude(name__in=names)
+ )
+ remote_ids = [
+ (remote.pk, self.app_label, self.remote_serializer_name) for remote in remotes
+ ]
+
+ if repository_ids or remote_ids:
dispatch(
- general_delete,
+ general_multi_delete,
task_group=self.task_group,
- exclusive_resources=[remote],
- args=(remote.pk, self.app_label, self.remote_serializer_name),
+ exclusive_resources=repositories + remotes,
+ args=(repository_ids + remote_ids,),
)
diff --git a/pulpcore/app/tasks/replica.py b/pulpcore/app/tasks/replica.py
--- a/pulpcore/app/tasks/replica.py
+++ b/pulpcore/app/tasks/replica.py
@@ -7,7 +7,6 @@
from pulpcore.app.apps import pulp_plugin_configs
from pulpcore.app.models import UpstreamPulp, TaskGroup
from pulpcore.app.replica import ReplicaContext
-from pulpcore.app.util import get_domain
from pulp_glue.common import __version__ as pulp_glue_version
@@ -24,7 +23,6 @@ def user_agent():
def replicate_distributions(server_pk):
- domain = get_domain()
server = UpstreamPulp.objects.get(pk=server_pk)
# Write out temporary files related to SSL
@@ -80,18 +78,9 @@ def replicate_distributions(server_pk):
# Create remote
remote = replicator.create_or_update_remote(upstream_distribution=distro)
if not remote:
- # The upstream distribution is not serving any content, cleanup an existing local
- # distribution
- try:
- local_distro = replicator.distribution_model.objects.get(
- name=distro["name"], pulp_domain=domain
- )
- local_distro.repository = None
- local_distro.publication = None
- local_distro.save()
- continue
- except replicator.distribution_model.DoesNotExist:
- continue
+ # The upstream distribution is not serving any content,
+ # let if fall throug the cracks and be cleanup below.
+ continue
# Check if there is already a repository
repository = replicator.create_or_update_repository(remote=remote)
| diff --git a/pulpcore/tests/functional/__init__.py b/pulpcore/tests/functional/__init__.py
--- a/pulpcore/tests/functional/__init__.py
+++ b/pulpcore/tests/functional/__init__.py
@@ -955,6 +955,44 @@ def random_artifact(random_artifact_factory):
return random_artifact_factory()
[email protected]()
+def domain_factory(pulpcore_bindings, pulp_settings, gen_object_with_cleanup):
+ def _domain_factory():
+ if not pulp_settings.DOMAIN_ENABLED:
+ pytest.skip("Domains not enabled")
+ keys = dict()
+ keys["pulpcore.app.models.storage.FileSystem"] = ["MEDIA_ROOT"]
+ keys["storages.backends.s3boto3.S3Boto3Storage"] = [
+ "AWS_ACCESS_KEY_ID",
+ "AWS_SECRET_ACCESS_KEY",
+ "AWS_S3_ENDPOINT_URL",
+ "AWS_S3_ADDRESSING_STYLE",
+ "AWS_S3_SIGNATURE_VERSION",
+ "AWS_S3_REGION_NAME",
+ "AWS_STORAGE_BUCKET_NAME",
+ ]
+ keys["storages.backends.azure_storage.AzureStorage"] = [
+ "AZURE_ACCOUNT_NAME",
+ "AZURE_CONTAINER",
+ "AZURE_ACCOUNT_KEY",
+ "AZURE_URL_EXPIRATION_SECS",
+ "AZURE_OVERWRITE_FILES",
+ "AZURE_LOCATION",
+ "AZURE_CONNECTION_STRING",
+ ]
+ settings = dict()
+ for key in keys[pulp_settings.DEFAULT_FILE_STORAGE]:
+ settings[key] = getattr(pulp_settings, key, None)
+ body = {
+ "name": str(uuid.uuid4()),
+ "storage_class": pulp_settings.DEFAULT_FILE_STORAGE,
+ "storage_settings": settings,
+ }
+ return gen_object_with_cleanup(pulpcore_bindings.DomainsApi, body)
+
+ return _domain_factory
+
+
# Random other fixtures
@@ -1263,41 +1301,3 @@ def _wget_recursive_download_on_host(url, destination):
)
return _wget_recursive_download_on_host
-
-
[email protected]()
-def domain_factory(domains_api_client, pulp_settings, gen_object_with_cleanup):
- def _domain_factory():
- if not pulp_settings.DOMAIN_ENABLED:
- pytest.skip("Domains not enabled")
- keys = dict()
- keys["pulpcore.app.models.storage.FileSystem"] = ["MEDIA_ROOT"]
- keys["storages.backends.s3boto3.S3Boto3Storage"] = [
- "AWS_ACCESS_KEY_ID",
- "AWS_SECRET_ACCESS_KEY",
- "AWS_S3_ENDPOINT_URL",
- "AWS_S3_ADDRESSING_STYLE",
- "AWS_S3_SIGNATURE_VERSION",
- "AWS_S3_REGION_NAME",
- "AWS_STORAGE_BUCKET_NAME",
- ]
- keys["storages.backends.azure_storage.AzureStorage"] = [
- "AZURE_ACCOUNT_NAME",
- "AZURE_CONTAINER",
- "AZURE_ACCOUNT_KEY",
- "AZURE_URL_EXPIRATION_SECS",
- "AZURE_OVERWRITE_FILES",
- "AZURE_LOCATION",
- "AZURE_CONNECTION_STRING",
- ]
- settings = dict()
- for key in keys[pulp_settings.DEFAULT_FILE_STORAGE]:
- settings[key] = getattr(pulp_settings, key, None)
- body = {
- "name": str(uuid.uuid4()),
- "storage_class": pulp_settings.DEFAULT_FILE_STORAGE,
- "storage_settings": settings,
- }
- return gen_object_with_cleanup(domains_api_client, body)
-
- return _domain_factory
diff --git a/pulpcore/tests/functional/api/test_replication.py b/pulpcore/tests/functional/api/test_replication.py
--- a/pulpcore/tests/functional/api/test_replication.py
+++ b/pulpcore/tests/functional/api/test_replication.py
@@ -44,6 +44,109 @@ def test_replication(
assert task.state == "completed"
[email protected]
+def test_replication_idempotence(
+ domain_factory,
+ bindings_cfg,
+ pulpcore_bindings,
+ file_bindings,
+ monitor_task,
+ monitor_task_group,
+ pulp_settings,
+ add_to_cleanup,
+ gen_object_with_cleanup,
+ file_distribution_factory,
+ file_publication_factory,
+ file_repository_factory,
+ tmp_path,
+):
+ # This test assures that an Upstream Pulp can be created in a non-default domain and that this
+ # Upstream Pulp configuration can be used to execute the replicate task.
+
+ # Create a domain to replicate from
+ source_domain = domain_factory()
+
+ # Add stuff to it
+ repository = file_repository_factory(pulp_domain=source_domain.name)
+ file_path = tmp_path / "file.txt"
+ file_path.write_text("DEADBEEF")
+ monitor_task(
+ file_bindings.ContentFilesApi.create(
+ file=file_path,
+ relative_path="file.txt",
+ repository=repository.pulp_href,
+ pulp_domain=source_domain.name,
+ ).task
+ )
+ publication = file_publication_factory(
+ pulp_domain=source_domain.name, repository=repository.pulp_href
+ )
+ file_distribution_factory(pulp_domain=source_domain.name, publication=publication.pulp_href)
+
+ # Create a domain as replica
+ replica_domain = domain_factory()
+
+ # Create an Upstream Pulp in the non-default domain
+ upstream_pulp_body = {
+ "name": str(uuid.uuid4()),
+ "base_url": bindings_cfg.host,
+ "api_root": pulp_settings.API_ROOT,
+ "domain": source_domain.name,
+ "username": bindings_cfg.username,
+ "password": bindings_cfg.password,
+ }
+ upstream_pulp = gen_object_with_cleanup(
+ pulpcore_bindings.UpstreamPulpsApi, upstream_pulp_body, pulp_domain=replica_domain.name
+ )
+ # Run the replicate task and assert that all tasks successfully complete.
+ response = pulpcore_bindings.UpstreamPulpsApi.replicate(upstream_pulp.pulp_href)
+ monitor_task_group(response.task_group)
+
+ for api_client in (
+ file_bindings.DistributionsFileApi,
+ file_bindings.RemotesFileApi,
+ file_bindings.RepositoriesFileApi,
+ ):
+ result = api_client.list(pulp_domain=replica_domain.name)
+ for item in result.results:
+ add_to_cleanup(api_client, item)
+
+ for api_client in (
+ file_bindings.DistributionsFileApi,
+ file_bindings.RemotesFileApi,
+ file_bindings.RepositoriesFileApi,
+ file_bindings.ContentFilesApi,
+ ):
+ result = api_client.list(pulp_domain=replica_domain.name)
+ assert result.count == 1
+
+ # Now replicate backwards
+
+ upstream_pulp_body = {
+ "name": str(uuid.uuid4()),
+ "base_url": bindings_cfg.host,
+ "api_root": pulp_settings.API_ROOT,
+ "domain": replica_domain.name,
+ "username": bindings_cfg.username,
+ "password": bindings_cfg.password,
+ }
+ upstream_pulp = gen_object_with_cleanup(
+ pulpcore_bindings.UpstreamPulpsApi, upstream_pulp_body, pulp_domain=source_domain.name
+ )
+ # Run the replicate task and assert that all tasks successfully complete.
+ response = pulpcore_bindings.UpstreamPulpsApi.replicate(upstream_pulp.pulp_href)
+ monitor_task_group(response.task_group)
+
+ for api_client in (
+ file_bindings.DistributionsFileApi,
+ file_bindings.RemotesFileApi,
+ file_bindings.RepositoriesFileApi,
+ ):
+ result = api_client.list(pulp_domain=replica_domain.name)
+ for item in result.results:
+ add_to_cleanup(api_client, item)
+
+
@pytest.mark.parallel
def test_replication_with_wrong_ca_cert(
domain_factory,
| Add ability to turn any Pulp instance into a replica
**Version**
Python PYPI installation
PulpCore 3.32.0
CertGuard 1.6.5
file 1.14.4
python 3.10.0
rpm 3.22.3
**Describe the bug**
Replication task fails to maintain the publication attribute in an existing RPM distribution artifact. Instead it tries and fails to update the repository attribute in the distribution artifact.
In a replication task group, I am getting an error for the "general_update" task of an RPM distribution artifact
https://github.com/pulp/pulpcore/blob/main/pulpcore/app/tasks/base.py
ErrorDetail(string="Only one of the attributes 'repository' and 'publication' may used simultaneously.", code='invalid')
**To Reproduce**
In an upstream pulp server (Pulp1 server), create a repository with AutoPublish set to false; its distribution is based on a publication and its repository field value is null.
In a downstream pulp server (Pulp2 server), do the same: create a repository with AutoPublish set to false whose distribution is based on a publication and whose repository field value is null.
Create an upstream object in the Pulp2 server to replicate from the upstream server (Pulp1 server)
Start the replication task in the downstream pulp server (Pulp2 server) to replicate content from the upstream server (Pulp1 server).
Replication task will fail to update the distribution artifact of the repository.
**Expected behavior**
Why doesn't Pulp replication replicate the publication from the upstream pulp server when the RPM repository distribution is based on a publication?
| This may still be a bug, but for the workflow, one is not supposed to create the repositories downstream by hand. Replicate is meant to handle _all_ of that.
What does 'distribution is based on publication' mean? Does it mean that you have manually triggered a publish task for the repo and then updated the distribution with the created href of the publication?
@mdellweg I agree that the feature as currently designed requires the user to only use the replication API to create repositories/publications/distributions in the system. However, it would be better if you could use the replicate command to turn an existing Pulp installation into a replica with a single replicate task. The replicate task would need to be able to handle situations such as the one described in this issue.
This is a request for a feature and not a bug.
Hi,
Based on my knowledge I see this as a bug, not as a feature request. Apologies, as I haven't provided an example; I'm doing that now.
In the Upstream server, we sync/add RPM packages to the repository and then publish the repository, so the distribution object is based on a publication, not on a repository version.
```
$ pulp rpm distribution list --name "rh-el7-extras"
{
  "pulp_href": "/pulp/api/v3/distribution/rpm/rpm/XXXXXXXXXXX",
  "pulp_created": "DATE",
  "base_path": "rh/el7/extras",
  "base_url": "Upstream Pulp Content Server Distribution URL",
  "content_guard": null,
  "hidden": false,
  "pulp_labels": {},
  "name": "rh-el7-extras",
  "repository": null,
  "publication": "/pulp/api/v3/publications/rpm/rpm/XXXXXXXXXXXX"
}
```
Now I initiate a pulp replication task in the downstream server. The distribution object in downstream server is created based on repository attribute and not based on publication attribute.
```
$ pulp rpm distribution list --name "rh-el7-extras"
{
  "pulp_href": "/pulp/api/v3/distribution/rpm/rpm/XXXXXXXXXXX",
  "pulp_created": "DATE",
  "base_path": "rh/el7/extras",
  "base_url": "Downstream Pulp Content Server Distribution URL",
  "content_guard": null,
  "hidden": false,
  "pulp_labels": {},
  "name": "rh-el7-extras",
  "repository": "/pulp/api/v3/repositories/rpm/rpm/XXXXXXXXXXXX",
  "publication": null
}
```
I see the confusion here, however, do you have any actual problem?
Rpm replicator task triggers MIRROR_COMPLETE sync, which automatically re-creates bit-by-bit publication. Then when content is served it makes sure to find publication serving the latest version https://github.com/pulp/pulpcore/blob/main/pulpcore/content/handler.py#L598
Even though our Handler code seems to handle the serving of content, this is not really intuitive: https://github.com/pulp/pulpcore/blob/main/pulpcore/app/replica.py#L176
Maybe replica code should take into account publications and set them for the publication based distributions? @dkliban you have any insights here?
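If the replica code went that way, one sketch (purely hypothetical; `find_local_publication` is a made-up helper) would be to override the `distribution_data()` hook from the patch above in a plugin replicator:
```python
from pulpcore.app.replica import Replicator
from pulpcore.plugin.util import get_url

class MyPluginReplicator(Replicator):
    def distribution_data(self, repository, upstream_distribution):
        data = super().distribution_data(repository, upstream_distribution)
        if upstream_distribution.get("publication"):
            # Point the local distribution at a publication instead of the repository.
            # find_local_publication() stands in for "look up/create the local
            # publication mirroring the upstream one".
            data["repository"] = None
            data["publication"] = get_url(self.find_local_publication(repository))
        return data
```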
> I see the confusion here, however, do you have any actual problem?
>
> Rpm replicator task triggers MIRROR_COMPLETE sync, which automatically re-creates bit-by-bit publication. Then when content is served it makes sure to find publication serving the latest version https://github.com/pulp/pulpcore/blob/main/pulpcore/content/handler.py#L598
Yes, I am facing a problem. In our setup, the roles of upstream and downstream are swapped. During the maintenance window of the upstream server, one of the existing downstream servers will be promoted to upstream server and start supplying content to the rest of the downstream servers.
At this time, in the downstream server (whose role is due to change to upstream), I have to destroy all distribution artifacts and recreate them based on the publication attribute, keeping the repository attribute as null.
To clarify, based on recent pulpcore mtg to discuss: The ask here, is for the replication code to notice this problem, and for the replication to "win" (ie reset/'fix' the downstream to be 'replicable') instead of failing.
Should be a straightforward fix, and easy to have test cases for. | 2024-03-19T16:21:27 |
pulp/pulpcore | 5,147 | pulp__pulpcore-5147 | [
"4637"
] | 1d7ad6fe994729159c5184241ffcbc7b9c733804 | diff --git a/pulp_file/pytest_plugin.py b/pulp_file/pytest_plugin.py
--- a/pulp_file/pytest_plugin.py
+++ b/pulp_file/pytest_plugin.py
@@ -308,7 +308,7 @@ def _file_remote_client_cert_req_factory(*, manifest_path, policy, **kwargs):
@pytest.fixture(scope="class")
-def file_repository_factory(file_repository_api_client, gen_object_with_cleanup):
+def file_repository_factory(file_bindings, gen_object_with_cleanup):
"""A factory to generate a File Repository with auto-deletion after the test run."""
def _file_repository_factory(pulp_domain=None, **body):
@@ -316,11 +316,26 @@ def _file_repository_factory(pulp_domain=None, **body):
if pulp_domain:
kwargs["pulp_domain"] = pulp_domain
body.setdefault("name", str(uuid.uuid4()))
- return gen_object_with_cleanup(file_repository_api_client, body, **kwargs)
+ return gen_object_with_cleanup(file_bindings.RepositoriesFileApi, body, **kwargs)
return _file_repository_factory
[email protected](scope="class")
+def file_publication_factory(file_bindings, gen_object_with_cleanup):
+ """A factory to generate a File Publication with auto-deletion after the test run."""
+
+ def _file_publication_factory(**kwargs):
+ extra_args = {}
+ if pulp_domain := kwargs.pop("pulp_domain", None):
+ extra_args["pulp_domain"] = pulp_domain
+ # XOR check on repository and repository_version
+ assert bool("repository" in kwargs) ^ bool("repository_version" in kwargs)
+ return gen_object_with_cleanup(file_bindings.PublicationsFileApi, kwargs, **extra_args)
+
+ return _file_publication_factory
+
+
@pytest.fixture
def gen_bad_response_fixture_server(gen_threaded_aiohttp_server):
"""
diff --git a/pulpcore/app/replica.py b/pulpcore/app/replica.py
--- a/pulpcore/app/replica.py
+++ b/pulpcore/app/replica.py
@@ -1,18 +1,36 @@
from django.conf import settings
from django.db.models import Model
+import logging
from pulp_glue.common.context import PulpContext
from pulpcore.tasking.tasks import dispatch
-from pulpcore.app.tasks.base import general_update, general_create, general_delete
+from pulpcore.app.tasks.base import (
+ general_update,
+ general_create,
+ general_multi_delete,
+)
from pulpcore.plugin.util import get_url, get_domain
+_logger = logging.getLogger(__name__)
-class ReplicaContext(PulpContext):
- def prompt(self, *args, **kwargs):
- pass
- def echo(self, *args, **kwargs):
- pass
+class ReplicaContext(PulpContext):
+ def __init__(self, **kwargs):
+ super().__init__(**kwargs)
+ self.out_buf = ""
+ self.err_buf = ""
+
+ def echo(self, message: str, nl: bool = True, err: bool = False) -> None:
+ if err:
+ self.err_buf += message
+ if nl:
+ _logger.warn("{}", self.err_buf)
+ self.err_buf = ""
+ else:
+ self.out_buf += message
+ if nl:
+ _logger.info("{}", self.out_buf)
+ self.out_buf = ""
class Replicator:
@@ -39,6 +57,7 @@ def __init__(self, pulp_ctx, task_group, tls_settings):
self.tls_settings = tls_settings
self.domain = get_domain()
uri = "/api/v3/distributions/"
+ # TODO check and compare this to distribution locking on the distribution viewset.
if settings.DOMAIN_ENABLED:
uri = f"/{self.domain.name}{uri}"
self.distros_uri = uri
@@ -133,49 +152,47 @@ def create_or_update_repository(self, remote):
repository.save()
return repository
+ def distribution_data(self, repository, upstream_distribution):
+ """
+ Return the fields that need to be updated/cleared on distributions for idempotence.
+ """
+ return {
+ "repository": get_url(repository),
+ "publication": None,
+ "base_path": upstream_distribution["base_path"],
+ }
+
def create_or_update_distribution(self, repository, upstream_distribution):
+ distribution_data = self.distribution_data(repository, upstream_distribution)
try:
distro = self.distribution_model_cls.objects.get(
name=upstream_distribution["name"], pulp_domain=self.domain
)
# Check that the distribution has the right repository associated
- needs_update = self.needs_update(
- {
- "repository": get_url(repository),
- "base_path": upstream_distribution["base_path"],
- },
- distro,
- )
+ needs_update = self.needs_update(distribution_data, distro)
if needs_update:
# Update the distribution
dispatch(
general_update,
task_group=self.task_group,
+ shared_resources=[repository],
exclusive_resources=[self.distros_uri],
args=(distro.pk, self.app_label, self.distribution_serializer_name),
kwargs={
- "data": {
- "name": upstream_distribution["name"],
- "base_path": upstream_distribution["base_path"],
- "repository": get_url(repository),
- },
+ "data": distribution_data,
"partial": True,
},
)
except self.distribution_model_cls.DoesNotExist:
# Dispatch a task to create the distribution
+ distribution_data["name"] = upstream_distribution["name"]
dispatch(
general_create,
task_group=self.task_group,
+ shared_resources=[repository],
exclusive_resources=[self.distros_uri],
args=(self.app_label, self.distribution_serializer_name),
- kwargs={
- "data": {
- "name": upstream_distribution["name"],
- "base_path": upstream_distribution["base_path"],
- "repository": get_url(repository),
- }
- },
+ kwargs={"data": distribution_data},
)
def sync_params(self, repository, remote):
@@ -193,35 +210,42 @@ def sync(self, repository, remote):
def remove_missing(self, names):
# Remove all distributions with names not present in the list of names
- distros_to_delete = self.distribution_model_cls.objects.filter(
- pulp_domain=self.domain
- ).exclude(name__in=names)
- for distro in distros_to_delete:
+ # Perform this in an extra task, because we hold a big lock here.
+ distribution_ids = [
+ (distribution.pk, self.app_label, self.distribution_serializer_name)
+ for distribution in self.distribution_model_cls.objects.filter(
+ pulp_domain=self.domain
+ ).exclude(name__in=names)
+ ]
+ if distribution_ids:
dispatch(
- general_delete,
+ general_multi_delete,
task_group=self.task_group,
exclusive_resources=[self.distros_uri],
- args=(distro.pk, self.app_label, self.distribution_serializer_name),
+ args=(distribution_ids,),
)
# Remove all the repositories and remotes of the missing distributions
- repos_to_delete = self.repository_model_cls.objects.filter(pulp_domain=self.domain).exclude(
- name__in=names
- )
- for repo in repos_to_delete:
- dispatch(
- general_delete,
- task_group=self.task_group,
- exclusive_resources=[repo],
- args=(repo.pk, self.app_label, self.repository_serializer_name),
+ repositories = list(
+ self.repository_model_cls.objects.filter(pulp_domain=self.domain).exclude(
+ name__in=names
)
- remotes_to_delete = self.remote_model_cls.objects.filter(pulp_domain=self.domain).exclude(
- name__in=names
)
- for remote in remotes_to_delete:
+ repository_ids = [
+ (repo.pk, self.app_label, self.repository_serializer_name) for repo in repositories
+ ]
+
+ remotes = list(
+ self.remote_model_cls.objects.filter(pulp_domain=self.domain).exclude(name__in=names)
+ )
+ remote_ids = [
+ (remote.pk, self.app_label, self.remote_serializer_name) for remote in remotes
+ ]
+
+ if repository_ids or remote_ids:
dispatch(
- general_delete,
+ general_multi_delete,
task_group=self.task_group,
- exclusive_resources=[remote],
- args=(remote.pk, self.app_label, self.remote_serializer_name),
+ exclusive_resources=repositories + remotes,
+ args=(repository_ids + remote_ids,),
)
diff --git a/pulpcore/app/tasks/replica.py b/pulpcore/app/tasks/replica.py
--- a/pulpcore/app/tasks/replica.py
+++ b/pulpcore/app/tasks/replica.py
@@ -7,7 +7,6 @@
from pulpcore.app.apps import pulp_plugin_configs
from pulpcore.app.models import UpstreamPulp, TaskGroup
from pulpcore.app.replica import ReplicaContext
-from pulpcore.app.util import get_domain
from pulp_glue.common import __version__ as pulp_glue_version
@@ -24,7 +23,6 @@ def user_agent():
def replicate_distributions(server_pk):
- domain = get_domain()
server = UpstreamPulp.objects.get(pk=server_pk)
# Write out temporary files related to SSL
@@ -80,18 +78,9 @@ def replicate_distributions(server_pk):
# Create remote
remote = replicator.create_or_update_remote(upstream_distribution=distro)
if not remote:
- # The upstream distribution is not serving any content, cleanup an existing local
- # distribution
- try:
- local_distro = replicator.distribution_model.objects.get(
- name=distro["name"], pulp_domain=domain
- )
- local_distro.repository = None
- local_distro.publication = None
- local_distro.save()
- continue
- except replicator.distribution_model.DoesNotExist:
- continue
+ # The upstream distribution is not serving any content,
+ # let if fall throug the cracks and be cleanup below.
+ continue
# Check if there is already a repository
repository = replicator.create_or_update_repository(remote=remote)
| diff --git a/pulpcore/tests/functional/__init__.py b/pulpcore/tests/functional/__init__.py
--- a/pulpcore/tests/functional/__init__.py
+++ b/pulpcore/tests/functional/__init__.py
@@ -955,6 +955,44 @@ def random_artifact(random_artifact_factory):
return random_artifact_factory()
[email protected]()
+def domain_factory(pulpcore_bindings, pulp_settings, gen_object_with_cleanup):
+ def _domain_factory():
+ if not pulp_settings.DOMAIN_ENABLED:
+ pytest.skip("Domains not enabled")
+ keys = dict()
+ keys["pulpcore.app.models.storage.FileSystem"] = ["MEDIA_ROOT"]
+ keys["storages.backends.s3boto3.S3Boto3Storage"] = [
+ "AWS_ACCESS_KEY_ID",
+ "AWS_SECRET_ACCESS_KEY",
+ "AWS_S3_ENDPOINT_URL",
+ "AWS_S3_ADDRESSING_STYLE",
+ "AWS_S3_SIGNATURE_VERSION",
+ "AWS_S3_REGION_NAME",
+ "AWS_STORAGE_BUCKET_NAME",
+ ]
+ keys["storages.backends.azure_storage.AzureStorage"] = [
+ "AZURE_ACCOUNT_NAME",
+ "AZURE_CONTAINER",
+ "AZURE_ACCOUNT_KEY",
+ "AZURE_URL_EXPIRATION_SECS",
+ "AZURE_OVERWRITE_FILES",
+ "AZURE_LOCATION",
+ "AZURE_CONNECTION_STRING",
+ ]
+ settings = dict()
+ for key in keys[pulp_settings.DEFAULT_FILE_STORAGE]:
+ settings[key] = getattr(pulp_settings, key, None)
+ body = {
+ "name": str(uuid.uuid4()),
+ "storage_class": pulp_settings.DEFAULT_FILE_STORAGE,
+ "storage_settings": settings,
+ }
+ return gen_object_with_cleanup(pulpcore_bindings.DomainsApi, body)
+
+ return _domain_factory
+
+
# Random other fixtures
@@ -1263,41 +1301,3 @@ def _wget_recursive_download_on_host(url, destination):
)
return _wget_recursive_download_on_host
-
-
[email protected]()
-def domain_factory(domains_api_client, pulp_settings, gen_object_with_cleanup):
- def _domain_factory():
- if not pulp_settings.DOMAIN_ENABLED:
- pytest.skip("Domains not enabled")
- keys = dict()
- keys["pulpcore.app.models.storage.FileSystem"] = ["MEDIA_ROOT"]
- keys["storages.backends.s3boto3.S3Boto3Storage"] = [
- "AWS_ACCESS_KEY_ID",
- "AWS_SECRET_ACCESS_KEY",
- "AWS_S3_ENDPOINT_URL",
- "AWS_S3_ADDRESSING_STYLE",
- "AWS_S3_SIGNATURE_VERSION",
- "AWS_S3_REGION_NAME",
- "AWS_STORAGE_BUCKET_NAME",
- ]
- keys["storages.backends.azure_storage.AzureStorage"] = [
- "AZURE_ACCOUNT_NAME",
- "AZURE_CONTAINER",
- "AZURE_ACCOUNT_KEY",
- "AZURE_URL_EXPIRATION_SECS",
- "AZURE_OVERWRITE_FILES",
- "AZURE_LOCATION",
- "AZURE_CONNECTION_STRING",
- ]
- settings = dict()
- for key in keys[pulp_settings.DEFAULT_FILE_STORAGE]:
- settings[key] = getattr(pulp_settings, key, None)
- body = {
- "name": str(uuid.uuid4()),
- "storage_class": pulp_settings.DEFAULT_FILE_STORAGE,
- "storage_settings": settings,
- }
- return gen_object_with_cleanup(domains_api_client, body)
-
- return _domain_factory
diff --git a/pulpcore/tests/functional/api/test_replication.py b/pulpcore/tests/functional/api/test_replication.py
--- a/pulpcore/tests/functional/api/test_replication.py
+++ b/pulpcore/tests/functional/api/test_replication.py
@@ -44,6 +44,109 @@ def test_replication(
assert task.state == "completed"
[email protected]
+def test_replication_idempotence(
+ domain_factory,
+ bindings_cfg,
+ pulpcore_bindings,
+ file_bindings,
+ monitor_task,
+ monitor_task_group,
+ pulp_settings,
+ add_to_cleanup,
+ gen_object_with_cleanup,
+ file_distribution_factory,
+ file_publication_factory,
+ file_repository_factory,
+ tmp_path,
+):
+ # This test assures that an Upstream Pulp can be created in a non-default domain and that this
+ # Upstream Pulp configuration can be used to execute the replicate task.
+
+ # Create a domain to replicate from
+ source_domain = domain_factory()
+
+ # Add stuff to it
+ repository = file_repository_factory(pulp_domain=source_domain.name)
+ file_path = tmp_path / "file.txt"
+ file_path.write_text("DEADBEEF")
+ monitor_task(
+ file_bindings.ContentFilesApi.create(
+ file=file_path,
+ relative_path="file.txt",
+ repository=repository.pulp_href,
+ pulp_domain=source_domain.name,
+ ).task
+ )
+ publication = file_publication_factory(
+ pulp_domain=source_domain.name, repository=repository.pulp_href
+ )
+ file_distribution_factory(pulp_domain=source_domain.name, publication=publication.pulp_href)
+
+ # Create a domain as replica
+ replica_domain = domain_factory()
+
+ # Create an Upstream Pulp in the non-default domain
+ upstream_pulp_body = {
+ "name": str(uuid.uuid4()),
+ "base_url": bindings_cfg.host,
+ "api_root": pulp_settings.API_ROOT,
+ "domain": source_domain.name,
+ "username": bindings_cfg.username,
+ "password": bindings_cfg.password,
+ }
+ upstream_pulp = gen_object_with_cleanup(
+ pulpcore_bindings.UpstreamPulpsApi, upstream_pulp_body, pulp_domain=replica_domain.name
+ )
+ # Run the replicate task and assert that all tasks successfully complete.
+ response = pulpcore_bindings.UpstreamPulpsApi.replicate(upstream_pulp.pulp_href)
+ monitor_task_group(response.task_group)
+
+ for api_client in (
+ file_bindings.DistributionsFileApi,
+ file_bindings.RemotesFileApi,
+ file_bindings.RepositoriesFileApi,
+ ):
+ result = api_client.list(pulp_domain=replica_domain.name)
+ for item in result.results:
+ add_to_cleanup(api_client, item)
+
+ for api_client in (
+ file_bindings.DistributionsFileApi,
+ file_bindings.RemotesFileApi,
+ file_bindings.RepositoriesFileApi,
+ file_bindings.ContentFilesApi,
+ ):
+ result = api_client.list(pulp_domain=replica_domain.name)
+ assert result.count == 1
+
+ # Now replicate backwards
+
+ upstream_pulp_body = {
+ "name": str(uuid.uuid4()),
+ "base_url": bindings_cfg.host,
+ "api_root": pulp_settings.API_ROOT,
+ "domain": replica_domain.name,
+ "username": bindings_cfg.username,
+ "password": bindings_cfg.password,
+ }
+ upstream_pulp = gen_object_with_cleanup(
+ pulpcore_bindings.UpstreamPulpsApi, upstream_pulp_body, pulp_domain=source_domain.name
+ )
+ # Run the replicate task and assert that all tasks successfully complete.
+ response = pulpcore_bindings.UpstreamPulpsApi.replicate(upstream_pulp.pulp_href)
+ monitor_task_group(response.task_group)
+
+ for api_client in (
+ file_bindings.DistributionsFileApi,
+ file_bindings.RemotesFileApi,
+ file_bindings.RepositoriesFileApi,
+ ):
+ result = api_client.list(pulp_domain=replica_domain.name)
+ for item in result.results:
+ add_to_cleanup(api_client, item)
+
+
@pytest.mark.parallel
def test_replication_with_wrong_ca_cert(
domain_factory,
| Add ability to turn any Pulp instance into a replica
**Version**
Python PYPI installation
PulpCore 3.32.0
CertGuard 1.6.5
file 1.14.4
python 3.10.0
rpm 3.22.3
**Describe the bug**
Replication task fails to maintain the publication attribute in an existing RPM distribution artifact. Instead it tries and fails to update the repository attribute in the distribution artifact.
In a replication task group, I am getting an error for the "general_update" task of an RPM distribution artifact
https://github.com/pulp/pulpcore/blob/main/pulpcore/app/tasks/base.py
ErrorDetail(string="Only one of the attributes 'repository' and 'publication' may used simultaneously.", code='invalid')
**To Reproduce**
In an upstream pulp server (Pulp1 server), create a repository with AutoPublish set to false; its distribution is based on a publication and its repository field value is null.
In a downstream pulp server (Pulp2 server), do the same: create a repository with AutoPublish set to false whose distribution is based on a publication and whose repository field value is null.
Create an upstream object in the Pulp2 server to replicate from the upstream server (Pulp1 server)
Start the replication task in the downstream pulp server (Pulp2 server) to replicate content from the upstream server (Pulp1 server).
Replication task will fail to update the distribution artifact of the repository.
**Expected behavior**
Why doesn't Pulp replication replicate the publication from the upstream pulp server when the RPM repository distribution is based on a publication?
| This may still be a bug, but for the workflow, one is not supposed to create the repositories downstream by hand. Replicate is meant to handle _all_ of that.
What does 'distribution is based on publication' mean? Does it mean that you have manually triggered a publish task for the repo and then updated the distribution with the created href of the publication?
@mdellweg I agree that the feature as currently designed requires the user to only use the replication API to create repositories/publications/distributions in the system. However, it would be better if you could use the replicate command to turn an existing Pulp installation into a replica with a single replicate task. The replicate task would need to be able to handle situations such as the one described in this issue.
This is a request for a feature and not a bug.
Hi,
Based on my knowledge I see this as a bug, not as a feature request. Apologies, as I haven't provided an example; I'm doing that now.
In the Upstream server, we sync/add RPM packages to the repository and then publish the repository, so the distribution object is based on a publication, not on a repository version.
```
$ pulp rpm distribution list --name "rh-el7-extras"
{
  "pulp_href": "/pulp/api/v3/distribution/rpm/rpm/XXXXXXXXXXX",
  "pulp_created": "DATE",
  "base_path": "rh/el7/extras",
  "base_url": "Upstream Pulp Content Server Distribution URL",
  "content_guard": null,
  "hidden": false,
  "pulp_labels": {},
  "name": "rh-el7-extras",
  "repository": null,
  "publication": "/pulp/api/v3/publications/rpm/rpm/XXXXXXXXXXXX"
}
```
Now I initiate a pulp replication task in the downstream server. The distribution object in downstream server is created based on repository attribute and not based on publication attribute.
```
$ pulp rpm distribution list --name "rh-el7-extras"
{
  "pulp_href": "/pulp/api/v3/distribution/rpm/rpm/XXXXXXXXXXX",
  "pulp_created": "DATE",
  "base_path": "rh/el7/extras",
  "base_url": "Downstream Pulp Content Server Distribution URL",
  "content_guard": null,
  "hidden": false,
  "pulp_labels": {},
  "name": "rh-el7-extras",
  "repository": "/pulp/api/v3/repositories/rpm/rpm/XXXXXXXXXXXX",
  "publication": null
}
```
I see the confusion here, however, do you have any actual problem?
Rpm replicator task triggers MIRROR_COMPLETE sync, which automatically re-creates bit-by-bit publication. Then when content is served it makes sure to find publication serving the latest version https://github.com/pulp/pulpcore/blob/main/pulpcore/content/handler.py#L598
Even though our Handler code seems to handle the serving of content, this is not really intuitive: https://github.com/pulp/pulpcore/blob/main/pulpcore/app/replica.py#L176
Maybe replica code should take into account publications and set them for the publication based distributions? @dkliban you have any insights here?
> I see the confusion here, however, do you have any actual problem?
>
> Rpm replicator task triggers MIRROR_COMPLETE sync, which automatically re-creates bit-by-bit publication. Then when content is served it makes sure to find publication serving the latest version https://github.com/pulp/pulpcore/blob/main/pulpcore/content/handler.py#L598
Yes, I am facing a problem. In our setup, the roles of upstream and downstream are swapped. During the maintenance window of the upstream server, one of the existing downstream servers will be promoted to upstream server and start supplying content to the rest of the downstream servers.
At this time, in the downstream server (whose role is due to change to upstream), I have to destroy all distribution artifacts and recreate them based on the publication attribute, keeping the repository attribute as null.
To clarify, based on recent pulpcore mtg to discuss: The ask here, is for the replication code to notice this problem, and for the replication to "win" (ie reset/'fix' the downstream to be 'replicable') instead of failing.
Should be a straightforward fix, and easy to have test cases for. | 2024-03-22T14:24:14 |
pulp/pulpcore | 5,181 | pulp__pulpcore-5181 | [
"3574"
] | 1e573d0aad3e413e2c6ae2e56b0b2bdb5e7ee30f | diff --git a/pulpcore/app/redis_connection.py b/pulpcore/app/redis_connection.py
--- a/pulpcore/app/redis_connection.py
+++ b/pulpcore/app/redis_connection.py
@@ -1,5 +1,5 @@
from redis import Redis
-from aioredis import Redis as aRedis
+from redis.asyncio import Redis as aRedis
from pulpcore.app.settings import settings
diff --git a/pulpcore/cache/cache.py b/pulpcore/cache/cache.py
--- a/pulpcore/cache/cache.py
+++ b/pulpcore/cache/cache.py
@@ -11,7 +11,7 @@
from aiohttp.web_exceptions import HTTPFound
from redis import ConnectionError
-from aioredis import ConnectionError as AConnectionError
+from redis.asyncio import ConnectionError as AConnectionError
from pulpcore.app.settings import settings
from pulpcore.app.redis_connection import (
| Aioredis is no longer supported, replace the dependency with redis-py (which we are already using)
**Version**
3.16+
**Describe the bug**
pulpcore depends on aioredis, which is no longer supported, and has been subsumed by the standard redis library which we are already using in synchronous contexts
https://github.com/aio-libs/aioredis-py#-aioredis-is-now-in-redis-py-420rc1-
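For reference, a minimal sketch of the replacement API (same `redis` package, async flavour; `aclose()` assumes redis-py 5+):
```python
import asyncio
from redis.asyncio import Redis  # was: from aioredis import Redis

async def main():
    client = Redis(host="localhost", port=6379)
    await client.set("pulp:example", "value")
    print(await client.get("pulp:example"))
    await client.aclose()  # use close() on older redis-py releases

asyncio.run(main())
```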
| 2024-03-26T19:48:11 |
||
pulp/pulpcore | 5,182 | pulp__pulpcore-5182 | [
"3574"
] | c09b5dd64f8a10ebb21e98eeb75c7fe85ca57e03 | diff --git a/pulpcore/app/redis_connection.py b/pulpcore/app/redis_connection.py
--- a/pulpcore/app/redis_connection.py
+++ b/pulpcore/app/redis_connection.py
@@ -1,5 +1,5 @@
from redis import Redis
-from aioredis import Redis as aRedis
+from redis.asyncio import Redis as aRedis
from pulpcore.app.settings import settings
diff --git a/pulpcore/cache/cache.py b/pulpcore/cache/cache.py
--- a/pulpcore/cache/cache.py
+++ b/pulpcore/cache/cache.py
@@ -11,7 +11,7 @@
from aiohttp.web_exceptions import HTTPFound
from redis import ConnectionError
-from aioredis import ConnectionError as AConnectionError
+from redis.asyncio import ConnectionError as AConnectionError
from pulpcore.app.settings import settings
from pulpcore.app.redis_connection import (
| Aioredis is no longer supported, replace the dependency with redis-py (which we are already using)
**Version**
3.16+
**Describe the bug**
pulpcore depends on aioredis, which is no longer supported, and has been subsumed by the standard redis library which we are already using in synchronous contexts
https://github.com/aio-libs/aioredis-py#-aioredis-is-now-in-redis-py-420rc1-
| 2024-03-26T19:50:20 |
||
pulp/pulpcore | 5,190 | pulp__pulpcore-5190 | [
"5189"
] | d84a88a433c4344e4b60d220343930cc33a437a2 | diff --git a/pulpcore/app/wsgi.py b/pulpcore/app/wsgi.py
--- a/pulpcore/app/wsgi.py
+++ b/pulpcore/app/wsgi.py
@@ -11,7 +11,6 @@
from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware
from pulpcore.app.entrypoint import using_pulp_api_worker
-from pulpcore.app.util import init_domain_metrics_exporter
if not using_pulp_api_worker.get(False):
raise RuntimeError("This app must be executed using pulpcore-api entrypoint.")
@@ -19,4 +18,6 @@
application = get_wsgi_application()
application = OpenTelemetryMiddleware(application)
+from pulpcore.app.util import init_domain_metrics_exporter # noqa: E402
+
init_domain_metrics_exporter()
| Fix import in wsgi preventing startup
**Version**
Confirmed with Katello folks using 3.49 branch.
**Describe the bug**
We're getting an error during the startup stage:
```python
Starting Pulp API Server...
Traceback (most recent call last):
File "/usr/bin/pulpcore-api", line 33, in <module>
sys.exit(load_entry_point('pulpcore==3.49.1', 'console_scripts', 'pulpcore-api')())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/pulpcore/app/entrypoint.py", line 140, in main
PulpcoreApiApplication(options).run()
File "/usr/lib/python3.11/site-packages/gunicorn/app/base.py", line 231, in run
super().run()
File "/usr/lib/python3.11/site-packages/gunicorn/app/base.py", line 72, in run
Arbiter(self).run()
^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/gunicorn/arbiter.py", line 58, in __init__
self.setup(app)
File "/usr/lib/python3.11/site-packages/gunicorn/arbiter.py", line 118, in setup
self.app.wsgi()
File "/usr/lib/python3.11/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/pulpcore/app/entrypoint.py", line 95, in load
import pulpcore.app.wsgi
File "/usr/lib/python3.11/site-packages/pulpcore/app/wsgi.py", line 14, in <module>
from pulpcore.app.util import init_domain_metrics_exporter
File "/usr/lib/python3.11/site-packages/pulpcore/app/util.py", line 24, in <module>
from pulpcore.app import models
File "/usr/lib/python3.11/site-packages/pulpcore/app/models/__init__.py", line 4, in <module>
from .base import (
File "/usr/lib/python3.11/site-packages/pulpcore/app/models/base.py", line 3, in <module>
from django.contrib.contenttypes.fields import GenericRelation
File "/usr/lib/python3.11/site-packages/django/contrib/contenttypes/fields.py", line 7, in <module>
from django.contrib.contenttypes.models import ContentType
File "/usr/lib/python3.11/site-packages/django/contrib/contenttypes/models.py", line 139, in <module>
class ContentType(models.Model):
File "/usr/lib/python3.11/site-packages/django/db/models/base.py", line 129, in __new__
app_config = apps.get_containing_app_config(module)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/django/apps/registry.py", line 260, in get_containing_app_config
```
and what caught our eye was this line:
```python
File "/usr/lib/python3.11/site-packages/pulpcore/app/wsgi.py", line 14, in <module>
from pulpcore.app.util import init_domain_metrics_exporter
```
Also, there's already a fix for this in the main branch #5178
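The traceback fails inside Django's app registry, which only gets populated when `get_wsgi_application()` runs, so the fix defers the model-dependent import until after that call. Roughly, the patched `pulpcore/app/wsgi.py` ends up looking like this (abridged sketch, omitting the entrypoint guard):
```python
from django.core.wsgi import get_wsgi_application
from opentelemetry.instrumentation.wsgi import OpenTelemetryMiddleware

application = get_wsgi_application()
application = OpenTelemetryMiddleware(application)

# Imported only now, after the app registry has been set up.
from pulpcore.app.util import init_domain_metrics_exporter  # noqa: E402

init_domain_metrics_exporter()
```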
**To Reproduce**
Installing using pip and rpm packages.
**Expected behavior**
The application should start without issues
| 2024-03-27T18:51:55 |
||
pulp/pulpcore | 5,193 | pulp__pulpcore-5193 | [
"5179"
] | 1893dcae461d1845578ef2224cd5e9852610bd44 | diff --git a/pulpcore/app/settings.py b/pulpcore/app/settings.py
--- a/pulpcore/app/settings.py
+++ b/pulpcore/app/settings.py
@@ -256,6 +256,7 @@
AUTHENTICATION_JSON_HEADER = ""
AUTHENTICATION_JSON_HEADER_JQ_FILTER = ""
+AUTHENTICATION_JSON_HEADER_OPENAPI_SECURITY_SCHEME = {}
ALLOWED_IMPORT_PATHS = []
@@ -407,6 +408,16 @@
authentication_json_header_validator & authentication_json_header_jq_filter_validator
)
+authentication_json_header_openapi_security_scheme_setting_validator = Validator(
+ "AUTHENTICATION_JSON_HEADER_OPENAPI_SECURITY_SCHEME", len_min=1
+)
+authentication_json_header_openapi_security_scheme_validator = Validator(
+ "AUTHENTICATION_JSON_HEADER_OPENAPI_SECURITY_SCHEME",
+ when=authentication_json_header_openapi_security_scheme_setting_validator,
+ is_type_of=dict,
+ messages={"is_type_of": "{name} must be a dictionary."},
+)
+
settings = DjangoDynaconf(
__name__,
ENVVAR_PREFIX_FOR_DYNACONF="PULP",
@@ -424,6 +435,7 @@
storage_validator,
unknown_algs_validator,
json_header_auth_validator,
+ authentication_json_header_openapi_security_scheme_validator,
],
)
# HERE ENDS DYNACONF EXTENSION LOAD (No more code below this line)
diff --git a/pulpcore/openapi/__init__.py b/pulpcore/openapi/__init__.py
--- a/pulpcore/openapi/__init__.py
+++ b/pulpcore/openapi/__init__.py
@@ -22,6 +22,7 @@
from drf_spectacular.settings import spectacular_settings
from drf_spectacular.types import OpenApiTypes
from drf_spectacular.utils import OpenApiParameter, extend_schema_field
+from drf_spectacular.extensions import OpenApiAuthenticationExtension
from rest_framework import mixins, serializers
from rest_framework.exceptions import ParseError
from rest_framework.schemas.utils import get_pk_description
@@ -518,3 +519,11 @@ def get_schema(self, request=None, public=False):
result["servers"] = [{"url": server_url}]
return normalize_result_object(result)
+
+
+class JSONHeaderRemoteAuthenticationScheme(OpenApiAuthenticationExtension):
+ target_class = "pulpcore.app.authentication.JSONHeaderRemoteAuthentication"
+ name = "json_header_remote_authentication"
+
+ def get_security_definition(self, auto_schema):
+ return settings.AUTHENTICATION_JSON_HEADER_OPENAPI_SECURITY_SCHEME
| diff --git a/pulpcore/tests/functional/api/test_openapi_schema.py b/pulpcore/tests/functional/api/test_openapi_schema.py
--- a/pulpcore/tests/functional/api/test_openapi_schema.py
+++ b/pulpcore/tests/functional/api/test_openapi_schema.py
@@ -86,3 +86,22 @@ def test_no_dup_operation_ids(pulp_openapi_schema):
dup_ids = [id for id, cnt in operation_ids.items() if cnt > 1]
assert len(dup_ids) == 0, f"Duplicate operationIds found: {dup_ids}"
+
+
[email protected]
+def test_external_auth_on_security_scheme(pulp_settings, pulp_openapi_schema):
+ if (
+ "django.contrib.auth.backends.RemoteUserBackend"
+ not in pulp_settings.AUTHENTICATION_BACKENDS
+ and "pulpcore.app.authentication.JSONHeaderRemoteAuthentication"
+ not in pulp_settings.REST_FRAMEWORK["DEFAULT_AUTHENTICATION_CLASSES"]
+ ):
+ pytest.skip(
+ "Test can't run unless RemoteUserBackend and JSONHeaderRemoteAuthentication are enabled"
+ )
+
+ security_schemes = pulp_openapi_schema["components"]["securitySchemes"]
+ assert "json_header_remote_authentication" in security_schemes
+
+ security_scheme = security_schemes["json_header_remote_authentication"]
+ assert pulp_settings.AUTHENTICATION_JSON_HEADER_OPENAPI_SECURITY_SCHEME == security_scheme
| Include SecurityScheme for the external auth in the OpenAPI schema
**Is your feature request related to a problem? Please describe.**
We are using the `JSONHeaderRemoteAuthentication` backend to authenticate users. The authentication actually occurs before the request is routed to Pulp. The OpenAPI schema currently states that Basic and Session auth are supported. There is no way for `pulp-cli` to know that it can use `OAuth2` to authenticate with this Pulp server.
**Describe the solution you'd like**
A `REMOTE_AUTH_OPENAPI_SECURITY_SCHEME` setting, which would need to be a Python dictionary or a JSON object that looks like this:
```
{"oAuth2": {
"type": "oauth2",
"description": "External OAuth integration",
"flows": {
"clientCredentials": {
"tokenUrl": "https://sso.mycompany.com/auth/realms/company-external/protocol/openid-connect/token",
"scopes": {
"api.console": "Grant access to Pulp"
}
}
}
    }
}
```
The top key is the `name` of the security scheme. The whole data structure should look like a full securityScheme that would appear in the OpenAPI schema.
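In the patch above this landed as the `AUTHENTICATION_JSON_HEADER_OPENAPI_SECURITY_SCHEME` setting, whose value is the scheme definition itself (the scheme name is fixed to `json_header_remote_authentication` by the extension). A sketch of what a deployment might put in its settings file; the token URL and scope below are placeholders, not real endpoints:
```python
# settings.py (illustrative values only)
AUTHENTICATION_JSON_HEADER_OPENAPI_SECURITY_SCHEME = {
    "type": "oauth2",
    "description": "External OAuth integration",
    "flows": {
        "clientCredentials": {
            "tokenUrl": "https://sso.example.com/auth/realms/external/protocol/openid-connect/token",
            "scopes": {"api.console": "Grant access to Pulp"},
        }
    },
}
```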
| 2024-03-27T20:05:05 |
|
pulp/pulpcore | 5,196 | pulp__pulpcore-5196 | [
"5149"
] | 9192c2bf0ccb0e0a2df595fd3efdd0980c80ff34 | diff --git a/pulpcore/plugin/viewsets/content.py b/pulpcore/plugin/viewsets/content.py
--- a/pulpcore/plugin/viewsets/content.py
+++ b/pulpcore/plugin/viewsets/content.py
@@ -133,18 +133,20 @@ def init_content_data(self, serializer, request):
# in the upload code path make sure, the artifact exists, and the 'file'
# parameter is replaced by 'artifact'
artifact = Artifact.init_and_validate(task_payload.pop("file"))
+ # if artifact already exists, let's use it
try:
- artifact.save()
- except IntegrityError:
- # if artifact already exists, let's use it
+ artifact = Artifact.objects.get(
+ sha256=artifact.sha256, pulp_domain=request.pulp_domain
+ )
+ artifact.touch()
+ except (Artifact.DoesNotExist, DatabaseError):
try:
+ artifact.save()
+ except IntegrityError:
artifact = Artifact.objects.get(
sha256=artifact.sha256, pulp_domain=request.pulp_domain
)
artifact.touch()
- except (Artifact.DoesNotExist, DatabaseError):
- # the artifact has since been removed from when we first attempted to save it
- artifact.save()
task_payload["artifact"] = ArtifactSerializer(
artifact, context={"request": request}
| Overwriting existing packages in backend storage can lead to caching issues
If an existing package is re-added to pulp, the default behavior will overwrite the existing file in backing storage. This is typically fine.
- If using Azure Blobstore, the timestamp of the blob is updated (Last-Modified time and ETag).
- Conversely, some CDN's (notably Azure Front Door) use Last-Modified Time as a signal that a file in origin has updated.
- This can lead to poor cache behavior, and in some cases, incomplete downloads as the CDN attempts to resolve disparate content.
- If we set `AZURE_OVERWRITE_FILES` to `false` this partially mitigates the issue (Last-Modified/ETag are unmodified). However, this results in duplicate copies written to storage (with a suffix to differentiate from the original).
- We should have an option that does "nothing" if the uploaded file already exists (don't overwrite, and don't write a new copy).
| I was able to reproduce this with a pulp dev environment using minio. Steps to reproduce:
```
pulp rpm repository create --name test
pulp rpm distribution create --name test --repository test --base-path test
pulp rpm content upload --file bear-4.1-1.noarch.rpm
pulp rpm repository content add --repository test --package-href /pulp/api/v3/content/rpm/packages/018e6776-e768-7f2e-bff5-8aeb5bf6e43b/
pulp rpm publication create --repository test
# now check the Last-Modified header
curl -sL --head http://localhost:24816/pulp/content/test/Packages/b/bear-4.1-1.noarch.rpm | grep "Last-Modified"
# upload the package again
pulp rpm content upload --file bear-4.1-1.noarch.rpm > /dev/null
# check the Last-Modified header again
curl -sL --head http://localhost:24816/pulp/content/test/Packages/b/bear-4.1-1.noarch.rpm | grep "Last-Modified"
```
Here's the output I see:
```
$ curl -sL --head http://localhost:24816/pulp/content/test/Packages/b/bear-4.1-1.noarch.rpm | grep "Last-Modified"
Last-Modified: Fri, 22 Mar 2024 18:40:29 GMT
$ pulp rpm content upload --file bear-4.1-1.noarch.rpm > /dev/null
Started background task /pulp/api/v3/tasks/018e6782-e5f5-7b25-9e6f-d1e240170130/
.Done.
$ curl -sL --head http://localhost:24816/pulp/content/test/Packages/b/bear-4.1-1.noarch.rpm | grep "Last-Modified"
Last-Modified: Fri, 22 Mar 2024 18:53:35 GMT
```
So the `Last Modified` header changes when a package is uploaded. I definitely think that reuploading a package shouldn't change its modified timestamp for various reasons.
Like Azure, there is an `AWS_S3_FILE_OVERWRITE` option (defaults to True), but setting it to False makes django-storages create new files in storage with random chars as a suffix, which is problematic.
Adding a check [here before save](https://github.com/pulp/pulpcore/blob/a1dd579e5aa831c1dd69b3dc028c6e9108bab06e/pulpcore/plugin/viewsets/content.py#L137) to try to fetch any existing artifact seems to fix the problem. I think what's happening is that django-storages updates the file in storage regardless of whether the artifact already exists or not.
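A simplified sketch of that check-before-save logic, mirroring the patch above; the `reuse_or_create_artifact` helper name is just for illustration, and the handling of concurrent saves from the real code is left out:
```python
from pulpcore.plugin.models import Artifact


def reuse_or_create_artifact(file, domain):
    artifact = Artifact.init_and_validate(file)
    try:
        # If this digest already exists in the domain, reuse it so the object
        # in backend storage is never rewritten (Last-Modified/ETag stay put).
        existing = Artifact.objects.get(sha256=artifact.sha256, pulp_domain=domain)
        existing.touch()
        return existing
    except Artifact.DoesNotExist:
        artifact.save()
        return artifact
```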
@daviddavis do you want to open a PR? | 2024-03-27T23:08:20 |
|
pulp/pulpcore | 5,234 | pulp__pulpcore-5234 | [
"3821",
"3821"
] | ff2a2a39bd2bf9b00dd2eea2770022f90997db56 | diff --git a/pulpcore/constants.py b/pulpcore/constants.py
--- a/pulpcore/constants.py
+++ b/pulpcore/constants.py
@@ -12,6 +12,7 @@
TASK_DISPATCH_LOCK = 21
TASK_SCHEDULING_LOCK = 42
TASK_UNBLOCKING_LOCK = 84
+TASK_METRICS_HEARTBEAT_LOCK = 74
#: All valid task states.
diff --git a/pulpcore/tasking/worker.py b/pulpcore/tasking/worker.py
--- a/pulpcore/tasking/worker.py
+++ b/pulpcore/tasking/worker.py
@@ -7,13 +7,15 @@
import signal
import socket
import contextlib
-from datetime import timedelta
+from datetime import datetime, timedelta
from multiprocessing import Process
from tempfile import TemporaryDirectory
from packaging.version import parse as parse_version
+from opentelemetry.metrics import get_meter
from django.conf import settings
from django.db import connection
+from django.db.models import Case, Count, F, Max, Value, When
from django.utils import timezone
from pulpcore.constants import (
@@ -21,6 +23,7 @@
TASK_INCOMPLETE_STATES,
TASK_SCHEDULING_LOCK,
TASK_UNBLOCKING_LOCK,
+ TASK_METRICS_HEARTBEAT_LOCK,
)
from pulpcore.exceptions import AdvisoryLockError
from pulpcore.app.apps import pulp_plugin_configs
@@ -38,12 +41,18 @@
_logger = logging.getLogger(__name__)
random.seed()
+# The following four constants are current "best guesses".
+# Unless/until we can provide reasonable ways to decide to change their values,
+# they will live as constants instead of "proper" settings.
+
# Number of heartbeats for a task to finish on graceful worker shutdown (approx)
TASK_GRACE_INTERVAL = 3
# Number of heartbeats between attempts to kill the subprocess (approx)
TASK_KILL_INTERVAL = 1
# Number of heartbeats between cleaning up worker processes (approx)
WORKER_CLEANUP_INTERVAL = 100
+# Threshold time in seconds of an unblocked task before we consider a queue stalled
+THRESHOLD_UNBLOCKED_WAITING_TIME = 5
class PulpcoreWorker:
@@ -55,7 +64,8 @@ def __init__(self):
self.task = None
self.name = f"{os.getpid()}@{socket.getfqdn()}"
- self.heartbeat_period = settings.WORKER_TTL / 3
+ self.heartbeat_period = timedelta(seconds=settings.WORKER_TTL / 3)
+ self.last_metric_heartbeat = timezone.now()
self.versions = {app.label: app.version for app in pulp_plugin_configs()}
self.cursor = connection.cursor()
self.worker = self.handle_worker_heartbeat()
@@ -64,6 +74,19 @@ def __init__(self):
WORKER_CLEANUP_INTERVAL / 10, WORKER_CLEANUP_INTERVAL
)
+ meter = get_meter(__name__)
+ self.tasks_unblocked_queue_meter = meter.create_gauge(
+ name="tasks_unblocked_queue",
+ description="Number of unblocked tasks waiting in the queue.",
+ unit="tasks",
+ )
+
+ self.tasks_longest_unblocked_time_meter = meter.create_gauge(
+ name="tasks_longest_unblocked_time",
+ description="The age of the longest waiting task.",
+ unit="seconds",
+ )
+
# Add a file descriptor to trigger select on signals
self.sentinel, sentinel_w = os.pipe()
os.set_blocking(self.sentinel, False)
@@ -90,6 +113,8 @@ def _signal_handler(self, thesignal, frame):
def _pg_notify_handler(self, notification):
if notification.channel == "pulp_worker_wakeup":
self.wakeup = True
+ elif notification.channel == "pulp_worker_metrics_heartbeat":
+ self.last_metric_heartbeat = datetime.fromisoformat(notification.payload)
elif self.task and notification.channel == "pulp_worker_cancel":
if notification.payload == str(self.task.pk):
self.cancel_task = True
@@ -140,7 +165,7 @@ def worker_cleanup(self):
qs.delete()
def beat(self):
- if self.worker.last_heartbeat < timezone.now() - timedelta(seconds=self.heartbeat_period):
+ if self.worker.last_heartbeat < timezone.now() - self.heartbeat_period:
self.worker = self.handle_worker_heartbeat()
if self.task_grace_timeout > 0:
self.task_grace_timeout -= 1
@@ -150,6 +175,7 @@ def beat(self):
self.worker_cleanup()
with contextlib.suppress(AdvisoryLockError), PGAdvisoryLock(TASK_SCHEDULING_LOCK):
dispatch_scheduled_tasks()
+ self.record_unblocked_waiting_tasks_metric()
def notify_workers(self):
self.cursor.execute("NOTIFY pulp_worker_wakeup")
@@ -223,7 +249,7 @@ def identify_unblocked_tasks(self):
_logger.debug("Marking canceling task %s unblocked.", task.pk)
task.unblock()
changed = True
- # Don't consider this tasks reosurces as held.
+ # Don't consider this task's resources as held.
continue
elif (
@@ -244,6 +270,7 @@ def identify_unblocked_tasks(self):
# Record the resources of the pending task
taken_exclusive_resources.update(exclusive_resources)
taken_shared_resources.update(shared_resources)
+
return changed
def iter_tasks(self):
@@ -293,7 +320,7 @@ def sleep(self):
_logger.debug(_("Worker %s entering sleep state."), self.name)
while not self.shutdown_requested and not self.wakeup:
r, w, x = select.select(
- [self.sentinel, connection.connection], [], [], self.heartbeat_period
+ [self.sentinel, connection.connection], [], [], self.heartbeat_period.seconds
)
self.beat()
if connection.connection in r:
@@ -329,7 +356,7 @@ def supervise_task(self, task):
[self.sentinel, connection.connection, task_process.sentinel],
[],
[],
- self.heartbeat_period,
+ self.heartbeat_period.seconds,
)
self.beat()
if connection.connection in r:
@@ -392,6 +419,45 @@ def handle_available_tasks(self):
keep_looping = True
self.supervise_task(task)
+ def record_unblocked_waiting_tasks_metric(self):
+ if os.getenv("PULP_OTEL_ENABLED").lower() != "true":
+ return
+
+ now = timezone.now()
+ if now > self.last_metric_heartbeat + self.heartbeat_period:
+ with contextlib.suppress(AdvisoryLockError), PGAdvisoryLock(
+ TASK_METRICS_HEARTBEAT_LOCK
+ ):
+ # For performance reasons we aggregate these statistics on a single database call.
+ unblocked_tasks_stats = (
+ Task.objects.filter(unblocked_at__isnull=False, started_at__isnull=True)
+ .annotate(unblocked_for=Value(timezone.now()) - F("unblocked_at"))
+ .aggregate(
+ longest_unblocked_waiting_time=Max(
+ "unblocked_for", default=timezone.timedelta(0)
+ ),
+ unblocked_tasks_count_gte_threshold=Count(
+ Case(
+ When(
+ unblocked_for__gte=Value(
+ timezone.timedelta(seconds=THRESHOLD_UNBLOCKED_WAITING_TIME)
+ ),
+ then=1,
+ )
+ )
+ ),
+ )
+ )
+
+ self.tasks_unblocked_queue_meter.set(
+ unblocked_tasks_stats["unblocked_tasks_count_gte_threshold"]
+ )
+ self.tasks_longest_unblocked_time_meter.set(
+ unblocked_tasks_stats["longest_unblocked_waiting_time"].seconds
+ )
+
+ self.cursor.execute(f"NOTIFY pulp_worker_metrics_heartbeat, '{str(now)}'")
+
def run(self, burst=False):
with WorkerDirectory(self.name):
signal.signal(signal.SIGINT, self._signal_handler)
@@ -400,6 +466,7 @@ def run(self, burst=False):
# Subscribe to pgsql channels
connection.connection.add_notify_handler(self._pg_notify_handler)
self.cursor.execute("LISTEN pulp_worker_cancel")
+ self.cursor.execute("LISTEN pulp_worker_metrics_heartbeat")
if burst:
self.handle_available_tasks()
else:
@@ -412,5 +479,6 @@ def run(self, burst=False):
break
self.sleep()
self.cursor.execute("UNLISTEN pulp_worker_wakeup")
+ self.cursor.execute("UNLISTEN pulp_worker_metrics_heartbeat")
self.cursor.execute("UNLISTEN pulp_worker_cancel")
self.shutdown()
| diff --git a/pulpcore/tests/functional/__init__.py b/pulpcore/tests/functional/__init__.py
--- a/pulpcore/tests/functional/__init__.py
+++ b/pulpcore/tests/functional/__init__.py
@@ -568,6 +568,37 @@ async def _send_request():
return _received_otel_span
[email protected](scope="session")
+def received_otel_metrics():
+ """A fixture for checking the presence of specific metrics on the otel collector server.
+
+ Ensure the collector server is up and running before executing tests with this fixture. To do
+ so, please, run the server as follows: python3 pulpcore/tests/functional/assets/otel_server.py
+ """
+
+ def _received_otel_metric(data, retries=3):
+ if os.environ.get("PULP_OTEL_ENABLED") != "true":
+ # pretend everything is working as expected if tests are run from
+ # a non-configured runner
+ return True
+
+ async def _send_request():
+ async with aiohttp.ClientSession(raise_for_status=False) as session:
+ otel_server_url = os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT")
+ async with session.post(f"{otel_server_url}/metrics_test", json=data) as response:
+ return response.status
+
+ while retries:
+ status = asyncio.run(_send_request())
+ if status == 200:
+ return True
+ sleep(2)
+ retries -= 1
+ return False
+
+ return _received_otel_metric
+
+
@pytest.fixture
def test_path():
return os.getenv("PYTEST_CURRENT_TEST").split()[0]
diff --git a/pulpcore/tests/functional/api/test_tasking.py b/pulpcore/tests/functional/api/test_tasking.py
--- a/pulpcore/tests/functional/api/test_tasking.py
+++ b/pulpcore/tests/functional/api/test_tasking.py
@@ -1,9 +1,11 @@
"""Tests related to the tasking system."""
+import os
import json
import pytest
import subprocess
import time
+
from aiohttp import BasicAuth
from urllib.parse import urljoin
from uuid import uuid4
@@ -349,3 +351,47 @@ def test_task_version_prevent_pickup(dispatch_task, pulpcore_bindings):
task = pulpcore_bindings.TasksApi.read(task_href)
assert task.state == "waiting"
pulpcore_bindings.TasksApi.tasks_cancel(task_href, {"state": "canceled"})
+
+
+def test_emmiting_unblocked_task_telemetry(
+ dispatch_task, pulpcore_bindings, pulp_settings, received_otel_metrics
+):
+ if os.getenv("PULP_OTEL_ENABLED").lower() != "true":
+ pytest.skip("Need PULP_OTEL_ENABLED to run this test.")
+
+ # Checking online workers ready to get a task
+ workers_online = pulpcore_bindings.WorkersApi.list(online="true").count
+
+ # We need to generate long running tasks to block the workers from executing other tasks
+ resident_task_hrefs = [
+ dispatch_task("pulpcore.app.tasks.test.sleep", args=(30,))
+ for worker in range(workers_online)
+ ]
+
+ # Then we dispatch a quick unblockable task just to keep it waiting in the queue
+ task_href = dispatch_task("pulpcore.app.tasks.test.sleep", args=(0,))
+
+ task = pulpcore_bindings.TasksApi.read(task_href)
+ assert task.state == "waiting"
+
+ # And trigger the metrics
+ assert received_otel_metrics(
+ {
+ "name": "tasks_unblocked_queue",
+ "description": "Number of unblocked tasks waiting in the queue.",
+ "unit": "tasks",
+ }
+ )
+
+ assert received_otel_metrics(
+ {
+ "name": "tasks_longest_unblocked_time",
+ "description": "The age of the longest waiting task.",
+ "unit": "seconds",
+ }
+ )
+
+ [
+ pulpcore_bindings.TasksApi.tasks_cancel(task_href, {"state": "canceled"})
+ for task_href in resident_task_hrefs
+ ]
diff --git a/pulpcore/tests/functional/assets/otel_server.py b/pulpcore/tests/functional/assets/otel_server.py
--- a/pulpcore/tests/functional/assets/otel_server.py
+++ b/pulpcore/tests/functional/assets/otel_server.py
@@ -8,6 +8,9 @@
from aiohttp import web
from opentelemetry.proto.trace.v1.trace_pb2 import TracesData
+from opentelemetry.proto.metrics.v1.metrics_pb2 import MetricsData
+
+_logger = logging.getLogger(__name__)
class ThreadedAiohttpServer(threading.Thread):
@@ -47,12 +50,13 @@ def _otel_collector():
or os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT") != "http://localhost:4318"
or os.environ.get("OTEL_EXPORTER_OTLP_PROTOCOL") != "http/protobuf"
):
- logging.info("Telemetry was not configured. Exiting...")
+ _logger.info("Telemetry was not configured. Exiting...")
sys.exit(0)
else:
- logging.info("Booting up the otel collector server...")
+ _logger.info("Booting up the otel collector server...")
spans = []
+ metrics = []
async def _null_handler(request):
raise web.HTTPOk()
@@ -70,6 +74,25 @@ async def _traces_handler(request):
spans.append(attrs)
raise web.HTTPOk()
+ async def _metrics_handler(request):
+ disabled_metrics = {"http.server.active_requests"}
+
+ metrics_data = MetricsData()
+ metrics_data.ParseFromString(await request.read())
+ for resource_metric in metrics_data.resource_metrics:
+ for scope_metric in resource_metric.scope_metrics:
+ for metric in scope_metric.metrics:
+ if metric.name in disabled_metrics:
+ _logger.info("Dropping {} metric".format(metric.name))
+ break
+ translated_metric = {}
+ translated_metric["name"] = metric.name
+ translated_metric["description"] = metric.description
+ translated_metric["unit"] = metric.unit
+ metrics.append(translated_metric)
+ _logger.info("Received a {} metric meter".format(translated_metric["name"]))
+ raise web.HTTPOk()
+
async def _test_handler(request):
try:
attrs = await request.json()
@@ -85,12 +108,37 @@ async def _test_handler(request):
else:
raise web.HTTPNotFound()
+ async def _metrics_test_handler(request):
+ try:
+ attrs = await request.json()
+ except json.decoder.JSONDecodeError:
+ raise web.HTTPNotFound()
+
+ matched_metric = next(
+ (
+ metric
+ for metric in metrics
+ if all((metric.get(key) == value for key, value in attrs.items()))
+ ),
+ None,
+ )
+ if matched_metric:
+ metrics.remove(matched_metric)
+ raise web.HTTPOk()
+ else:
+ raise web.HTTPNotFound()
+
+ async def _read_handler(request):
+ return web.Response(text=json.dumps(metrics))
+
app = web.Application()
app.add_routes(
[
- web.post("/v1/metrics", _null_handler),
+ web.post("/v1/metrics", _metrics_handler),
web.post("/v1/traces", _traces_handler),
web.post("/test", _test_handler),
+ web.post("/metrics_test", _metrics_test_handler),
+ web.get("/read", _read_handler),
]
)
| Adds open telemetry metrics for tasks
## Background / Need
It would be good to know more about the tasking system to answer the following question:
* I want to know if I have the right number of workers for a given queue at a specific point in time.
## Proposal
Report measurements from the tasking system, like:
* Number of unblocked waiting tasks waiting more than 5 seconds
* the longest waiting task time
Also, find the best place to trigger those measurements to avoid unneeded queries or any avoidable overload.
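The patch above implements this with two synchronous OpenTelemetry gauges emitted from the worker heartbeat loop. A minimal sketch of that gauge API; the value passed to `set()` here is a made-up example, while in the worker it comes from an aggregate query over waiting tasks:
```python
from opentelemetry.metrics import get_meter

meter = get_meter(__name__)
tasks_unblocked_queue = meter.create_gauge(
    name="tasks_unblocked_queue",
    description="Number of unblocked tasks waiting in the queue.",
    unit="tasks",
)
tasks_unblocked_queue.set(3)  # e.g. three unblocked tasks currently waiting
```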
| 2024-04-08T22:38:04 |
|
pulp/pulpcore | 5,245 | pulp__pulpcore-5245 | [
"5199"
] | bcf4e6ab1a6084d91f091fddf7820d5f8d416ed8 | diff --git a/pulpcore/app/models/upload.py b/pulpcore/app/models/upload.py
--- a/pulpcore/app/models/upload.py
+++ b/pulpcore/app/models/upload.py
@@ -9,11 +9,11 @@
from django.dispatch import receiver
from rest_framework import serializers
-from pulpcore.app.models import BaseModel, fields, storage
+from pulpcore.app.models import BaseModel, fields, storage, AutoAddObjPermsMixin
from pulpcore.app.util import get_domain_pk
-class Upload(BaseModel):
+class Upload(BaseModel, AutoAddObjPermsMixin):
"""
A chunked upload. Stores chunks until used to create an artifact, etc.
| diff --git a/pulpcore/tests/functional/__init__.py b/pulpcore/tests/functional/__init__.py
--- a/pulpcore/tests/functional/__init__.py
+++ b/pulpcore/tests/functional/__init__.py
@@ -840,31 +840,31 @@ def _role_factory(**kwargs):
@pytest.fixture
-def gen_user(bindings_cfg, users_api_client, users_roles_api_client, gen_object_with_cleanup):
+def gen_user(bindings_cfg, pulpcore_bindings, gen_object_with_cleanup):
class user_context:
def __init__(self, username=None, model_roles=None, object_roles=None, domain_roles=None):
self.username = username or str(uuid.uuid4())
self.password = str(uuid.uuid4())
self.user = gen_object_with_cleanup(
- users_api_client, {"username": self.username, "password": self.password}
+ pulpcore_bindings.UsersApi, {"username": self.username, "password": self.password}
)
self._saved_credentials = []
if model_roles:
for role in model_roles:
- users_roles_api_client.create(
+ pulpcore_bindings.UsersRolesApi.create(
auth_user_href=self.user.pulp_href,
user_role={"role": role, "domain": None, "content_object": None},
)
if domain_roles:
for role, domain in domain_roles:
- users_roles_api_client.create(
+ pulpcore_bindings.UsersRolesApi.create(
auth_user_href=self.user.pulp_href,
user_role={"role": role, "domain": domain, "content_object": None},
)
if object_roles:
for role, content_object in object_roles:
- users_roles_api_client.create(
+ pulpcore_bindings.UsersRolesApi.create(
auth_user_href=self.user.pulp_href,
user_role={"role": role, "domain": None, "content_object": content_object},
)
diff --git a/pulpcore/tests/functional/api/test_upload.py b/pulpcore/tests/functional/api/test_upload.py
--- a/pulpcore/tests/functional/api/test_upload.py
+++ b/pulpcore/tests/functional/api/test_upload.py
@@ -1,4 +1,5 @@
"""Tests related to content upload."""
+
import hashlib
import uuid
import pytest
@@ -183,3 +184,16 @@ def test_delete_upload(
uploads_api_client.read(upload.pulp_href)
assert e.value.status == 404
+
+
+def test_upload_owner(pulpcore_bindings, gen_user, gen_object_with_cleanup):
+ user = gen_user(model_roles=["core.upload_creator"])
+ with user:
+ upload = gen_object_with_cleanup(pulpcore_bindings.UploadsApi, {"size": 1024})
+ pulpcore_bindings.UploadsApi.read(upload.pulp_href)
+ assert set(pulpcore_bindings.UploadsApi.my_permissions(upload.pulp_href).permissions) == {
+ "core.view_upload",
+ "core.change_upload",
+ "core.delete_upload",
+ "core.manage_roles_upload",
+ }
| RBAC denies upload unless chunk size is specified
**Version**
Deployed on K8s via Operator
```
{
"versions": [
{
"component": "core",
"version": "3.49.1",
"package": "pulpcore",
"module": "pulpcore.app",
"domain_compatible": true
},
{
"component": "ansible",
"version": "0.21.3",
"package": "pulp-ansible",
"module": "pulp_ansible.app",
"domain_compatible": false
},
{
"component": "container",
"version": "2.19.2",
"package": "pulp-container",
"module": "pulp_container.app",
"domain_compatible": false
},
{
"component": "deb",
"version": "3.2.0",
"package": "pulp_deb",
"module": "pulp_deb.app",
"domain_compatible": false
},
{
"component": "maven",
"version": "0.8.0",
"package": "pulp-maven",
"module": "pulp_maven.app",
"domain_compatible": false
},
{
"component": "ostree",
"version": "2.3.0",
"package": "pulp-ostree",
"module": "pulp_ostree.app",
"domain_compatible": true
},
{
"component": "python",
"version": "3.11.0",
"package": "pulp-python",
"module": "pulp_python.app",
"domain_compatible": false
},
{
"component": "rpm",
"version": "3.25.1",
"package": "pulp-rpm",
"module": "pulp_rpm.app",
"domain_compatible": true
},
{
"component": "certguard",
"version": "3.49.1",
"package": "pulpcore",
"module": "pulp_certguard.app",
"domain_compatible": true
},
{
"component": "file",
"version": "3.49.1",
"package": "pulpcore",
"module": "pulp_file.app",
"domain_compatible": true
}
],
```
**Describe the bug**
While uploading some files (I haven't been able to exactly pin down what they have in common yet) as a non-admin user, we get `Error: {"detail":"You do not have permission to perform this action."}` while doing a `pulp file content upload` despite permissions looking fine across the board. Specifying a _sufficiently high_ chunk size is the only thing that seems to resolve it. For example:
```
~$ wget https://github.com/mstorsjo/llvm-mingw/releases/download/20231128/llvm-mingw-20231128-ucrt-ubuntu-20.04-x86_64.tar.xz
~$ pulp --config /tmp/config.toml file content upload --file llvm-mingw-20231128-ucrt-ubuntu-20.04-x86_64.tar.xz --relative-path llvm-mingw-20231128-ucrt-ubuntu-20.04-x86_64.tar.xz --repository file-local --chunk-size 10000000
Error: {"detail":"You do not have permission to perform this action."}
~$ stat --printf="%n,%s" llvm-mingw-20231128-ucrt-ubuntu-20.04-x86_64.tar.xz
llvm-mingw-20231128-ucrt-ubuntu-20.04-x86_64.tar.xz,72800008
~$ pulp --config /tmp/config.toml file content upload --file llvm-mingw-20231128-ucrt-ubuntu-20.04-x86_64.tar.xz --relative-path llvm-mingw-20231128-ucrt-ubuntu-20.04-x86_64.tar.xz --repository file-local --chunk-size 100000000
Started background task /pulp/api/v3/tasks/018e860b-bde4-7a59-bacd-710eec7c94bd/
Done.
<snip>
```
Interestingly, I _don't_ see this behavior when using the admin user, only a user that we created, which makes me think this is some permission I missed when creating the user, but I have no idea what it would be.
**To Reproduce**
1. Create a file repository and distribution (with autopublish).
2. Create a user with the following roles for the repository and distribution: `file.filedistribution_creator`, `file.filedistribution_owner`, `file.filerepository_creator`, `file.filerepository_owner`.
3. Download a large file of an affected type (whatever that is... .tar.xz seems to trigger it) and try to upload without `--chunk-size` set.
4. Upload should fail with permissions error.
**Expected behavior**
Upload should happen without an error.
**Additional context**
N/A
| I suspect the user is missing permissions to create the upload. So can you verify that adding the upload_creator role would solve the problem?
In the long run, we should probably include the `upload_create` permission in all the content upload roles.
Adding `core.upload_creator` role to the user (even with `--object ""`) without `--chunk-size` didn't resolve the issue. I was in a bit of a time crunch so I just gave `file.filedistribution_owner`, `file.filerepository_owner`, and `core.upload_owner` globally to the user, and it started working, so I'm not sure which of the three was the one that mattered. I assume it was the third one but I'll confirm tomorrow.
It's probably worth mentioning that the path in question wasn't in the top level of the repository but in a directory (i.e., `coverage/myfile.txt`) that already existed and contained files. I assumed the permissions would apply to all subpaths within the repository but is that not the case or something? I did see a similar issue [here](https://github.com/pulp/pulp_container/issues/1588), which feels very similar, although it's obviously in pulp_container instead.
> It's probably worth mentioning that the path in question wasn't in the top level of the repository but in a directory (i.e., `coverage/myfile.txt`) that already existed and contained files.
Permissions are scoped to the repository. So that should not matter.
Thanks for investigating. I'll have a look myself.
With the `file.filerepository_creator` and the core.upload_creator` roles, i get the following result:
```
pulp -vv -p user1 file content upload --file Makefile --relative-path Makefile --repository user1_test --chunk-size 10
uploads_create : post http://localhost:5001/pulp/api/v3/uploads/
User-Agent: Pulp-CLI/0.25.0.dev
Accept-Encoding: gzip, deflate
Accept: application/json
Connection: keep-alive
Content-Length: 13
Content-Type: application/json
Authorization: Basic dXNlcjE6SXRvYUI4Y2g=
Response: 201
Server: nginx/1.22.1
Date: Thu, 11 Apr 2024 10:23:40 GMT
Content-Type: application/json
Content-Length: 180
Connection: keep-alive
Location: /pulp/api/v3/uploads/018eccaf-3c4f-7a10-bbdd-4fd1101427bf/
Vary: Accept
Allow: GET, POST, HEAD
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
Referrer-Policy: same-origin
Cross-Origin-Opener-Policy: same-origin
Correlation-ID: c4d2bced1fb24cceb2a790cc4ed78146
Access-Control-Expose-Headers: Correlation-ID
uploads_read : get http://localhost:5001/pulp/api/v3/uploads/018eccaf-3c4f-7a10-bbdd-4fd1101427bf/
User-Agent: Pulp-CLI/0.25.0.dev
Accept-Encoding: gzip, deflate
Accept: application/json
Connection: keep-alive
Correlation-ID: c4d2bced1fb24cceb2a790cc4ed78146
Authorization: Basic dXNlcjE6SXRvYUI4Y2g=
Response: 403
Server: nginx/1.22.1
Date: Thu, 11 Apr 2024 10:23:40 GMT
Content-Type: application/json
Content-Length: 63
Connection: keep-alive
Vary: Accept
Allow: GET, HEAD, PUT, DELETE
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
Referrer-Policy: same-origin
Cross-Origin-Opener-Policy: same-origin
Correlation-ID: c4d2bced1fb24cceb2a790cc4ed78146
Access-Control-Expose-Headers: Correlation-ID
uploads_delete : delete http://localhost:5001/pulp/api/v3/uploads/018eccaf-3c4f-7a10-bbdd-4fd1101427bf/
User-Agent: Pulp-CLI/0.25.0.dev
Accept-Encoding: gzip, deflate
Accept: application/json
Connection: keep-alive
Correlation-ID: c4d2bced1fb24cceb2a790cc4ed78146
Content-Length: 0
Authorization: Basic dXNlcjE6SXRvYUI4Y2g=
Response: 403
Server: nginx/1.22.1
Date: Thu, 11 Apr 2024 10:23:40 GMT
Content-Type: application/json
Content-Length: 63
Connection: keep-alive
Vary: Accept
Allow: GET, HEAD, PUT, DELETE
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
Referrer-Policy: same-origin
Cross-Origin-Opener-Policy: same-origin
Correlation-ID: c4d2bced1fb24cceb2a790cc4ed78146
Access-Control-Expose-Headers: Correlation-ID
Error: {"detail":"You do not have permission to perform this action."}
```
It looks like we are allowed to create the upload, but user1 did not become its owner.
`pulp user role-assignment list --username user1` also shows that the `core.upload_owner` role is missing for the user. | 2024-04-11T10:43:48 |
pulp/pulpcore | 5,246 | pulp__pulpcore-5246 | [
"5199"
] | e50d28fed88e86f6843f8c14680e390b5d30efbd | diff --git a/pulpcore/app/models/upload.py b/pulpcore/app/models/upload.py
--- a/pulpcore/app/models/upload.py
+++ b/pulpcore/app/models/upload.py
@@ -9,11 +9,11 @@
from django.dispatch import receiver
from rest_framework import serializers
-from pulpcore.app.models import BaseModel, fields, storage
+from pulpcore.app.models import BaseModel, fields, storage, AutoAddObjPermsMixin
from pulpcore.app.util import get_domain_pk
-class Upload(BaseModel):
+class Upload(BaseModel, AutoAddObjPermsMixin):
"""
A chunked upload. Stores chunks until used to create an artifact, etc.
| diff --git a/pulpcore/tests/functional/__init__.py b/pulpcore/tests/functional/__init__.py
--- a/pulpcore/tests/functional/__init__.py
+++ b/pulpcore/tests/functional/__init__.py
@@ -840,31 +840,31 @@ def _role_factory(**kwargs):
@pytest.fixture
-def gen_user(bindings_cfg, users_api_client, users_roles_api_client, gen_object_with_cleanup):
+def gen_user(bindings_cfg, pulpcore_bindings, gen_object_with_cleanup):
class user_context:
def __init__(self, username=None, model_roles=None, object_roles=None, domain_roles=None):
self.username = username or str(uuid.uuid4())
self.password = str(uuid.uuid4())
self.user = gen_object_with_cleanup(
- users_api_client, {"username": self.username, "password": self.password}
+ pulpcore_bindings.UsersApi, {"username": self.username, "password": self.password}
)
self._saved_credentials = []
if model_roles:
for role in model_roles:
- users_roles_api_client.create(
+ pulpcore_bindings.UsersRolesApi.create(
auth_user_href=self.user.pulp_href,
user_role={"role": role, "domain": None, "content_object": None},
)
if domain_roles:
for role, domain in domain_roles:
- users_roles_api_client.create(
+ pulpcore_bindings.UsersRolesApi.create(
auth_user_href=self.user.pulp_href,
user_role={"role": role, "domain": domain, "content_object": None},
)
if object_roles:
for role, content_object in object_roles:
- users_roles_api_client.create(
+ pulpcore_bindings.UsersRolesApi.create(
auth_user_href=self.user.pulp_href,
user_role={"role": role, "domain": None, "content_object": content_object},
)
diff --git a/pulpcore/tests/functional/api/test_upload.py b/pulpcore/tests/functional/api/test_upload.py
--- a/pulpcore/tests/functional/api/test_upload.py
+++ b/pulpcore/tests/functional/api/test_upload.py
@@ -184,3 +184,16 @@ def test_delete_upload(
uploads_api_client.read(upload.pulp_href)
assert e.value.status == 404
+
+
+def test_upload_owner(pulpcore_bindings, gen_user, gen_object_with_cleanup):
+ user = gen_user(model_roles=["core.upload_creator"])
+ with user:
+ upload = gen_object_with_cleanup(pulpcore_bindings.UploadsApi, {"size": 1024})
+ pulpcore_bindings.UploadsApi.read(upload.pulp_href)
+ assert set(pulpcore_bindings.UploadsApi.my_permissions(upload.pulp_href).permissions) == {
+ "core.view_upload",
+ "core.change_upload",
+ "core.delete_upload",
+ "core.manage_roles_upload",
+ }
| RBAC denies upload unless chunk size is specified
**Version**
Deployed on K8s via Operator
```
{
"versions": [
{
"component": "core",
"version": "3.49.1",
"package": "pulpcore",
"module": "pulpcore.app",
"domain_compatible": true
},
{
"component": "ansible",
"version": "0.21.3",
"package": "pulp-ansible",
"module": "pulp_ansible.app",
"domain_compatible": false
},
{
"component": "container",
"version": "2.19.2",
"package": "pulp-container",
"module": "pulp_container.app",
"domain_compatible": false
},
{
"component": "deb",
"version": "3.2.0",
"package": "pulp_deb",
"module": "pulp_deb.app",
"domain_compatible": false
},
{
"component": "maven",
"version": "0.8.0",
"package": "pulp-maven",
"module": "pulp_maven.app",
"domain_compatible": false
},
{
"component": "ostree",
"version": "2.3.0",
"package": "pulp-ostree",
"module": "pulp_ostree.app",
"domain_compatible": true
},
{
"component": "python",
"version": "3.11.0",
"package": "pulp-python",
"module": "pulp_python.app",
"domain_compatible": false
},
{
"component": "rpm",
"version": "3.25.1",
"package": "pulp-rpm",
"module": "pulp_rpm.app",
"domain_compatible": true
},
{
"component": "certguard",
"version": "3.49.1",
"package": "pulpcore",
"module": "pulp_certguard.app",
"domain_compatible": true
},
{
"component": "file",
"version": "3.49.1",
"package": "pulpcore",
"module": "pulp_file.app",
"domain_compatible": true
}
],
```
**Describe the bug**
While uploading some files (I haven't been able to exactly pin down what they have in common yet) as a non-admin user, we get `Error: {"detail":"You do not have permission to perform this action."}` while doing a `pulp file content upload` despite permissions looking fine across the board. Specifying a _sufficiently high_ chunk size is the only thing that seems to resolve it. For example:
```
~$ wget https://github.com/mstorsjo/llvm-mingw/releases/download/20231128/llvm-mingw-20231128-ucrt-ubuntu-20.04-x86_64.tar.xz
~$ pulp --config /tmp/config.toml file content upload --file llvm-mingw-20231128-ucrt-ubuntu-20.04-x86_64.tar.xz --relative-path llvm-mingw-20231128-ucrt-ubuntu-20.04-x86_64.tar.xz --repository file-local --chunk-size 10000000
Error: {"detail":"You do not have permission to perform this action."}
~$ stat --printf="%n,%s" llvm-mingw-20231128-ucrt-ubuntu-20.04-x86_64.tar.xz
llvm-mingw-20231128-ucrt-ubuntu-20.04-x86_64.tar.xz,72800008
~$ pulp --config /tmp/config.toml file content upload --file llvm-mingw-20231128-ucrt-ubuntu-20.04-x86_64.tar.xz --relative-path llvm-mingw-20231128-ucrt-ubuntu-20.04-x86_64.tar.xz --repository file-local --chunk-size 100000000
Started background task /pulp/api/v3/tasks/018e860b-bde4-7a59-bacd-710eec7c94bd/
Done.
<snip>
```
Interestingly, I _don't_ see this behavior when using the admin user, only a user that we created, which makes me think this is some permission I missed when creating the user, but I have no idea what it would be.
**To Reproduce**
1. Create a file repository and distribution (with autopublish).
2. Create a user with the following roles for the repository and distribution: `file.filedistribution_creator`, `file.filedistribution_owner`, `file.filerepository_creator`, `file.filerepository_owner`.
3. Download a large file of an affected type (whatever that is... .tar.xz seems to trigger it) and try to upload without `--chunk-size` set.
4. Upload should fail with permissions error.
**Expected behavior**
Upload should happen without an error.
**Additional context**
N/A
| I suspect the user is missing permissions to create the upload. So can you verify that adding the upload_creator role would solve the problem?
In the long run, we should probably include the `upload_create` permission in all the content upload roles.
Adding `core.upload_creator` role to the user (even with `--object ""`) without `--chunk-size` didn't resolve the issue. I was in a bit of a time crunch so I just gave `file.filedistribution_owner`, `file.filerepository_owner`, and `core.upload_owner` globally to the user, and it started working, so I'm not sure which of the three was the one that mattered. I assume it was the third one but I'll confirm tomorrow.
It's probably worth mentioning that the path in question wasn't in the top level of the repository but in a directory (i.e., `coverage/myfile.txt`) that already existed and contained files. I assumed the permissions would apply to all subpaths within the repository but is that not the case or something? I did see a similar issue [here](https://github.com/pulp/pulp_container/issues/1588), which feels very similar, although it's obviously in pulp_container instead.
> It's probably worth mentioning that the path in question wasn't in the top level of the repository but in a directory (i.e., `coverage/myfile.txt`) that already existed and contained files.
Permissions are scoped to the repository. So that should not matter.
Thanks for investigating. I'll have a look myself.
With the `file.filerepository_creator` and the core.upload_creator` roles, i get the following result:
```
pulp -vv -p user1 file content upload --file Makefile --relative-path Makefile --repository user1_test --chunk-size 10
uploads_create : post http://localhost:5001/pulp/api/v3/uploads/
User-Agent: Pulp-CLI/0.25.0.dev
Accept-Encoding: gzip, deflate
Accept: application/json
Connection: keep-alive
Content-Length: 13
Content-Type: application/json
Authorization: Basic dXNlcjE6SXRvYUI4Y2g=
Response: 201
Server: nginx/1.22.1
Date: Thu, 11 Apr 2024 10:23:40 GMT
Content-Type: application/json
Content-Length: 180
Connection: keep-alive
Location: /pulp/api/v3/uploads/018eccaf-3c4f-7a10-bbdd-4fd1101427bf/
Vary: Accept
Allow: GET, POST, HEAD
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
Referrer-Policy: same-origin
Cross-Origin-Opener-Policy: same-origin
Correlation-ID: c4d2bced1fb24cceb2a790cc4ed78146
Access-Control-Expose-Headers: Correlation-ID
uploads_read : get http://localhost:5001/pulp/api/v3/uploads/018eccaf-3c4f-7a10-bbdd-4fd1101427bf/
User-Agent: Pulp-CLI/0.25.0.dev
Accept-Encoding: gzip, deflate
Accept: application/json
Connection: keep-alive
Correlation-ID: c4d2bced1fb24cceb2a790cc4ed78146
Authorization: Basic dXNlcjE6SXRvYUI4Y2g=
Response: 403
Server: nginx/1.22.1
Date: Thu, 11 Apr 2024 10:23:40 GMT
Content-Type: application/json
Content-Length: 63
Connection: keep-alive
Vary: Accept
Allow: GET, HEAD, PUT, DELETE
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
Referrer-Policy: same-origin
Cross-Origin-Opener-Policy: same-origin
Correlation-ID: c4d2bced1fb24cceb2a790cc4ed78146
Access-Control-Expose-Headers: Correlation-ID
uploads_delete : delete http://localhost:5001/pulp/api/v3/uploads/018eccaf-3c4f-7a10-bbdd-4fd1101427bf/
User-Agent: Pulp-CLI/0.25.0.dev
Accept-Encoding: gzip, deflate
Accept: application/json
Connection: keep-alive
Correlation-ID: c4d2bced1fb24cceb2a790cc4ed78146
Content-Length: 0
Authorization: Basic dXNlcjE6SXRvYUI4Y2g=
Response: 403
Server: nginx/1.22.1
Date: Thu, 11 Apr 2024 10:23:40 GMT
Content-Type: application/json
Content-Length: 63
Connection: keep-alive
Vary: Accept
Allow: GET, HEAD, PUT, DELETE
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
Referrer-Policy: same-origin
Cross-Origin-Opener-Policy: same-origin
Correlation-ID: c4d2bced1fb24cceb2a790cc4ed78146
Access-Control-Expose-Headers: Correlation-ID
Error: {"detail":"You do not have permission to perform this action."}
```
It looks like we are allowed to create the upload, but user1 did not become its owner.
`pulp user role-assignment list --username user1` also shows that the `core.upload_owner` role is missing for the user. | 2024-04-11T16:01:05 |
pulp/pulpcore | 5,269 | pulp__pulpcore-5269 | [
"5243"
] | a2a397f5ee8d256b348d20b814ddf225d813f1f0 | diff --git a/pulpcore/app/urls.py b/pulpcore/app/urls.py
--- a/pulpcore/app/urls.py
+++ b/pulpcore/app/urls.py
@@ -13,7 +13,13 @@
from rest_framework.routers import APIRootView
from pulpcore.app.apps import pulp_plugin_configs
-from pulpcore.app.views import OrphansView, PulpImporterImportCheckView, RepairView, StatusView
+from pulpcore.app.views import (
+ LivezView,
+ OrphansView,
+ PulpImporterImportCheckView,
+ RepairView,
+ StatusView,
+)
from pulpcore.app.viewsets import (
ListRepositoryVersionViewSet,
OrphansCleanupViewset,
@@ -157,6 +163,7 @@ class PulpDefaultRouter(routers.DefaultRouter):
]
docs_and_status = [
+ path("livez/", LivezView.as_view()),
path("status/", StatusView.as_view()),
path(
"docs/api.json",
diff --git a/pulpcore/app/views/__init__.py b/pulpcore/app/views/__init__.py
--- a/pulpcore/app/views/__init__.py
+++ b/pulpcore/app/views/__init__.py
@@ -1,4 +1,4 @@
from .orphans import OrphansView
-from .status import StatusView
+from .status import LivezView, StatusView
from .repair import RepairView
from .importer import PulpImporterImportCheckView
diff --git a/pulpcore/app/views/status.py b/pulpcore/app/views/status.py
--- a/pulpcore/app/views/status.py
+++ b/pulpcore/app/views/status.py
@@ -136,3 +136,24 @@ def _get_redis_conn_status():
return False
else:
return True
+
+
+class LivezView(APIView):
+ """
+ Liveness Probe for the REST API.
+ """
+
+ # allow anyone to access the liveness api
+ authentication_classes = []
+ permission_classes = []
+
+ @extend_schema(
+ summary="Inspect liveness of Pulp's REST API.",
+ operation_id="livez_read",
+ responses={200: None},
+ )
+ def get(self, request):
+ """
+ Returns 200 OK when API is alive.
+ """
+ return Response()
| diff --git a/pulpcore/tests/functional/api/test_status.py b/pulpcore/tests/functional/api/test_status.py
--- a/pulpcore/tests/functional/api/test_status.py
+++ b/pulpcore/tests/functional/api/test_status.py
@@ -175,3 +175,15 @@ def verify_get_response(status, expected_schema):
else:
assert status["storage"]["free"] is not None
assert status["storage"]["total"] is not None
+
+
[email protected]
+def test_livez_unauthenticated(
+ pulpcore_bindings,
+ anonymous_user,
+):
+ """
+ Assert that GET requests to Livez API return 200 without a response body.
+ """
+ with anonymous_user:
+ assert pulpcore_bindings.LivezApi.livez_read() is None
| Have a lean way to check for the health of a pulp instance
**Is your feature request related to a problem? Please describe.**
Right now, when we want to check how healthy a pulp instance is, we hit the `/status/` endpoint, which returns:
- The pulpcore version
- plugins installed and its versions
- space available
- number of api/content/workers
- db/redis connection
- domain enabled
- and so on...
Although this information is really useful when you're interacting with a pulp instance, for automated systems that need to check it from time to time this doesn't seem to be needed or even wanted.
**Describe the solution you'd like**
What if Pulp could be checked with just a HEAD request? 200 if a minimum number of workers are online and a 400-class response if something is wrong?
No body on the response. No need to serialize anything.
Just the minimal amount of things needed to understand that Pulp is alive and kicking.
**Describe alternatives you've considered**
We could have the HEAD method on the status endpoint itself.
HEAD if you just want a lean check, and GET if you need all possible data from your instance.
This example is just to provoke the discussion:
```python
def head(self, *args):
    # Redis only matters for the health check when caching is enabled.
    if settings.CACHE_ENABLED:
        redis_status = self._get_redis_conn_status()
    else:
        redis_status = True

    db_status = self._get_db_conn_status()

    online_workers = Worker.objects.online().exists()
    online_api_apps = ApiAppStatus.objects.online().exists()
    online_content_apps = ContentAppStatus.objects.online().exists()

    healthy_stack = all(
        [online_content_apps, online_api_apps, online_workers, db_status, redis_status]
    )

    if healthy_stack:
        return Response(status=200)
    return Response(status=409)
```
| What if for the HEAD request we just return 200 OK without doing anything? That's as lightweight as it gets.
> What if for the HEAD request we just return 200 OK without doing anything? That's as light weight as it gets.
Neat, but dangerous. What is an API without database access?
Can we say it's really alive?
I just tried this out. I modified the status viewset to just return `Response("ok")` and when I take down the database I get a 500 error back. I also see the following in the logs:
```
pulp [d6479c3347714d5390c54a7f327410df]: django.request:ERROR: Internal Server Error: /pulp/api/v3/status/
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 289, in ensure_connection
self.connect()
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 270, in connect
self.connection = self.get_new_connection(conn_params)
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/postgresql/base.py", line 275, in get_new_connection
connection = self.Database.connect(**conn_params)
File "/usr/local/lib/python3.9/site-packages/psycopg/connection.py", line 748, in connect
raise last_ex.with_traceback(None)
psycopg.OperationalError: connection is bad: No such file or directory
Is the server running locally and accepting connections on that socket?
```
So I think we can just rely on Django using the database when handling the request to know that the database connection is working.
I think you can use the fields query parameter to get less information.
Reverted the initial feature implementation because it was not compatible with Kubernetes livenessProbe.
Let's add a new API endpoint at `/pulp/api/v3/livez` that will respond with 200 OK and an empty response body. | 2024-04-16T20:13:24 |
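With that endpoint in place, a liveness probe only needs an unauthenticated GET that returns 200 with an empty body. A trivial client-side check might look like this; the base URL is a placeholder and the path assumes the default `/pulp/api/v3/` API root:
```python
import requests


def pulp_is_alive(base_url: str = "http://localhost:24817") -> bool:
    # 200 with an empty body means the REST API process is up and serving.
    response = requests.get(f"{base_url}/pulp/api/v3/livez/", timeout=5)
    return response.status_code == 200
```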
pulp/pulpcore | 5,272 | pulp__pulpcore-5272 | [
"5267"
] | a9d92fd258966eb95fec8a8e21edb6c438ed1e34 | diff --git a/pulpcore/app/models/access_policy.py b/pulpcore/app/models/access_policy.py
--- a/pulpcore/app/models/access_policy.py
+++ b/pulpcore/app/models/access_policy.py
@@ -67,10 +67,14 @@ def __init__(self, *args, **kwargs):
@hook("after_create")
def add_perms(self):
- viewset = get_viewset_for_model(self)
- for permission_class in viewset.get_permissions(viewset):
- if hasattr(permission_class, "handle_creation_hooks"):
- permission_class.handle_creation_hooks(self)
+ try:
+ viewset = get_viewset_for_model(self)
+ except LookupError:
+ pass
+ else:
+ for permission_class in viewset.get_permissions(viewset):
+ if hasattr(permission_class, "handle_creation_hooks"):
+ permission_class.handle_creation_hooks(self)
def add_roles_for_users(self, roles, users):
"""
| diff --git a/pulpcore/tests/unit/models/test_base.py b/pulpcore/tests/unit/models/test_base.py
--- a/pulpcore/tests/unit/models/test_base.py
+++ b/pulpcore/tests/unit/models/test_base.py
@@ -1,12 +1,8 @@
import pytest
from uuid import uuid4
-from pulpcore.app.models import Repository
-
-try:
- from pulp_file.app.models import FileRepository, FileRemote
-except ImportError:
- pytestmark = pytest.mark.skip("These tests need pulp_file to be installed.")
+from pulpcore.app.models import AutoAddObjPermsMixin, Repository
+from pulp_file.app.models import FileRepository, FileRemote
@pytest.mark.django_db
@@ -61,3 +57,15 @@ def test_cast(django_assert_num_queries):
def test_get_model_for_pulp_type():
assert Repository.get_model_for_pulp_type("core.repository") is Repository
assert Repository.get_model_for_pulp_type("file.file") is FileRepository
+
+
+class PermissionRepository(Repository, AutoAddObjPermsMixin):
+ class Meta:
+ app_label = "test"
+ default_related_name = "permission_repository"
+ proxy = True
+
+
[email protected]_db
+def test_resiliant_auto_perms():
+ PermissionRepository(name="auto permission test").save()
| Cannot upload content via pulp-container because of the change made to the `Upload` model
The following commit broke the pulp-container upload: https://github.com/pulp/pulpcore/commit/9192c2bf0ccb0e0a2df595fd3efdd0980c80ff34.
Traceback:
```
pulp_1 | pulp [adbae673f9b7498d8240989c1bba93ff]: django.request:ERROR: Internal Server Error: /v2/myorg/mygroup/ubuntu/blobs/uploads/
pulp_1 | Traceback (most recent call last):
pulp_1 | File "/usr/local/lib/python3.11/site-packages/django/core/handlers/exception.py", line 55, in inner
pulp_1 | response = get_response(request)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/usr/local/lib/python3.11/site-packages/django/core/handlers/base.py", line 197, in _get_response
pulp_1 | response = wrapped_callback(request, *callback_args, **callback_kwargs)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/usr/local/lib/python3.11/site-packages/django/views/decorators/csrf.py", line 56, in wrapper_view
pulp_1 | return view_func(*args, **kwargs)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/usr/local/lib/python3.11/site-packages/rest_framework/viewsets.py", line 124, in view
pulp_1 | return self.dispatch(request, *args, **kwargs)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/usr/local/lib/python3.11/site-packages/rest_framework/views.py", line 509, in dispatch
pulp_1 | response = self.handle_exception(exc)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/src/pulp_container/pulp_container/app/registry_api.py", line 271, in handle_exception
pulp_1 | response = super().handle_exception(exc)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/usr/local/lib/python3.11/site-packages/rest_framework/views.py", line 469, in handle_exception
pulp_1 | self.raise_uncaught_exception(exc)
pulp_1 | File "/usr/local/lib/python3.11/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
pulp_1 | raise exc
pulp_1 | File "/usr/local/lib/python3.11/site-packages/rest_framework/views.py", line 506, in dispatch
pulp_1 | response = handler(request, *args, **kwargs)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/src/pulp_container/pulp_container/app/registry_api.py", line 758, in create
pulp_1 | upload.save()
pulp_1 | File "/usr/lib64/python3.11/contextlib.py", line 81, in inner
pulp_1 | return func(*args, **kwds)
pulp_1 | ^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/usr/local/lib/python3.11/site-packages/django_lifecycle/mixins.py", line 196, in save
pulp_1 | self._run_hooked_methods(AFTER_CREATE, **kwargs)
pulp_1 | File "/usr/local/lib/python3.11/site-packages/django_lifecycle/mixins.py", line 312, in _run_hooked_methods
pulp_1 | method.run(self)
pulp_1 | File "/usr/local/lib/python3.11/site-packages/django_lifecycle/mixins.py", line 46, in run
pulp_1 | self.method(instance)
pulp_1 | File "/usr/local/lib/python3.11/site-packages/django_lifecycle/decorators.py", line 119, in func
pulp_1 | hooked_method(*args, **kwargs)
pulp_1 | File "/src/pulpcore/pulpcore/app/models/access_policy.py", line 70, in add_perms
pulp_1 | viewset = get_viewset_for_model(self)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/src/pulpcore/pulpcore/app/util.py", line 188, in get_viewset_for_model
pulp_1 | raise LookupError("Could not determine ViewSet base name for model {}".format(model_class))
```
This is always reproducible when pushing any image to the Pulp Container Registry.
Affected code:
https://github.com/pulp/pulp_container/blob/742acc52f8fc44c4d18a41621455b21e2b9133ec/pulp_container/app/models.py#L804
| Solutions considered:
1. make the post create hook not fail in that case
2. teach pulp_container to do something about it
3. we need a common super class without the mixin +1
@mdellweg | 2024-04-17T08:00:38 |
pulp/pulpcore | 5,273 | pulp__pulpcore-5273 | [
"5267"
] | 4fd5aff44a6aed6a49f3aa6ead9116a7edc1bcac | diff --git a/pulpcore/app/models/access_policy.py b/pulpcore/app/models/access_policy.py
--- a/pulpcore/app/models/access_policy.py
+++ b/pulpcore/app/models/access_policy.py
@@ -67,10 +67,14 @@ def __init__(self, *args, **kwargs):
@hook("after_create")
def add_perms(self):
- viewset = get_viewset_for_model(self)
- for permission_class in viewset.get_permissions(viewset):
- if hasattr(permission_class, "handle_creation_hooks"):
- permission_class.handle_creation_hooks(self)
+ try:
+ viewset = get_viewset_for_model(self)
+ except LookupError:
+ pass
+ else:
+ for permission_class in viewset.get_permissions(viewset):
+ if hasattr(permission_class, "handle_creation_hooks"):
+ permission_class.handle_creation_hooks(self)
def add_roles_for_users(self, roles, users):
"""
| diff --git a/pulpcore/tests/unit/models/test_base.py b/pulpcore/tests/unit/models/test_base.py
--- a/pulpcore/tests/unit/models/test_base.py
+++ b/pulpcore/tests/unit/models/test_base.py
@@ -1,12 +1,8 @@
import pytest
from uuid import uuid4
-from pulpcore.app.models import Repository
-
-try:
- from pulp_file.app.models import FileRepository, FileRemote
-except ImportError:
- pytestmark = pytest.mark.skip("These tests need pulp_file to be installed.")
+from pulpcore.app.models import AutoAddObjPermsMixin, Repository
+from pulp_file.app.models import FileRepository, FileRemote
@pytest.mark.django_db
@@ -61,3 +57,15 @@ def test_cast(django_assert_num_queries):
def test_get_model_for_pulp_type():
assert Repository.get_model_for_pulp_type("core.repository") is Repository
assert Repository.get_model_for_pulp_type("file.file") is FileRepository
+
+
+class PermissionRepository(Repository, AutoAddObjPermsMixin):
+ class Meta:
+ app_label = "test"
+ default_related_name = "permission_repository"
+ proxy = True
+
+
[email protected]_db
+def test_resiliant_auto_perms():
+ PermissionRepository(name="auto permission test").save()
| Cannot upload content via pulp-container because of the change made to the `Upload` model
The following commit broke the pulp-container upload: https://github.com/pulp/pulpcore/commit/9192c2bf0ccb0e0a2df595fd3efdd0980c80ff34.
Traceback:
```
pulp_1 | pulp [adbae673f9b7498d8240989c1bba93ff]: django.request:ERROR: Internal Server Error: /v2/myorg/mygroup/ubuntu/blobs/uploads/
pulp_1 | Traceback (most recent call last):
pulp_1 | File "/usr/local/lib/python3.11/site-packages/django/core/handlers/exception.py", line 55, in inner
pulp_1 | response = get_response(request)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/usr/local/lib/python3.11/site-packages/django/core/handlers/base.py", line 197, in _get_response
pulp_1 | response = wrapped_callback(request, *callback_args, **callback_kwargs)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/usr/local/lib/python3.11/site-packages/django/views/decorators/csrf.py", line 56, in wrapper_view
pulp_1 | return view_func(*args, **kwargs)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/usr/local/lib/python3.11/site-packages/rest_framework/viewsets.py", line 124, in view
pulp_1 | return self.dispatch(request, *args, **kwargs)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/usr/local/lib/python3.11/site-packages/rest_framework/views.py", line 509, in dispatch
pulp_1 | response = self.handle_exception(exc)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/src/pulp_container/pulp_container/app/registry_api.py", line 271, in handle_exception
pulp_1 | response = super().handle_exception(exc)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/usr/local/lib/python3.11/site-packages/rest_framework/views.py", line 469, in handle_exception
pulp_1 | self.raise_uncaught_exception(exc)
pulp_1 | File "/usr/local/lib/python3.11/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
pulp_1 | raise exc
pulp_1 | File "/usr/local/lib/python3.11/site-packages/rest_framework/views.py", line 506, in dispatch
pulp_1 | response = handler(request, *args, **kwargs)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/src/pulp_container/pulp_container/app/registry_api.py", line 758, in create
pulp_1 | upload.save()
pulp_1 | File "/usr/lib64/python3.11/contextlib.py", line 81, in inner
pulp_1 | return func(*args, **kwds)
pulp_1 | ^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/usr/local/lib/python3.11/site-packages/django_lifecycle/mixins.py", line 196, in save
pulp_1 | self._run_hooked_methods(AFTER_CREATE, **kwargs)
pulp_1 | File "/usr/local/lib/python3.11/site-packages/django_lifecycle/mixins.py", line 312, in _run_hooked_methods
pulp_1 | method.run(self)
pulp_1 | File "/usr/local/lib/python3.11/site-packages/django_lifecycle/mixins.py", line 46, in run
pulp_1 | self.method(instance)
pulp_1 | File "/usr/local/lib/python3.11/site-packages/django_lifecycle/decorators.py", line 119, in func
pulp_1 | hooked_method(*args, **kwargs)
pulp_1 | File "/src/pulpcore/pulpcore/app/models/access_policy.py", line 70, in add_perms
pulp_1 | viewset = get_viewset_for_model(self)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/src/pulpcore/pulpcore/app/util.py", line 188, in get_viewset_for_model
pulp_1 | raise LookupError("Could not determine ViewSet base name for model {}".format(model_class))
```
This is always reproducible when pushing any image to the Pulp Container Registry.
Affected code:
https://github.com/pulp/pulp_container/blob/742acc52f8fc44c4d18a41621455b21e2b9133ec/pulp_container/app/models.py#L804
| Solutions considered:
1. make the post create hook not fail in that case
2. teach pulp_container to do something about it
3. we need a common super class without the mixin +1
@mdellweg | 2024-04-17T10:27:58 |
pulp/pulpcore | 5,335 | pulp__pulpcore-5335 | [
"5134"
] | e63328300669cabd73410f4f5ecb26a449b393f0 | diff --git a/pulpcore/app/models/domain.py b/pulpcore/app/models/domain.py
--- a/pulpcore/app/models/domain.py
+++ b/pulpcore/app/models/domain.py
@@ -95,5 +95,5 @@ def disk_usage_callback(domain):
distinct_artifacts = Artifact.objects.filter(pulp_domain=domain).distinct()
total_size = distinct_artifacts.aggregate(size=models.Sum("size", default=0))["size"]
options = yield [ # noqa
- Observation(total_size, {"pulp_href": get_url(domain), "name": domain.name})
+ Observation(total_size, {"pulp_href": get_url(domain), "domain_name": domain.name})
]
diff --git a/pulpcore/app/util.py b/pulpcore/app/util.py
--- a/pulpcore/app/util.py
+++ b/pulpcore/app/util.py
@@ -547,7 +547,8 @@ def _disk_usage_callback(self):
total_size = artifacts.aggregate(size=Sum("size", default=0))["size"]
options = yield [ # noqa
metrics.Observation(
- total_size, {"pulp_href": get_url(self.domain), "name": self.domain.name}
+ total_size,
+ {"pulp_href": get_url(self.domain), "domain_name": self.domain.name},
)
]
| OTEL: pulp_disk_usage_Bytes reports name instead of domain_name
**Version**
3.49.0
**Describe the bug**
The `pulp_disk_usage_Bytes` metric emitted by Pulp contains a `name` attribute that represents a domain name. It would be better if the attribute was called `domain_name` instead of `name`. Here is what I am currently seeing in Prometheus:
```
pulp_disk_usage_Bytes{endpoint="otel-8889", exported_job="pulp-api", instance="10.129.9.102:8889", job="pulp-otel-collector-svc", name="25c87df8", namespace="pulp-stage", pod="pulp-otel-collector-5dd77b5668-pq8pv", pulp_href="/api/pulp/default/api/v3/domains/018d1440-d486-7722-9319-ac0364cc5408/", service="pulp-otel-collector-svc"}
```
| 2024-05-08T13:27:47 |
||
pulp/pulpcore | 5,365 | pulp__pulpcore-5365 | [
"5363"
] | 03b8caf66e0684d6e741d3adfcc9fbfddca4e246 | diff --git a/pulpcore/tasking/_util.py b/pulpcore/tasking/_util.py
--- a/pulpcore/tasking/_util.py
+++ b/pulpcore/tasking/_util.py
@@ -15,7 +15,7 @@
from django.utils import timezone
from django_guid import set_guid
from django_guid.utils import generate_guid
-from pulpcore.app.models import Task, TaskSchedule
+from pulpcore.app.models import Artifact, Content, Task, TaskSchedule
from pulpcore.app.role_util import get_users_with_perms
from pulpcore.app.util import (
set_current_user,
@@ -73,6 +73,8 @@ def delete_incomplete_resources(task):
if task.state != TASK_STATES.CANCELING:
raise RuntimeError(_("Task must be canceling."))
for model in (r.content_object for r in task.created_resources.all()):
+ if isinstance(model, (Artifact, Content)):
+ continue
try:
if model.complete:
continue
| Task cleanup must not delete content nor artifacts
Deleting content or artifacts outside of orphan cleanup is breaking the rules.
And no, we cannot get away with that.
| 2024-05-13T14:50:48 |
||
pulp/pulpcore | 5,368 | pulp__pulpcore-5368 | [
"5367"
] | fb47f3c1f9773c29af13d814ebcbe703a6f72928 | diff --git a/pulpcore/app/migrations/0118_task_core_task_unblock_2276a4_idx_and_more.py b/pulpcore/app/migrations/0118_task_core_task_unblock_2276a4_idx_and_more.py
new file mode 100644
--- /dev/null
+++ b/pulpcore/app/migrations/0118_task_core_task_unblock_2276a4_idx_and_more.py
@@ -0,0 +1,25 @@
+# Generated by Django 4.2.11 on 2024-05-14 08:07
+
+from django.db import migrations, models
+
+
+class Migration(migrations.Migration):
+
+ dependencies = [
+ ("core", "0117_task_unblocked_at"),
+ ]
+
+ operations = [
+ migrations.AddIndex(
+ model_name="task",
+ index=models.Index(fields=["unblocked_at"], name="core_task_unblock_2276a4_idx"),
+ ),
+ migrations.AddIndex(
+ model_name="task",
+ index=models.Index(fields=["state"], name="core_task_state_61f0ca_idx"),
+ ),
+ migrations.AddIndex(
+ model_name="task",
+ index=models.Index(fields=["state", "pulp_created"], name="core_task_state_3742f2_idx"),
+ ),
+ ]
diff --git a/pulpcore/app/models/task.py b/pulpcore/app/models/task.py
--- a/pulpcore/app/models/task.py
+++ b/pulpcore/app/models/task.py
@@ -278,7 +278,12 @@ def refresh_from_db(self, using=None, fields=None, **kwargs):
super().refresh_from_db(using, fields, **kwargs)
class Meta:
- indexes = [models.Index(fields=["pulp_created"])]
+ indexes = [
+ models.Index(fields=["pulp_created"]),
+ models.Index(fields=["unblocked_at"]),
+ models.Index(fields=["state"]),
+ models.Index(fields=["state", "pulp_created"]),
+ ]
permissions = [
("manage_roles_task", "Can manage role assignments on task"),
]
| Add indices to task table to speed up worker queries.
Specifically dispatching with `immediate=True` yields a query like:
`SELECT 1 AS "a" FROM "core_task" WHERE ("core_task"."pulp_created" < '2024-05-13 23:29:28.153293+00:00'::timestamptz AND "core_task"."state" IN ('waiting', 'running', 'canceling') AND "core_task"."reserved_resources_record" && (ARRAY['/pulp/api/v3/domains/018f5c78-b36b-7b26-9b62-46d8a412c1a3/', '/api/v3/distributions/', 'shared:/api/v3/distributions/'])::text[]) LIMIT 1;`
It is not entirely clear whether an index on `state` alone or a combined index on `state`, `pulp_created`, and `unblocked_at` will perform best.
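For illustration only (not part of the original issue): one way to settle this would be to compare the planner's choices for the dispatch query with each candidate index in place, e.g. via Django's `QuerySet.explain()`. A minimal sketch, assuming the `Task` fields shown in the query quoted above:
```
# Editor's sketch -- run against a copy of a large core_task table and
# compare which index the PostgreSQL planner actually picks.
from django.utils import timezone
from pulpcore.app.models import Task

candidate_query = Task.objects.filter(
    pulp_created__lt=timezone.now(),
    state__in=["waiting", "running", "canceling"],
    reserved_resources_record__overlap=["/api/v3/distributions/"],
)
# On PostgreSQL this prints the chosen plan, showing whether the plain
# "state" index or the combined index is used.
print(candidate_query.explain(analyze=True))
```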
| 2024-05-14T08:12:07 |
||
pulp/pulpcore | 5,370 | pulp__pulpcore-5370 | [
"5369"
] | 455ce9a8442c380d6e0d6a2853152e6c0be77d34 | diff --git a/pulpcore/app/migrations/0119_grouprole_core_groupr_object__250e22_idx_and_more.py b/pulpcore/app/migrations/0119_grouprole_core_groupr_object__250e22_idx_and_more.py
new file mode 100644
--- /dev/null
+++ b/pulpcore/app/migrations/0119_grouprole_core_groupr_object__250e22_idx_and_more.py
@@ -0,0 +1,21 @@
+# Generated by Django 4.2.11 on 2024-05-14 08:30
+
+from django.db import migrations, models
+
+
+class Migration(migrations.Migration):
+
+ dependencies = [
+ ("core", "0118_task_core_task_unblock_2276a4_idx_and_more"),
+ ]
+
+ operations = [
+ migrations.AddIndex(
+ model_name="grouprole",
+ index=models.Index(fields=["object_id"], name="core_groupr_object__250e22_idx"),
+ ),
+ migrations.AddIndex(
+ model_name="userrole",
+ index=models.Index(fields=["object_id"], name="core_userro_object__adb04a_idx"),
+ ),
+ ]
diff --git a/pulpcore/app/models/role.py b/pulpcore/app/models/role.py
--- a/pulpcore/app/models/role.py
+++ b/pulpcore/app/models/role.py
@@ -51,7 +51,10 @@ class UserRole(BaseModel):
class Meta:
unique_together = (("user", "role", "content_type", "object_id", "domain"),)
- indexes = [models.Index(fields=["content_type", "object_id"])]
+ indexes = [
+ models.Index(fields=["object_id"]),
+ models.Index(fields=["content_type", "object_id"]),
+ ]
class GroupRole(BaseModel):
@@ -76,4 +79,7 @@ class GroupRole(BaseModel):
class Meta:
unique_together = (("group", "role", "content_type", "object_id", "domain"),)
- indexes = [models.Index(fields=["content_type", "object_id"])]
+ indexes = [
+ models.Index(fields=["object_id"]),
+ models.Index(fields=["content_type", "object_id"]),
+ ]
| object_id on user/group_roles tables should be indexed
| 2024-05-14T08:37:13 |
||
pulp/pulpcore | 5,371 | pulp__pulpcore-5371 | [
"5363"
] | 2a0660d8e66d00ec61f20cad0bd9ab54d9763225 | diff --git a/pulpcore/tasking/_util.py b/pulpcore/tasking/_util.py
--- a/pulpcore/tasking/_util.py
+++ b/pulpcore/tasking/_util.py
@@ -15,7 +15,7 @@
from django.utils import timezone
from django_guid import set_guid
from django_guid.utils import generate_guid
-from pulpcore.app.models import Task, TaskSchedule
+from pulpcore.app.models import Artifact, Content, Task, TaskSchedule
from pulpcore.app.role_util import get_users_with_perms
from pulpcore.app.util import set_current_user, set_domain, configure_analytics, configure_cleanup
from pulpcore.constants import TASK_FINAL_STATES, TASK_STATES, VAR_TMP_PULP
@@ -68,6 +68,8 @@ def delete_incomplete_resources(task):
if task.state != TASK_STATES.CANCELING:
raise RuntimeError(_("Task must be canceling."))
for model in (r.content_object for r in task.created_resources.all()):
+ if isinstance(model, (Artifact, Content)):
+ continue
try:
if model.complete:
continue
| Task cleanup must not delete content nor artifacts
Deleting content or artifacts outside of orphan cleanup is breaking the rules.
And no, we cannot get away with that.
| 2024-05-14T09:31:55 |
||
pulp/pulpcore | 5,372 | pulp__pulpcore-5372 | [
"5363"
] | f18620a8eb46997ca25370908e6f93779a024e0c | diff --git a/pulpcore/tasking/_util.py b/pulpcore/tasking/_util.py
--- a/pulpcore/tasking/_util.py
+++ b/pulpcore/tasking/_util.py
@@ -15,7 +15,7 @@
from django.utils import timezone
from django_guid import set_guid
from django_guid.utils import generate_guid
-from pulpcore.app.models import Task, TaskSchedule
+from pulpcore.app.models import Artifact, Content, Task, TaskSchedule
from pulpcore.app.role_util import get_users_with_perms
from pulpcore.app.util import (
set_current_user,
@@ -73,6 +73,8 @@ def delete_incomplete_resources(task):
if task.state != TASK_STATES.CANCELING:
raise RuntimeError(_("Task must be canceling."))
for model in (r.content_object for r in task.created_resources.all()):
+ if isinstance(model, (Artifact, Content)):
+ continue
try:
if model.complete:
continue
| Task cleanup must not delete content nor artifacts
Deleting content or artifacts outside of orphan cleanup is breaking the rules.
And no, we cannot get away with that.
| 2024-05-14T09:32:16 |
||
pulp/pulpcore | 5,373 | pulp__pulpcore-5373 | [
"5363"
] | 98ea1571ac11dbe2fc4175239c917dd2f67ed181 | diff --git a/pulpcore/tasking/_util.py b/pulpcore/tasking/_util.py
--- a/pulpcore/tasking/_util.py
+++ b/pulpcore/tasking/_util.py
@@ -15,7 +15,7 @@
from django.utils import timezone
from django_guid import set_guid
from django_guid.utils import generate_guid
-from pulpcore.app.models import Task, TaskSchedule
+from pulpcore.app.models import Artifact, Content, Task, TaskSchedule
from pulpcore.app.role_util import get_users_with_perms
from pulpcore.app.util import (
set_current_user,
@@ -73,6 +73,8 @@ def delete_incomplete_resources(task):
if task.state != TASK_STATES.CANCELING:
raise RuntimeError(_("Task must be canceling."))
for model in (r.content_object for r in task.created_resources.all()):
+ if isinstance(model, (Artifact, Content)):
+ continue
try:
if model.complete:
continue
| Task cleanup must not delete content nor artifacts
Deleting content or artifacts outside of orphan cleanup is breaking the rules.
And no, we cannot get away with that.
| 2024-05-14T09:32:28 |
||
pulp/pulpcore | 5,374 | pulp__pulpcore-5374 | [
"5363"
] | 8c5711196fccd1fa01a70a486301593911e479c6 | diff --git a/pulpcore/tasking/util.py b/pulpcore/tasking/util.py
--- a/pulpcore/tasking/util.py
+++ b/pulpcore/tasking/util.py
@@ -4,7 +4,7 @@
from django.db import transaction
from django.db import connection
-from pulpcore.app.models import Task
+from pulpcore.app.models import Artifact, Content, Task
from pulpcore.constants import TASK_FINAL_STATES, TASK_STATES
_logger = logging.getLogger(__name__)
@@ -55,6 +55,8 @@ def _delete_incomplete_resources(task):
if task.state != TASK_STATES.CANCELING:
raise RuntimeError(_("Task must be canceling."))
for model in (r.content_object for r in task.created_resources.all()):
+ if isinstance(model, (Artifact, Content)):
+ continue
try:
if model.complete:
continue
| Task cleanup must not delete content nor artifacts
Deleting content or artifacts outside of orphan cleanup is breaking the rules.
And no, we cannot get away with that.
| 2024-05-14T11:22:26 |
||
pulp/pulpcore | 5,376 | pulp__pulpcore-5376 | [
"5363"
] | ac4458003b75405537103088996c029c0ca6a404 | diff --git a/pulpcore/tasking/util.py b/pulpcore/tasking/util.py
--- a/pulpcore/tasking/util.py
+++ b/pulpcore/tasking/util.py
@@ -4,7 +4,7 @@
from django.db import transaction
from django.db import connection
-from pulpcore.app.models import Task
+from pulpcore.app.models import Artifact, Content, Task
from pulpcore.constants import TASK_FINAL_STATES, TASK_STATES
_logger = logging.getLogger(__name__)
@@ -55,6 +55,8 @@ def _delete_incomplete_resources(task):
if task.state != TASK_STATES.CANCELING:
raise RuntimeError(_("Task must be canceling."))
for model in (r.content_object for r in task.created_resources.all()):
+ if isinstance(model, (Artifact, Content)):
+ continue
try:
if model.complete:
continue
| Task cleanup must not delete content nor artifacts
Deleting content or artifacts outside of orphan cleanup is breaking the rules.
And no, we cannot get away with that.
| 2024-05-14T12:12:11 |
||
pulp/pulpcore | 5,377 | pulp__pulpcore-5377 | [
"5363"
] | 151432a3a22c2c0c618c4736df31d4173b577e31 | diff --git a/pulpcore/tasking/util.py b/pulpcore/tasking/util.py
--- a/pulpcore/tasking/util.py
+++ b/pulpcore/tasking/util.py
@@ -4,7 +4,7 @@
from django.db import transaction
from django.db import connection
-from pulpcore.app.models import Task
+from pulpcore.app.models import Artifact, Content, Task
from pulpcore.constants import TASK_FINAL_STATES, TASK_INCOMPLETE_STATES, TASK_STATES
_logger = logging.getLogger(__name__)
@@ -60,6 +60,8 @@ def _delete_incomplete_resources(task):
if task.state not in [TASK_STATES.CANCELED, TASK_STATES.CANCELING]:
raise RuntimeError(_("Task must be canceled."))
for model in (r.content_object for r in task.created_resources.all()):
+ if isinstance(model, (Artifact, Content)):
+ continue
try:
if model.complete:
continue
| Task cleanup must not delete content nor artifacts
Deleting content or artifacts outside of orphan cleanup is breaking the rules.
And no, we cannot get away with that.
| 2024-05-14T12:27:06 |
||
pulp/pulpcore | 5,379 | pulp__pulpcore-5379 | [
"5378"
] | 865e5c9d6b31d594327d256453489c89f5f27fa1 | diff --git a/pulpcore/app/settings.py b/pulpcore/app/settings.py
--- a/pulpcore/app/settings.py
+++ b/pulpcore/app/settings.py
@@ -256,8 +256,11 @@
# how long to protect ephemeral items in minutes
ORPHAN_PROTECTION_TIME = 24 * 60
-# if set to 0, upload and tmpfile cleanup task is disabled
+
+# Custom cleaup intervals
+# for the following, if set to 0, the corresponding cleanup task is disabled
UPLOAD_PROTECTION_TIME = 0
+TASK_PROTECTION_TIME = 0
TMPFILE_PROTECTION_TIME = 0
REMOTE_USER_ENVIRON_NAME = "REMOTE_USER"
diff --git a/pulpcore/app/tasks/purge.py b/pulpcore/app/tasks/purge.py
--- a/pulpcore/app/tasks/purge.py
+++ b/pulpcore/app/tasks/purge.py
@@ -1,7 +1,9 @@
from gettext import gettext as _
from logging import getLogger
+from django.conf import settings
from django.db.models.deletion import ProtectedError
+from django.utils import timezone
from pulpcore.app.models import (
ProgressReport,
@@ -9,7 +11,7 @@
)
from pulpcore.app.role_util import get_objects_for_user
from pulpcore.app.util import get_domain, get_current_authenticated_user
-from pulpcore.constants import TASK_STATES
+from pulpcore.constants import TASK_STATES, TASK_FINAL_STATES
log = getLogger(__name__)
@@ -55,7 +57,7 @@ def _details_reporting(current_reports, current_details, totals_pb):
return current_reports
-def purge(finished_before, states):
+def purge(finished_before=None, states=None):
"""
This task purges from the database records of tasks which finished prior to the specified time.
@@ -70,10 +72,15 @@ def purge(finished_before, states):
by deleting a Task.
Args:
- finished_before (DateTime): Earliest finished-time to **NOT** purge.
- states (List[str]): List of task-states we want to purge.
+ finished_before (Optional[DateTime]): Earliest finished-time to **NOT** purge.
+ states (Optional[List[str]]): List of task-states we want to purge.
"""
+ if finished_before is None:
+ assert settings.TASK_PROTECTION_TIME > 0
+ finished_before = timezone.now() - timezone.timedelta(minutes=settings.TASK_PROTECTION_TIME)
+ if states is None:
+ states = TASK_FINAL_STATES
current_user = get_current_authenticated_user()
domain = get_domain()
# Tasks, prior to the specified date, in the specified state, owned by the current-user, in the
diff --git a/pulpcore/app/util.py b/pulpcore/app/util.py
--- a/pulpcore/app/util.py
+++ b/pulpcore/app/util.py
@@ -379,6 +379,7 @@ def configure_cleanup():
"pulpcore.app.tasks.orphan.tmpfile_cleanup",
settings.TMPFILE_PROTECTION_TIME,
),
+ ("tasks", "pulpcore.app.tasks.purge.purge", settings.TASK_PROTECTION_TIME),
]:
if protection_time > 0:
dispatch_interval = timedelta(minutes=protection_time)
| Add a scheduled cleanup task to keep the tasks table reasonable
We have seen substantial performance degradation on task tables that are several million rows long. While tables of that size are probably not uncommon in a typical Pulp installation, it is tedious for the admin to call purge tasks regularly.
We should just schedule it. To give admins the control they are used to, this should be guarded by a setting (maybe down to individual task states) for how long tasks must have been finished before they are vacuumed.
Question: Are we confident to add a reasonable time as default, or does this feature need to be off by default?
| I just want to point out that the purge is open not only to admins but also to regular users.
> Question: Are we confident to add a reasonable time as default, or does this feature need to be off by default?
I would argue for having this feature off by default, simply because it is very difficult to come up with a 'reasonable' one-size-fits-all value for (a) the task interval and (b) how long to keep tasks once they are in their final state.
> should be guarded by a setting (maybe down to different tasks states)
I would not distinguish between the task states and would purge all of them (all final states).
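For illustration (an editor's sketch, not taken from the discussion): with the patch above, an installation that wants the scheduled purge would opt in through its settings file. The value is in minutes and `0` (the default) keeps the task disabled, mirroring the existing `UPLOAD_PROTECTION_TIME`/`TMPFILE_PROTECTION_TIME` knobs; the settings path below is only the usual location and may differ per install.
```
# /etc/pulp/settings.py (sketch) -- purge tasks 30 days after they finish;
# leaving the setting at 0 keeps the scheduled purge disabled.
TASK_PROTECTION_TIME = 30 * 24 * 60  # minutes
```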
| 2024-05-14T14:50:19 |
|
pulp/pulpcore | 5,388 | pulp__pulpcore-5388 | [
"5375"
] | 914c3e3f6193907214304db234b98e5845b354a2 | diff --git a/pulpcore/app/importexport.py b/pulpcore/app/importexport.py
--- a/pulpcore/app/importexport.py
+++ b/pulpcore/app/importexport.py
@@ -9,6 +9,7 @@
from django.db.models.query import QuerySet
from pulpcore.app.apps import get_plugin_config
+from pulpcore.app.models.content import Artifact
from pulpcore.app.models.progress import ProgressReport
from pulpcore.app.models.repository import Repository
from pulpcore.app.modelresource import (
@@ -50,25 +51,21 @@ def _write_export(the_tarfile, resource, dest_dir=None):
temp_file.write("[")
def process_batch(batch):
- dataset = resource.export(batch)
+ model = resource.queryset.model
+ queryset = model.objects.filter(pk__in=batch)
+ dataset = resource.export(queryset)
# Strip "[" and "]" as we are writing the dataset in batch
temp_file.write(dataset.json.lstrip("[").rstrip("]"))
- batch = []
- needs_comma = False
- for item in resource.queryset.iterator(chunk_size=EXPORT_BATCH_SIZE):
- batch.append(item)
- if needs_comma:
- # Write "," if not last loop
- temp_file.write(", ")
- needs_comma = False
-
- if len(batch) >= EXPORT_BATCH_SIZE:
- process_batch(batch)
- batch.clear()
- needs_comma = True
+ first_loop = True
+ resource_pks = resource.queryset.values_list("pk", flat=True)
+ for offset in range(0, len(resource_pks), EXPORT_BATCH_SIZE):
+ batch = resource_pks[offset : offset + EXPORT_BATCH_SIZE]
- if batch:
+ if not first_loop:
+ temp_file.write(", ")
+ else:
+ first_loop = False
process_batch(batch)
temp_file.write("]")
@@ -102,39 +99,48 @@ def export_versions(export, version_info):
export.tarfile.addfile(info, io.BytesIO(version_json))
-def export_artifacts(export, artifacts):
+def export_artifacts(export, artifact_pks):
"""
Export a set of Artifacts, ArtifactResources, and RepositoryResources
Args:
export (django.db.models.PulpExport): export instance that's doing the export
- artifacts (django.db.models.Artifacts): List of artifacts in all repos being exported
+ artifact_pks (django.db.models.Artifacts): List of artifact_pks in all repos being exported
Raises:
ValidationError: When path is not in the ALLOWED_EXPORT_PATHS setting
"""
- data = dict(message="Exporting Artifacts", code="export.artifacts", total=len(artifacts))
+ data = dict(message="Exporting Artifacts", code="export.artifacts", total=len(artifact_pks))
with ProgressReport(**data) as pb:
pb.BATCH_INTERVAL = 5000
if settings.DEFAULT_FILE_STORAGE != "pulpcore.app.models.storage.FileSystem":
with tempfile.TemporaryDirectory(dir=".") as temp_dir:
- for artifact in pb.iter(artifacts.only("file").iterator()):
- with tempfile.NamedTemporaryFile(dir=temp_dir) as temp_file:
- # TODO: this looks like a memory usage threat
- # TODO: it's also probably horrificaly slow, going one-by-one over the net
- # TODO: probably we could skip the temp file entirely and add
- # artifact.file.read() directly to the tarfile with tarfile.addfile()
- temp_file.write(artifact.file.read())
- temp_file.flush()
- artifact.file.close()
- export.tarfile.add(temp_file.name, artifact.file.name)
+ for offset in range(0, len(artifact_pks), EXPORT_BATCH_SIZE):
+ batch = artifact_pks[offset : offset + EXPORT_BATCH_SIZE]
+ batch_qs = Artifact.objects.filter(pk__in=batch).only("file")
+
+ for artifact in pb.iter(batch_qs.iterator()):
+ with tempfile.NamedTemporaryFile(dir=temp_dir) as temp_file:
+ # TODO: this looks like a memory usage threat
+ # TODO: it's probably very slow, going one-by-one over the net
+ # TODO: probably we could skip the temp file entirely and add
+ # artifact.file.read() directly to the tarfile with
+ # tarfile.addfile()
+ temp_file.write(artifact.file.read())
+ temp_file.flush()
+ artifact.file.close()
+ export.tarfile.add(temp_file.name, artifact.file.name)
else:
- for artifact in pb.iter(artifacts.only("file").iterator()):
- export.tarfile.add(artifact.file.path, artifact.file.name)
+ for offset in range(0, len(artifact_pks), EXPORT_BATCH_SIZE):
+ batch = artifact_pks[offset : offset + EXPORT_BATCH_SIZE]
+ batch_qs = Artifact.objects.filter(pk__in=batch).only("file")
+
+ for artifact in pb.iter(batch_qs.iterator()):
+ export.tarfile.add(artifact.file.path, artifact.file.name)
resource = ArtifactResource()
- resource.queryset = artifacts
+ resource.queryset = Artifact.objects.filter(pk__in=artifact_pks)
_write_export(export.tarfile, resource)
resource = RepositoryResource()
diff --git a/pulpcore/app/tasks/export.py b/pulpcore/app/tasks/export.py
--- a/pulpcore/app/tasks/export.py
+++ b/pulpcore/app/tasks/export.py
@@ -25,7 +25,7 @@
RepositoryVersion,
Task,
)
-from pulpcore.app.models.content import Artifact, ContentArtifact
+from pulpcore.app.models.content import ContentArtifact
from pulpcore.app.serializers import PulpExportSerializer
from pulpcore.app.util import compute_file_hash, Crc32Hasher
@@ -509,7 +509,7 @@ def _do_export(pulp_exporter, tar, the_export):
# Export the top-level entities (artifacts and repositories)
# Note: we've already handled "what about incrementals" when building the 'artifacts' list
- export_artifacts(the_export, Artifact.objects.filter(pk__in=artifact_pks))
+ export_artifacts(the_export, list(artifact_pks))
del artifact_pks
# Export the repository-version data, per-version
| Export of a large number of packages will result in errors
**Version**
- Katello 4.12
- pulpcore 3.39.11
- pulp_rpm 3.23.3
- pulp_deb 3.0.2
**Describe the bug**
Exporting data (in this case Content Views in Katello) with a large number of packages will fail:
`Error: sending query and params failed: number of parameters must be between 0 and 65535`
There are actually at least two different issues both of which will throw the same error:
1. If the Content View has more than 65535 packages the export task will fail (usually pretty quickly).
2. If the Content View has fewer than 65535 but more than around half of that (tested with ~40k packages), the artifact export will actually succeed, but the task then fails while writing the data with the same error. This takes a long time since most of the export work completes before the failure.
Debian and RPM content are both affected by this.
**To Reproduce**
Steps to reproduce the behavior:
- Create a Content View with more than 65535 packages (or with more than half that but lower than 65535)
- Export the Content View via:
`hammer content-export complete version --chunk-size-gb=40 --content-view="CV_NAME" --organization-id=1 --version=1.0`
**Expected behavior**
The export to be successful.
**Additional context**
We suspect that this has something to do with the upgrade to Django 4.2 or more specifically with the upgrade from psycopg2 to psycopg3. We found an issue that parallels our findings https://github.com/psycopg/psycopg/issues/620
We can also confirm that on older systems that still use older pulpcore versions (pre django 4.2) the export still works on these types of Content Views.
For more context here are the pulp tasks:
1. For Content Views with more than 65535 packages
```
---
pulp_tasks:
- pulp_href: "/pulp/api/v3/tasks/018f767f-1b93-72cb-aa23-32c9c2650ef9/"
pulp_created: '2024-05-14T09:46:32.725+00:00'
state: failed
name: pulpcore.app.tasks.export.pulp_export
logging_cid: f34cbfee-3afb-4f6a-aff6-83b83e586f0a
created_by: "/pulp/api/v3/users/1/"
started_at: '2024-05-14T09:46:33.085+00:00'
finished_at: '2024-05-14T09:46:55.152+00:00'
error:
traceback: |2
File "/usr/lib/python3.11/site-packages/pulpcore/tasking/tasks.py", line 61, in _execute_task
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/pulpcore/app/tasks/export.py", line 417, in pulp_export
_do_export(pulp_exporter, tar, the_export)
File "/usr/lib/python3.11/site-packages/pulpcore/app/tasks/export.py", line 512, in _do_export
export_artifacts(the_export, Artifact.objects.filter(pk__in=artifact_pks))
File "/usr/lib/python3.11/site-packages/pulpcore/app/importexport.py", line 133, in export_artifacts
for artifact in pb.iter(artifacts.only("file").iterator()):
File "/usr/lib/python3.11/site-packages/pulpcore/app/models/progress.py", line 296, in iter
for x in iter:
File "/usr/lib/python3.11/site-packages/django/db/models/query.py", line 516, in _iterator
yield from iterable
File "/usr/lib/python3.11/site-packages/django/db/models/query.py", line 91, in __iter__
results = compiler.execute_sql(
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/django/db/models/sql/compiler.py", line 1562, in execute_sql
cursor.execute(sql, params)
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 84, in _execute
with self.db.wrap_database_errors:
File "/usr/lib/python3.11/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/psycopg/server_cursor.py", line 294, in execute
raise ex.with_traceback(None)
description: 'sending query and params failed: number of parameters must be between
0 and 65535'
worker: "/pulp/api/v3/workers/018f73c9-a49a-7797-a12a-836eb2e6a13f/"
child_tasks: []
progress_reports:
- message: Exporting Artifacts
code: export.artifacts
state: failed
total: 66040
done: 0
created_resources:
- "/pulp/api/v3/exporters/core/pulp/018f767f-19d8-7c16-9aa4-be0917d2c868/exports/018f767f-1f26-75cc-921f-b43b68856d32/"
reserved_resources_record:
- "/pulp/api/v3/exporters/core/pulp/018f767f-19d8-7c16-9aa4-be0917d2c868/"
- shared:/pulp/api/v3/repositories/rpm/rpm/018e4331-54d9-7791-b6cf-df2913e135b7/
- shared:/pulp/api/v3/repositories/rpm/rpm/018e4331-3e03-756e-b478-d7118b396463/
- shared:/pulp/api/v3/repositories/rpm/rpm/018e4331-81e1-723a-bc88-54750f778760/
- shared:/pulp/api/v3/repositories/rpm/rpm/018e4331-6a8a-77d2-9bcc-7fea35a5e6fb/
- shared:/pulp/api/v3/domains/018e42cc-5411-73e1-932c-cc158854d906/
task_groups: []
poll_attempts:
total: 15
failed: 1
```
2. For Content Views with around 40k packages:
```
---
pulp_tasks:
- pulp_href: "/pulp/api/v3/tasks/018f768b-022a-764f-a0e1-7ee4e6256654/"
pulp_created: '2024-05-14T09:59:32.650+00:00'
state: failed
name: pulpcore.app.tasks.export.pulp_export
logging_cid: e0774285-1b15-4539-a9dc-30f2eb8f1764
created_by: "/pulp/api/v3/users/1/"
started_at: '2024-05-14T09:59:32.861+00:00'
finished_at: '2024-05-14T10:27:41.082+00:00'
error:
traceback: |2
File "/usr/lib/python3.11/site-packages/pulpcore/tasking/tasks.py", line 61, in _execute_task
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/pulpcore/app/tasks/export.py", line 417, in pulp_export
_do_export(pulp_exporter, tar, the_export)
File "/usr/lib/python3.11/site-packages/pulpcore/app/tasks/export.py", line 517, in _do_export
export_content(the_export, version)
File "/usr/lib/python3.11/site-packages/pulpcore/app/importexport.py", line 185, in export_content
_write_export(export.tarfile, resource, dest_dir)
File "/usr/lib/python3.11/site-packages/pulpcore/app/importexport.py", line 59, in _write_export
for item in resource.queryset.iterator(chunk_size=EXPORT_BATCH_SIZE):
File "/usr/lib/python3.11/site-packages/django/db/models/query.py", line 516, in _iterator
yield from iterable
File "/usr/lib/python3.11/site-packages/django/db/models/query.py", line 91, in __iter__
results = compiler.execute_sql(
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/django/db/models/sql/compiler.py", line 1562, in execute_sql
cursor.execute(sql, params)
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 84, in _execute
with self.db.wrap_database_errors:
File "/usr/lib/python3.11/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/psycopg/server_cursor.py", line 294, in execute
raise ex.with_traceback(None)
description: 'sending query and params failed: number of parameters must be between
0 and 65535'
worker: "/pulp/api/v3/workers/018f73c9-accc-7c2e-ba0d-0c381a66be93/"
child_tasks: []
progress_reports:
- message: Exporting Artifacts
code: export.artifacts
state: completed
total: 48052
done: 48052
created_resources:
- "/pulp/api/v3/exporters/core/pulp/018f768b-0112-7365-bfd8-e30a0cd8eeb6/exports/018f768b-0448-7bcd-abd8-052060d1d70d/"
reserved_resources_record:
- "/pulp/api/v3/exporters/core/pulp/018f768b-0112-7365-bfd8-e30a0cd8eeb6/"
- shared:/pulp/api/v3/repositories/rpm/rpm/018e4331-81e1-723a-bc88-54750f778760/
- shared:/pulp/api/v3/domains/018e42cc-5411-73e1-932c-cc158854d906/
task_groups: []
poll_attempts:
total: 123
failed: 1
```
| ~Most likely the issue is that psycopg3 now uses [server-side bindings](https://www.psycopg.org/psycopg3/docs/basic/from_pg2.html#server-side-binding) instead of the client-side binding it apparently used with older versions, resulting in a limitation of SQL itself.~
Actually [Django still uses the client-side cursors by default](https://docs.djangoproject.com/en/4.2/ref/databases/#server-side-parameters-binding).
Just one more slight clarification of what was already said above: The bug triggers (in the first version) as soon as the export includes more than ~65k artifacts. This was observed for both deb and rpm content exports. In the Katello context, it is enough for all the repos in a content view combined to add up to those more than ~65k artifacts and then export that content view version.
The second version of the bug occurs if there are less than ~65k artifacts but more than ~65k content units. For deb content there tends to be roughly twice as much content as actual artifacts because for each .deb package there is also a "package release component" content. ~~I suspect the window in export size to hit this second version of the problem is pretty small for rpm exports.~~ Edit: Apparently @hstct also saw this happen with ~40k rpm content.
> I suspect the window in export size to hit this second version of the problem is pretty small for rpm exports.
I ran into the same issue with rpm with about ~40k artifacts in the export, so it's still an issue here too.
Does this issue mean we somehow need to batch the `artifact_pks` from the following query?
https://github.com/pulp/pulpcore/blob/main/pulpcore/app/tasks/export.py#L512
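For illustration (an editor's sketch, not taken from the PR): batching a materialized list of primary keys keeps every individual `pk IN (...)` query under PostgreSQL's 65535 bind-parameter limit; `EXPORT_BATCH_SIZE` below is an assumed constant standing in for whatever batch size the export code settles on.
```
# Sketch only: iterate artifacts in PK batches so no single query
# carries more than EXPORT_BATCH_SIZE bind parameters.
from pulpcore.app.models import Artifact

EXPORT_BATCH_SIZE = 1000  # assumed batch size, well below the 65535 limit

artifacts = Artifact.objects.all()  # stand-in for the real export selection
artifact_pks = list(artifacts.values_list("pk", flat=True))

for offset in range(0, len(artifact_pks), EXPORT_BATCH_SIZE):
    batch = artifact_pks[offset : offset + EXPORT_BATCH_SIZE]
    for artifact in Artifact.objects.filter(pk__in=batch).only("file").iterator():
        ...  # hand each artifact to the export code as before
```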
What happens if you tweak the `Artifacts` query with `.values_list("pk", flat=True)` | 2024-05-15T16:00:13 |
|
pulp/pulpcore | 5,416 | pulp__pulpcore-5416 | [
"5375"
] | 851a71d4596cb9e236486d8cda268cb8e22d996e | diff --git a/pulpcore/app/importexport.py b/pulpcore/app/importexport.py
--- a/pulpcore/app/importexport.py
+++ b/pulpcore/app/importexport.py
@@ -9,6 +9,7 @@
from django.db.models.query import QuerySet
from pulpcore.app.apps import get_plugin_config
+from pulpcore.app.models.content import Artifact
from pulpcore.app.models.progress import ProgressReport
from pulpcore.app.models.repository import Repository
from pulpcore.app.modelresource import (
@@ -50,25 +51,21 @@ def _write_export(the_tarfile, resource, dest_dir=None):
temp_file.write("[")
def process_batch(batch):
- dataset = resource.export(batch)
+ model = resource.queryset.model
+ queryset = model.objects.filter(pk__in=batch)
+ dataset = resource.export(queryset)
# Strip "[" and "]" as we are writing the dataset in batch
temp_file.write(dataset.json.lstrip("[").rstrip("]"))
- batch = []
- needs_comma = False
- for item in resource.queryset.iterator(chunk_size=EXPORT_BATCH_SIZE):
- batch.append(item)
- if needs_comma:
- # Write "," if not last loop
- temp_file.write(", ")
- needs_comma = False
-
- if len(batch) >= EXPORT_BATCH_SIZE:
- process_batch(batch)
- batch.clear()
- needs_comma = True
+ first_loop = True
+ resource_pks = resource.queryset.values_list("pk", flat=True)
+ for offset in range(0, len(resource_pks), EXPORT_BATCH_SIZE):
+ batch = resource_pks[offset : offset + EXPORT_BATCH_SIZE]
- if batch:
+ if not first_loop:
+ temp_file.write(", ")
+ else:
+ first_loop = False
process_batch(batch)
temp_file.write("]")
@@ -102,39 +99,48 @@ def export_versions(export, version_info):
export.tarfile.addfile(info, io.BytesIO(version_json))
-def export_artifacts(export, artifacts):
+def export_artifacts(export, artifact_pks):
"""
Export a set of Artifacts, ArtifactResources, and RepositoryResources
Args:
export (django.db.models.PulpExport): export instance that's doing the export
- artifacts (django.db.models.Artifacts): List of artifacts in all repos being exported
+ artifact_pks (django.db.models.Artifacts): List of artifact_pks in all repos being exported
Raises:
ValidationError: When path is not in the ALLOWED_EXPORT_PATHS setting
"""
- data = dict(message="Exporting Artifacts", code="export.artifacts", total=len(artifacts))
+ data = dict(message="Exporting Artifacts", code="export.artifacts", total=len(artifact_pks))
with ProgressReport(**data) as pb:
pb.BATCH_INTERVAL = 5000
if settings.DEFAULT_FILE_STORAGE != "pulpcore.app.models.storage.FileSystem":
with tempfile.TemporaryDirectory(dir=".") as temp_dir:
- for artifact in pb.iter(artifacts.only("file").iterator()):
- with tempfile.NamedTemporaryFile(dir=temp_dir) as temp_file:
- # TODO: this looks like a memory usage threat
- # TODO: it's also probably horrificaly slow, going one-by-one over the net
- # TODO: probably we could skip the temp file entirely and add
- # artifact.file.read() directly to the tarfile with tarfile.addfile()
- temp_file.write(artifact.file.read())
- temp_file.flush()
- artifact.file.close()
- export.tarfile.add(temp_file.name, artifact.file.name)
+ for offset in range(0, len(artifact_pks), EXPORT_BATCH_SIZE):
+ batch = artifact_pks[offset : offset + EXPORT_BATCH_SIZE]
+ batch_qs = Artifact.objects.filter(pk__in=batch).only("file")
+
+ for artifact in pb.iter(batch_qs.iterator()):
+ with tempfile.NamedTemporaryFile(dir=temp_dir) as temp_file:
+ # TODO: this looks like a memory usage threat
+ # TODO: it's probably very slow, going one-by-one over the net
+ # TODO: probably we could skip the temp file entirely and add
+ # artifact.file.read() directly to the tarfile with
+ # tarfile.addfile()
+ temp_file.write(artifact.file.read())
+ temp_file.flush()
+ artifact.file.close()
+ export.tarfile.add(temp_file.name, artifact.file.name)
else:
- for artifact in pb.iter(artifacts.only("file").iterator()):
- export.tarfile.add(artifact.file.path, artifact.file.name)
+ for offset in range(0, len(artifact_pks), EXPORT_BATCH_SIZE):
+ batch = artifact_pks[offset : offset + EXPORT_BATCH_SIZE]
+ batch_qs = Artifact.objects.filter(pk__in=batch).only("file")
+
+ for artifact in pb.iter(batch_qs.iterator()):
+ export.tarfile.add(artifact.file.path, artifact.file.name)
resource = ArtifactResource()
- resource.queryset = artifacts
+ resource.queryset = Artifact.objects.filter(pk__in=artifact_pks)
_write_export(export.tarfile, resource)
resource = RepositoryResource()
diff --git a/pulpcore/app/tasks/export.py b/pulpcore/app/tasks/export.py
--- a/pulpcore/app/tasks/export.py
+++ b/pulpcore/app/tasks/export.py
@@ -25,7 +25,7 @@
RepositoryVersion,
Task,
)
-from pulpcore.app.models.content import Artifact, ContentArtifact
+from pulpcore.app.models.content import ContentArtifact
from pulpcore.app.serializers import PulpExportSerializer
from pulpcore.app.util import compute_file_hash, Crc32Hasher
@@ -509,7 +509,7 @@ def _do_export(pulp_exporter, tar, the_export):
# Export the top-level entities (artifacts and repositories)
# Note: we've already handled "what about incrementals" when building the 'artifacts' list
- export_artifacts(the_export, Artifact.objects.filter(pk__in=artifact_pks))
+ export_artifacts(the_export, list(artifact_pks))
del artifact_pks
# Export the repository-version data, per-version
| Export of a large number of packages will result in errors
**Version**
- Katello 4.12
- pulpcore 3.39.11
- pulp_rpm 3.23.3
- pulp_deb 3.0.2
**Describe the bug**
Exporting data (in this case Content Views in Katello) with a large number of packages will fail:
`Error: sending query and params failed: number of parameters must be between 0 and 65535`
There are actually at least two different issues both of which will throw the same error:
1. If the Content View has more than 65535 packages the export task will fail (usually pretty quickly).
2. If the Content View has fewer than 65535 but more than around half of that (tested with ~40k packages), the artifact export will actually succeed, but the task then fails while writing the data with the same error. This takes a long time since most of the export work completes before the failure.
Debian and RPM content are both affected by this.
**To Reproduce**
Steps to reproduce the behavior:
- Create a Content View with more than 65535 packages (or with more than half that but lower than 65535)
- Export the Content View via:
`hammer content-export complete version --chunk-size-gb=40 --content-view="CV_NAME" --organization-id=1 --version=1.0`
**Expected behavior**
The export to be successful.
**Additional context**
We suspect that this has something to do with the upgrade to Django 4.2 or more specifically with the upgrade from psycopg2 to psycopg3. We found an issue that parallels our findings https://github.com/psycopg/psycopg/issues/620
We can also confirm that on older systems that still use older pulpcore versions (pre django 4.2) the export still works on these types of Content Views.
For more context here are the pulp tasks:
1. For Content Views with more than 65535 packages
```
---
pulp_tasks:
- pulp_href: "/pulp/api/v3/tasks/018f767f-1b93-72cb-aa23-32c9c2650ef9/"
pulp_created: '2024-05-14T09:46:32.725+00:00'
state: failed
name: pulpcore.app.tasks.export.pulp_export
logging_cid: f34cbfee-3afb-4f6a-aff6-83b83e586f0a
created_by: "/pulp/api/v3/users/1/"
started_at: '2024-05-14T09:46:33.085+00:00'
finished_at: '2024-05-14T09:46:55.152+00:00'
error:
traceback: |2
File "/usr/lib/python3.11/site-packages/pulpcore/tasking/tasks.py", line 61, in _execute_task
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/pulpcore/app/tasks/export.py", line 417, in pulp_export
_do_export(pulp_exporter, tar, the_export)
File "/usr/lib/python3.11/site-packages/pulpcore/app/tasks/export.py", line 512, in _do_export
export_artifacts(the_export, Artifact.objects.filter(pk__in=artifact_pks))
File "/usr/lib/python3.11/site-packages/pulpcore/app/importexport.py", line 133, in export_artifacts
for artifact in pb.iter(artifacts.only("file").iterator()):
File "/usr/lib/python3.11/site-packages/pulpcore/app/models/progress.py", line 296, in iter
for x in iter:
File "/usr/lib/python3.11/site-packages/django/db/models/query.py", line 516, in _iterator
yield from iterable
File "/usr/lib/python3.11/site-packages/django/db/models/query.py", line 91, in __iter__
results = compiler.execute_sql(
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/django/db/models/sql/compiler.py", line 1562, in execute_sql
cursor.execute(sql, params)
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 84, in _execute
with self.db.wrap_database_errors:
File "/usr/lib/python3.11/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/psycopg/server_cursor.py", line 294, in execute
raise ex.with_traceback(None)
description: 'sending query and params failed: number of parameters must be between
0 and 65535'
worker: "/pulp/api/v3/workers/018f73c9-a49a-7797-a12a-836eb2e6a13f/"
child_tasks: []
progress_reports:
- message: Exporting Artifacts
code: export.artifacts
state: failed
total: 66040
done: 0
created_resources:
- "/pulp/api/v3/exporters/core/pulp/018f767f-19d8-7c16-9aa4-be0917d2c868/exports/018f767f-1f26-75cc-921f-b43b68856d32/"
reserved_resources_record:
- "/pulp/api/v3/exporters/core/pulp/018f767f-19d8-7c16-9aa4-be0917d2c868/"
- shared:/pulp/api/v3/repositories/rpm/rpm/018e4331-54d9-7791-b6cf-df2913e135b7/
- shared:/pulp/api/v3/repositories/rpm/rpm/018e4331-3e03-756e-b478-d7118b396463/
- shared:/pulp/api/v3/repositories/rpm/rpm/018e4331-81e1-723a-bc88-54750f778760/
- shared:/pulp/api/v3/repositories/rpm/rpm/018e4331-6a8a-77d2-9bcc-7fea35a5e6fb/
- shared:/pulp/api/v3/domains/018e42cc-5411-73e1-932c-cc158854d906/
task_groups: []
poll_attempts:
total: 15
failed: 1
```
2. For Content Views with around 40k packages:
```
---
pulp_tasks:
- pulp_href: "/pulp/api/v3/tasks/018f768b-022a-764f-a0e1-7ee4e6256654/"
pulp_created: '2024-05-14T09:59:32.650+00:00'
state: failed
name: pulpcore.app.tasks.export.pulp_export
logging_cid: e0774285-1b15-4539-a9dc-30f2eb8f1764
created_by: "/pulp/api/v3/users/1/"
started_at: '2024-05-14T09:59:32.861+00:00'
finished_at: '2024-05-14T10:27:41.082+00:00'
error:
traceback: |2
File "/usr/lib/python3.11/site-packages/pulpcore/tasking/tasks.py", line 61, in _execute_task
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/pulpcore/app/tasks/export.py", line 417, in pulp_export
_do_export(pulp_exporter, tar, the_export)
File "/usr/lib/python3.11/site-packages/pulpcore/app/tasks/export.py", line 517, in _do_export
export_content(the_export, version)
File "/usr/lib/python3.11/site-packages/pulpcore/app/importexport.py", line 185, in export_content
_write_export(export.tarfile, resource, dest_dir)
File "/usr/lib/python3.11/site-packages/pulpcore/app/importexport.py", line 59, in _write_export
for item in resource.queryset.iterator(chunk_size=EXPORT_BATCH_SIZE):
File "/usr/lib/python3.11/site-packages/django/db/models/query.py", line 516, in _iterator
yield from iterable
File "/usr/lib/python3.11/site-packages/django/db/models/query.py", line 91, in __iter__
results = compiler.execute_sql(
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/django/db/models/sql/compiler.py", line 1562, in execute_sql
cursor.execute(sql, params)
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 84, in _execute
with self.db.wrap_database_errors:
File "/usr/lib/python3.11/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/psycopg/server_cursor.py", line 294, in execute
raise ex.with_traceback(None)
description: 'sending query and params failed: number of parameters must be between
0 and 65535'
worker: "/pulp/api/v3/workers/018f73c9-accc-7c2e-ba0d-0c381a66be93/"
child_tasks: []
progress_reports:
- message: Exporting Artifacts
code: export.artifacts
state: completed
total: 48052
done: 48052
created_resources:
- "/pulp/api/v3/exporters/core/pulp/018f768b-0112-7365-bfd8-e30a0cd8eeb6/exports/018f768b-0448-7bcd-abd8-052060d1d70d/"
reserved_resources_record:
- "/pulp/api/v3/exporters/core/pulp/018f768b-0112-7365-bfd8-e30a0cd8eeb6/"
- shared:/pulp/api/v3/repositories/rpm/rpm/018e4331-81e1-723a-bc88-54750f778760/
- shared:/pulp/api/v3/domains/018e42cc-5411-73e1-932c-cc158854d906/
task_groups: []
poll_attempts:
total: 123
failed: 1
```
| ~Most likely the issue is that psycopg3 now uses [server-side bindings](https://www.psycopg.org/psycopg3/docs/basic/from_pg2.html#server-side-binding) instead of the client-side binding it apparently used with older versions, resulting in a limitation of SQL itself.~
Actually [Django still uses the client-side cursors by default](https://docs.djangoproject.com/en/4.2/ref/databases/#server-side-parameters-binding).
Just one more slight clarification of what was already said above: The bug triggers (in the first version) as soon as the export includes more than ~65k artifacts. This was observed for both deb and rpm content exports. In the Katello context, it is enough for all the repos in a content view combined to add up to those more than ~65k artifacts and then export that content view version.
The second version of the bug occurs if there are less than ~65k artifacts but more than ~65k content units. For deb content there tends to be roughly twice as much content as actual artifacts because for each .deb package there is also a "package release component" content. ~~I suspect the window in export size to hit this second version of the problem is pretty small for rpm exports.~~ Edit: Apparently @hstct also saw this happen with ~40k rpm content.
> I suspect the window in export size to hit this second version of the problem is pretty small for rpm exports.
I ran into the same issue with rpm with about ~40k artifacts in the export, so it's still an issue here too.
Does this issue mean we somehow need to batch the `artifact_pks` from the following query?
https://github.com/pulp/pulpcore/blob/main/pulpcore/app/tasks/export.py#L512
What happens if you tweak the `Artifacts` query with `.values_list("pk", flat=True)` | 2024-05-23T13:46:00 |
|
pulp/pulpcore | 5,418 | pulp__pulpcore-5418 | [
"5375"
] | 6cfa72cb322730d3caf7de6657dddc838a9c62a9 | diff --git a/pulpcore/app/importexport.py b/pulpcore/app/importexport.py
--- a/pulpcore/app/importexport.py
+++ b/pulpcore/app/importexport.py
@@ -9,6 +9,7 @@
from django.db.models.query import QuerySet
from pulpcore.app.apps import get_plugin_config
+from pulpcore.app.models.content import Artifact
from pulpcore.app.models.progress import ProgressReport
from pulpcore.app.models.repository import Repository
from pulpcore.app.modelresource import (
@@ -50,25 +51,21 @@ def _write_export(the_tarfile, resource, dest_dir=None):
temp_file.write("[")
def process_batch(batch):
- dataset = resource.export(batch)
+ model = resource.queryset.model
+ queryset = model.objects.filter(pk__in=batch)
+ dataset = resource.export(queryset)
# Strip "[" and "]" as we are writing the dataset in batch
temp_file.write(dataset.json.lstrip("[").rstrip("]"))
- batch = []
- needs_comma = False
- for item in resource.queryset.iterator(chunk_size=EXPORT_BATCH_SIZE):
- batch.append(item)
- if needs_comma:
- # Write "," if not last loop
- temp_file.write(", ")
- needs_comma = False
-
- if len(batch) >= EXPORT_BATCH_SIZE:
- process_batch(batch)
- batch.clear()
- needs_comma = True
+ first_loop = True
+ resource_pks = resource.queryset.values_list("pk", flat=True)
+ for offset in range(0, len(resource_pks), EXPORT_BATCH_SIZE):
+ batch = resource_pks[offset : offset + EXPORT_BATCH_SIZE]
- if batch:
+ if not first_loop:
+ temp_file.write(", ")
+ else:
+ first_loop = False
process_batch(batch)
temp_file.write("]")
@@ -102,39 +99,48 @@ def export_versions(export, version_info):
export.tarfile.addfile(info, io.BytesIO(version_json))
-def export_artifacts(export, artifacts):
+def export_artifacts(export, artifact_pks):
"""
Export a set of Artifacts, ArtifactResources, and RepositoryResources
Args:
export (django.db.models.PulpExport): export instance that's doing the export
- artifacts (django.db.models.Artifacts): List of artifacts in all repos being exported
+ artifact_pks (django.db.models.Artifacts): List of artifact_pks in all repos being exported
Raises:
ValidationError: When path is not in the ALLOWED_EXPORT_PATHS setting
"""
- data = dict(message="Exporting Artifacts", code="export.artifacts", total=len(artifacts))
+ data = dict(message="Exporting Artifacts", code="export.artifacts", total=len(artifact_pks))
with ProgressReport(**data) as pb:
pb.BATCH_INTERVAL = 5000
if settings.DEFAULT_FILE_STORAGE != "pulpcore.app.models.storage.FileSystem":
with tempfile.TemporaryDirectory(dir=".") as temp_dir:
- for artifact in pb.iter(artifacts.only("file").iterator()):
- with tempfile.NamedTemporaryFile(dir=temp_dir) as temp_file:
- # TODO: this looks like a memory usage threat
- # TODO: it's also probably horrificaly slow, going one-by-one over the net
- # TODO: probably we could skip the temp file entirely and add
- # artifact.file.read() directly to the tarfile with tarfile.addfile()
- temp_file.write(artifact.file.read())
- temp_file.flush()
- artifact.file.close()
- export.tarfile.add(temp_file.name, artifact.file.name)
+ for offset in range(0, len(artifact_pks), EXPORT_BATCH_SIZE):
+ batch = artifact_pks[offset : offset + EXPORT_BATCH_SIZE]
+ batch_qs = Artifact.objects.filter(pk__in=batch).only("file")
+
+ for artifact in pb.iter(batch_qs.iterator()):
+ with tempfile.NamedTemporaryFile(dir=temp_dir) as temp_file:
+ # TODO: this looks like a memory usage threat
+ # TODO: it's probably very slow, going one-by-one over the net
+ # TODO: probably we could skip the temp file entirely and add
+ # artifact.file.read() directly to the tarfile with
+ # tarfile.addfile()
+ temp_file.write(artifact.file.read())
+ temp_file.flush()
+ artifact.file.close()
+ export.tarfile.add(temp_file.name, artifact.file.name)
else:
- for artifact in pb.iter(artifacts.only("file").iterator()):
- export.tarfile.add(artifact.file.path, artifact.file.name)
+ for offset in range(0, len(artifact_pks), EXPORT_BATCH_SIZE):
+ batch = artifact_pks[offset : offset + EXPORT_BATCH_SIZE]
+ batch_qs = Artifact.objects.filter(pk__in=batch).only("file")
+
+ for artifact in pb.iter(batch_qs.iterator()):
+ export.tarfile.add(artifact.file.path, artifact.file.name)
resource = ArtifactResource()
- resource.queryset = artifacts
+ resource.queryset = Artifact.objects.filter(pk__in=artifact_pks)
_write_export(export.tarfile, resource)
resource = RepositoryResource()
diff --git a/pulpcore/app/tasks/export.py b/pulpcore/app/tasks/export.py
--- a/pulpcore/app/tasks/export.py
+++ b/pulpcore/app/tasks/export.py
@@ -25,7 +25,7 @@
RepositoryVersion,
Task,
)
-from pulpcore.app.models.content import Artifact, ContentArtifact
+from pulpcore.app.models.content import ContentArtifact
from pulpcore.app.serializers import PulpExportSerializer
from pulpcore.app.util import compute_file_hash, Crc32Hasher
@@ -509,7 +509,7 @@ def _do_export(pulp_exporter, tar, the_export):
# Export the top-level entities (artifacts and repositories)
# Note: we've already handled "what about incrementals" when building the 'artifacts' list
- export_artifacts(the_export, Artifact.objects.filter(pk__in=artifact_pks))
+ export_artifacts(the_export, list(artifact_pks))
del artifact_pks
# Export the repository-version data, per-version
| Export of a large number of packages will result in errors
**Version**
- Katello 4.12
- pulpcore 3.39.11
- pulp_rpm 3.23.3
- pulp_deb 3.0.2
**Describe the bug**
Exporting data (in this case Content Views in Katello) with a large number of packages will fail:
`Error: sending query and params failed: number of parameters must be between 0 and 65535`
There are actually at least two different issues both of which will throw the same error:
1. If the Content View has more than 65535 packages the export task will fail (usually pretty quickly).
2. If the Content View has fewer than 65535 packages but more than around half of that (tested with ~40k packages), the artifact export will actually succeed, but the task then fails while writing the data with the same error. This takes much longer to surface since the artifact export has already completed by the time it fails.
Debian and RPM content are both affected by this.
**To Reproduce**
Steps to reproduce the behavior:
- Create a Content View with more than 65535 packages (or with more than half that but lower than 65535)
- Export the Content View via:
`hammer content-export complete version --chunk-size-gb=40 --content-view="CV_NAME" --organization-id=1 --version=1.0`
**Expected behavior**
The export to be successful.
**Additional context**
We suspect that this has something to do with the upgrade to Django 4.2 or more specifically with the upgrade from psycopg2 to psycopg3. We found an issue that parallels our findings https://github.com/psycopg/psycopg/issues/620
We can also confirm that on older systems that still use older pulpcore versions (pre django 4.2) the export still works on these types of Content Views.
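For illustration, the limit can be reproduced outside of Pulp with psycopg 3 alone (a rough sketch; the connection string and query are placeholders, not taken from Pulp's code):
```
# Hypothetical reproduction of the bind-parameter limit with psycopg 3.
import psycopg

with psycopg.connect("dbname=pulp") as conn:  # placeholder DSN
    values = list(range(70_000))  # more than 65535 bind parameters
    placeholders = ", ".join(["%s"] * len(values))
    # Fails with "number of parameters must be between 0 and 65535" because
    # the PostgreSQL wire protocol counts bind parameters in a 16-bit field.
    conn.execute(f"SELECT 1 WHERE 1 IN ({placeholders})", values)
```
With psycopg2's client-side interpolation the same statement would have been sent as one literal string, which is consistent with older pulpcore versions being unaffected.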
For more context here are the pulp tasks:
1. For Content Views with more than 65535 packages
```
---
pulp_tasks:
- pulp_href: "/pulp/api/v3/tasks/018f767f-1b93-72cb-aa23-32c9c2650ef9/"
pulp_created: '2024-05-14T09:46:32.725+00:00'
state: failed
name: pulpcore.app.tasks.export.pulp_export
logging_cid: f34cbfee-3afb-4f6a-aff6-83b83e586f0a
created_by: "/pulp/api/v3/users/1/"
started_at: '2024-05-14T09:46:33.085+00:00'
finished_at: '2024-05-14T09:46:55.152+00:00'
error:
traceback: |2
File "/usr/lib/python3.11/site-packages/pulpcore/tasking/tasks.py", line 61, in _execute_task
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/pulpcore/app/tasks/export.py", line 417, in pulp_export
_do_export(pulp_exporter, tar, the_export)
File "/usr/lib/python3.11/site-packages/pulpcore/app/tasks/export.py", line 512, in _do_export
export_artifacts(the_export, Artifact.objects.filter(pk__in=artifact_pks))
File "/usr/lib/python3.11/site-packages/pulpcore/app/importexport.py", line 133, in export_artifacts
for artifact in pb.iter(artifacts.only("file").iterator()):
File "/usr/lib/python3.11/site-packages/pulpcore/app/models/progress.py", line 296, in iter
for x in iter:
File "/usr/lib/python3.11/site-packages/django/db/models/query.py", line 516, in _iterator
yield from iterable
File "/usr/lib/python3.11/site-packages/django/db/models/query.py", line 91, in __iter__
results = compiler.execute_sql(
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/django/db/models/sql/compiler.py", line 1562, in execute_sql
cursor.execute(sql, params)
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 84, in _execute
with self.db.wrap_database_errors:
File "/usr/lib/python3.11/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/psycopg/server_cursor.py", line 294, in execute
raise ex.with_traceback(None)
description: 'sending query and params failed: number of parameters must be between
0 and 65535'
worker: "/pulp/api/v3/workers/018f73c9-a49a-7797-a12a-836eb2e6a13f/"
child_tasks: []
progress_reports:
- message: Exporting Artifacts
code: export.artifacts
state: failed
total: 66040
done: 0
created_resources:
- "/pulp/api/v3/exporters/core/pulp/018f767f-19d8-7c16-9aa4-be0917d2c868/exports/018f767f-1f26-75cc-921f-b43b68856d32/"
reserved_resources_record:
- "/pulp/api/v3/exporters/core/pulp/018f767f-19d8-7c16-9aa4-be0917d2c868/"
- shared:/pulp/api/v3/repositories/rpm/rpm/018e4331-54d9-7791-b6cf-df2913e135b7/
- shared:/pulp/api/v3/repositories/rpm/rpm/018e4331-3e03-756e-b478-d7118b396463/
- shared:/pulp/api/v3/repositories/rpm/rpm/018e4331-81e1-723a-bc88-54750f778760/
- shared:/pulp/api/v3/repositories/rpm/rpm/018e4331-6a8a-77d2-9bcc-7fea35a5e6fb/
- shared:/pulp/api/v3/domains/018e42cc-5411-73e1-932c-cc158854d906/
task_groups: []
poll_attempts:
total: 15
failed: 1
```
2. For Content Views with around 40k packages:
```
---
pulp_tasks:
- pulp_href: "/pulp/api/v3/tasks/018f768b-022a-764f-a0e1-7ee4e6256654/"
pulp_created: '2024-05-14T09:59:32.650+00:00'
state: failed
name: pulpcore.app.tasks.export.pulp_export
logging_cid: e0774285-1b15-4539-a9dc-30f2eb8f1764
created_by: "/pulp/api/v3/users/1/"
started_at: '2024-05-14T09:59:32.861+00:00'
finished_at: '2024-05-14T10:27:41.082+00:00'
error:
traceback: |2
File "/usr/lib/python3.11/site-packages/pulpcore/tasking/tasks.py", line 61, in _execute_task
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/pulpcore/app/tasks/export.py", line 417, in pulp_export
_do_export(pulp_exporter, tar, the_export)
File "/usr/lib/python3.11/site-packages/pulpcore/app/tasks/export.py", line 517, in _do_export
export_content(the_export, version)
File "/usr/lib/python3.11/site-packages/pulpcore/app/importexport.py", line 185, in export_content
_write_export(export.tarfile, resource, dest_dir)
File "/usr/lib/python3.11/site-packages/pulpcore/app/importexport.py", line 59, in _write_export
for item in resource.queryset.iterator(chunk_size=EXPORT_BATCH_SIZE):
File "/usr/lib/python3.11/site-packages/django/db/models/query.py", line 516, in _iterator
yield from iterable
File "/usr/lib/python3.11/site-packages/django/db/models/query.py", line 91, in __iter__
results = compiler.execute_sql(
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/django/db/models/sql/compiler.py", line 1562, in execute_sql
cursor.execute(sql, params)
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 80, in _execute_with_wrappers
return executor(sql, params, many, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 84, in _execute
with self.db.wrap_database_errors:
File "/usr/lib/python3.11/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/lib/python3.11/site-packages/django/db/backends/utils.py", line 89, in _execute
return self.cursor.execute(sql, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/site-packages/psycopg/server_cursor.py", line 294, in execute
raise ex.with_traceback(None)
description: 'sending query and params failed: number of parameters must be between
0 and 65535'
worker: "/pulp/api/v3/workers/018f73c9-accc-7c2e-ba0d-0c381a66be93/"
child_tasks: []
progress_reports:
- message: Exporting Artifacts
code: export.artifacts
state: completed
total: 48052
done: 48052
created_resources:
- "/pulp/api/v3/exporters/core/pulp/018f768b-0112-7365-bfd8-e30a0cd8eeb6/exports/018f768b-0448-7bcd-abd8-052060d1d70d/"
reserved_resources_record:
- "/pulp/api/v3/exporters/core/pulp/018f768b-0112-7365-bfd8-e30a0cd8eeb6/"
- shared:/pulp/api/v3/repositories/rpm/rpm/018e4331-81e1-723a-bc88-54750f778760/
- shared:/pulp/api/v3/domains/018e42cc-5411-73e1-932c-cc158854d906/
task_groups: []
poll_attempts:
total: 123
failed: 1
```
| ~Most likely the issue is that psycopg3 now uses [server-side bindings](https://www.psycopg.org/psycopg3/docs/basic/from_pg2.html#server-side-binding) instead of the client-side binding it apparently used with older versions, resulting in a limitation for the SQL itself~
Actually [Django still uses the client-side cursors by default](https://docs.djangoproject.com/en/4.2/ref/databases/#server-side-parameters-binding).
Just one more slight clarification of what was already said above: The bug triggers (in the first version) as soon as the export includes more than ~65k artifacts. This was observed for both deb and rpm content exports. In the Katello context, it is enough for all the repos in a content view combined to add up to those more than ~65k artifacts and then export that content view version.
The second version of the bug occurs if there are less than ~65k artifacts but more than ~65k content units. For deb content there tends to be roughly twice as much content as actual artifacts because for each .deb package there is also a "package release component" content. ~~I suspect the window in export size to hit this second version of the problem is pretty small for rpm exports.~~ Edit: Apparently @hstct also saw this happen with ~40k rpm content.
> I suspect the window in export size to hit this second version of the problem is pretty small for rpm exports.
I ran into the same issue with rpm with about ~40k artifacts in the export, so it's still an issue here too
Does this issue mean we somehow need to batch the `artifact_pks` from the following query?
https://github.com/pulp/pulpcore/blob/main/pulpcore/app/tasks/export.py#L512
What happens if you tweak the `Artifacts` query with `.values_list("pk", flat=True)`? | 2024-05-23T13:46:13 |
|
pulp/pulpcore | 5,463 | pulp__pulpcore-5463 | [
"5462"
] | c31a39c186fd734e9e0409321735d7d6509a12ee | diff --git a/pulpcore/app/checks.py b/pulpcore/app/checks.py
--- a/pulpcore/app/checks.py
+++ b/pulpcore/app/checks.py
@@ -1,7 +1,22 @@
from pathlib import Path
from django.conf import settings
-from django.core.checks import Warning as CheckWarning, register
+from django.core.checks import Error as CheckError, Warning as CheckWarning, register
+
+
+@register(deploy=True)
+def content_origin_check(app_configs, **kwargs):
+ messages = []
+ if not getattr(settings, "CONTENT_ORIGIN", None):
+ messages.append(
+ CheckError(
+ "CONTENT_ORIGIN is a required setting but it was not configured. This may be "
+ "caused by invalid read permissions of the settings file. Note that "
+ "CONTENT_ORIGIN is set by the installation automatically.",
+ id="pulpcore.E001",
+ )
+ )
+ return messages
@register(deploy=True)
diff --git a/pulpcore/app/management/commands/openapi.py b/pulpcore/app/management/commands/openapi.py
new file mode 100644
--- /dev/null
+++ b/pulpcore/app/management/commands/openapi.py
@@ -0,0 +1,99 @@
+from textwrap import dedent
+
+from django.core.management.base import BaseCommand, CommandError
+from django.http import HttpRequest
+from django.utils import translation
+from django.utils.module_loading import import_string
+
+from rest_framework.request import Request
+
+from drf_spectacular.renderers import OpenApiJsonRenderer, OpenApiYamlRenderer
+from drf_spectacular.settings import patched_settings
+from drf_spectacular.validation import validate_schema
+
+from pulpcore.openapi import PulpSchemaGenerator
+
+
+class SchemaValidationError(CommandError):
+ pass
+
+
+class Command(BaseCommand):
+ help = dedent(
+ """
+ Generate OpenAPI3 schema for the Pulp API.
+
+ The type of schema generated can be modified by providing some options.
+ --component <str> comma separated list of app_labels
+ --bindings flag, to produce operation ids used for bindings generation
+ --pk-path flag, whether paths are presented with PK or href variable
+ """
+ )
+
+ requires_system_checks = []
+
+ def add_arguments(self, parser):
+ parser.add_argument("--component", dest="component", default=None, type=str)
+ parser.add_argument("--bindings", dest="bindings", action="store_true")
+ parser.add_argument("--pk-path", dest="pk_path", action="store_true")
+ parser.add_argument(
+ "--format",
+ dest="format",
+ choices=["openapi", "openapi-json"],
+ default="openapi-json",
+ type=str,
+ )
+ parser.add_argument("--urlconf", dest="urlconf", default=None, type=str)
+ parser.add_argument("--file", dest="file", default=None, type=str)
+ parser.add_argument("--validate", dest="validate", default=False, action="store_true")
+ parser.add_argument("--lang", dest="lang", default=None, type=str)
+ parser.add_argument("--custom-settings", dest="custom_settings", default=None, type=str)
+
+ def handle(self, *args, **options):
+ generator = PulpSchemaGenerator(
+ urlconf=options["urlconf"],
+ )
+
+ if options["custom_settings"]:
+ custom_settings = import_string(options["custom_settings"])
+ else:
+ custom_settings = None
+
+ with patched_settings(custom_settings):
+ request = Request(HttpRequest())
+ request.META["SERVER_NAME"] = "localhost"
+ request.META["SERVER_PORT"] = "24817"
+ if options["component"]:
+ request.query_params["component"] = options["component"]
+ if options["bindings"]:
+ request.query_params["bindings"] = 1
+ if options["pk_path"]:
+ request.query_params["pk_path"] = 1
+
+ if options["lang"]:
+ with translation.override(options["lang"]):
+ schema = generator.get_schema(request=request, public=True)
+ else:
+ schema = generator.get_schema(request=request, public=True)
+
+ if options["validate"]:
+ try:
+ validate_schema(schema)
+ except Exception as e:
+ raise SchemaValidationError(e)
+
+ renderer = self.get_renderer(options["format"])
+ output = renderer.render(schema, renderer_context={})
+
+ if options["file"]:
+ with open(options["file"], "wb") as f:
+ f.write(output)
+ else:
+ self.stdout.write(output.decode())
+
+ def get_renderer(self, format):
+ renderer_cls = {
+ "openapi": OpenApiYamlRenderer,
+ "openapi-json": OpenApiJsonRenderer,
+ }[format]
+ return renderer_cls()
diff --git a/pulpcore/app/settings.py b/pulpcore/app/settings.py
--- a/pulpcore/app/settings.py
+++ b/pulpcore/app/settings.py
@@ -332,17 +332,6 @@
from dynaconf import DjangoDynaconf, Validator # noqa
# Validators
-content_origin_validator = Validator(
- "CONTENT_ORIGIN",
- must_exist=True,
- messages={
- "must_exist_true": (
- "CONTENT_ORIGIN is a required setting but it was not configured. This may be caused "
- "by invalid read permissions of the settings file. Note that CONTENT_ORIGIN is set by "
- "the installation automatically."
- )
- },
-)
storage_validator = (
Validator("REDIRECT_TO_OBJECT_STORAGE", eq=False)
| Validator("DEFAULT_FILE_STORAGE", eq="pulpcore.app.models.storage.FileSystem")
@@ -441,7 +430,6 @@
validators=[
api_root_validator,
cache_validator,
- content_origin_validator,
sha256_validator,
storage_validator,
unknown_algs_validator,
@@ -455,9 +443,8 @@
if not (
- Path(sys.argv[0]).name == "pytest"
- or Path(sys.argv[0]).name == "sphinx-build"
- or (len(sys.argv) >= 2 and sys.argv[1] == "collectstatic")
+ Path(sys.argv[0]).name in ["pytest", "sphinx-build"]
+ or (len(sys.argv) >= 2 and sys.argv[1] in ["collectstatic", "openapi"])
):
try:
with open(DB_ENCRYPTION_KEY, "rb") as key_file:
@@ -474,7 +461,12 @@
ALLOWED_CONTENT_CHECKSUMS
)
-_SKIPPED_COMMANDS_FOR_CONTENT_CHECKS = ["handle-artifact-checksums", "migrate", "collectstatic"]
+_SKIPPED_COMMANDS_FOR_CONTENT_CHECKS = [
+ "handle-artifact-checksums",
+ "migrate",
+ "collectstatic",
+ "openapi",
+]
if not (len(sys.argv) >= 2 and sys.argv[1] in _SKIPPED_COMMANDS_FOR_CONTENT_CHECKS):
try:
| diff --git a/pulpcore/tests/unit/test_settings.py b/pulpcore/tests/unit/test_settings.py
--- a/pulpcore/tests/unit/test_settings.py
+++ b/pulpcore/tests/unit/test_settings.py
@@ -2,14 +2,6 @@
from dynaconf.validator import ValidationError
-def test_content_origin(settings):
- """Test validation error is raised when CONTENT_ORIGIN is missing."""
- # force needs to be True in order to remove CONTENT_ORIGIN since keep makes it a default
- settings.unset("CONTENT_ORIGIN", force=True)
- with pytest.raises(ValidationError):
- settings.validators.validate()
-
-
def test_cache_enabled(settings):
"""Test that when CACHE_ENABLED is set REDIS_URL or REDIS_HOST & REDIS_PORT."""
settings.set("CACHE_ENABLED", True)
| Provide a management command to generate the api.json files
This command should allow generating all relevant flavors for API bindings generation.
Parameters the user must be able to specify:
* components
* pk_path
* bindings
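For illustration, a possible invocation once such a command exists (hypothetical values, shown via Django's management API):
```
# Hypothetical usage of the proposed "openapi" management command.
from django.core.management import call_command

# Schema restricted to a subset of components, written to a file:
call_command("openapi", component="core,file", file="api.json")

# Bindings-flavored schema with pk-based paths, rendered as YAML:
call_command("openapi", bindings=True, pk_path=True, format="openapi", file="api.yaml")
```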
| 2024-06-10T11:48:22 |
|
pulp/pulpcore | 5,465 | pulp__pulpcore-5465 | [
"5464"
] | 9e9198ca115d14229c85a4db4da1d811846e7462 | diff --git a/pulpcore/app/replica.py b/pulpcore/app/replica.py
--- a/pulpcore/app/replica.py
+++ b/pulpcore/app/replica.py
@@ -94,7 +94,9 @@ def remote_extra_fields(self, upstream_distribution):
return {}
def create_or_update_remote(self, upstream_distribution):
- if not upstream_distribution["repository"] and not upstream_distribution["publication"]:
+ if not upstream_distribution.get("repository") and not upstream_distribution.get(
+ "publication"
+ ):
return None
url = self.url(upstream_distribution)
remote_fields_dict = {"url": url}
| Replication assumes every plugin supports Publications
The ```Replicator``` class, which is being subclassed inside plugins to support the replication feature, assumes that every plugin supports publications as it tries to access ```upstream_distribution["publication"]``` inside one of its methods (for some plugins, the dictionary simply doesn't contain the "publication" key so an exception gets raised). This forces certain subclasses of ```Replicator``` to create workarounds or rewrite the given method.
I propose making the method more general, removing such assumptions.
Relevant code: https://github.com/pulp/pulpcore/blob/c31a39c186fd734e9e0409321735d7d6509a12ee/pulpcore/app/replica.py#L97C9-L97C96
| 2024-06-10T15:43:00 |
||
pulp/pulpcore | 5,472 | pulp__pulpcore-5472 | [
"5462"
] | 1bb4c6618833f02edd57fd724f2783cec7517bd0 | diff --git a/pulpcore/app/checks.py b/pulpcore/app/checks.py
--- a/pulpcore/app/checks.py
+++ b/pulpcore/app/checks.py
@@ -1,7 +1,22 @@
from pathlib import Path
from django.conf import settings
-from django.core.checks import Warning as CheckWarning, register
+from django.core.checks import Error as CheckError, Warning as CheckWarning, register
+
+
+@register(deploy=True)
+def content_origin_check(app_configs, **kwargs):
+ messages = []
+ if not getattr(settings, "CONTENT_ORIGIN", None):
+ messages.append(
+ CheckError(
+ "CONTENT_ORIGIN is a required setting but it was not configured. This may be "
+ "caused by invalid read permissions of the settings file. Note that "
+ "CONTENT_ORIGIN is set by the installation automatically.",
+ id="pulpcore.E001",
+ )
+ )
+ return messages
@register(deploy=True)
diff --git a/pulpcore/app/management/commands/openapi.py b/pulpcore/app/management/commands/openapi.py
new file mode 100644
--- /dev/null
+++ b/pulpcore/app/management/commands/openapi.py
@@ -0,0 +1,99 @@
+from textwrap import dedent
+
+from django.core.management.base import BaseCommand, CommandError
+from django.http import HttpRequest
+from django.utils import translation
+from django.utils.module_loading import import_string
+
+from rest_framework.request import Request
+
+from drf_spectacular.renderers import OpenApiJsonRenderer, OpenApiYamlRenderer
+from drf_spectacular.settings import patched_settings
+from drf_spectacular.validation import validate_schema
+
+from pulpcore.openapi import PulpSchemaGenerator
+
+
+class SchemaValidationError(CommandError):
+ pass
+
+
+class Command(BaseCommand):
+ help = dedent(
+ """
+ Generate OpenAPI3 schema for the Pulp API.
+
+ The type of schema generated can be modified by providing some options.
+ --component <str> comma separated list of app_labels
+ --bindings flag, to produce operation ids used for bindings generation
+ --pk-path flag, whether paths are presented with PK or href variable
+ """
+ )
+
+ requires_system_checks = []
+
+ def add_arguments(self, parser):
+ parser.add_argument("--component", dest="component", default=None, type=str)
+ parser.add_argument("--bindings", dest="bindings", action="store_true")
+ parser.add_argument("--pk-path", dest="pk_path", action="store_true")
+ parser.add_argument(
+ "--format",
+ dest="format",
+ choices=["openapi", "openapi-json"],
+ default="openapi-json",
+ type=str,
+ )
+ parser.add_argument("--urlconf", dest="urlconf", default=None, type=str)
+ parser.add_argument("--file", dest="file", default=None, type=str)
+ parser.add_argument("--validate", dest="validate", default=False, action="store_true")
+ parser.add_argument("--lang", dest="lang", default=None, type=str)
+ parser.add_argument("--custom-settings", dest="custom_settings", default=None, type=str)
+
+ def handle(self, *args, **options):
+ generator = PulpSchemaGenerator(
+ urlconf=options["urlconf"],
+ )
+
+ if options["custom_settings"]:
+ custom_settings = import_string(options["custom_settings"])
+ else:
+ custom_settings = None
+
+ with patched_settings(custom_settings):
+ request = Request(HttpRequest())
+ request.META["SERVER_NAME"] = "localhost"
+ request.META["SERVER_PORT"] = "24817"
+ if options["component"]:
+ request.query_params["component"] = options["component"]
+ if options["bindings"]:
+ request.query_params["bindings"] = 1
+ if options["pk_path"]:
+ request.query_params["pk_path"] = 1
+
+ if options["lang"]:
+ with translation.override(options["lang"]):
+ schema = generator.get_schema(request=request, public=True)
+ else:
+ schema = generator.get_schema(request=request, public=True)
+
+ if options["validate"]:
+ try:
+ validate_schema(schema)
+ except Exception as e:
+ raise SchemaValidationError(e)
+
+ renderer = self.get_renderer(options["format"])
+ output = renderer.render(schema, renderer_context={})
+
+ if options["file"]:
+ with open(options["file"], "wb") as f:
+ f.write(output)
+ else:
+ self.stdout.write(output.decode())
+
+ def get_renderer(self, format):
+ renderer_cls = {
+ "openapi": OpenApiYamlRenderer,
+ "openapi-json": OpenApiJsonRenderer,
+ }[format]
+ return renderer_cls()
diff --git a/pulpcore/app/settings.py b/pulpcore/app/settings.py
--- a/pulpcore/app/settings.py
+++ b/pulpcore/app/settings.py
@@ -332,17 +332,6 @@
from dynaconf import DjangoDynaconf, Validator # noqa
# Validators
-content_origin_validator = Validator(
- "CONTENT_ORIGIN",
- must_exist=True,
- messages={
- "must_exist_true": (
- "CONTENT_ORIGIN is a required setting but it was not configured. This may be caused "
- "by invalid read permissions of the settings file. Note that CONTENT_ORIGIN is set by "
- "the installation automatically."
- )
- },
-)
storage_validator = (
Validator("REDIRECT_TO_OBJECT_STORAGE", eq=False)
| Validator("DEFAULT_FILE_STORAGE", eq="pulpcore.app.models.storage.FileSystem")
@@ -441,7 +430,6 @@
validators=[
api_root_validator,
cache_validator,
- content_origin_validator,
sha256_validator,
storage_validator,
unknown_algs_validator,
@@ -455,9 +443,8 @@
if not (
- Path(sys.argv[0]).name == "pytest"
- or Path(sys.argv[0]).name == "sphinx-build"
- or (len(sys.argv) >= 2 and sys.argv[1] == "collectstatic")
+ Path(sys.argv[0]).name in ["pytest", "sphinx-build"]
+ or (len(sys.argv) >= 2 and sys.argv[1] in ["collectstatic", "openapi"])
):
try:
with open(DB_ENCRYPTION_KEY, "rb") as key_file:
@@ -474,7 +461,12 @@
ALLOWED_CONTENT_CHECKSUMS
)
-_SKIPPED_COMMANDS_FOR_CONTENT_CHECKS = ["handle-artifact-checksums", "migrate", "collectstatic"]
+_SKIPPED_COMMANDS_FOR_CONTENT_CHECKS = [
+ "handle-artifact-checksums",
+ "migrate",
+ "collectstatic",
+ "openapi",
+]
if not (len(sys.argv) >= 2 and sys.argv[1] in _SKIPPED_COMMANDS_FOR_CONTENT_CHECKS):
try:
| diff --git a/pulpcore/tests/unit/test_settings.py b/pulpcore/tests/unit/test_settings.py
--- a/pulpcore/tests/unit/test_settings.py
+++ b/pulpcore/tests/unit/test_settings.py
@@ -2,14 +2,6 @@
from dynaconf.validator import ValidationError
-def test_content_origin(settings):
- """Test validation error is raised when CONTENT_ORIGIN is missing."""
- # force needs to be True in order to remove CONTENT_ORIGIN since keep makes it a default
- settings.unset("CONTENT_ORIGIN", force=True)
- with pytest.raises(ValidationError):
- settings.validators.validate()
-
-
def test_cache_enabled(settings):
"""Test that when CACHE_ENABLED is set REDIS_URL or REDIS_HOST & REDIS_PORT."""
settings.set("CACHE_ENABLED", True)
| Provide a management command to generate the api.json files
This command should allow generating all relevant flavors for API bindings generation.
Parameters the user must be able to specify:
* components
* pk_path
* bindings
| 2024-06-13T08:26:22 |
|
pulp/pulpcore | 5,473 | pulp__pulpcore-5473 | [
"5462"
] | 1d305f1b8413ab18d4b29a0ac1383cc8d3a00b8d | diff --git a/pulpcore/app/checks.py b/pulpcore/app/checks.py
--- a/pulpcore/app/checks.py
+++ b/pulpcore/app/checks.py
@@ -1,7 +1,22 @@
from pathlib import Path
from django.conf import settings
-from django.core.checks import Warning as CheckWarning, register
+from django.core.checks import Error as CheckError, Warning as CheckWarning, register
+
+
+@register(deploy=True)
+def content_origin_check(app_configs, **kwargs):
+ messages = []
+ if not getattr(settings, "CONTENT_ORIGIN", None):
+ messages.append(
+ CheckError(
+ "CONTENT_ORIGIN is a required setting but it was not configured. This may be "
+ "caused by invalid read permissions of the settings file. Note that "
+ "CONTENT_ORIGIN is set by the installation automatically.",
+ id="pulpcore.E001",
+ )
+ )
+ return messages
@register(deploy=True)
diff --git a/pulpcore/app/management/commands/openapi.py b/pulpcore/app/management/commands/openapi.py
new file mode 100644
--- /dev/null
+++ b/pulpcore/app/management/commands/openapi.py
@@ -0,0 +1,99 @@
+from textwrap import dedent
+
+from django.core.management.base import BaseCommand, CommandError
+from django.http import HttpRequest
+from django.utils import translation
+from django.utils.module_loading import import_string
+
+from rest_framework.request import Request
+
+from drf_spectacular.renderers import OpenApiJsonRenderer, OpenApiYamlRenderer
+from drf_spectacular.settings import patched_settings
+from drf_spectacular.validation import validate_schema
+
+from pulpcore.openapi import PulpSchemaGenerator
+
+
+class SchemaValidationError(CommandError):
+ pass
+
+
+class Command(BaseCommand):
+ help = dedent(
+ """
+ Generate OpenAPI3 schema for the Pulp API.
+
+ The type of schema generated can be modified by providing some options.
+ --component <str> comma separated list of app_labels
+ --bindings flag, to produce operation ids used for bindings generation
+ --pk-path flag, whether paths are presented with PK or href variable
+ """
+ )
+
+ requires_system_checks = []
+
+ def add_arguments(self, parser):
+ parser.add_argument("--component", dest="component", default=None, type=str)
+ parser.add_argument("--bindings", dest="bindings", action="store_true")
+ parser.add_argument("--pk-path", dest="pk_path", action="store_true")
+ parser.add_argument(
+ "--format",
+ dest="format",
+ choices=["openapi", "openapi-json"],
+ default="openapi-json",
+ type=str,
+ )
+ parser.add_argument("--urlconf", dest="urlconf", default=None, type=str)
+ parser.add_argument("--file", dest="file", default=None, type=str)
+ parser.add_argument("--validate", dest="validate", default=False, action="store_true")
+ parser.add_argument("--lang", dest="lang", default=None, type=str)
+ parser.add_argument("--custom-settings", dest="custom_settings", default=None, type=str)
+
+ def handle(self, *args, **options):
+ generator = PulpSchemaGenerator(
+ urlconf=options["urlconf"],
+ )
+
+ if options["custom_settings"]:
+ custom_settings = import_string(options["custom_settings"])
+ else:
+ custom_settings = None
+
+ with patched_settings(custom_settings):
+ request = Request(HttpRequest())
+ request.META["SERVER_NAME"] = "localhost"
+ request.META["SERVER_PORT"] = "24817"
+ if options["component"]:
+ request.query_params["component"] = options["component"]
+ if options["bindings"]:
+ request.query_params["bindings"] = 1
+ if options["pk_path"]:
+ request.query_params["pk_path"] = 1
+
+ if options["lang"]:
+ with translation.override(options["lang"]):
+ schema = generator.get_schema(request=request, public=True)
+ else:
+ schema = generator.get_schema(request=request, public=True)
+
+ if options["validate"]:
+ try:
+ validate_schema(schema)
+ except Exception as e:
+ raise SchemaValidationError(e)
+
+ renderer = self.get_renderer(options["format"])
+ output = renderer.render(schema, renderer_context={})
+
+ if options["file"]:
+ with open(options["file"], "wb") as f:
+ f.write(output)
+ else:
+ self.stdout.write(output.decode())
+
+ def get_renderer(self, format):
+ renderer_cls = {
+ "openapi": OpenApiYamlRenderer,
+ "openapi-json": OpenApiJsonRenderer,
+ }[format]
+ return renderer_cls()
diff --git a/pulpcore/app/settings.py b/pulpcore/app/settings.py
--- a/pulpcore/app/settings.py
+++ b/pulpcore/app/settings.py
@@ -317,17 +317,6 @@
from dynaconf import DjangoDynaconf, Validator # noqa
# Validators
-content_origin_validator = Validator(
- "CONTENT_ORIGIN",
- must_exist=True,
- messages={
- "must_exist_true": (
- "CONTENT_ORIGIN is a required setting but it was not configured. This may be caused "
- "by invalid read permissions of the settings file. Note that CONTENT_ORIGIN is set by "
- "the installation automatically."
- )
- },
-)
storage_validator = (
Validator("REDIRECT_TO_OBJECT_STORAGE", eq=False)
| Validator("DEFAULT_FILE_STORAGE", eq="pulpcore.app.models.storage.FileSystem")
@@ -392,7 +381,6 @@
validators=[
api_root_validator,
cache_validator,
- content_origin_validator,
sha256_validator,
storage_validator,
unknown_algs_validator,
@@ -404,9 +392,8 @@
if not (
- Path(sys.argv[0]).name == "pytest"
- or Path(sys.argv[0]).name == "sphinx-build"
- or (len(sys.argv) >= 2 and sys.argv[1] == "collectstatic")
+ Path(sys.argv[0]).name in ["pytest", "sphinx-build"]
+ or (len(sys.argv) >= 2 and sys.argv[1] in ["collectstatic", "openapi"])
):
try:
with open(DB_ENCRYPTION_KEY, "rb") as key_file:
@@ -423,7 +410,12 @@
ALLOWED_CONTENT_CHECKSUMS
)
-_SKIPPED_COMMANDS_FOR_CONTENT_CHECKS = ["handle-artifact-checksums", "migrate", "collectstatic"]
+_SKIPPED_COMMANDS_FOR_CONTENT_CHECKS = [
+ "handle-artifact-checksums",
+ "migrate",
+ "collectstatic",
+ "openapi",
+]
if not (len(sys.argv) >= 2 and sys.argv[1] in _SKIPPED_COMMANDS_FOR_CONTENT_CHECKS):
try:
| diff --git a/pulpcore/tests/unit/test_settings.py b/pulpcore/tests/unit/test_settings.py
--- a/pulpcore/tests/unit/test_settings.py
+++ b/pulpcore/tests/unit/test_settings.py
@@ -2,14 +2,6 @@
from dynaconf.validator import ValidationError
-def test_content_origin(settings):
- """Test validation error is raised when CONTENT_ORIGIN is missing."""
- # force needs to be True in order to remove CONTENT_ORIGIN since keep makes it a default
- settings.unset("CONTENT_ORIGIN", force=True)
- with pytest.raises(ValidationError):
- settings.validators.validate()
-
-
def test_cache_enabled(settings):
"""Test that when CACHE_ENABLED is set REDIS_URL or REDIS_HOST & REDIS_PORT."""
settings.set("CACHE_ENABLED", True)
| Provide a management command to generate the api.json files
This command should allow generating all relevant flavors for API bindings generation.
Parameters the user must be able to specify:
* components
* pk_path
* bindings
| 2024-06-13T09:20:52 |
|
pulp/pulpcore | 5,474 | pulp__pulpcore-5474 | [
"5462"
] | 8d4bcb3354ae8c1573ccc58ed2cbb9fc8119b631 | diff --git a/pulpcore/app/checks.py b/pulpcore/app/checks.py
--- a/pulpcore/app/checks.py
+++ b/pulpcore/app/checks.py
@@ -1,7 +1,22 @@
from pathlib import Path
from django.conf import settings
-from django.core.checks import Warning as CheckWarning, register
+from django.core.checks import Error as CheckError, Warning as CheckWarning, register
+
+
+@register(deploy=True)
+def content_origin_check(app_configs, **kwargs):
+ messages = []
+ if not getattr(settings, "CONTENT_ORIGIN", None):
+ messages.append(
+ CheckError(
+ "CONTENT_ORIGIN is a required setting but it was not configured. This may be "
+ "caused by invalid read permissions of the settings file. Note that "
+ "CONTENT_ORIGIN is set by the installation automatically.",
+ id="pulpcore.E001",
+ )
+ )
+ return messages
@register(deploy=True)
diff --git a/pulpcore/app/management/commands/openapi.py b/pulpcore/app/management/commands/openapi.py
new file mode 100644
--- /dev/null
+++ b/pulpcore/app/management/commands/openapi.py
@@ -0,0 +1,99 @@
+from textwrap import dedent
+
+from django.core.management.base import BaseCommand, CommandError
+from django.http import HttpRequest
+from django.utils import translation
+from django.utils.module_loading import import_string
+
+from rest_framework.request import Request
+
+from drf_spectacular.renderers import OpenApiJsonRenderer, OpenApiYamlRenderer
+from drf_spectacular.settings import patched_settings
+from drf_spectacular.validation import validate_schema
+
+from pulpcore.openapi import PulpSchemaGenerator
+
+
+class SchemaValidationError(CommandError):
+ pass
+
+
+class Command(BaseCommand):
+ help = dedent(
+ """
+ Generate OpenAPI3 schema for the Pulp API.
+
+ The type of schema generated can be modified by providing some options.
+ --component <str> comma separated list of app_labels
+ --bindings flag, to produce operation ids used for bindings generation
+ --pk-path flag, whether paths are presented with PK or href variable
+ """
+ )
+
+ requires_system_checks = []
+
+ def add_arguments(self, parser):
+ parser.add_argument("--component", dest="component", default=None, type=str)
+ parser.add_argument("--bindings", dest="bindings", action="store_true")
+ parser.add_argument("--pk-path", dest="pk_path", action="store_true")
+ parser.add_argument(
+ "--format",
+ dest="format",
+ choices=["openapi", "openapi-json"],
+ default="openapi-json",
+ type=str,
+ )
+ parser.add_argument("--urlconf", dest="urlconf", default=None, type=str)
+ parser.add_argument("--file", dest="file", default=None, type=str)
+ parser.add_argument("--validate", dest="validate", default=False, action="store_true")
+ parser.add_argument("--lang", dest="lang", default=None, type=str)
+ parser.add_argument("--custom-settings", dest="custom_settings", default=None, type=str)
+
+ def handle(self, *args, **options):
+ generator = PulpSchemaGenerator(
+ urlconf=options["urlconf"],
+ )
+
+ if options["custom_settings"]:
+ custom_settings = import_string(options["custom_settings"])
+ else:
+ custom_settings = None
+
+ with patched_settings(custom_settings):
+ request = Request(HttpRequest())
+ request.META["SERVER_NAME"] = "localhost"
+ request.META["SERVER_PORT"] = "24817"
+ if options["component"]:
+ request.query_params["component"] = options["component"]
+ if options["bindings"]:
+ request.query_params["bindings"] = 1
+ if options["pk_path"]:
+ request.query_params["pk_path"] = 1
+
+ if options["lang"]:
+ with translation.override(options["lang"]):
+ schema = generator.get_schema(request=request, public=True)
+ else:
+ schema = generator.get_schema(request=request, public=True)
+
+ if options["validate"]:
+ try:
+ validate_schema(schema)
+ except Exception as e:
+ raise SchemaValidationError(e)
+
+ renderer = self.get_renderer(options["format"])
+ output = renderer.render(schema, renderer_context={})
+
+ if options["file"]:
+ with open(options["file"], "wb") as f:
+ f.write(output)
+ else:
+ self.stdout.write(output.decode())
+
+ def get_renderer(self, format):
+ renderer_cls = {
+ "openapi": OpenApiYamlRenderer,
+ "openapi-json": OpenApiJsonRenderer,
+ }[format]
+ return renderer_cls()
diff --git a/pulpcore/app/settings.py b/pulpcore/app/settings.py
--- a/pulpcore/app/settings.py
+++ b/pulpcore/app/settings.py
@@ -320,17 +320,6 @@
from dynaconf import DjangoDynaconf, Validator # noqa
# Validators
-content_origin_validator = Validator(
- "CONTENT_ORIGIN",
- must_exist=True,
- messages={
- "must_exist_true": (
- "CONTENT_ORIGIN is a required setting but it was not configured. This may be caused "
- "by invalid read permissions of the settings file. Note that CONTENT_ORIGIN is set by "
- "the installation automatically."
- )
- },
-)
storage_validator = (
Validator("REDIRECT_TO_OBJECT_STORAGE", eq=False)
| Validator("DEFAULT_FILE_STORAGE", eq="pulpcore.app.models.storage.FileSystem")
@@ -419,7 +408,6 @@
validators=[
api_root_validator,
cache_validator,
- content_origin_validator,
sha256_validator,
storage_validator,
unknown_algs_validator,
@@ -432,9 +420,8 @@
if not (
- Path(sys.argv[0]).name == "pytest"
- or Path(sys.argv[0]).name == "sphinx-build"
- or (len(sys.argv) >= 2 and sys.argv[1] == "collectstatic")
+ Path(sys.argv[0]).name in ["pytest", "sphinx-build"]
+ or (len(sys.argv) >= 2 and sys.argv[1] in ["collectstatic", "openapi"])
):
try:
with open(DB_ENCRYPTION_KEY, "rb") as key_file:
@@ -451,7 +438,12 @@
ALLOWED_CONTENT_CHECKSUMS
)
-_SKIPPED_COMMANDS_FOR_CONTENT_CHECKS = ["handle-artifact-checksums", "migrate", "collectstatic"]
+_SKIPPED_COMMANDS_FOR_CONTENT_CHECKS = [
+ "handle-artifact-checksums",
+ "migrate",
+ "collectstatic",
+ "openapi",
+]
if not (len(sys.argv) >= 2 and sys.argv[1] in _SKIPPED_COMMANDS_FOR_CONTENT_CHECKS):
try:
| diff --git a/pulpcore/tests/unit/test_settings.py b/pulpcore/tests/unit/test_settings.py
--- a/pulpcore/tests/unit/test_settings.py
+++ b/pulpcore/tests/unit/test_settings.py
@@ -2,14 +2,6 @@
from dynaconf.validator import ValidationError
-def test_content_origin(settings):
- """Test validation error is raised when CONTENT_ORIGIN is missing."""
- # force needs to be True in order to remove CONTENT_ORIGIN since keep makes it a default
- settings.unset("CONTENT_ORIGIN", force=True)
- with pytest.raises(ValidationError):
- settings.validators.validate()
-
-
def test_cache_enabled(settings):
"""Test that when CACHE_ENABLED is set REDIS_URL or REDIS_HOST & REDIS_PORT."""
settings.set("CACHE_ENABLED", True)
| Provide a management command to generate the api.json files
This command should allow generating all relevant flavors for API bindings generation.
Parameters the user must be able to specify:
* components
* pk_path
* bindings
| 2024-06-13T09:21:13 |
|
pulp/pulpcore | 5,475 | pulp__pulpcore-5475 | [
"5462"
] | 5524b9f5c63018ef33214346f793bc2c07f03f35 | diff --git a/pulpcore/app/checks.py b/pulpcore/app/checks.py
--- a/pulpcore/app/checks.py
+++ b/pulpcore/app/checks.py
@@ -1,7 +1,22 @@
from pathlib import Path
from django.conf import settings
-from django.core.checks import Warning as CheckWarning, register
+from django.core.checks import Error as CheckError, Warning as CheckWarning, register
+
+
+@register(deploy=True)
+def content_origin_check(app_configs, **kwargs):
+ messages = []
+ if not getattr(settings, "CONTENT_ORIGIN", None):
+ messages.append(
+ CheckError(
+ "CONTENT_ORIGIN is a required setting but it was not configured. This may be "
+ "caused by invalid read permissions of the settings file. Note that "
+ "CONTENT_ORIGIN is set by the installation automatically.",
+ id="pulpcore.E001",
+ )
+ )
+ return messages
@register(deploy=True)
diff --git a/pulpcore/app/management/commands/openapi.py b/pulpcore/app/management/commands/openapi.py
new file mode 100644
--- /dev/null
+++ b/pulpcore/app/management/commands/openapi.py
@@ -0,0 +1,99 @@
+from textwrap import dedent
+
+from django.core.management.base import BaseCommand, CommandError
+from django.http import HttpRequest
+from django.utils import translation
+from django.utils.module_loading import import_string
+
+from rest_framework.request import Request
+
+from drf_spectacular.renderers import OpenApiJsonRenderer, OpenApiYamlRenderer
+from drf_spectacular.settings import patched_settings
+from drf_spectacular.validation import validate_schema
+
+from pulpcore.openapi import PulpSchemaGenerator
+
+
+class SchemaValidationError(CommandError):
+ pass
+
+
+class Command(BaseCommand):
+ help = dedent(
+ """
+ Generate OpenAPI3 schema for the Pulp API.
+
+ The type of schema generated can be modified by providing some options.
+ --component <str> comma separated list of app_labels
+ --bindings flag, to produce operation ids used for bindings generation
+ --pk-path flag, whether paths are presented with PK or href variable
+ """
+ )
+
+ requires_system_checks = []
+
+ def add_arguments(self, parser):
+ parser.add_argument("--component", dest="component", default=None, type=str)
+ parser.add_argument("--bindings", dest="bindings", action="store_true")
+ parser.add_argument("--pk-path", dest="pk_path", action="store_true")
+ parser.add_argument(
+ "--format",
+ dest="format",
+ choices=["openapi", "openapi-json"],
+ default="openapi-json",
+ type=str,
+ )
+ parser.add_argument("--urlconf", dest="urlconf", default=None, type=str)
+ parser.add_argument("--file", dest="file", default=None, type=str)
+ parser.add_argument("--validate", dest="validate", default=False, action="store_true")
+ parser.add_argument("--lang", dest="lang", default=None, type=str)
+ parser.add_argument("--custom-settings", dest="custom_settings", default=None, type=str)
+
+ def handle(self, *args, **options):
+ generator = PulpSchemaGenerator(
+ urlconf=options["urlconf"],
+ )
+
+ if options["custom_settings"]:
+ custom_settings = import_string(options["custom_settings"])
+ else:
+ custom_settings = None
+
+ with patched_settings(custom_settings):
+ request = Request(HttpRequest())
+ request.META["SERVER_NAME"] = "localhost"
+ request.META["SERVER_PORT"] = "24817"
+ if options["component"]:
+ request.query_params["component"] = options["component"]
+ if options["bindings"]:
+ request.query_params["bindings"] = 1
+ if options["pk_path"]:
+ request.query_params["pk_path"] = 1
+
+ if options["lang"]:
+ with translation.override(options["lang"]):
+ schema = generator.get_schema(request=request, public=True)
+ else:
+ schema = generator.get_schema(request=request, public=True)
+
+ if options["validate"]:
+ try:
+ validate_schema(schema)
+ except Exception as e:
+ raise SchemaValidationError(e)
+
+ renderer = self.get_renderer(options["format"])
+ output = renderer.render(schema, renderer_context={})
+
+ if options["file"]:
+ with open(options["file"], "wb") as f:
+ f.write(output)
+ else:
+ self.stdout.write(output.decode())
+
+ def get_renderer(self, format):
+ renderer_cls = {
+ "openapi": OpenApiYamlRenderer,
+ "openapi-json": OpenApiJsonRenderer,
+ }[format]
+ return renderer_cls()
diff --git a/pulpcore/app/settings.py b/pulpcore/app/settings.py
--- a/pulpcore/app/settings.py
+++ b/pulpcore/app/settings.py
@@ -315,17 +315,6 @@
from dynaconf import DjangoDynaconf, Validator # noqa
# Validators
-content_origin_validator = Validator(
- "CONTENT_ORIGIN",
- must_exist=True,
- messages={
- "must_exist_true": (
- "CONTENT_ORIGIN is a required setting but it was not configured. This may be caused "
- "by invalid read permissions of the settings file. Note that CONTENT_ORIGIN is set by "
- "the installer automatically."
- )
- },
-)
storage_validator = (
Validator("REDIRECT_TO_OBJECT_STORAGE", eq=False)
| Validator("DEFAULT_FILE_STORAGE", eq="pulpcore.app.models.storage.FileSystem")
@@ -390,7 +379,6 @@
validators=[
api_root_validator,
cache_validator,
- content_origin_validator,
sha256_validator,
storage_validator,
unknown_algs_validator,
@@ -402,9 +390,8 @@
if not (
- Path(sys.argv[0]).name == "pytest"
- or Path(sys.argv[0]).name == "sphinx-build"
- or (len(sys.argv) >= 2 and sys.argv[1] == "collectstatic")
+ Path(sys.argv[0]).name in ["pytest", "sphinx-build"]
+ or (len(sys.argv) >= 2 and sys.argv[1] in ["collectstatic", "openapi"])
):
try:
with open(DB_ENCRYPTION_KEY, "rb") as key_file:
@@ -421,7 +408,12 @@
ALLOWED_CONTENT_CHECKSUMS
)
-_SKIPPED_COMMANDS_FOR_CONTENT_CHECKS = ["handle-artifact-checksums", "migrate", "collectstatic"]
+_SKIPPED_COMMANDS_FOR_CONTENT_CHECKS = [
+ "handle-artifact-checksums",
+ "migrate",
+ "collectstatic",
+ "openapi",
+]
if not (len(sys.argv) >= 2 and sys.argv[1] in _SKIPPED_COMMANDS_FOR_CONTENT_CHECKS):
try:
| diff --git a/pulpcore/tests/unit/test_settings.py b/pulpcore/tests/unit/test_settings.py
--- a/pulpcore/tests/unit/test_settings.py
+++ b/pulpcore/tests/unit/test_settings.py
@@ -2,14 +2,6 @@
from dynaconf.validator import ValidationError
-def test_content_origin(settings):
- """Test validation error is raised when CONTENT_ORIGIN is missing."""
- # force needs to be True in order to remove CONTENT_ORIGIN since keep makes it a default
- settings.unset("CONTENT_ORIGIN", force=True)
- with pytest.raises(ValidationError):
- settings.validators.validate()
-
-
def test_cache_enabled(settings):
"""Test that when CACHE_ENABLED is set REDIS_URL or REDIS_HOST & REDIS_PORT."""
settings.set("CACHE_ENABLED", True)
| Provide a management command to generate the api.json files
This command should allow generating all relevant flavors for API bindings generation.
Parameters the user must be able to specify:
* components
* pk_path
* bindings
| 2024-06-13T09:57:38 |
|
pulp/pulpcore | 5,483 | pulp__pulpcore-5483 | [
"5464"
] | 41703c72b282abfcb1225a51a89d257fd02c0343 | diff --git a/pulpcore/app/replica.py b/pulpcore/app/replica.py
--- a/pulpcore/app/replica.py
+++ b/pulpcore/app/replica.py
@@ -99,7 +99,9 @@ def remote_extra_fields(self, upstream_distribution):
return {}
def create_or_update_remote(self, upstream_distribution):
- if not upstream_distribution["repository"] and not upstream_distribution["publication"]:
+ if not upstream_distribution.get("repository") and not upstream_distribution.get(
+ "publication"
+ ):
return None
url = self.url(upstream_distribution)
remote_fields_dict = {"url": url}
| Replication assumes every plugin supports Publications
The ```Replicator``` class, which is being subclassed inside plugins to support the replication feature, assumes that every plugin supports publications as it tries to access ```upstream_distribution["publication"]``` inside one of its methods (for some plugins, the dictionary simply doesn't contain the "publication" key so an exception gets raised). This forces certain subclasses of ```Replicator``` to create workarounds or rewrite the given method.
I propose making the method more general, removing such assumptions.
Relevant code: https://github.com/pulp/pulpcore/blob/c31a39c186fd734e9e0409321735d7d6509a12ee/pulpcore/app/replica.py#L97C9-L97C96
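For illustration, such a plugin-side workaround might look roughly like this (a hypothetical subclass; the class name and the idea of injecting the missing key are illustrative only):
```
# Hypothetical workaround in a plugin whose distributions have no "publication" field.
from pulpcore.app.replica import Replicator


class ExamplePluginReplicator(Replicator):
    def create_or_update_remote(self, upstream_distribution):
        # Guard the key the base implementation indexes unconditionally.
        upstream_distribution.setdefault("publication", None)
        return super().create_or_update_remote(upstream_distribution)
```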
| 2024-06-14T15:40:11 |
||
pulp/pulpcore | 5,484 | pulp__pulpcore-5484 | [
"5464"
] | 001fd54423c4e10327deab99c8d1515264ec791c | diff --git a/pulpcore/app/replica.py b/pulpcore/app/replica.py
--- a/pulpcore/app/replica.py
+++ b/pulpcore/app/replica.py
@@ -99,7 +99,9 @@ def remote_extra_fields(self, upstream_distribution):
return {}
def create_or_update_remote(self, upstream_distribution):
- if not upstream_distribution["repository"] and not upstream_distribution["publication"]:
+ if not upstream_distribution.get("repository") and not upstream_distribution.get(
+ "publication"
+ ):
return None
url = self.url(upstream_distribution)
remote_fields_dict = {"url": url}
| Replication assumes every plugin supports Publications
The ```Replicator``` class, which is being subclassed inside plugins to support the replication feature, assumes that every plugin supports publications as it tries to access ```upstream_distribution["publication"]``` inside one of its methods (for some plugins, the dictionary simply doesn't contain the "publication" key so an exception gets raised). This forces certain subclasses of ```Replicator``` to create workarounds or rewrite the given method.
I propose making the method more general, removing such assumptions.
Relevant code: https://github.com/pulp/pulpcore/blob/c31a39c186fd734e9e0409321735d7d6509a12ee/pulpcore/app/replica.py#L97C9-L97C96
| 2024-06-14T15:40:22 |
||
getsentry/snuba | 558 | getsentry__snuba-558 | [
"383"
] | 770d43704ea7d71b5bd3b53a7cf86a7997e5e9ad | diff --git a/snuba/redis.py b/snuba/redis.py
--- a/snuba/redis.py
+++ b/snuba/redis.py
@@ -37,11 +37,13 @@ def execute_command(self, *args, **kwargs):
redis_client = RetryingStrictRedisCluster(
startup_nodes=startup_nodes,
socket_keepalive=True,
+ password=settings.REDIS_PASSWORD,
)
else:
redis_client = StrictRedis(
host=settings.REDIS_HOST,
port=settings.REDIS_PORT,
+ password=settings.REDIS_PASSWORD,
db=settings.REDIS_DB,
socket_keepalive=True,
)
diff --git a/snuba/settings_base.py b/snuba/settings_base.py
--- a/snuba/settings_base.py
+++ b/snuba/settings_base.py
@@ -28,6 +28,7 @@
REDIS_CLUSTER_STARTUP_NODES = None
REDIS_HOST = os.environ.get('REDIS_HOST', 'localhost')
REDIS_PORT = 6379
+REDIS_PASSWORD = None
REDIS_DB = 1
# Query Recording Options
diff --git a/snuba/settings_docker.py b/snuba/settings_docker.py
--- a/snuba/settings_docker.py
+++ b/snuba/settings_docker.py
@@ -9,5 +9,6 @@
REDIS_HOST = env('REDIS_HOST', 'localhost')
REDIS_PORT = int(env('REDIS_PORT', 6379))
+REDIS_PASSWORD = env('REDIS_PASSWORD')
REDIS_DB = int(env('REDIS_DB', 1))
USE_REDIS_CLUSTER = False
| Support Redis Authentication
I'm trying to install Snuba on my Kubernetes instance alongside Sentry.
Sentry's Helm chart installs Redis with a password (It generates a secret), and there was no option for me to specify that password for Snuba.
I opened up the source code and it looks like a simple solution:
Another setting (REDIS_PASSWORD) that would be passed to startup_nodes and to StrictRedis' constructor on the snuba/redis.py module.
| 2019-10-31T13:37:55 |
||
getsentry/snuba | 709 | getsentry__snuba-709 | [
"671"
] | 0f93d6f6baf6476add22ce7a9170e0cb73a0dfaa | diff --git a/snuba/cli/bootstrap.py b/snuba/cli/bootstrap.py
--- a/snuba/cli/bootstrap.py
+++ b/snuba/cli/bootstrap.py
@@ -57,22 +57,28 @@ def bootstrap(
raise
time.sleep(1)
- topics = []
+ topics = {}
for name in DATASET_NAMES:
dataset = get_dataset(name)
table_writer = dataset.get_table_writer()
if table_writer:
stream_loader = table_writer.get_stream_loader()
for topic_spec in stream_loader.get_all_topic_specs():
- topics.append(
- NewTopic(
- topic_spec.topic_name,
- num_partitions=topic_spec.partitions_number,
- replication_factor=topic_spec.replication_factor,
- )
+ if topic_spec.topic_name in topics:
+ continue
+ logger.debug(
+ "Adding topic %s to creation list", topic_spec.topic_name
+ )
+ topics[topic_spec.topic_name] = NewTopic(
+ topic_spec.topic_name,
+ num_partitions=topic_spec.partitions_number,
+ replication_factor=topic_spec.replication_factor,
)
- for topic, future in client.create_topics(topics).items():
+ logger.debug("Initiating topic creation")
+ for topic, future in client.create_topics(
+ list(topics.values()), operation_timeout=1
+ ).items():
try:
future.result()
logger.info("Topic %s created", topic)
diff --git a/snuba/cli/migrate.py b/snuba/cli/migrate.py
--- a/snuba/cli/migrate.py
+++ b/snuba/cli/migrate.py
@@ -23,13 +23,15 @@ def migrate(*, log_level: str, dataset_name: Optional[str]) -> None:
logging.basicConfig(
level=getattr(logging, log_level.upper()), format="%(asctime)s %(message)s"
)
+
+ if not local_dataset_mode():
+ logger.error("The migration tool can only work on local dataset mode.")
+ sys.exit(1)
+
dataset_names = [dataset_name] if dataset_name else DATASET_NAMES
for name in dataset_names:
dataset = get_dataset(name)
logger.info("Migrating dataset %s", name)
- if not local_dataset_mode():
- logger.error("The migration tool can only work on local dataset mode.")
- sys.exit(1)
clickhouse = Client(
host=settings.CLICKHOUSE_HOST, port=settings.CLICKHOUSE_PORT,
| Segmentation Fault when running bootstrap
So I've seen this happen in a couple different scenarios. I'm trying to run Sentry in Kubernetes. After getting everything installed, I go to bootstrap Snuba (create the Kafka topics). I've experienced this with both Confluent Kafka as well as Apache Kafka (and multiple versions of each). I've also experienced this in both Minikube and AWS EKS clusters.
```
/usr/src/snuba# LOG_LEVEL=debug snuba bootstrap --force
2020-01-08 18:39:41,151 Using Kafka with ('kafka-cp-kafka-headless.kafka:9092',)
2020-01-08 18:39:41,165 Attempting to connect to Kafka (attempt 0)
Segmentation fault (core dumped)
```
But if I add some debug log statements, it starts to work....
Here is my `git diff` which caused it to work suddenly
```
diff --git a/snuba/cli/bootstrap.py b/snuba/cli/bootstrap.py
index 28f52f8..23a85fb 100644
--- a/snuba/cli/bootstrap.py
+++ b/snuba/cli/bootstrap.py
@@ -35,7 +35,6 @@ def bootstrap(
if kafka:
logger.debug("Using Kafka with %r", bootstrap_server)
from confluent_kafka.admin import AdminClient, NewTopic
attempts = 0
while True:
try:
@@ -58,6 +57,7 @@ def bootstrap(
time.sleep(1)
topics = []
+ logger.debug("Made Connection Successfully")
for name in DATASET_NAMES:
dataset = get_dataset(name)
table_writer = dataset.get_table_writer()
@@ -71,14 +71,14 @@ def bootstrap(
replication_factor=topic_spec.replication_factor,
)
)
-
+ print("Created Topics")
for topic, future in client.create_topics(topics).items():
try:
future.result()
logger.info("Topic %s created", topic)
except Exception as e:
logger.error("Failed to create topic %s", topic, exc_info=e)
-
+ print("Actually created topics now")
from snuba.clickhouse.native import ClickhousePool
attempts = 0
```
It started to work after the 3rd log statement was added.
Has anyone else experienced this?
| Hi @nickrichardson-presto, I'm running into the same situation right now... Have you found another valid workaround, e.g. via CLI? I would like to stay on the original snuba image.
I found that editing that file above and adding the print statements fixes it every time. No idea why
I see the same behaviour indeed! Really weird. Has any of you guys made a public docker image?
Thanks!
I have not. I'm running in Kubernetes and just exec-ed into the pod, made the change, then ran bootstrap.
Fair enough !
I pushed a temporary image on docker hub for those interested ;)
https://hub.docker.com/layers/theomathieu/snuba/r1/images/sha256-61610b24aa5e6b8f04ea98f1eddc9027e5a751f0ca0d3cfb537759dc61e2b6f4
Ok, I've made more tests and I don't think those logs are changing anything.
I managed to complete the bootstrap process by brute-forcing `LOG_LEVEL=debug snuba bootstrap --force`.
So the image won't change anything.
I created a job that runs the bootstrap command `LOG_LEVEL=debug snuba bootstrap --force` 40 times, and it works 👍
I found out that if you execute the command with parameter `--no-kafka` at first something happens. Unfortunately I haven't figured out if that will lead to any further restrictions or issues with sentry... At least the `sentry_local` table is created this way.
`snuba bootstrap --no-kafka --force --log-level debug || snuba bootstrap --force --log-level debug || snuba migrate --log-level debug || echo true`
>I found out that if you execute the command with parameter --no-kafka at first something happens. Unfortunately I haven't figured out if that will lead to any further restrictions or issues with sentry... At least the sentry_local table is created this way.
I think there's an issue with waiting on Kafka topic creation futures and I'm working on a patch to hopefully resolve this. Stay tuned. | 2020-01-20T20:35:14 |
|
getsentry/snuba | 1,759 | getsentry__snuba-1759 | [
"1671"
] | 2776892d082ec10ddf9ee592f579b01d49028065 | diff --git a/snuba/cli/bootstrap.py b/snuba/cli/bootstrap.py
--- a/snuba/cli/bootstrap.py
+++ b/snuba/cli/bootstrap.py
@@ -3,16 +3,20 @@
import click
+from confluent_kafka import KafkaError, KafkaException
from snuba.datasets.factory import ACTIVE_DATASET_NAMES, get_dataset
from snuba.environment import setup_logging
from snuba.migrations.connect import check_clickhouse_connections
from snuba.migrations.runner import Runner
+from snuba.utils.logging import pylog_to_syslog_level
from snuba.utils.streams.backends.kafka import get_default_kafka_configuration
@click.command()
@click.option(
- "--bootstrap-server", multiple=True, help="Kafka bootstrap server to use.",
+ "--bootstrap-server",
+ multiple=True,
+ help="Kafka bootstrap server to use.",
)
@click.option("--kafka/--no-kafka", default=True)
@click.option("--migrate/--no-migrate", default=True)
@@ -42,27 +46,40 @@ def bootstrap(
logger.debug("Using Kafka with %r", bootstrap_server)
from confluent_kafka.admin import AdminClient, NewTopic
+ override_params = {
+ # Same as above: override socket timeout as we expect Kafka
+ # to not getting ready for a while
+ "socket.timeout.ms": 1000,
+ }
+ if logger.getEffectiveLevel() != logging.DEBUG:
+ # Override rdkafka loglevel to be critical unless we are
+ # debugging as we expect failures when trying to connect
+ # (Kafka may not be up yet)
+ override_params["log_level"] = pylog_to_syslog_level(logging.CRITICAL)
+
attempts = 0
while True:
try:
- logger.debug("Attempting to connect to Kafka (attempt %d)", attempts)
+ logger.info("Attempting to connect to Kafka (attempt %d)...", attempts)
client = AdminClient(
get_default_kafka_configuration(
bootstrap_servers=bootstrap_server,
- override_params={"socket.timeout.ms": 1000},
+ override_params=override_params,
)
)
client.list_topics(timeout=1)
break
- except Exception as e:
- logger.error(
- "Connection to Kafka failed (attempt %d)", attempts, exc_info=e
+ except KafkaException as err:
+ logger.debug(
+ "Connection to Kafka failed (attempt %d)", attempts, exc_info=err
)
attempts += 1
if attempts == 60:
raise
time.sleep(1)
+ logger.info("Connected to Kafka on attempt %d", attempts)
+
topics = {}
for name in ACTIVE_DATASET_NAMES:
dataset = get_dataset(name)
@@ -83,15 +100,16 @@ def bootstrap(
replication_factor=topic_spec.replication_factor,
)
- logger.debug("Initiating topic creation")
+ logger.info("Creating Kafka topics...")
for topic, future in client.create_topics(
list(topics.values()), operation_timeout=1
).items():
try:
future.result()
logger.info("Topic %s created", topic)
- except Exception as e:
- logger.error("Failed to create topic %s", topic, exc_info=e)
+ except KafkaException as err:
+ if err.args[0].code() != KafkaError.TOPIC_ALREADY_EXISTS:
+ logger.error("Failed to create topic %s", topic, exc_info=err)
if migrate:
check_clickhouse_connections()
diff --git a/snuba/utils/logging.py b/snuba/utils/logging.py
new file mode 100644
--- /dev/null
+++ b/snuba/utils/logging.py
@@ -0,0 +1,15 @@
+from logging import NOTSET, DEBUG, INFO, WARNING, ERROR, CRITICAL
+
+
+PYTHON_TO_SYSLOG_MAP = {
+ NOTSET: 7,
+ DEBUG: 7,
+ INFO: 6,
+ WARNING: 4,
+ ERROR: 3,
+ CRITICAL: 2,
+}
+
+
+def pylog_to_syslog_level(level: int) -> int:
+ return PYTHON_TO_SYSLOG_MAP.get(level, 7)
diff --git a/snuba/utils/streams/backends/kafka.py b/snuba/utils/streams/backends/kafka.py
--- a/snuba/utils/streams/backends/kafka.py
+++ b/snuba/utils/streams/backends/kafka.py
@@ -40,6 +40,7 @@
from snuba import settings
from snuba.datasets.storages import StorageKey
from snuba.utils.concurrent import execute
+from snuba.utils.logging import pylog_to_syslog_level
from snuba.utils.retries import NoRetryPolicy, RetryPolicy
from snuba.utils.streams.backends.abstract import (
Consumer,
@@ -408,7 +409,9 @@ def poll(self, timeout: Optional[float] = None) -> Optional[Message[KafkaPayload
Partition(Topic(message.topic()), message.partition()),
message.offset(),
KafkaPayload(
- message.key(), message.value(), headers if headers is not None else [],
+ message.key(),
+ message.value(),
+ headers if headers is not None else [],
),
datetime.utcfromtimestamp(message.timestamp()[1] / 1000.0),
)
@@ -693,8 +696,14 @@ def get_default_kafka_configuration(
f"The `{configuration_key}` configuration key is not supported."
)
+ broker_config["log_level"] = pylog_to_syslog_level(logger.getEffectiveLevel())
+
+ if settings.DEBUG:
+ broker_config["debug"] = "all"
+
if override_params:
broker_config.update(override_params)
+
return broker_config
@@ -845,7 +854,9 @@ def __delivery_callback(
future.set_exception(error)
def produce(
- self, destination: Union[Topic, Partition], payload: KafkaPayload,
+ self,
+ destination: Union[Topic, Partition],
+ payload: KafkaPayload,
) -> Future[Message[KafkaPayload]]:
if self.__shutdown_requested.is_set():
raise RuntimeError("producer has been closed")
| Make Snuba bootstrap less noisy
Currently, the `bootstrap` command prints out transitional errors to the console for debugging purposes. This causes confusion for self-hosted users when they are installing.
Instead of printing these at all times, we should only show these if the logging level is set to debug/verbose.
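A minimal sketch of that idea (hypothetical helper, not the actual patch): transient connection errors are logged at DEBUG, so they only surface when verbose logging is requested.
```python
import logging
import time

logger = logging.getLogger("snuba.bootstrap")

def wait_for_connection(connect, attempts=60):
    for attempt in range(attempts):
        try:
            return connect()
        except Exception as err:
            # Transitional failure: expected while the broker is still starting up.
            logger.debug("Connection attempt %d failed", attempt, exc_info=err)
            time.sleep(1)
    raise RuntimeError("could not connect after %d attempts" % attempts)
```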
| 2021-03-16T09:20:57 |
||
getsentry/snuba | 1,794 | getsentry__snuba-1794 | [
"1793"
] | 9a3de084d6c3457206d3eb6996783552d63e56d5 | diff --git a/snuba/cli/cleanup.py b/snuba/cli/cleanup.py
--- a/snuba/cli/cleanup.py
+++ b/snuba/cli/cleanup.py
@@ -50,7 +50,8 @@ def cleanup(
(clickhouse_user, clickhouse_password,) = storage.get_cluster().get_credentials()
- database = storage.get_cluster().get_database()
+ cluster = storage.get_cluster()
+ database = cluster.get_database()
if clickhouse_host and clickhouse_port:
connection = ClickhousePool(
@@ -60,12 +61,12 @@ def cleanup(
clickhouse_password,
database,
)
- elif not storage.get_cluster().is_single_node():
+ elif not cluster.is_single_node():
raise click.ClickException("Provide ClickHouse host and port for cleanup")
else:
- connection = storage.get_cluster().get_query_connection(
+ connection = cluster.get_query_connection(
ClickhouseClientSettings.CLEANUP
)
num_dropped = run_cleanup(connection, storage, database, dry_run=dry_run)
- logger.info("Dropped %s partitions on %s" % (num_dropped, clickhouse_host))
+ logger.info("Dropped %s partitions on %s" % (num_dropped, cluster))
| Snuba cleanup for sentry onpremise
### Environment
Sentry self-hosted 21.3.0 (based on docker-compose from here https://github.com/getsentry/onpremise/blob/21.3.0/docker-compose.yml)
### Steps to Reproduce
1) Set up all containers and bring up the snuba-cleanup container
2) Check logs for snuba-cleanup: Every 5 minutes in log - `Dropped 0 partitions on None`
It looks like variable CLICKHOUSE_HOST is ignored here
https://github.com/getsentry/snuba/blob/41d7fe76aaf8a594e8f6e84015607dcde3f67ad4/snuba/cli/cleanup.py#L13
After manually running the command in the container - `snuba cleanup --clickhouse-host CLICKHOUSE_HOST_HERE --dry-run True` -
I got `Dropped 0 partitions on CLICKHOUSE_HOST_HERE`
### Expected Result
Pass variable https://github.com/getsentry/onpremise/blob/bdd2686021cfea07507bc07d2756ac34a775c680/docker-compose.yml#L44 into cleanup command
### Actual Result
variable is `None` instead of clickhouse host
I'm not sure whether this is a bug or not.
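For what it's worth, a small sketch of the kind of fix one might expect (hypothetical names, not the actual change): report the cluster actually used when the CLI host was not provided.
```python
import logging

logger = logging.getLogger("snuba.cleanup")

def log_cleanup_result(num_dropped, clickhouse_host, cluster):
    # Fall back to the cluster object so the message never reads "... on None".
    target = clickhouse_host if clickhouse_host else cluster
    logger.info("Dropped %s partitions on %s", num_dropped, target)
```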
| This just seems like a bug on the log statement here: https://github.com/getsentry/snuba/blob/41d7fe76aaf8a594e8f6e84015607dcde3f67ad4/snuba/cli/cleanup.py#L71
Which assumes `clickhouse_host` is always set unless an exception was raised above. I'll see if I can get a quick fix. | 2021-03-31T10:02:51 |
|
getsentry/snuba | 3,697 | getsentry__snuba-3697 | [
"3696"
] | 79a770d514f7a0abd232870953060a21242f0c24 | diff --git a/snuba/cli/subscriptions_scheduler_executor.py b/snuba/cli/subscriptions_scheduler_executor.py
--- a/snuba/cli/subscriptions_scheduler_executor.py
+++ b/snuba/cli/subscriptions_scheduler_executor.py
@@ -24,7 +24,7 @@
"--dataset",
"dataset_name",
required=True,
- type=click.Choice(["events", "transactions", "metrics"]),
+ type=click.Choice(["events", "transactions", "metrics", "sessions"]),
help="The dataset to target.",
)
@click.option(
@@ -32,7 +32,9 @@
"entity_names",
required=True,
multiple=True,
- type=click.Choice(["events", "transactions", "metrics_counters", "metrics_sets"]),
+ type=click.Choice(
+ ["events", "transactions", "metrics_counters", "metrics_sets", "sessions"]
+ ),
help="The entity to target.",
)
@click.option(
| About the sessions-subscription-results subscription issue
### Environment
- sentry | snuba version :23.1.1
Regarding https://github.com/getsentry/snuba/pull/2737: @lynnagara hello, I have a question about this PR and hope to get your answer, thank you very much.
- After removing subscriptions-scheduler-executor-session support in Snuba, how is data supposed to be written to the sessions-subscription-results topic? I ask because the crash rate warning code in Sentry is still there and has not changed, for example:
- https://github.com/getsentry/sentry/pull/28526
https://github.com/getsentry/sentry/blob/8e00dcdf463d916b9ca79ddbe13e99f161d58db1/src/sentry/snuba/query_subscription_consumer.py#L61-L61
My original question is as follows: I have enabled the organizations:incidents feature in Sentry and subscribed to sessions-results through the following command:
```bash
sentry run query-subscription-consumer --topic=sessions-subscription-results
```
Because there is no data in the sessions-subscription-results topic, the crash rate alarm cannot work
<img width="1568" alt="image" src="https://user-images.githubusercontent.com/18591662/216570393-64748a25-1cd4-4980-966c-f7665dc8482b.png">
| 2023-02-03T19:43:38 |
||
getsentry/snuba | 4,343 | getsentry__snuba-4343 | [
"4223"
] | 942b025577062c4825e86e5bc03b2f02390454eb | diff --git a/snuba/redis.py b/snuba/redis.py
--- a/snuba/redis.py
+++ b/snuba/redis.py
@@ -77,6 +77,7 @@ def _initialize_redis_cluster(config: settings.RedisClusterConfig) -> RedisClien
port=config["port"],
password=config["password"],
db=config["db"],
+ ssl=config.get("ssl", False),
socket_keepalive=True,
)
@@ -89,6 +90,7 @@ def _initialize_redis_cluster(config: settings.RedisClusterConfig) -> RedisClien
"port": settings.REDIS_PORT,
"password": settings.REDIS_PASSWORD,
"db": settings.REDIS_DB,
+ "ssl": settings.REDIS_SSL,
"reinitialize_steps": settings.REDIS_REINITIALIZE_STEPS,
}
)
diff --git a/snuba/settings/__init__.py b/snuba/settings/__init__.py
--- a/snuba/settings/__init__.py
+++ b/snuba/settings/__init__.py
@@ -127,6 +127,7 @@ class RedisClusterConfig(TypedDict):
port: int
password: str | None
db: int
+ ssl: bool
reinitialize_steps: int
@@ -140,6 +141,7 @@ class RedisClusterConfig(TypedDict):
REDIS_PORT = int(os.environ.get("REDIS_PORT", 6379))
REDIS_PASSWORD = os.environ.get("REDIS_PASSWORD")
REDIS_DB = int(os.environ.get("REDIS_DB", 1))
+REDIS_SSL = bool(os.environ.get("REDIS_SSL", False))
REDIS_INIT_MAX_RETRIES = 3
REDIS_REINITIALIZE_STEPS = 10
| Support Redis TLS in Snuba
Hi,
Sentry Relay got support for Redis TLS in release 23.1.1.
Sentry got support for Redis TLS in release 23.4.0 - or at least I think so, but please correct me if I'm wrong.
When can we get Redis TLS support for Snuba? Currently, I don't see a way to define this setting, based on this block of code extracted from the `master` branch (link = https://github.com/getsentry/snuba/blob/master/snuba/redis.py#L84-L94):
```
_default_redis_client: RedisClientType = _initialize_redis_cluster(
{
"use_redis_cluster": settings.USE_REDIS_CLUSTER,
"cluster_startup_nodes": settings.REDIS_CLUSTER_STARTUP_NODES,
"host": settings.REDIS_HOST,
"port": settings.REDIS_PORT,
"password": settings.REDIS_PASSWORD,
"db": settings.REDIS_DB,
"reinitialize_steps": settings.REDIS_REINITIALIZE_STEPS,
}
)
```
Thanks!
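For illustration, a minimal sketch of what an opt-in flag could look like (the REDIS_SSL name is an assumption for the sketch, not the actual Snuba settings):
```python
import os

# Hypothetical setting: enable TLS only when explicitly requested.
REDIS_SSL = os.environ.get("REDIS_SSL", "0").lower() in ("1", "true", "yes")

redis_config = {
    "host": os.environ.get("REDIS_HOST", "localhost"),
    "port": int(os.environ.get("REDIS_PORT", 6379)),
    "password": os.environ.get("REDIS_PASSWORD"),
    "db": int(os.environ.get("REDIS_DB", 1)),
    "ssl": REDIS_SSL,  # forwarded to the Redis client constructor
}
```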
| 2023-06-14T13:20:15 |
||
pytorch/text | 14 | pytorch__text-14 | [
"13"
] | 9a3df9d192bfb8351110fb106d41bfda03f420f7 | diff --git a/torchtext/data.py b/torchtext/data.py
--- a/torchtext/data.py
+++ b/torchtext/data.py
@@ -1,4 +1,5 @@
from __future__ import print_function
+import six
import torch
import torch.utils.data
from torch.autograd import Variable
@@ -113,7 +114,7 @@ def __init__(
self, sequential=True, use_vocab=True, init_token=None,
eos_token=None, fix_length=None, tensor_type=torch.LongTensor,
preprocessing=None, postprocessing=None, lower=False,
- tokenize=(lambda s: s.split())):
+ tokenize=(lambda s: s.split()), include_lengths=False):
self.sequential = sequential
self.use_vocab = use_vocab
self.fix_length = fix_length
@@ -122,6 +123,7 @@ def __init__(
self.pad_token = '<pad>' if self.sequential else None
self.tokenize = get_tokenizer(tokenize)
self.lower = lower
+ self.include_lengths = include_lengths
self.preprocessing = (Pipeline() if preprocessing
is None else preprocessing)
self.postprocessing = (Pipeline() if postprocessing
@@ -133,7 +135,7 @@ def preprocess(self, x):
if self.sequential and isinstance(x, str):
x = self.tokenize(x)
if self.lower:
- x = Pipeline(str.lower)(x)
+ x = Pipeline(six.text_type.lower)(x)
return self.preprocessing(x)
def pad(self, minibatch):
@@ -141,7 +143,8 @@ def pad(self, minibatch):
Pads to self.fix_length if provided, otherwise pads to the length of
the longest example in the batch. Prepends self.init_token and appends
- self.eos_token if those attributes are not None.
+ self.eos_token if those attributes are not None. Returns a tuple of the
+ padded list and a list containing lengths of each example.
"""
minibatch = list(minibatch)
if not self.sequential:
@@ -151,13 +154,16 @@ def pad(self, minibatch):
else:
max_len = self.fix_length + (
self.init_token, self.eos_token).count(None) - 2
- padded = []
+ padded, lengths = [], []
for x in minibatch:
padded.append(
([] if self.init_token is None else [self.init_token]) +
list(x[:max_len]) +
([] if self.eos_token is None else [self.eos_token]) +
['<pad>'] * max(0, max_len - len(x)))
+ lengths.append(len(padded[-1]) - max(0, max_len - len(x)))
+ if self.include_lengths:
+ return (padded, lengths)
return padded
def build_vocab(self, *args, **kwargs):
@@ -193,14 +199,20 @@ def build_vocab(self, *args, **kwargs):
def numericalize(self, arr, device=None, train=True):
"""Turn a batch of examples that use this field into a Variable.
+ If the field has include_lengths=True, a tensor of lengths will be
+ included in the return value.
+
Arguments:
- arr: List of tokenized and padded examples.
+ arr: List of tokenized and padded examples, or tuple of a padded
+ list and a list of lengths if self.include_lengths is True.
device: Device to create the Variable's Tensor on. Use -1 for
CPU and None for the currently active GPU device. Default:
None.
train: Whether the batch is for a training set. If False, the
Variable will be created with volatile=True. Default: True.
"""
+ if isinstance(arr, tuple):
+ arr, lengths = arr
if self.use_vocab:
if self.sequential:
arr = [[self.vocab.stoi[x] for x in ex] for ex in arr]
@@ -210,6 +222,8 @@ def numericalize(self, arr, device=None, train=True):
else:
arr = self.postprocessing(arr, train)
arr = self.tensor_type(arr)
+ if self.include_lengths:
+ lengths = torch.LongTensor(lengths)
if self.sequential:
arr.t_()
if device == -1:
@@ -218,6 +232,10 @@ def numericalize(self, arr, device=None, train=True):
else:
with torch.cuda.device(device):
arr = arr.cuda()
+ if self.include_lengths:
+ lengths = lengths.cuda()
+ if self.include_lengths:
+ return Variable(arr, volatile=not train), lengths
return Variable(arr, volatile=not train)
@@ -234,11 +252,13 @@ def fromJSON(cls, data, fields):
@classmethod
def fromdict(cls, data, fields):
ex = cls()
- for key, val in data.items():
- if key in fields:
- name, field = fields[key]
- if field is not None:
- setattr(ex, name, field.preprocess(val))
+ for key, vals in fields.items():
+ if key in data and vals is not None:
+ if not isinstance(vals, list):
+ vals = [vals]
+ for val in vals:
+ name, field = val
+ setattr(ex, name, field.preprocess(data[key]))
return ex
@classmethod
@@ -371,7 +391,6 @@ def download_or_unzip(cls, root):
return os.path.join(path, '')
-
class TabularDataset(Dataset):
"""Defines a Dataset of columns stored in CSV, TSV, or JSON format."""
@@ -398,7 +417,12 @@ def __init__(self, path, format, fields, **kwargs):
examples = [make_example(line, fields) for line in f]
if make_example in (Example.fromdict, Example.fromJSON):
- fields = fields.values()
+ fields, field_dict = [], fields
+ for field in field_dict.values():
+ if isinstance(field, list):
+ fields.extend(field)
+ else:
+ fields.append(field)
super(TabularDataset, self).__init__(examples, fields, **kwargs)
@@ -536,6 +560,8 @@ def data(self):
def init_epoch(self):
"""Set up the batch generator for a new epoch."""
self.batches = batch(self.data(), self.batch_size)
+ if not self.repeat:
+ self.iterations = 0
@property
def epoch(self):
@@ -563,12 +589,13 @@ class BucketIterator(Iterator):
"""
def init_epoch(self):
- if self.repeat:
+ if self.sort:
+ self.batches = batch(self.data(), self.batch_size)
+ else:
self.batches = pool(self.data(), self.batch_size,
self.sort_key)
- else:
+ if not self.repeat:
self.iterations = 0
- self.batches = batch(self.data(), self.batch_size)
class BPTTIterator(Iterator):
diff --git a/torchtext/datasets/snli.py b/torchtext/datasets/snli.py
--- a/torchtext/datasets/snli.py
+++ b/torchtext/datasets/snli.py
@@ -2,6 +2,28 @@
from .. import data
+
+class ShiftReduceField(data.Field):
+
+ def __init__(self):
+
+ super(ShiftReduceField, self).__init__(preprocessing=lambda parse: [
+ 'reduce' if t == ')' else 'shift' for t in parse if t != '('])
+
+ self.build_vocab([['reduce'], ['shift']])
+
+
+class ParsedTextField(data.Field):
+
+ def __init__(self, eos_token='<pad>', lower=False):
+
+ super(ParsedTextField, self).__init__(
+ eos_token=eos_token, lower=lower, preprocessing=lambda parse: [
+ t for t in parse if t not in ('(', ')')],
+ postprocessing=lambda parse, _, __: [
+ list(reversed(p)) for p in parse])
+
+
class SNLI(data.ZipDataset, data.TabularDataset):
url = 'http://nlp.stanford.edu/projects/snli/snli_1.0.zip'
@@ -14,7 +36,7 @@ def sort_key(ex):
len(ex.premise), len(ex.hypothesis))
@classmethod
- def splits(cls, text_field, label_field, root='.', trees=False,
+ def splits(cls, text_field, label_field, parse_field=None, root='.',
train='train.jsonl', validation='dev.jsonl', test='test.jsonl'):
"""Create dataset objects for splits of the SNLI dataset.
@@ -24,8 +46,8 @@ def splits(cls, text_field, label_field, root='.', trees=False,
text_field: The field that will be used for premise and hypothesis
data.
label_field: The field that will be used for label data.
- trees: Whether to include the parse trees provided with the SNLI
- dataset (not implemented).
+ parse_field: The field that will be used for shift-reduce parser
+ transitions, or None to not include them.
root: The root directory that the dataset's zip archive will be
expanded into; therefore the directory in whose snli_1.0
subdirectory the data files will be stored.
@@ -35,20 +57,28 @@ def splits(cls, text_field, label_field, root='.', trees=False,
test: The filename of the test data, or None to not load the test
set. Default: 'test.jsonl'.
"""
- if trees:
- # TODO
- raise NotImplementedError
path = cls.download_or_unzip(root)
+ if parse_field is None:
+ return super(SNLI, cls).splits(
+ os.path.join(path, 'snli_1.0_'), train, validation, test,
+ format='json', fields={'sentence1': ('premise', text_field),
+ 'sentence2': ('hypothesis', text_field),
+ 'gold_label': ('label', label_field)},
+ filter_pred=lambda ex: ex.label != '-')
return super(SNLI, cls).splits(
os.path.join(path, 'snli_1.0_'), train, validation, test,
- format='json', fields={'sentence1': ('premise', text_field),
- 'sentence2': ('hypothesis', text_field),
+ format='json', fields={'sentence1_binary_parse':
+ [('premise', text_field),
+ ('premise_transitions', parse_field)],
+ 'sentence2_binary_parse':
+ [('hypothesis', text_field),
+ ('hypothesis_transitions', parse_field)],
'gold_label': ('label', label_field)},
filter_pred=lambda ex: ex.label != '-')
@classmethod
def iters(cls, batch_size=32, device=0, root='.', wv_dir='.',
- wv_type=None, wv_dim='300d', **kwargs):
+ wv_type=None, wv_dim='300d', trees=False, **kwargs):
"""Create iterator objects for splits of the SNLI dataset.
This is the simplest way to use the dataset, and assumes common
@@ -64,12 +94,20 @@ def iters(cls, batch_size=32, device=0, root='.', wv_dir='.',
wv_dir, wv_type, wv_dim: Passed to the Vocab constructor for the
text field. The word vectors are accessible as
train.dataset.fields['text'].vocab.vectors.
+ trees: Whether to include shift-reduce parser transitions.
+ Default: False.
Remaining keyword arguments: Passed to the splits method.
"""
- TEXT = data.Field(tokenize='spacy')
+ if trees:
+ TEXT = ParsedTextField()
+ TRANSITIONS = ShiftReduceField()
+ else:
+ TEXT = data.Field(tokenize='spacy')
+ TRANSITIONS = None
LABEL = data.Field(sequential=False)
- train, val, test = cls.splits(TEXT, LABEL, root=root, **kwargs)
+ train, val, test = cls.splits(
+ TEXT, LABEL, TRANSITIONS, root=root, **kwargs)
TEXT.build_vocab(train, wv_dir=wv_dir, wv_type=wv_type, wv_dim=wv_dim)
LABEL.build_vocab(train)
| py2 fails with snli example
Just FYI, seems to fail on master.
Consider adding a Travis continuous-integration build, like the `vision` or `tnt` packages have, to catch these early.
```
~/local/examples/snli] python train.py
downloading
extracting
Traceback (most recent call last):
File "train.py", line 22, in <module>
train, dev, test = datasets.SNLI.splits(inputs, answers)
File "/home/soumith/local/miniconda2/lib/python2.7/site-packages/torchtext/datasets/snli.py", line 47, in splits
filter_pred=lambda ex: ex.label != '-')
File "/home/soumith/local/miniconda2/lib/python2.7/site-packages/torchtext/data.py", line 324, in splits
train_data = None if train is None else cls(path + train, **kwargs)
File "/home/soumith/local/miniconda2/lib/python2.7/site-packages/torchtext/data.py", line 398, in __init__
examples = [make_example(line, fields) for line in f]
File "/home/soumith/local/miniconda2/lib/python2.7/site-packages/torchtext/data.py", line 232, in fromJSON
return cls.fromdict(json.loads(data), fields)
File "/home/soumith/local/miniconda2/lib/python2.7/site-packages/torchtext/data.py", line 241, in fromdict
setattr(ex, name, field.preprocess(val))
File "/home/soumith/local/miniconda2/lib/python2.7/site-packages/torchtext/data.py", line 136, in preprocess
x = Pipeline(str.lower)(x)
File "/home/soumith/local/miniconda2/lib/python2.7/site-packages/torchtext/data.py", line 30, in __call__
x = pipe.call(x)
File "/home/soumith/local/miniconda2/lib/python2.7/site-packages/torchtext/data.py", line 36, in call
return self.convert_token(x, *args)
TypeError: descriptor 'lower' requires a 'str' object but received a 'unicode'
```
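A minimal illustration of the failure mode (the patch above switches the pipeline to `six.text_type.lower`, which accepts both `str` and `unicode`):
```python
import six

token = u"Hello"          # JSON data is decoded to unicode on Python 2
# str.lower(token)        # Py2: TypeError, descriptor 'lower' requires a 'str'
print(six.text_type.lower(token))  # works on Py2 (unicode) and Py3 (str)
```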
| 2017-03-09T05:11:23 |
||
pytorch/text | 58 | pytorch__text-58 | [
"53"
] | 6696d92ed227a8e43ae949c296f05646cb977406 | diff --git a/torchtext/data/example.py b/torchtext/data/example.py
--- a/torchtext/data/example.py
+++ b/torchtext/data/example.py
@@ -49,8 +49,8 @@ def fromtree(cls, data, fields, subtrees=False):
try:
from nltk.tree import Tree
except ImportError:
- print('''Please install NLTK:
- $ pip install nltk''')
+ print("Please install NLTK. "
+ "See the docs at http://nltk.org for more information.")
raise
tree = Tree.fromstring(data)
if subtrees:
diff --git a/torchtext/data/utils.py b/torchtext/data/utils.py
--- a/torchtext/data/utils.py
+++ b/torchtext/data/utils.py
@@ -1,7 +1,7 @@
def get_tokenizer(tokenizer):
if callable(tokenizer):
return tokenizer
- if tokenizer == 'spacy':
+ if tokenizer == "spacy":
try:
import spacy
spacy_en = spacy.load('en')
@@ -14,10 +14,24 @@ def get_tokenizer(tokenizer):
print("Please install SpaCy and the SpaCy English tokenizer. "
"See the docs at https://spacy.io for more information.")
raise
+ elif tokenizer == "moses":
+ try:
+ from nltk.tokenize.moses import MosesTokenizer
+ moses_tokenizer = MosesTokenizer()
+ return moses_tokenizer.tokenize
+ except ImportError:
+ print("Please install NLTK. "
+ "See the docs at http://nltk.org for more information.")
+ raise
+ except LookupError:
+ print("Please install the necessary NLTK corpora. "
+ "See the docs at http://nltk.org for more information.")
+ raise
raise ValueError("Requested tokenizer {}, valid choices are a "
- "callable that takes a single string as input "
- "and \"spacy\" for the SpaCy English "
- "tokenizer.".format(tokenizer))
+ "callable that takes a single string as input, "
+ "\"spacy\" for the SpaCy English tokenizer, or "
+ "\"moses\" for the NLTK port of the Moses tokenization "
+ "script.".format(tokenizer))
def interleave_keys(a, b):
| diff --git a/test/data/test_field.py b/test/data/test_field.py
--- a/test/data/test_field.py
+++ b/test/data/test_field.py
@@ -1,6 +1,5 @@
from unittest import TestCase
-import six
import torchtext.data as data
@@ -88,20 +87,3 @@ def test_pad(self):
field = data.Field(init_token="<bos>", eos_token="<eos>",
sequential=False, include_lengths=True)
assert field.pad(minibatch) == minibatch
-
- def test_get_tokenizer(self):
- # Test the default case with str.split
- assert data.get_tokenizer(str.split) == str.split
- test_str = "A string, particularly one with slightly complex punctuation."
- assert data.get_tokenizer(str.split)(test_str) == str.split(test_str)
-
- # Test SpaCy option, and verify it properly handles punctuation.
- assert data.get_tokenizer("spacy")(six.text_type(test_str)) == [
- "A", "string", ",", "particularly", "one", "with", "slightly",
- "complex", "punctuation", "."]
-
- # Test that errors are raised for invalid input arguments.
- with self.assertRaises(ValueError):
- data.get_tokenizer(1)
- with self.assertRaises(ValueError):
- data.get_tokenizer("some other string")
diff --git a/test/data/test_utils.py b/test/data/test_utils.py
new file mode 100644
--- /dev/null
+++ b/test/data/test_utils.py
@@ -0,0 +1,33 @@
+from unittest import TestCase
+
+import six
+import torchtext.data as data
+
+
+class TestUtils(TestCase):
+ def test_get_tokenizer(self):
+ # Test the default case with str.split
+ assert data.get_tokenizer(str.split) == str.split
+ test_str = "A string, particularly one with slightly complex punctuation."
+ assert data.get_tokenizer(str.split)(test_str) == str.split(test_str)
+
+ # Test SpaCy option, and verify it properly handles punctuation.
+ assert data.get_tokenizer("spacy")(six.text_type(test_str)) == [
+ "A", "string", ",", "particularly", "one", "with", "slightly",
+ "complex", "punctuation", "."]
+
+ # Test Moses option. Test strings taken from NLTK doctests.
+ # Note that internally, MosesTokenizer converts to unicode if applicable
+ moses_tokenizer = data.get_tokenizer("moses")
+ assert moses_tokenizer(test_str) == [
+ "A", "string", ",", "particularly", "one", "with", "slightly",
+ "complex", "punctuation", "."]
+
+ # Nonbreaking prefixes should tokenize the final period.
+ assert moses_tokenizer(six.text_type("abc def.")) == ["abc", "def", "."]
+
+ # Test that errors are raised for invalid input arguments.
+ with self.assertRaises(ValueError):
+ data.get_tokenizer(1)
+ with self.assertRaises(ValueError):
+ data.get_tokenizer("some other string")
| Include Moses Tokenizer
| I know NLTK has a Moses tokenizer, but unsure if it's a good port / I've always used tokenize.pl anyway
For reference, for large corpora (e.g. MT) I've found it faster to pretokenize your data and feed it into TorchText with the str.split tokenizer, versus having to tokenize on every run.
@nelson-liu Is there an option to include a bash script to tokenize your data as part of the library?
This issue mentions that Moses in NLTK was fixed https://github.com/nltk/nltk/issues/1214
Hmm, not directly. Ostensibly you could write a function that takes an input string and runs the bash script on it (e.g with subprocess) and parse the output for tokenization, but that feels like a lot of overhead
Edited my comment above. Looks like NLTK contributors fixed Moses in this merged pull request: https://github.com/nltk/nltk/pull/1553
I think including the NLTK version of the Moses tokenizer is a good idea, though it shouldn’t be difficult to use the existing API to call it for now
+1, slightly unrelated, but I wonder if it's worth including the other NLTK tokenizers (e.g. punkt or PTB) / having some sort of public-facing API for using the spacy tokenizers in other languages. This seems like a slippery slope API design-wise, though...
Yeah, I think I’d rather show in the docs etc how easy it is to call them yourself. But lots of people (including me until a few minutes ago!) don’t know that NLTK now has a Moses-compatible tokenizer, so it might be worth including that one so more people know they can move away from the perl script preprocessing approach.
Fair enough, I can open a PR for this in a bit / add a custom tokenizer example.
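A hedged usage sketch of what selecting the new option might look like once it is wired into `get_tokenizer` (the output shown is indicative only and assumes NLTK is installed):
```python
from torchtext import data

TEXT = data.Field(tokenize="moses")        # backed by NLTK's MosesTokenizer
print(TEXT.preprocess(u"Hello, world."))   # e.g. ['Hello', ',', 'world', '.']
```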
>
| 2017-07-09T22:50:41 |
pytorch/text | 65 | pytorch__text-65 | [
"64"
] | b148dbb0c9589c5b92c59ecc99477121c2547ec6 | diff --git a/torchtext/data/field.py b/torchtext/data/field.py
--- a/torchtext/data/field.py
+++ b/torchtext/data/field.py
@@ -87,7 +87,7 @@ def preprocess(self, x):
`preprocessing` Pipeline."""
if six.PY2 and isinstance(x, six.string_types):
x = Pipeline(lambda s: unicode(s, encoding='utf-8'))(x)
- if self.sequential:
+ if self.sequential and isinstance(x, six.text_type):
x = self.tokenize(x)
if self.lower:
x = Pipeline(six.text_type.lower)(x)
| Possible bug in LanguageModelingDataset
In the code for [`LanguageModelingDataset`](https://github.com/pytorch/text/blob/master/torchtext/datasets/language_modeling.py), the original text seems to be pre-processed twice, viz.:
- `text += text_field.preprocess(line)` [at line 22](https://github.com/pytorch/text/blob/master/torchtext/datasets/language_modeling.py#L22)
- `examples = [data.Example.fromlist([text], fields)]` [at line 26](https://github.com/pytorch/text/blob/master/torchtext/datasets/language_modeling.py#L26), which in turn calls
`setattr(ex, name, field.preprocess(val))` [at line 44 of example.py](https://github.com/pytorch/text/blob/master/torchtext/data/example.py#L44)
In fact, if I try to create a simple LanguageModelingDataset, I am getting an error as follows:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/riddasgu/.local/lib/python2.7/site-packages/torchtext/datasets/language_modeling.py", line 28, in __init__
examples = [data.Example.fromlist([text], fields)]
File "/home/riddasgu/.local/lib/python2.7/site-packages/torchtext/data/example.py", line 44, in fromlist
setattr(ex, name, field.preprocess(val))
File "/home/riddasgu/.local/lib/python2.7/site-packages/torchtext/data/field.py", line 91, in preprocess
x = self.tokenize(x)
File "/home/riddasgu/.local/lib/python2.7/site-packages/torchtext/data/field.py", line 63, in <lambda>
tokenize=(lambda s: s.split()), include_lengths=False,
AttributeError: 'list' object has no attribute 'split'
```
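A stripped-down illustration of why the eventual fix makes the second call a no-op (this mirrors the `isinstance` check in the diff at the top of this entry, not the full library code):
```python
import six

def preprocess(x, tokenize=lambda s: s.split()):
    if isinstance(x, six.text_type):   # a second call receives a list and skips this
        x = tokenize(x)
    return x

tokens = preprocess(u"the quick brown fox")
print(preprocess(tokens))              # ['the', 'quick', 'brown', 'fox'] - unchanged
```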
| Looks like that was my mistake in a recent patch. What I think is supposed to happen here is that the first `preprocess` does the tokenization (it has to be this way because we need to split the corpus into segments with equal numbers of tokens) and the second one should be a no-op because [this line](https://github.com/pytorch/text/blob/master/torchtext/data/field.py#L90) used to include a check that the field data was a string. I'll add that back in. | 2017-07-12T20:26:04 |
|
pytorch/text | 67 | pytorch__text-67 | [
"67"
] | 0859fd96419311b6f73077b30620e070102d5bd3 | diff --git a/torchtext/data/field.py b/torchtext/data/field.py
--- a/torchtext/data/field.py
+++ b/torchtext/data/field.py
@@ -85,7 +85,8 @@ def preprocess(self, x):
first. If `sequential=True`, it will be tokenized. Then the input
will be optionally lowercased and passed to the user-provided
`preprocessing` Pipeline."""
- if six.PY2 and isinstance(x, six.string_types):
+ if (six.PY2 and isinstance(x, six.string_types) and not
+ isinstance(x, unicode)):
x = Pipeline(lambda s: unicode(s, encoding='utf-8'))(x)
if self.sequential and isinstance(x, six.text_type):
x = self.tokenize(x)
| diff --git a/test/data/test_field.py b/test/data/test_field.py
--- a/test/data/test_field.py
+++ b/test/data/test_field.py
@@ -24,6 +24,11 @@ def test_preprocess(self):
preprocessing=preprocess_pipeline)
assert field_not_sequential.preprocess("Test string.") == "test string.!"
+ # Non-regression test that we do not try to decode unicode strings to unicode
+ field_not_sequential = data.Field(sequential=False, lower=True,
+ preprocessing=preprocess_pipeline)
+ assert field_not_sequential.preprocess(u"Test string.") == "test string.!"
+
def test_pad(self):
# Default case.
field = data.Field()
| Prevent decoding unicode strs to unicode in Py2
Fixes #67
| 2017-07-14T22:42:28 |
|
pytorch/text | 75 | pytorch__text-75 | [
"71"
] | df7b391d3c02471a2095170ee83c9de4586930e7 | diff --git a/torchtext/vocab.py b/torchtext/vocab.py
--- a/torchtext/vocab.py
+++ b/torchtext/vocab.py
@@ -126,7 +126,7 @@ def __init__(self, counter, max_size=None, min_freq=1, wv_dir=os.getcwd(),
self.itos = ['<unk>'] + specials
counter.subtract({tok: counter[tok] for tok in ['<unk>'] + specials})
- max_size = None if max_size is None else max_size - len(self.itos)
+ max_size = None if max_size is None else max_size + len(self.itos)
# sort by frequency, then alphabetically
words = sorted(counter.items(), key=lambda tup: tup[0])
| max_size vocab is not consistent.
**Context:**
Num field includes the numbers 0 - 9. I set `max_size=10`. Then I print the vocab that was built:
```
num_field.build_vocab(train, max_size=10)
print(num_field.vocab.itos)
# ['<unk>', '<pad>', '<s>', '</s>', u'1', u'2']
print(len(num_field.vocab.itos))
# 6
```
Then I checked the `words` created from tokenization:
```
print(words)
# [(u'1', 11308), (u'2', 11270), (u'9', 11058), (u'0', 11020), (u'5', 10952), (u'4', 10942), (u'6', 10914), (u'8', 10820), (u'3', 10766), (u'7', 10706), ('</s>', 0), ('<pad>', 0), ('<s>', 0), ('<unk>', 0)]
```
Looks like the vocab built includes only 6 tokens yet the max_size is 10 while there are 14 possible tokens.
**Problem:**
If the number of tokens is larger than `max_size`, `build_vocab` does not fill the vocabulary up to `max_size`.
**Possible Solution:**
Update https://github.com/pytorch/text/blob/master/torchtext/vocab.py#L129 to not subtract `len(self.itos)` from `max_size`.
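For illustration, the arithmetic behind the observed numbers (using the four specials printed in `vocab.itos` above):
```python
specials = ['<unk>', '<pad>', '<s>', '</s>']   # as printed in vocab.itos above
max_size = 10
print(max_size - len(specials))  # 6  -> the buggy cap: only 2 real tokens fit
print(max_size + len(specials))  # 14 -> cap that leaves room for 10 real tokens
```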
| i'm a bit confused by your code example --- when you say "`num_field` includes the number 1-9", do you mean that the train set contains 1-9? Could you give a full snippet to reproduce?
@nelson-liu Yup!
So the training data, when tokenized includes the tokens 0 - 9.
The goal is to train a RNN to reverse a string of numbers `"7 1 5 5 2 5 5 0"` => `"0 5 5 2 5 5 1 7"`.
Example `train.txt`.
Format: `source sequence + \t + target sequence`
Tokenized by spaces (`"5 0"` => `["5", "0"]`)
```
5 0 0 5
4 4 0 0 4 4
2 2
5 2 7 0 0 7 2 5
5 6 6 5
7 1 5 5 2 5 5 0 0 5 5 2 5 5 1 7
9 5 2 1 8 0 4 5 5 4 0 8 1 2 5 9
6 6
3 4 0 6 1 5 4 2 2 4 5 1 6 0 4 3
5 8 0 3 3 0 8 5
2 7 1 1 7 2
7 7 8 8 8 6 2 2 6 8 8 8 7 7
1 7 7 1
1 8 5 1 1 5 8 1
0 4 2 3 3 2 4 0
2 4 9 1 8 8 1 9 4 2
4 9 6 6 6 5 5 0 0 5 5 6 6 6 9 4
6 0 2 2 0 6
1 0 6 4 4 6 0 1
0 9 4 2 7 1 5 5 1 7 2 4 9 0
8 2 3 6 9 2 4 6 8 1 1 8 6 4 2 9 6 3 2 8
2 9 7 4 3 3 4 7 9 2
4 3 6 2 8 4 0 2 7 3 3 7 2 0 4 8 2 6 3 4
6 3 3 6
0 0 0 4 9 9 4 0 0 0
2 2 0 1 4 6 6 4 1 0 2 2
9 3 3 9
8 4 9 3 4 6 8 1 3 1 1 3 1 8 6 4 3 9 4 8
1 2 2 5 7 5 8 8 8 1 1 8 8 8 5 7 5 2 2 1
0 7 0 8 4 6 8 1 1 8 6 4 8 0 7 0
9 7 9 9 7 9
3 3 0 6 4 7 0 0 7 4 6 0 3 3
1 0 0 1
1 6 1 8 8 3 9 8 8 9 3 8 8 1 6 1
5 6 0 9 3 7 6 1 4 8 8 4 1 6 7 3 9 0 6 5
0 5 5 0
4 4 3 7 6 8 0 0 8 6 7 3 4 4
5 0 8 9 9 8 0 5
9 9 5 9 6 0 2 8 4 5 5 4 8 2 0 6 9 5 9 9
8 1 2 4 5 5 3 9 9 3 5 5 4 2 1 8
5 0 9 7 1 1 9 5 5 9 1 1 7 9 0 5
9 2 4 1 1 4 2 9
8 0 0 0 3 1 5 4 6 4 4 6 4 5 1 3 0 0 0 8
4 4
5 1 7 1 2 4 8 8 4 2 1 7 1 5
1 4 6 1 2 2 2 2 1 6 4 1
5 0 9 0 4 1 0 0 1 4 0 9 0 5
5 8 7 7 8 5
4 8 1 5 5 5 5 1 8 4
3 3
7 8 6 7 0 1 6 6 1 0 7 6 8 7
8 3 3 8
9 1 6 5 0 9 6 8 9 5 5 9 8 6 9 0 5 6 1 9
3 8 9 7 3 3 7 5 5 7 3 3 7 9 8 3
5 9 2 6 4 4 6 2 9 5
6 0 4 4 0 6
1 2 3 1 2 5 0 0 2 7 7 2 0 0 5 2 1 3 2 1
3 4 5 2 0 7 5 9 1 4 4 1 9 5 7 0 2 5 4 3
```
Definitely a mistake! Thanks for noticing; this is the kind of breaking bug fix change we need to get in as soon as possible.
I guess this means I should ask for input on the decision whose buggy implementation is at issue here: should `max_vocab=10` include special tokens in that 10 or not? As you can see, I was leaning towards not including them in the 10, but I don't feel strongly either way.
> As you can see, I was leaning towards not including them in the 10
+1
@jekbradbury Consider renaming `max_vocab` to `max_tokens`? Then define `tokens` to be distinct from `special_tokens` or `specials`.
@jekbradbury @nelson-liu Do you think we should add a label "contributions welcome" to these issues.
> Do you think we should add a label "contributions welcome" to these issues.
sure, i think it's a bit implicit but could be worth labeling as well. I feel like anything is "contributions welcome", but there aren't any formal contribution guidelines or anything which might make it hard for first-time contributors to OSS in general. For the reference, I'm also just a user who cares about making the library better, and i've found @jekbradbury to be quite receptive to PRs and discussion. (so thanks for that!)
> Consider renaming `max_vocab` to `max_tokens`? Then define `tokens` to be distinct from `special_tokens` or `specials`.
This could work as well, but it might be a bit complex. IMO, the set of specials is small enough that it isn't generally a big deal to me whether my embedding matrix/other stuff is of size `|tokens|` or `|tokens + specials|` (in your definitions of the terms). Better to just pick one and run with it i think.
Additionally, when i think of the semantics of the word `token`, I'm under the impression that it includes specials (since eos, bos, and padding are all tokens as well). `max_vocab` seems clearer to me in that regard, but this is largely just personal preference. | 2017-07-18T02:25:43 |
|
pytorch/text | 76 | pytorch__text-76 | [
"69"
] | df7b391d3c02471a2095170ee83c9de4586930e7 | diff --git a/torchtext/data/iterator.py b/torchtext/data/iterator.py
--- a/torchtext/data/iterator.py
+++ b/torchtext/data/iterator.py
@@ -110,10 +110,10 @@ def splits(cls, datasets, batch_sizes=None, **kwargs):
def data(self):
"""Return the examples in the dataset in order, sorted, or shuffled."""
- if self.shuffle:
- xs = [self.dataset[i] for i in self.random_shuffler(range(len(self.dataset)))]
- elif self.sort:
+ if self.sort:
xs = sorted(self.dataset, key=self.sort_key)
+ elif self.shuffle:
+ xs = [self.dataset[i] for i in self.random_shuffler(range(len(self.dataset)))]
else:
xs = self.dataset
return xs
| Consistency with sorting: `sort=True`
**Problem:**
```
train_iter, dev_iter, test_iter = data.BucketIterator.splits(
(train, dev, test),
batch_sizes=(32, 256, 256),
sort_key=lambda x: len(x.input),
sort=True,
device=-1) # Use CPU
```
If `sort=True` and `train=True`, then the `train_iter` batches are shuffled. This behavior is unexpected.
**Cause:**
Because by default `self.shuffle=True` is `train=True`. Then https://github.com/pytorch/text/blob/master/torchtext/data/iterator.py#L113 `shuffle` overrides `sort`.
**Possible Solution:**
`sort=True` should override `shuffle=None and train=True`.
| I think you're right, but I'll look into it more on Monday (a couple times a seemingly obvious, one-line change like this has broken existing code and I want to make sure that's unlikely here). | 2017-07-18T02:27:39 |
|
pytorch/text | 81 | pytorch__text-81 | [
"79"
] | e81c0d09ad6e0c1cce42b73c8300c0f48be82fb1 | diff --git a/torchtext/vocab.py b/torchtext/vocab.py
--- a/torchtext/vocab.py
+++ b/torchtext/vocab.py
@@ -113,6 +113,7 @@ def __init__(self, counter, max_size=None, min_freq=1, wv_dir=os.getcwd(),
"""
self.freqs = counter.copy()
self.unk_init = unk_init
+ min_freq = max(min_freq, 1)
counter.update(['<unk>'] + specials)
if wv_type is not None:
| min_freq=0 bug
**Noticed:**
```
>>>some_field.build_vocab(some_dataset, min_freq=0)
>>>padding_idx = some_field.vocab.stoi['<pad'>]
>>>print(padding_idx, '<pad>')
12 <pad>
```
Looks like <pad> is not equal to 1 which is not okay.
Printed `stoi` and `itos` as well:
```
>>>print(some_field.vocab.stoi)
defaultdict(<function Vocab.__init__.<locals>.<lambda> at 0x103f4f0d0>, {'<pad>': 12, '1': 2, '2': 3, '9': 4, '0': 5, '5': 6, '4': 7, '6': 8, '8': 9, '3': 10, '7': 11, '<unk>': 13})
>>>print(some_field.vocab.itos)
['<unk>', '<pad>', '1', '2', '9', '0', '5', '4', '6', '8', '3', '7', '<pad>', '<unk>']
```
**Possible reason:**
Counter subtract does remove the specials but puts their count at 0.
`counter.subtract({tok: counter[tok] for tok in ['<unk>'] + specials})`
**Possible solution:**
Throw an error if `min_freq < 1`
| Thanks! Will fix. | 2017-07-27T23:58:56 |
|
pytorch/text | 87 | pytorch__text-87 | [
"86"
] | e81c0d09ad6e0c1cce42b73c8300c0f48be82fb1 | diff --git a/torchtext/data/iterator.py b/torchtext/data/iterator.py
--- a/torchtext/data/iterator.py
+++ b/torchtext/data/iterator.py
@@ -1,3 +1,5 @@
+from __future__ import division
+
import math
import random
from contextlib import contextmanager
| Length of iterator fails in Python 2
The division `len(dataset) / batch_size` will be cast to int in python2, so that `math.ceil` doesn't really work when `len(dataset)` is not a multiple of batch size.
| Yeah, this is a bug. The solution should be to wrap batch_size in `float`. Will fix.
I think it's preferable to just `from __future__ import division`?
Yes, thanks. I'm not very conversant in the Py2 world... | 2017-08-05T07:00:50 |
|
pytorch/text | 99 | pytorch__text-99 | [
"98"
] | a5049b9d70a699986ae839aca178c33376717cde | diff --git a/torchtext/datasets/imdb.py b/torchtext/datasets/imdb.py
--- a/torchtext/datasets/imdb.py
+++ b/torchtext/datasets/imdb.py
@@ -43,7 +43,8 @@ def download(cls, root):
if not os.path.isdir(path):
fpath = os.path.join(path, cls.filename)
if not os.path.isfile(fpath):
- os.makedirs(os.path.dirname(fpath), exist_ok=True)
+ if not os.path.exists(os.path.dirname(fpath)):
+ os.makedirs(os.path.dirname(fpath))
print('downloading {}'.format(cls.filename))
urllib.request.urlretrieve(os.path.join(cls.url, cls.filename), fpath)
with tarfile.open(fpath, 'r:gz') as tar:
diff --git a/torchtext/datasets/trec.py b/torchtext/datasets/trec.py
--- a/torchtext/datasets/trec.py
+++ b/torchtext/datasets/trec.py
@@ -49,7 +49,8 @@ def download(cls, root):
for fn in [cls.train_filename, cls.test_filename]:
fpath = os.path.join(path, fn)
if not os.path.isfile(fpath):
- os.makedirs(os.path.dirname(fpath), exist_ok=True)
+ if not os.path.exists(os.path.dirname(fpath)):
+ os.makedirs(os.path.dirname(fpath))
print('downloading {}'.format(fn))
urllib.request.urlretrieve(os.path.join(cls.url_base, fn), fpath)
return os.path.join(path, '')
diff --git a/torchtext/vocab.py b/torchtext/vocab.py
--- a/torchtext/vocab.py
+++ b/torchtext/vocab.py
@@ -116,9 +116,9 @@ def load_vectors(self, vectors, unk_init=torch.Tensor.zero_, expand_vocab=False)
self.vectors = torch.Tensor(len(self), tot_dim)
for i, token in enumerate(self.itos):
start_dim = 0
- for i, v in enumerate(vectors):
- end_dim = start_dim + vecs[i].dim
- self.vectors[i][start_dim:end_dim] = vecs[i][token]
+ for j, v in enumerate(vectors):
+ end_dim = start_dim + vecs[j].dim
+ self.vectors[i][start_dim:end_dim] = vecs[j][token]
start_dim = end_dim
assert(start_dim == tot_dim)
@@ -149,12 +149,13 @@ def vector_cache(self, url, root, fname):
fname_pt = fname + '.pt'
fname_txt = fname + '.txt'
desc = os.path.basename(fname)
- dest = os.path.join(root, os.path.basename(url))
if not os.path.isfile(fname_pt):
+ dest = os.path.join(root, os.path.basename(url))
if not os.path.isfile(fname_txt):
print('downloading vectors from {}'.format(url))
- os.makedirs(root, exist_ok=True)
+ if not os.path.exists(root):
+ os.makedirs(root)
with tqdm(unit='B', unit_scale=True, miniters=1, desc=desc) as t:
urlretrieve(url, dest, reporthook=reporthook(t))
print('extracting vectors into {}'.format(root))
@@ -204,7 +205,8 @@ class GloVe(Vectors):
'glove.42B': 'http://nlp.stanford.edu/data/glove.42B.300d.zip',
'glove.840B': 'http://nlp.stanford.edu/data/glove.840B.300d.zip',
'glove.twitter.27B': 'http://nlp.stanford.edu/data/glove.twitter.27B.zip',
- 'glove.6B': 'http://nlp.stanford.edu/data/glove.6B.zip'
+ 'glove.6B': 'http://nlp.stanford.edu/data/glove.6B.zip',
+ 'glove.test_twitter.27B': None,
}
def __init__(self, root='.vector_cache', name='840B', dim=300, **kwargs):
| diff --git a/.vector_cache/glove.test_twitter.27B.200d.pt b/.vector_cache/glove.test_twitter.27B.200d.pt
new file mode 100644
Binary files /dev/null and b/.vector_cache/glove.test_twitter.27B.200d.pt differ
diff --git a/.vector_cache/make_test_glove_vectors.py b/.vector_cache/make_test_glove_vectors.py
new file mode 100644
--- /dev/null
+++ b/.vector_cache/make_test_glove_vectors.py
@@ -0,0 +1,19 @@
+
+import torch
+import numpy as np
+
+if __name__ == '__main__':
+ vocab = ['The', 'the', 'hello', 'world', 'Hello', ',', '.']
+ dim = 200
+
+ stoi = {word: i for i, word in enumerate(vocab)}
+
+ vectors = torch.Tensor(len(vocab), dim)
+ with open('glove.twitter.27B.200d.txt', 'r') as fin:
+ for line in fin:
+ columns = line.strip().split()
+ if columns[0] in stoi:
+ vectors[stoi[columns[0]], :] = torch.from_numpy(
+ np.array([float(ele) for ele in columns[1:]])
+ )
+ torch.save((stoi, vectors, dim), 'glove.test_twitter.27B.200d.pt')
diff --git a/test/test_vocab.py b/test/test_vocab.py
new file mode 100644
--- /dev/null
+++ b/test/test_vocab.py
@@ -0,0 +1,33 @@
+import numpy as np
+import unittest
+
+from torchtext import vocab
+from collections import Counter
+
+
+class TestVocab(unittest.TestCase):
+ def test_vocab(self):
+ c = Counter(['hello', 'world'])
+ v = vocab.Vocab(c, vectors='glove.test_twitter.27B.200d')
+
+ self.assertEqual(v.itos, ['<unk>', '<pad>', 'hello', 'world'])
+ vectors = v.vectors.numpy()
+
+ # The first 5 entries in each vector.
+ expected_glove_twitter = {
+ 'hello': [0.34683, -0.19612, -0.34923, -0.28158, -0.75627],
+ 'world': [0.035771, 0.62946, 0.27443, -0.36455, 0.39189],
+ }
+
+ for word in ['hello', 'world']:
+ self.assertTrue(
+ np.allclose(
+ vectors[v.stoi[word], :5], expected_glove_twitter[word]
+ )
+ )
+
+ self.assertTrue(np.allclose(vectors[v.stoi['<unk>'], :], np.zeros(200)))
+
+
+if __name__ == '__main__':
+ unittest.main()
diff --git a/test/vocab.py b/test/vocab.py
deleted file mode 100644
--- a/test/vocab.py
+++ /dev/null
@@ -1,7 +0,0 @@
-from torchtext import vocab
-
-from collections import Counter
-c = Counter(['hello', 'world'])
-v = vocab.Vocab(c, vectors=['glove.twitter.27B.200d', 'charngram.100d'])
-print(v.itos)
-print(v.vectors)
| Loading pre-trained word vectors is broken
The logic to load the pre-trained word vectors seems to be broken. I looked in `test/vocab.py` but didn't see any tests that covered correctness of loaded vectors.
Here's a snippet to reproduce the issue:
```python
from torchtext.vocab import Vocab
from collections import Counter
the_vocab = Vocab(Counter(['the']), vectors='glove.6B.50d')
the_index = the_vocab.stoi['the']
print(the_index)
print(the_vocab.vectors.numpy()[the_index, :])
```
With the master branch this displays:
```
2
[ 3.02532905e-17 1.40129846e-45 2.10194770e-44 0.00000000e+00
1.00185045e-16 1.40129846e-45 1.40129846e-44 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
4.84570079e-17 1.40129846e-45 1.54142831e-44 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 -2.00000000e+00 8.72027897e-07 -4.65774808e-10
4.82658813e-26 1.40129846e-45 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 -8.29709032e-27 3.23318939e+02
9.88949980e-32 1.40129846e-45 1.58111748e-32 1.40129846e-45
-8.15662628e+28 -9.19432299e+27 9.88943397e-32 1.40129846e-45
7.55058708e-26 1.40129846e-45]
```
However, if I manually download and unpack the GloVe 50d file, this is the line with `the` token:
```
the 0.418 0.24968 -0.41242 0.1217 0.34527 -0.044457 -0.49688 -0.17862 -0.00066023 -0.6566 0.27843 -0.14767 -0.55677 0.14658 -0.0095095 0.011658 0.10204 -0.12792 -0.8443 -0.12181 -0.016801 -0.33279 -0.1552 -0.23131 -0.19181 -1.8823 -0.76746 0.099051 -0.42125 -0.19526 4.0071 -0.18594 -0.52287 -0.31681 0.00059213 0.0074449 0.17778 -0.15897 0.012041 -0.054223 -0.29871 -0.15749 -0.34758 -0.045637 -0.44251 0.18785 0.0027849 -0.18411 -0.11514 -0.78581
```
| 2017-08-23T00:26:46 |
|
pytorch/text | 112 | pytorch__text-112 | [
"109"
] | 2c398c7b39a678fb06a00ad5bc3e15ed7da8b4a5 | diff --git a/torchtext/data/iterator.py b/torchtext/data/iterator.py
--- a/torchtext/data/iterator.py
+++ b/torchtext/data/iterator.py
@@ -217,8 +217,8 @@ def __iter__(self):
text = self.dataset[0].text
TEXT = self.dataset.fields['text']
TEXT.eos_token = None
- text = text + ([TEXT.pad_token] * (math.ceil(len(text) / self.batch_size) *
- self.batch_size - len(text)))
+ text = text + ([TEXT.pad_token] * int(math.ceil(len(text) / self.batch_size) *
+ self.batch_size - len(text)))
data = TEXT.numericalize(
[text], device=self.device, train=self.train)
data = data.view(self.batch_size, -1).t().contiguous()
| python2 iterator bug?
here is the snippet:
```
from __future__ import print_function
from torchtext.datasets import WikiText2
i_train, i_dev, i_test = WikiText2.iters(device='-1', root="data/")
for i in i_train:
print(i)
```
here's the error:
```
Traceback (most recent call last):
File "bug.py", line 5, in <module>
for i in i_train:
File "/u/bosctom/.local/lib/python2.7/site-packages/torchtext/data/iterator.py", line 218, in __iter__
self.batch_size - len(text)))
TypeError: can't multiply sequence by non-int of type 'float'
```
The bug is related to python2's math.ceil() returning a float?
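A small illustration of the failure (list repetition needs an int, and `math.ceil` returns a float on Python 2), matching the `int(...)` wrapper in the diff above:
```python
import math

pad_token, text_len, batch_size = '<pad>', 7, 4
needed = math.ceil(text_len / float(batch_size)) * batch_size - text_len
# [pad_token] * needed            # TypeError on Py2: can't multiply sequence by a float
print([pad_token] * int(needed))  # ['<pad>'] - pads the stream to a batch_size multiple
```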
| what version of torchtext are you using? A git hash would be nice, if you have one (or did you install from pip)? I get a different error related to the `WikiText2` dataset not being updated to use the latest Vocab API on Python 2 and 3.
I got that error using the pip version: 0.1.1
yes, you're right --- there needs to be an `int` call at https://github.com/pytorch/text/blob/0634892b91bbac24c15b2d75147f2c47b24fd0dd/torchtext/data/iterator.py#L220. Would you be willing to create a PR, or should i go ahead? | 2017-09-08T00:29:29 |
|
pytorch/text | 119 | pytorch__text-119 | [
"78"
] | b57bab91dce024fbb9ef6ba297c695b007aedbcf | diff --git a/torchtext/data/field.py b/torchtext/data/field.py
--- a/torchtext/data/field.py
+++ b/torchtext/data/field.py
@@ -57,6 +57,28 @@ class Field(object):
pad_token: The string token used as padding. Default: "<pad>".
"""
+ # Dictionary mapping PyTorch tensor types to the appropriate Python
+ # numeric type.
+ tensor_types = {
+ torch.FloatTensor: float,
+ torch.cuda.FloatTensor: float,
+ torch.DoubleTensor: float,
+ torch.cuda.DoubleTensor: float,
+ torch.HalfTensor: float,
+ torch.cuda.HalfTensor: float,
+
+ torch.ByteTensor: int,
+ torch.cuda.ByteTensor: int,
+ torch.CharTensor: int,
+ torch.cuda.CharTensor: int,
+ torch.ShortTensor: int,
+ torch.cuda.ShortTensor: int,
+ torch.IntTensor: int,
+ torch.cuda.IntTensor: int,
+ torch.LongTensor: int,
+ torch.cuda.LongTensor: int
+ }
+
def __init__(
self, sequential=True, use_vocab=True, init_token=None,
eos_token=None, fix_length=None, tensor_type=torch.LongTensor,
@@ -103,7 +125,8 @@ def pad(self, minibatch):
the longest example in the batch. Prepends self.init_token and appends
self.eos_token if those attributes are not None. Returns a tuple of the
padded list and a list containing lengths of each example if
- `self.include_lengths` is `True`, else just returns the padded list.
+ `self.include_lengths` is `True` and `self.sequential` is `True`, else just
+ returns the padded list. If `self.sequential` is `False`, no padding is applied.
"""
minibatch = list(minibatch)
if not self.sequential:
@@ -162,16 +185,25 @@ def numericalize(self, arr, device=None, train=True):
included in the return value.
Arguments:
- arr: List of tokenized and padded examples, or tuple of a padded
- list and a list of lengths if self.include_lengths is True.
- device: Device to create the Variable's Tensor on. Use -1 for
- CPU and None for the currently active GPU device. Default:
- None.
- train: Whether the batch is for a training set. If False, the
- Variable will be created with volatile=True. Default: True.
+ arr (List[List[str]], or tuple of (List[List[str]], List[int])):
+ List of tokenized and padded examples, or tuple of List of
+ tokenized and padded examples and List of lengths of each
+ example if self.include_lengths is True.
+ device (-1 or None): Device to create the Variable's Tensor on.
+ Use -1 for CPU and None for the currently active GPU device.
+ Default: None.
+ train (boolean): Whether the batch is for a training set.
+ If False, the Variable will be created with volatile=True.
+ Default: True.
"""
+ if self.include_lengths and not isinstance(arr, tuple):
+ raise ValueError("Field has include_lengths set to True, but "
+ "input data is not a tuple of "
+ "(data batch, batch lengths).")
if isinstance(arr, tuple):
arr, lengths = arr
+ lengths = torch.LongTensor(lengths)
+
if self.use_vocab:
if self.sequential:
arr = [[self.vocab.stoi[x] for x in ex] for ex in arr]
@@ -180,11 +212,23 @@ def numericalize(self, arr, device=None, train=True):
if self.postprocessing is not None:
arr = self.postprocessing(arr, self.vocab, train)
- elif self.postprocessing is not None:
- arr = self.postprocessing(arr, train)
+ else:
+ if self.tensor_type not in self.tensor_types:
+ raise ValueError(
+ "Specified Field tensor_type {} can not be used with "
+ "use_vocab=False because we do not know how to numericalize it. "
+ "Please raise an issue at "
+ "https://github.com/pytorch/text/issues".format(self.tensor_type))
+ numericalization_func = self.tensor_types[self.tensor_type]
+ # It doesn't make sense to explictly coerce to a numeric type if
+ # the data is sequential, since it's unclear how to coerce padding tokens
+ # to a numeric type.
+ if not self.sequential:
+ arr = [numericalization_func(x) for x in arr]
+ if self.postprocessing is not None:
+ arr = self.postprocessing(arr, train)
+
arr = self.tensor_type(arr)
- if self.include_lengths:
- lengths = torch.LongTensor(lengths)
if self.sequential and not self.batch_first:
arr.t_()
if device == -1:
| diff --git a/test/common/torchtext_test_case.py b/test/common/torchtext_test_case.py
--- a/test/common/torchtext_test_case.py
+++ b/test/common/torchtext_test_case.py
@@ -20,6 +20,8 @@ def setUp(self):
os.path.dirname(os.path.realpath(__file__)), os.pardir, os.pardir)))
self.test_dir = tempfile.mkdtemp()
self.test_ppid_dataset_path = os.path.join(self.test_dir, "test_ppid_dataset")
+ self.test_numerical_features_dataset_path = os.path.join(
+ self.test_dir, "test_numerical_features_dataset")
def tearDown(self):
try:
@@ -54,3 +56,46 @@ def write_test_ppid_dataset(self, data_format="csv"):
example["question2"], example["label"]])))
else:
raise ValueError("Invalid format {}".format(data_format))
+
+ def write_test_numerical_features_dataset(self):
+ with open(self.test_numerical_features_dataset_path,
+ "w") as test_numerical_features_dataset_file:
+ test_numerical_features_dataset_file.write("0.1\t1\tteststring1\n")
+ test_numerical_features_dataset_file.write("0.5\t12\tteststring2\n")
+ test_numerical_features_dataset_file.write("0.2\t0\tteststring3\n")
+ test_numerical_features_dataset_file.write("0.4\t12\tteststring4\n")
+ test_numerical_features_dataset_file.write("0.9\t9\tteststring5\n")
+
+
+def verify_numericalized_example(field, test_example_data,
+ test_example_numericalized,
+ test_example_lengths=None,
+ batch_first=False, train=True):
+ """
+ Function to verify that numericalized example is correct
+ with respect to the Field's Vocab.
+ """
+ if isinstance(test_example_numericalized, tuple):
+ test_example_numericalized, lengths = test_example_numericalized
+ assert test_example_lengths == lengths.tolist()
+ if batch_first:
+ test_example_numericalized.data.t_()
+ # Transpose numericalized example so we can compare over batches
+ for example_idx, numericalized_single_example in enumerate(
+ test_example_numericalized.t()):
+ assert len(test_example_data[example_idx]) == len(numericalized_single_example)
+ assert numericalized_single_example.volatile is not train
+ for token_idx, numericalized_token in enumerate(
+ numericalized_single_example):
+ # Convert from Variable to int
+ numericalized_token = numericalized_token.data[0]
+ test_example_token = test_example_data[example_idx][token_idx]
+ # Check if the numericalized example is correct, taking into
+ # account unknown tokens.
+ if field.vocab.stoi[test_example_token] != 0:
+ # token is in-vocabulary
+ assert (field.vocab.itos[numericalized_token] ==
+ test_example_token)
+ else:
+ # token is OOV and <unk> always has an index of 0
+ assert numericalized_token == 0
diff --git a/test/data/test_field.py b/test/data/test_field.py
--- a/test/data/test_field.py
+++ b/test/data/test_field.py
@@ -1,6 +1,12 @@
+# -*- coding: utf-8 -*-
+from __future__ import unicode_literals
+from collections import Counter
+
+from numpy.testing import assert_allclose
+import torch
import torchtext.data as data
-from ..common.torchtext_test_case import TorchtextTestCase
+from ..common.torchtext_test_case import TorchtextTestCase, verify_numericalized_example
class TestField(TorchtextTestCase):
@@ -27,7 +33,7 @@ def test_preprocess(self):
# Non-regression test that we do not try to decode unicode strings to unicode
field_not_sequential = data.Field(sequential=False, lower=True,
preprocessing=preprocess_pipeline)
- assert field_not_sequential.preprocess(u"Test string.") == "test string.!"
+ assert field_not_sequential.preprocess("ᑌᑎIᑕOᗪᕮ_Tᕮ᙭T") == "ᑌᑎiᑕoᗪᕮ_tᕮ᙭t!"
def test_pad(self):
# Default case.
@@ -92,3 +98,272 @@ def test_pad(self):
field = data.Field(init_token="<bos>", eos_token="<eos>",
sequential=False, include_lengths=True)
assert field.pad(minibatch) == minibatch
+
+ def test_build_vocab(self):
+ # Set up fields
+ question_field = data.Field(sequential=True)
+ label_field = data.Field(sequential=False)
+
+ # Write TSV dataset and construct a Dataset
+ self.write_test_ppid_dataset(data_format="tsv")
+ tsv_fields = [("id", None), ("q1", question_field),
+ ("q2", question_field), ("label", label_field)]
+ tsv_dataset = data.TabularDataset(
+ path=self.test_ppid_dataset_path, format="tsv",
+ fields=tsv_fields)
+
+ # Write JSON dataset and construct a Dataset
+ self.write_test_ppid_dataset(data_format="json")
+ json_fields = {"question1": ("q1", question_field),
+ "question2": ("q2", question_field),
+ "label": ("label", label_field)}
+ json_dataset = data.TabularDataset(
+ path=self.test_ppid_dataset_path, format="json",
+ fields=json_fields)
+
+ # Test build_vocab default
+ question_field.build_vocab(tsv_dataset, json_dataset)
+ assert question_field.vocab.freqs == Counter(
+ {'When': 4, 'do': 4, 'you': 4, 'use': 4, 'instead': 4,
+ 'of': 4, 'was': 4, 'Lincoln': 4, 'born?': 4, 'シ': 2,
+ 'し?': 2, 'Where': 2, 'What': 2, 'is': 2, '2+2': 2,
+ '"&"': 2, '"and"?': 2, 'Which': 2, 'location': 2,
+ 'Abraham': 2, '2+2=?': 2})
+ expected_stoi = {'<unk>': 0, '<pad>': 1, 'Lincoln': 2, 'When': 3,
+ 'born?': 4, 'do': 5, 'instead': 6, 'of': 7,
+ 'use': 8, 'was': 9, 'you': 10, '"&"': 11,
+ '"and"?': 12, '2+2': 13, '2+2=?': 14, 'Abraham': 15,
+ 'What': 16, 'Where': 17, 'Which': 18, 'is': 19,
+ 'location': 20, 'し?': 21, 'シ': 22}
+ assert dict(question_field.vocab.stoi) == expected_stoi
+ # Turn the stoi dictionary into an itos list
+ expected_itos = [x[0] for x in sorted(expected_stoi.items(),
+ key=lambda tup: tup[1])]
+ assert question_field.vocab.itos == expected_itos
+
+ label_field.build_vocab(tsv_dataset, json_dataset)
+ assert label_field.vocab.freqs == Counter({'1': 4, '0': 2})
+ expected_stoi = {'1': 1, '0': 2, '<unk>': 0}
+ assert dict(label_field.vocab.stoi) == expected_stoi
+ # Turn the stoi dictionary into an itos list
+ expected_itos = [x[0] for x in sorted(expected_stoi.items(),
+ key=lambda tup: tup[1])]
+ assert label_field.vocab.itos == expected_itos
+
+ # Test build_vocab default
+ question_field.build_vocab(tsv_dataset, json_dataset)
+ assert question_field.vocab.freqs == Counter(
+ {'When': 4, 'do': 4, 'you': 4, 'use': 4, 'instead': 4,
+ 'of': 4, 'was': 4, 'Lincoln': 4, 'born?': 4, 'シ': 2,
+ 'し?': 2, 'Where': 2, 'What': 2, 'is': 2, '2+2': 2,
+ '"&"': 2, '"and"?': 2, 'Which': 2, 'location': 2,
+ 'Abraham': 2, '2+2=?': 2})
+ expected_stoi = {'<unk>': 0, '<pad>': 1, 'Lincoln': 2, 'When': 3,
+ 'born?': 4, 'do': 5, 'instead': 6, 'of': 7,
+ 'use': 8, 'was': 9, 'you': 10, '"&"': 11,
+ '"and"?': 12, '2+2': 13, '2+2=?': 14, 'Abraham': 15,
+ 'What': 16, 'Where': 17, 'Which': 18, 'is': 19,
+ 'location': 20, 'し?': 21, 'シ': 22}
+ assert dict(question_field.vocab.stoi) == expected_stoi
+ # Turn the stoi dictionary into an itos list
+ expected_itos = [x[0] for x in sorted(expected_stoi.items(),
+ key=lambda tup: tup[1])]
+ assert question_field.vocab.itos == expected_itos
+
+ label_field.build_vocab(tsv_dataset, json_dataset)
+ assert label_field.vocab.freqs == Counter({'1': 4, '0': 2})
+ expected_stoi = {'1': 1, '0': 2, '<unk>': 0}
+ assert dict(label_field.vocab.stoi) == expected_stoi
+ # Turn the stoi dictionary into an itos list
+ expected_itos = [x[0] for x in sorted(expected_stoi.items(),
+ key=lambda tup: tup[1])]
+ assert label_field.vocab.itos == expected_itos
+
+ # Test build_vocab with extra kwargs passed to Vocab
+ question_field.build_vocab(tsv_dataset, json_dataset, max_size=8,
+ min_freq=3)
+ assert question_field.vocab.freqs == Counter(
+ {'When': 4, 'do': 4, 'you': 4, 'use': 4, 'instead': 4,
+ 'of': 4, 'was': 4, 'Lincoln': 4, 'born?': 4, 'シ': 2,
+ 'し?': 2, 'Where': 2, 'What': 2, 'is': 2, '2+2': 2,
+ '"&"': 2, '"and"?': 2, 'Which': 2, 'location': 2,
+ 'Abraham': 2, '2+2=?': 2})
+ expected_stoi = {'<unk>': 0, '<pad>': 1, 'Lincoln': 2, 'When': 3,
+ 'born?': 4, 'do': 5, 'instead': 6, 'of': 7,
+ 'use': 8, 'was': 9}
+ assert dict(question_field.vocab.stoi) == expected_stoi
+ # Turn the stoi dictionary into an itos list
+ expected_itos = [x[0] for x in sorted(expected_stoi.items(),
+ key=lambda tup: tup[1])]
+ assert question_field.vocab.itos == expected_itos
+
+ def test_numericalize_basic(self):
+ self.write_test_ppid_dataset(data_format="tsv")
+ question_field = data.Field(sequential=True)
+ tsv_fields = [("id", None), ("q1", question_field),
+ ("q2", question_field), ("label", None)]
+ tsv_dataset = data.TabularDataset(
+ path=self.test_ppid_dataset_path, format="tsv",
+ fields=tsv_fields)
+ question_field.build_vocab(tsv_dataset)
+
+ test_example_data = [["When", "do", "you", "use", "シ",
+ "instead", "of", "し?"],
+ ["What", "is", "2+2", "<pad>", "<pad>",
+ "<pad>", "<pad>", "<pad>"],
+ ["Here", "is", "a", "sentence", "with",
+ "some", "oovs", "<pad>"]]
+
+ # Test default
+ default_numericalized = question_field.numericalize(
+ test_example_data, device=-1)
+ verify_numericalized_example(question_field, test_example_data,
+ default_numericalized)
+ # Test with train=False
+ volatile_numericalized = question_field.numericalize(
+ test_example_data, device=-1, train=False)
+ verify_numericalized_example(question_field, test_example_data,
+ volatile_numericalized, train=False)
+
+ def test_numericalize_include_lengths(self):
+ self.write_test_ppid_dataset(data_format="tsv")
+ question_field = data.Field(sequential=True, include_lengths=True)
+ tsv_fields = [("id", None), ("q1", question_field),
+ ("q2", question_field), ("label", None)]
+ tsv_dataset = data.TabularDataset(
+ path=self.test_ppid_dataset_path, format="tsv",
+ fields=tsv_fields)
+ question_field.build_vocab(tsv_dataset)
+
+ test_example_data = [["When", "do", "you", "use", "シ",
+ "instead", "of", "し?"],
+ ["What", "is", "2+2", "<pad>", "<pad>",
+ "<pad>", "<pad>", "<pad>"],
+ ["Here", "is", "a", "sentence", "with",
+ "some", "oovs", "<pad>"]]
+ test_example_lengths = [8, 3, 7]
+
+ # Test with include_lengths
+ include_lengths_numericalized = question_field.numericalize(
+ (test_example_data, test_example_lengths), device=-1)
+ verify_numericalized_example(question_field,
+ test_example_data,
+ include_lengths_numericalized,
+ test_example_lengths)
+
+ def test_numericalize_batch_first(self):
+ self.write_test_ppid_dataset(data_format="tsv")
+ question_field = data.Field(sequential=True, batch_first=True)
+ tsv_fields = [("id", None), ("q1", question_field),
+ ("q2", question_field), ("label", None)]
+ tsv_dataset = data.TabularDataset(
+ path=self.test_ppid_dataset_path, format="tsv",
+ fields=tsv_fields)
+ question_field.build_vocab(tsv_dataset)
+
+ test_example_data = [["When", "do", "you", "use", "シ",
+ "instead", "of", "し?"],
+ ["What", "is", "2+2", "<pad>", "<pad>",
+ "<pad>", "<pad>", "<pad>"],
+ ["Here", "is", "a", "sentence", "with",
+ "some", "oovs", "<pad>"]]
+
+ # Test with batch_first
+ include_lengths_numericalized = question_field.numericalize(
+ test_example_data, device=-1)
+ verify_numericalized_example(question_field,
+ test_example_data,
+ include_lengths_numericalized,
+ batch_first=True)
+
+ def test_numericalize_postprocessing(self):
+ self.write_test_ppid_dataset(data_format="tsv")
+
+ def reverse_postprocess(arr, vocab, train):
+ return [list(reversed(sentence)) for sentence in arr]
+
+ question_field = data.Field(sequential=True,
+ postprocessing=reverse_postprocess)
+ tsv_fields = [("id", None), ("q1", question_field),
+ ("q2", question_field), ("label", None)]
+
+ tsv_dataset = data.TabularDataset(
+ path=self.test_ppid_dataset_path, format="tsv",
+ fields=tsv_fields)
+ question_field.build_vocab(tsv_dataset)
+
+ test_example_data = [["When", "do", "you", "use", "シ",
+ "instead", "of", "し?"],
+ ["What", "is", "2+2", "<pad>", "<pad>",
+ "<pad>", "<pad>", "<pad>"],
+ ["Here", "is", "a", "sentence", "with",
+ "some", "oovs", "<pad>"]]
+ reversed_test_example_data = [list(reversed(sentence)) for sentence in
+ test_example_data]
+
+ postprocessed_numericalized = question_field.numericalize(
+ (test_example_data), device=-1)
+ verify_numericalized_example(question_field,
+ reversed_test_example_data,
+ postprocessed_numericalized)
+
+ def test_numerical_features_no_vocab(self):
+ self.write_test_numerical_features_dataset()
+ # Test basic usage
+ int_field = data.Field(sequential=False, use_vocab=False)
+ float_field = data.Field(sequential=False, use_vocab=False,
+ tensor_type=torch.FloatTensor)
+ tsv_fields = [("int", int_field), ("float", float_field), ("string", None)]
+ tsv_dataset = data.TabularDataset(
+ path=self.test_numerical_features_dataset_path, format="tsv",
+ fields=tsv_fields)
+ int_field.build_vocab(tsv_dataset)
+ float_field.build_vocab(tsv_dataset)
+ test_int_data = ["1", "0", "1", "3", "19"]
+ test_float_data = ["1.1", "0.1", "3.91", "0.2", "10.2"]
+
+ numericalized_int = int_field.numericalize(test_int_data, device=-1)
+ assert_allclose(numericalized_int.data.numpy(), [1, 0, 1, 3, 19])
+ numericalized_float = float_field.numericalize(test_float_data, device=-1)
+ assert_allclose(numericalized_float.data.numpy(), [1.1, 0.1, 3.91, 0.2, 10.2])
+
+ # Test with postprocessing applied
+ int_field = data.Field(sequential=False, use_vocab=False,
+ postprocessing=lambda arr, _: [x + 1 for x in arr])
+ float_field = data.Field(sequential=False, use_vocab=False,
+ tensor_type=torch.FloatTensor,
+ postprocessing=lambda arr, _: [x * 0.5 for x in arr])
+ tsv_fields = [("int", int_field), ("float", float_field), ("string", None)]
+ tsv_dataset = data.TabularDataset(
+ path=self.test_numerical_features_dataset_path, format="tsv",
+ fields=tsv_fields)
+ int_field.build_vocab(tsv_dataset)
+ float_field.build_vocab(tsv_dataset)
+ test_int_data = ["1", "0", "1", "3", "19"]
+ test_float_data = ["1.1", "0.1", "3.91", "0.2", "10.2"]
+
+ numericalized_int = int_field.numericalize(test_int_data, device=-1)
+ assert_allclose(numericalized_int.data.numpy(), [2, 1, 2, 4, 20])
+ numericalized_float = float_field.numericalize(test_float_data, device=-1)
+ assert_allclose(numericalized_float.data.numpy(), [0.55, 0.05, 1.955, 0.1, 5.1])
+
+ def test_errors(self):
+ # Test that passing a non-tuple (of data and length) to numericalize
+ # with Field.include_lengths = True raises an error.
+ with self.assertRaises(ValueError):
+ self.write_test_ppid_dataset(data_format="tsv")
+ question_field = data.Field(sequential=True, include_lengths=True)
+ tsv_fields = [("id", None), ("q1", question_field),
+ ("q2", question_field), ("label", None)]
+ tsv_dataset = data.TabularDataset(
+ path=self.test_ppid_dataset_path, format="tsv",
+ fields=tsv_fields)
+ question_field.build_vocab(tsv_dataset)
+ test_example_data = [["When", "do", "you", "use", "シ",
+ "instead", "of", "し?"],
+ ["What", "is", "2+2", "<pad>", "<pad>",
+ "<pad>", "<pad>", "<pad>"],
+ ["Here", "is", "a", "sentence", "with",
+ "some", "oovs", "<pad>"]]
+ question_field.numericalize(
+ test_example_data, device=-1)
| Using a field representing real numbers with the iterator
I am trying to learn a regressor on text data. I use torchtext in all my other tasks, but I see a problem in using it for this use case.
I define the field for targets as follows:
```
TARGETS = data.Field(
sequential=False, tensor_type=torch.DoubleTensor, batch_first=True)
self.fields = [('targets', TARGETS), ('text', TEXT)]
self.train, self.val, self.test = data.TabularDataset.splits(
path=self.path,
train=self.train_suffix,
validation=self.val_suffix,
test=self.test_suffix,
format=formatting,
fields=self.fields)
TEXT.build_vocab(self.train)
```
I have a file that contains tab separate <values>\t<text>
When I make iterators out of it,
```
train_iter, val_iter, test_iter = data.Iterator.splits(
(self.train, self.val, self.test),
batch_sizes=(self.batch_size, self.test_batch_size,
self.test_batch_size),
sort_key=lambda x: len(x.text),
shuffle=True)
print(next(iter(train_iter)))
```
it gives me an error when getting the next batch:
> AttributeError: 'Field' object has no attribute 'vocab'
I know this is because I didn't run .build_vocab for the TARGETS field. But why do I really need to do this? What if I just want to get real numbers and compute losses on them?
Any workaround is appreciated. If I am doing something wrong, please let me know too.
| Found use_vocab argument 😞
Even after setting use_vocab=False, I get
> RuntimeError: already counted a million dimensions in a given sequence. Most likely your items are also sequences and there's no way to infer how many dimension should the tensor have
It is the same error that one gets when you try to do torch.DoubleTensor('1.2'). Is there something I am doing wrong?
Thanks for the issue.
Torchtext needs to convert the string number to an `int` or `float` somewhere down the line and it currently doesn't do this. A quick fix would be to manually add a pipeline to the `postprocessing` argument that converts everything in the `TARGETS` field to int. With a slightly modified version of your code:
Edit: just noticed that your example uses doubles. changed my code accordingly
(tab separated file)
```
$ cat test.txt
1.1 test string
1.2 test string2
1.3 test string3
```
The following works on my machine in the meantime while we patch this:
```python
In [1]: import torch
In [2]: from torchtext import data
In [3]: TEXT = data.Field(batch_first=True)
In [4]: TARGETS = data.Field(sequential=False, tensor_type=torch.DoubleTensor, batch_first=True, use_vocab=False, postprocessing=data.Pipeline(lambda x: float(x)))
In [5]: fields = [('targets', TARGETS), ('text', TEXT)]
In [6]: dataset = data.TabularDataset(path="test.txt", format="tsv", fields=fields)
In [7]: TEXT.build_vocab(dataset)
In [8]: train_iter = data.Iterator(dataset, batch_size=1, sort_key=lambda x: len(x.text), shuffle=True)
In [9]: batch = next(iter(train_iter))
In [10]: batch.targets
Out[10]:
Variable containing:
1.3000
[torch.cuda.DoubleTensor of size 1 (GPU 0)]
```
Hope that helps.
Thanks for the solution @nelson-liu
could you leave this open for now --- there is a bug behind this that would be nice to track (the fact that we do not actually convert values with `use_vocab=False` to numbers). Thanks!
Sure, I agree.
Yeah, I was originally imagining that values would be provided as Python numerical types -- but that isn't really consistent with the nature of the library as loading mostly text values. Certainly if it sees strings it should convert them! | 2017-09-13T06:57:00 |
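A short sketch of the numericalization idea in the patch above: map the Field's tensor type to a Python numeric type and coerce the raw string values before building the tensor. The dictionary and helper names below are illustrative, not the torchtext API:
```python
import torch

TENSOR_TO_PYTHON_TYPE = {
    torch.LongTensor: int,
    torch.FloatTensor: float,
    torch.DoubleTensor: float,
}

def numericalize_without_vocab(values, tensor_type=torch.DoubleTensor):
    # With use_vocab=False the raw field values are still strings, so they
    # must be coerced to a numeric type before the tensor is constructed.
    coerce = TENSOR_TO_PYTHON_TYPE[tensor_type]
    return tensor_type([coerce(v) for v in values])

targets = numericalize_without_vocab(["1.1", "0.5", "0.2"])
print(targets)  # a DoubleTensor holding [1.1, 0.5, 0.2]
```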
pytorch/text | 135 | pytorch__text-135 | [
"131"
] | b69dfc6671c7877549b297ad5d689a94bccc39d3 | diff --git a/torchtext/datasets/translation.py b/torchtext/datasets/translation.py
--- a/torchtext/datasets/translation.py
+++ b/torchtext/datasets/translation.py
@@ -1,6 +1,7 @@
import os
import xml.etree.ElementTree as ET
import glob
+import io
from .. import data
@@ -129,7 +130,7 @@ def clean(path):
for f_xml in glob.iglob(os.path.join(path, '*.xml')):
print(f_xml)
f_txt = os.path.splitext(f_xml)[0]
- with open(f_txt, 'w') as fd_txt:
+ with io.open(f_txt, mode='w', encoding='utf-8') as fd_txt:
root = ET.parse(f_xml).getroot()[0]
for doc in root.findall('doc'):
for e in doc.findall('seg'):
@@ -140,7 +141,8 @@ def clean(path):
for f_orig in glob.iglob(os.path.join(path, 'train.tags*')):
print(f_orig)
f_txt = f_orig.replace('.tags', '')
- with open(f_txt, 'w') as fd_txt, open(f_orig) as fd_orig:
+ with io.open(f_txt, mode='w', encoding='utf-8') as fd_txt, \
+ io.open(f_orig, mode='w', encoding='utf-8') as fd_orig:
for l in fd_orig:
if not any(tag in l for tag in xml_tags):
fd_txt.write(l.strip() + '\n')
| ascii vs. utf-8 in torchtext/datasets/translation.py
@nelson-liu: I incorrectly brought this up in pull #52, new issue here
When trying to load splits for IWSLT (in french, german, etc...), the loading process would fail with an ascii encoding/decoding error:
```
.data/iwslt/de-en/IWSLT16.TED.dev2010.de-en.en.xml
.data/iwslt/de-en/IWSLT16.TED.tst2013.de-en.de.xml
Traceback (most recent call last):
File "test.py", line 25, in <module>
train, val, test = datasets.IWSLT.splits(exts=('.de', '.en'), fields=(DE, EN))
File "build/bdist.linux-x86_64/egg/torchtext/datasets/translation.py", line 116, in splits
File "build/bdist.linux-x86_64/egg/torchtext/datasets/translation.py", line 136, in clean
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe4' in position 60: ordinal not in range(128)
```
These are my library versions:
```
numpy==1.13.3
regex==2017.9.23
spacy==1.9.0
torch==0.2.0.post4
torchtext==0.2.0b0 (just cloned a few minutes before error)
torchvision==0.1.9
```
Here is the code that I was using, from test/translation.py:
```
from torchtext import data
from torchtext import datasets
import re
import spacy
import sys
spacy_de = spacy.load('de')
spacy_en = spacy.load('en')
url = re.compile('(<url>.*</url>)')
def tokenize_de(text):
return [tok.text for tok in spacy_de.tokenizer(url.sub('@URL@', text))]
def tokenize_en(text):
return [tok.text for tok in spacy_en.tokenizer(url.sub('@URL@', text))]
# Testing IWSLT
DE = data.Field(tokenize=tokenize_de)
EN = data.Field(tokenize=tokenize_en)
train, val, test = datasets.IWSLT.splits(exts=('.de', '.en'), fields=(DE, EN))
```
The following fixed it for me, in torchtext/datasets/translation.py: replace the open calls with io.open, specifying utf-8 encoding, for Python 2. It's worth noting that a friend running Python 3 did not have this problem.
```
127 @staticmethod
128 def clean(path):
129 for f_xml in glob.iglob(os.path.join(path, '*.xml')):
130 print(f_xml)
131 f_txt = os.path.splitext(f_xml)[0]
132 import io
133 with io.open(f_txt, mode="w", encoding="utf-8") as fd_txt: <--- INSERT
134 #with open(f_txt, 'w') as fd_txt: <--- COMMENT
135 root = ET.parse(f_xml).getroot()[0]
136 for doc in root.findall('doc'):
137 for e in doc.findall('seg'):
138 fd_txt.write(e.text.strip() + '\n')
139 xml_tags = ['<url', '<keywords', '<talkid', '<description',
140 '<reviewer', '<translator', '<title', '<speaker']
141 for f_orig in glob.iglob(os.path.join(path, 'train.tags*')):
142 print(f_orig)
143 f_txt = f_orig.replace('.tags', '')
144 with io.open(f_txt,mode='w',encoding='utf-8') as fd_txt, io.open(f_orig,mode='r',encoding='utf-8') as fd_orig: <--- INSERT
145 #with open(f_txt, 'w') as fd_txt, open(f_orig) as fd_orig: <--- COMMENT
146 for l in fd_orig:
147 if not any(tag in l for tag in xml_tags):
148 fd_txt.write(l.strip() + '\n')
```
@jekbradbury, you were correct in pull #52 that I didn't need the middle block explicitly encoding/decoding (not seen here) since the file is already open as utf-8.
| 2017-10-03T02:50:57 |
||
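The core of the change above is opening files through io.open with an explicit encoding, which behaves the same on Python 2 and Python 3. A small self-contained sketch (the file name is made up for illustration):
```python
import io

sample = u"Vielen Dank f\u00fcr die Einladung"  # non-ASCII 'ü', as in the IWSLT data

# Plain open() on Python 2 implicitly encodes unicode as ASCII and fails here;
# io.open with encoding='utf-8' works identically on Python 2 and 3.
with io.open("sample.de", mode="w", encoding="utf-8") as fd_txt:
    fd_txt.write(sample + u"\n")

with io.open("sample.de", mode="r", encoding="utf-8") as fd_txt:
    print(fd_txt.read().strip())
```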
pytorch/text | 139 | pytorch__text-139 | [
"129"
] | 4f5801b2c3197c7da8cd440b740a91caf014e87d | diff --git a/torchtext/data/iterator.py b/torchtext/data/iterator.py
--- a/torchtext/data/iterator.py
+++ b/torchtext/data/iterator.py
@@ -63,19 +63,28 @@ class Iterator(object):
sort: Whether to sort examples according to self.sort_key.
Note that repeat, shuffle, and sort default to train, train, and
(not train).
+ sort_within_batch: Whether to sort (in descending order according to
+ self.sort_key) within each batch. If None, defaults to self.sort.
+ If self.sort is True and this is False, the batch is left in the
+ original (ascending) sorted order.
device: Device to create batches on. Use -1 for CPU and None for the
currently active GPU device.
"""
def __init__(self, dataset, batch_size, sort_key=None, device=None,
batch_size_fn=lambda new, count, sofar: count, train=True,
- repeat=None, shuffle=None, sort=None):
+ repeat=None, shuffle=None, sort=None,
+ sort_within_batch=None):
self.batch_size, self.train, self.dataset = batch_size, train, dataset
self.batch_size_fn = batch_size_fn
self.iterations = 0
self.repeat = train if repeat is None else repeat
self.shuffle = train if shuffle is None else shuffle
self.sort = not train if sort is None else sort
+ if sort_within_batch is None:
+ self.sort_within_batch = self.sort
+ else:
+ self.sort_within_batch = sort_within_batch
if sort_key is None:
self.sort_key = dataset.sort_key
else:
@@ -157,9 +166,14 @@ def __iter__(self):
continue
self.iterations += 1
self._iterations_this_epoch += 1
- # NOTE: `rnn.pack_padded_sequence` requires that a minibatch be sorted by
- # decreasing order, which requires reversing relative to typical sort keys
- minibatch.reverse()
+ if self.sort_within_batch:
+ # NOTE: `rnn.pack_padded_sequence` requires that a minibatch
+ # be sorted by decreasing order, which requires reversing
+ # relative to typical sort keys
+ if self.sort:
+ minibatch.reverse()
+ else:
+ minibatch.sort(key=self.sort_key, reverse=True)
yield Batch(minibatch, self.dataset, self.device,
self.train)
if not self.repeat:
| Unintuitive behavior of Iterator when sort is False
Currently, the following line is executed regardless of `sort` value.
https://github.com/pytorch/text/blob/2980f1bc39ba6af332c5c2783da8bee109796d4c/torchtext/data/iterator.py#L162
It could result in a counter-intuitive behavior when `sort is False`, since one would probably expect that the order of data is kept intact when `sort` is `False`.
I think this should be executed only when `sort` is `True`.
Is it by design, or a bug?
| Hmm it seems that it can't be solved simply --- I think currently sorting is kind of tangled in the implementation.
For example when BucketIterator is used, sorting is done within a batch, whether sort is True or not.
(But with the plain Iterator, sort-within-batch does not occur thus we can't use packed_padded_sequence unless we perform some explicit sorting.)
I think just calling "reverse" function in the iterator cannot completely solve the issue wrt "packed sequence". Currently it works only with BucketIterator.
I suggest the following modification:
1) "sort" flag is used only for indicating "sort the entire data".
2) Add the additional "sort_batch" (or more appropriate name) flag to let the iterator sort the batch.
3) Sort order (increasing or decreasing) should be designated only by sort_key function of a dataset. Implanting "reverse" operation directly into the core Dataset implementation does not seem to be a good way to do this.
For example, if one wants to sort the batch, not the entire data, in the decreasing order of length, they should set `sort=False`, `sort_batch=True`, and `sort_key=lambda ex: -len(ex.text)`.
I agree with most of that, but I still think it's helpful for the order within a batch to default to the opposite of the provided sort_key order (which is used directly as the order between batches). That packed sequences are sorted in decreasing order is essentially a cuDNN implementation detail, while reversing the batch later is difficult to do performantly in the absence of negative stride support in TH/THC core and I think it's strange to ask users to manually provide a reversed sort_key if they want to use packed sequences (since I see sort_key as largely a dataset property that says by which field's length/combination of field lengths a dataset is most naturally sorted).
Ideally I want to satisfy everyone, but this sorting question has been causing issues in OpenNMT for a few weeks, so I think we should come to a decision and release 0.2 to pypi? Personally I'd lean towards a) making the change outlined in the OP and b) also adding a separate flag to allow sorting the batch even when the data isn't otherwise sorted, but retaining the `reverse`. Any thoughts @nelson-liu?
Whoops, this seems to have skipped my inbox. I agree with the conclusion that seems to have been reached (make change described in OP, add flag to allow sorting batch when data isn't otherwise sorted but retaining the reverse)... | 2017-10-10T04:23:39 |
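With the flag added above, the batch-internal sort needed by pack_padded_sequence can be requested without sorting the whole dataset. A rough usage sketch, assuming a torchtext build that includes sort_within_batch and a tab-separated train.tsv with a label column and a text column (the file name and field names are assumptions):
```python
from torchtext import data

TEXT = data.Field(include_lengths=True)

dataset = data.TabularDataset(
    path="train.tsv", format="tsv",
    fields=[("label", None), ("text", TEXT)])
TEXT.build_vocab(dataset)

train_iter = data.Iterator(
    dataset, batch_size=32, device=-1,
    sort=False,                        # keep the dataset in its original order
    sort_within_batch=True,            # but sort inside each batch...
    sort_key=lambda ex: len(ex.text))  # ...by decreasing text length

batch = next(iter(train_iter))
text, lengths = batch.text  # lengths come out in descending order,
                            # ready for nn.utils.rnn.pack_padded_sequence
```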