repo | pull_number | instance_id | issue_numbers | base_commit | patch | test_patch | problem_statement | hints_text | created_at
---|---|---|---|---|---|---|---|---|---
pulp/pulpcore | 3133 | pulp__pulpcore-3133 | [
"3111"
] | 37435051cf4785751811ac598f3d41de6fd61237 | diff --git a/pulpcore/plugin/stages/content_stages.py b/pulpcore/plugin/stages/content_stages.py
--- a/pulpcore/plugin/stages/content_stages.py
+++ b/pulpcore/plugin/stages/content_stages.py
@@ -165,23 +165,26 @@ def process_batch():
# on select-for-update. So, we select-for-update, in pulp_id order, the
# rows we're about to update as one db-call, and then do the update in a
# second.
+ #
+ # NOTE: select-for-update requires being in an atomic-transaction. We are
+ # **already in an atomic transaction** at this point as a result of the
+ # "with transaction.atomic():", above.
ids = [k.pulp_id for k in to_update_ca_bulk]
- with transaction.atomic():
- # "len()" forces the QA to be evaluated. Using exist() or count() won't
- # work for us - Django is smart enough to either not-order, or even
- # not-emit, a select-for-update in these cases.
- #
- # To maximize performance, we make sure to only ask for pulp_ids, and
- # avoid instantiating a python-object for the affected CAs by using
- # values_list()
- len(
- ContentArtifact.objects.filter(pulp_id__in=ids)
- .only("pulp_id")
- .order_by("pulp_id")
- .select_for_update()
- .values_list()
- )
- ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
+ # "len()" forces the QuerySet to be evaluated. Using exist() or count() won't
+ # work for us - Django is smart enough to either not-order, or even
+ # not-emit, a select-for-update in these cases.
+ #
+ # To maximize performance, we make sure to only ask for pulp_ids, and
+ # avoid instantiating a python-object for the affected CAs by using
+ # values_list()
+ subq = (
+ ContentArtifact.objects.filter(pulp_id__in=ids)
+ .only("pulp_id")
+ .order_by("pulp_id")
+ .select_for_update()
+ )
+ len(subq.values_list())
+ ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
# To avoid a similar deadlock issue when calling get_or_create, we sort the
# "new" CAs to make sure inserts happen in a defined order. Since we can't
| bulk_update() can still deadlock in content_stages
**Version**
pulpcore/3.14.16 (but problem exists in main)
**Describe the bug**
Syncing multiple repositories with high content-overlaps under high concurrency continues to hit occasional deadlocks.
Traceback and postgres log for the particular failure:
```
2022-08-10 15:51:29 EDT ERROR: deadlock detected
2022-08-10 15:51:29 EDT DETAIL: Process 55740 waits for ShareLock on transaction 61803; blocked by process 55746.
Process 55746 waits for ShareLock on transaction 61805; blocked by process 55740.
Process 55740: SELECT ....
Process 55746: COMMIT
2022-08-10 15:51:29 EDT HINT: See server log for query details.
2022-08-10 15:51:29 EDT CONTEXT: while locking tuple (209,51) in relation "core_contentartifact"
```
```
pulpcore-worker-5[54158]: pulp [88649f1c-3393-4693-aab7-d73b62eeda62]: pulpcore.tasking.pulpcore_worker:INFO: Task 407bb67b-65d0-4d65-b9c8-b1aa1f2c87fd failed (deadlock detected
pulpcore-worker-5[54158]: DETAIL: Process 55740 waits for ShareLock on transaction 61803; blocked by process 55746.
pulpcore-worker-5[54158]: Process 55746 waits for ShareLock on transaction 61805; blocked by process 55740.
pulpcore-worker-5[54158]: HINT: See server log for query details.
pulpcore-worker-5[54158]: CONTEXT: while locking tuple (209,51) in relation "core_contentartifact"
pulpcore-worker-5[54158]: )
pulpcore-worker-5[54158]: pulp [88649f1c-3393-4693-aab7-d73b62eeda62]: pulpcore.tasking.pulpcore_worker:INFO: File "/usr/lib/python3.6/site-packages/pulpcore/tasking/pulpcore_worker.py", line 342, in _perform_task
pulpcore-worker-5[54158]: result = func(*args, **kwargs)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/pulp_rpm/app/tasks/synchronizing.py", line 494, in synchronize
pulpcore-worker-5[54158]: version = dv.create()
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/declarative_version.py", line 151, in create
pulpcore-worker-5[54158]: loop.run_until_complete(pipeline)
pulpcore-worker-5[54158]: File "/usr/lib64/python3.6/asyncio/base_events.py", line 484, in run_until_complete
pulpcore-worker-5[54158]: return future.result()
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/api.py", line 225, in create_pipeline
pulpcore-worker-5[54158]: await asyncio.gather(*futures)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/api.py", line 43, in __call__
pulpcore-worker-5[54158]: await self.run()
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/content_stages.py", line 178, in run
pulpcore-worker-5[54158]: .order_by("pulp_id")
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/models/query.py", line 256, in __len__
pulpcore-worker-5[54158]: self._fetch_all()
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/models/query.py", line 1242, in _fetch_all
pulpcore-worker-5[54158]: self._result_cache = list(self._iterable_class(self))
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/models/query.py", line 144, in __iter__
pulpcore-worker-5[54158]: return compiler.results_iter(tuple_expected=True, chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1094, in results_iter
pulpcore-worker-5[54158]: results = self.execute_sql(MULTI, chunked_fetch=chunked_fetch, chunk_size=chunk_size)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1142, in execute_sql
pulpcore-worker-5[54158]: cursor.execute(sql, params)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/backends/utils.py", line 67, in execute
pulpcore-worker-5[54158]: return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers
pulpcore-worker-5[54158]: return executor(sql, params, many, context)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/backends/utils.py", line 84, in _execute
pulpcore-worker-5[54158]: return self.cursor.execute(sql, params)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/utils.py", line 89, in __exit__
pulpcore-worker-5[54158]: raise dj_exc_value.with_traceback(traceback) from exc_value
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/backends/utils.py", line 84, in _execute
pulpcore-worker-5[54158]: return self.cursor.execute(sql, params)
```
**To Reproduce**
See the QE test-setup from https://bugzilla.redhat.com/show_bug.cgi?id=2062526. We have not been able to force the problem with a synthetic test case.
**Expected behavior**
All the repositories should sync all their content without any of the sync processes failing due to detected deadlocks.
| https://bugzilla.redhat.com/show_bug.cgi?id=2082209 | 2022-08-24T16:22:38 |
|
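This row and the backport that follows carry the same fix, so a single illustration serves both. Below is a minimal standalone sketch of the locking pattern the patch settles on, assuming a Django setup where `ContentArtifact` is importable with a `pulp_id` primary key; the function name is illustrative, not pulpcore API.

```python
from django.db import transaction

from pulpcore.app.models import ContentArtifact  # assumed import path


def update_artifacts_safely(to_update_ca_bulk):
    """Apply bulk_update() without deadlocking against concurrent workers."""
    # select_for_update() is only legal inside an atomic transaction; in the
    # real stage code this is the pre-existing outer "with transaction.atomic():".
    with transaction.atomic():
        ids = [ca.pulp_id for ca in to_update_ca_bulk]
        # Lock the target rows in a deterministic (pulp_id) order so that
        # every worker acquires row locks in the same sequence.
        locked = (
            ContentArtifact.objects.filter(pulp_id__in=ids)
            .only("pulp_id")
            .order_by("pulp_id")
            .select_for_update()
        )
        # len() forces the queryset to execute; exists()/count() would let
        # Django drop the ORDER BY or even the FOR UPDATE clause.
        len(locked.values_list())
        # With all locks held in a consistent order, the update is safe.
        ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
```

The patch's key observation is that the inner `transaction.atomic()` it removes was redundant: the stage already runs inside one, so the fix flattens the block while keeping the ordered-lock trick.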
pulp/pulpcore | 3134 | pulp__pulpcore-3134 | [
"3111"
] | 991a52e582bf432e69a7d3bbd253140956ea9564 | diff --git a/pulpcore/plugin/stages/content_stages.py b/pulpcore/plugin/stages/content_stages.py
--- a/pulpcore/plugin/stages/content_stages.py
+++ b/pulpcore/plugin/stages/content_stages.py
@@ -163,23 +163,26 @@ async def run(self):
# on select-for-update. So, we select-for-update, in pulp_id order, the
# rows we're about to update as one db-call, and then do the update in a
# second.
+ #
+ # NOTE: select-for-update requires being in an atomic-transaction. We are
+ # **already in an atomic transaction** at this point as a result of the
+ # "with transaction.atomic():", above.
ids = [k.pulp_id for k in to_update_ca_bulk]
- with transaction.atomic():
- # "len()" forces the QA to be evaluated. Using exist() or count() won't
- # work for us - Django is smart enough to either not-order, or even
- # not-emit, a select-for-update in these cases.
- #
- # To maximize performance, we make sure to only ask for pulp_ids, and
- # avoid instantiating a python-object for the affected CAs by using
- # values_list()
- len(
- ContentArtifact.objects.filter(pulp_id__in=ids)
- .only("pulp_id")
- .order_by("pulp_id")
- .select_for_update()
- .values_list()
- )
- ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
+ # "len()" forces the QuerySet to be evaluated. Using exist() or count() won't
+ # work for us - Django is smart enough to either not-order, or even
+ # not-emit, a select-for-update in these cases.
+ #
+ # To maximize performance, we make sure to only ask for pulp_ids, and
+ # avoid instantiating a python-object for the affected CAs by using
+ # values_list()
+ subq = (
+ ContentArtifact.objects.filter(pulp_id__in=ids)
+ .only("pulp_id")
+ .order_by("pulp_id")
+ .select_for_update()
+ )
+ len(subq.values_list())
+ ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
# To avoid a similar deadlock issue when calling get_or_create, we sort the
# "new" CAs to make sure inserts happen in a defined order. Since we can't
| bulk_update() can still deadlock in content_stages
**Version**
pulpcore/3.14.16 (but problem exists in main)
**Describe the bug**
Syncing multiple repositories with high content-overlaps under high concurrency continues to hit occasional deadlocks.
Traceback and postgres log for the particular failure:
```
2022-08-10 15:51:29 EDT ERROR: deadlock detected
2022-08-10 15:51:29 EDT DETAIL: Process 55740 waits for ShareLock on transaction 61803; blocked by process 55746.
Process 55746 waits for ShareLock on transaction 61805; blocked by process 55740.
Process 55740: SELECT ....
Process 55746: COMMIT
2022-08-10 15:51:29 EDT HINT: See server log for query details.
2022-08-10 15:51:29 EDT CONTEXT: while locking tuple (209,51) in relation "core_contentartifact"
```
```
pulpcore-worker-5[54158]: pulp [88649f1c-3393-4693-aab7-d73b62eeda62]: pulpcore.tasking.pulpcore_worker:INFO: Task 407bb67b-65d0-4d65-b9c8-b1aa1f2c87fd failed (deadlock detected
pulpcore-worker-5[54158]: DETAIL: Process 55740 waits for ShareLock on transaction 61803; blocked by process 55746.
pulpcore-worker-5[54158]: Process 55746 waits for ShareLock on transaction 61805; blocked by process 55740.
pulpcore-worker-5[54158]: HINT: See server log for query details.
pulpcore-worker-5[54158]: CONTEXT: while locking tuple (209,51) in relation "core_contentartifact"
pulpcore-worker-5[54158]: )
pulpcore-worker-5[54158]: pulp [88649f1c-3393-4693-aab7-d73b62eeda62]: pulpcore.tasking.pulpcore_worker:INFO: File "/usr/lib/python3.6/site-packages/pulpcore/tasking/pulpcore_worker.py", line 342, in _perform_task
pulpcore-worker-5[54158]: result = func(*args, **kwargs)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/pulp_rpm/app/tasks/synchronizing.py", line 494, in synchronize
pulpcore-worker-5[54158]: version = dv.create()
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/declarative_version.py", line 151, in create
pulpcore-worker-5[54158]: loop.run_until_complete(pipeline)
pulpcore-worker-5[54158]: File "/usr/lib64/python3.6/asyncio/base_events.py", line 484, in run_until_complete
pulpcore-worker-5[54158]: return future.result()
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/api.py", line 225, in create_pipeline
pulpcore-worker-5[54158]: await asyncio.gather(*futures)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/api.py", line 43, in __call__
pulpcore-worker-5[54158]: await self.run()
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/content_stages.py", line 178, in run
pulpcore-worker-5[54158]: .order_by("pulp_id")
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/models/query.py", line 256, in __len__
pulpcore-worker-5[54158]: self._fetch_all()
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/models/query.py", line 1242, in _fetch_all
pulpcore-worker-5[54158]: self._result_cache = list(self._iterable_class(self))
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/models/query.py", line 144, in __iter__
pulpcore-worker-5[54158]: return compiler.results_iter(tuple_expected=True, chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1094, in results_iter
pulpcore-worker-5[54158]: results = self.execute_sql(MULTI, chunked_fetch=chunked_fetch, chunk_size=chunk_size)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1142, in execute_sql
pulpcore-worker-5[54158]: cursor.execute(sql, params)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/backends/utils.py", line 67, in execute
pulpcore-worker-5[54158]: return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers
pulpcore-worker-5[54158]: return executor(sql, params, many, context)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/backends/utils.py", line 84, in _execute
pulpcore-worker-5[54158]: return self.cursor.execute(sql, params)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/utils.py", line 89, in __exit__
pulpcore-worker-5[54158]: raise dj_exc_value.with_traceback(traceback) from exc_value
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/backends/utils.py", line 84, in _execute
pulpcore-worker-5[54158]: return self.cursor.execute(sql, params)
```
**To Reproduce**
See the QE test-setup from https://bugzilla.redhat.com/show_bug.cgi?id=2062526. We have not been able to force the problem with a synthetic test case.
**Expected behavior**
All the repositories should sync all their content without any of the sync processes failing due to detected deadlocks.
| https://bugzilla.redhat.com/show_bug.cgi?id=2082209 | 2022-08-24T16:43:20 |
|
pulp/pulpcore | 3136 | pulp__pulpcore-3136 | [
"3117"
] | c57bbbe1ef71d9a8d96d81f4f3844e2cfed54a5d | diff --git a/pulpcore/app/migrations/0105_abstract_uuid_gen.py b/pulpcore/app/migrations/0105_abstract_uuid_gen.py
new file mode 100644
--- /dev/null
+++ b/pulpcore/app/migrations/0105_abstract_uuid_gen.py
@@ -0,0 +1,214 @@
+# Generated by Django 3.2.15 on 2022-09-06 14:33
+
+from django.db import migrations, models
+import pulpcore.app.models.base
+
+
+class Migration(migrations.Migration):
+
+ dependencies = [
+ ('core', '0104_delete_label'),
+ ]
+
+ operations = [
+ migrations.AlterField(
+ model_name='accesspolicy',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='alternatecontentsource',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='alternatecontentsourcepath',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='artifact',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='basedistribution',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='content',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='contentappstatus',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='contentartifact',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='contentguard',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='createdresource',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='distribution',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='export',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='exportedresource',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='exporter',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='groupprogressreport',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='grouprole',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='import',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='importer',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='progressreport',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='publication',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='publishedartifact',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='pulpimporterrepository',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='pulptemporaryfile',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='remote',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='remoteartifact',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='repository',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='repositorycontent',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='repositoryversion',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='role',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='signingservice',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='systemid',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='task',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='taskgroup',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='taskschedule',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='upload',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='uploadchunk',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='userrole',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='worker',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='domain',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ migrations.AlterField(
+ model_name='upstreampulp',
+ name='pulp_id',
+ field=models.UUIDField(default=pulpcore.app.models.base.pulp_uuid, editable=False, primary_key=True, serialize=False),
+ ),
+ ]
diff --git a/pulpcore/app/models/analytics.py b/pulpcore/app/models/analytics.py
--- a/pulpcore/app/models/analytics.py
+++ b/pulpcore/app/models/analytics.py
@@ -1,12 +1,12 @@
-import uuid
-
from django.db import models
from django_lifecycle import hook, LifecycleModel
+from pulpcore.app.models import pulp_uuid
+
class SystemID(LifecycleModel):
- pulp_id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
+ pulp_id = models.UUIDField(primary_key=True, default=pulp_uuid, editable=False)
@hook("before_save")
def ensure_singleton(self):
diff --git a/pulpcore/app/models/base.py b/pulpcore/app/models/base.py
--- a/pulpcore/app/models/base.py
+++ b/pulpcore/app/models/base.py
@@ -1,5 +1,4 @@
from gettext import gettext as _
-import uuid
from django.contrib.contenttypes.fields import GenericRelation
from django.db import models
@@ -7,15 +6,15 @@
from django.db.models.base import ModelBase
from django_lifecycle import LifecycleModel
from functools import lru_cache
+from uuid6 import uuid7
def pulp_uuid():
- """
- Abstract wrapper for UUID generator.
+ """Abstract wrapper for UUID generator.
Allows the implementation to be swapped without triggering migrations.
"""
- return uuid.uuid4()
+ return uuid7()
class BaseModel(LifecycleModel):
@@ -40,7 +39,7 @@ class BaseModel(LifecycleModel):
"""
- pulp_id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
+ pulp_id = models.UUIDField(primary_key=True, default=pulp_uuid, editable=False)
pulp_created = models.DateTimeField(auto_now_add=True)
pulp_last_updated = models.DateTimeField(auto_now=True, null=True)
user_roles = GenericRelation("core.UserRole")
| Use a less random UUID format
**Version**
All
**Describe the bug**
UUIDv4 is an essentially random 128-bit integer (122 of its bits are random), which is inefficient as far as B-tree indexes are concerned. It would be better to use something such as UUIDv7 (draft), where the bits are arranged specifically for database index friendliness, lexicographic ordering, and other nice properties.
While UUIDv7 is still in draft and hasn't been accepted yet, a UUID is just a 128-bit integer; the actual source of it doesn't matter much, and PostgreSQL doesn't differentiate. We can in all likelihood use a replacement UUID generator with little effort or risk.
This is something that may have greater benefits at larger scale but be unnoticeable at small scale.
| 2022-08-25T04:43:15 |
||
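As a hedged illustration of the indirection this patch introduces: Django serializes `default=pulp_uuid` (a reference to the wrapper function) into migrations, so the generator inside the wrapper can change without producing new migration state. The sketch uses `uuid7()` from the third-party `uuid6` package, the same import the patch adds; the sorting demo at the end is only a sanity check.

```python
from uuid6 import uuid7  # pip install uuid6


def pulp_uuid():
    """Abstract wrapper for the UUID generator.

    Migrations reference this function by name, so swapping its body
    (e.g. from uuid.uuid4() to uuid7()) triggers no new migrations.
    """
    return uuid7()


# UUIDv7 front-loads a millisecond timestamp, so later ids compare greater
# than earlier ones. B-tree index inserts become append-mostly instead of
# scattering writes across the whole index the way random UUIDv4 does.
ids = [pulp_uuid() for _ in range(1000)]
print(ids == sorted(ids))  # True: the uuid6 package keeps values monotonic
```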
pulp/pulpcore | 3158 | pulp__pulpcore-3158 | [
"3157"
] | 42e848d5990ceca47c425b643a58e54e9a50ee11 | diff --git a/pulpcore/plugin/models/__init__.py b/pulpcore/plugin/models/__init__.py
--- a/pulpcore/plugin/models/__init__.py
+++ b/pulpcore/plugin/models/__init__.py
@@ -41,3 +41,6 @@
Upload,
UploadChunk,
)
+
+
+from pulpcore.app.models.fields import EncryptedTextField # noqa
| Export the EncryptedTextField in the plugin API
| 2022-09-01T15:01:16 |
||
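For context, a hypothetical plugin-side usage of the newly exported field; the model, its `TYPE`, and the field name are all illustrative, assuming the usual pulpcore plugin pattern of subclassing `Remote`.

```python
from pulpcore.plugin.models import EncryptedTextField, Remote


class ExampleRemote(Remote):
    """Hypothetical remote that keeps a service token encrypted at rest."""

    TYPE = "example"

    # Encrypted in the database by the field, plain text on the model.
    service_token = EncryptedTextField(null=True)

    class Meta:
        default_related_name = "%(app_label)s_%(model_name)s"
```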
pulp/pulpcore | 3159 | pulp__pulpcore-3159 | [
"2811"
] | 8160ecba48759f206efb95540bf4a45a92cb7aa5 | diff --git a/pulpcore/app/mime_types.py b/pulpcore/app/mime_types.py
new file mode 100644
--- /dev/null
+++ b/pulpcore/app/mime_types.py
@@ -0,0 +1,197 @@
+import os
+
+# The mapping was retrieved from the following sources:
+# 1. https://docs.python.org/3/library/mimetypes.html#mimetypes.types_map
+# 2. https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types
+TYPES_MAP = {
+ ".3g2": "audio/3gpp2",
+ ".3gp": "audio/3gpp",
+ ".3gpp": "audio/3gpp",
+ ".3gpp2": "audio/3gpp2",
+ ".7z": "application/x-7z-compressed",
+ ".a": "application/octet-stream",
+ ".aac": "audio/aac",
+ ".abw": "application/x-abiword",
+ ".adts": "audio/aac",
+ ".ai": "application/postscript",
+ ".aif": "audio/x-aiff",
+ ".aifc": "audio/x-aiff",
+ ".aiff": "audio/x-aiff",
+ ".arc": "application/x-freearc",
+ ".ass": "audio/aac",
+ ".au": "audio/basic",
+ ".avif": "image/avif",
+ ".azw": "application/vnd.amazon.ebook",
+ ".bat": "text/plain",
+ ".bcpio": "application/x-bcpio",
+ ".bin": "application/octet-stream",
+ ".bmp": "image/x-ms-bmp",
+ ".bz": "application/x-bzip",
+ ".bz2": "application/x-bzip2",
+ ".c": "text/plain",
+ ".cda": "application/x-cdf",
+ ".cdf": "application/x-netcdf",
+ ".cpio": "application/x-cpio",
+ ".csh": "application/x-csh",
+ ".css": "text/css",
+ ".csv": "text/csv",
+ ".dll": "application/octet-stream",
+ ".doc": "application/msword",
+ ".docx": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
+ ".dot": "application/msword",
+ ".dvi": "application/x-dvi",
+ ".eml": "message/rfc822",
+ ".eot": "application/vnd.ms-fontobject",
+ ".eps": "application/postscript",
+ ".epub": "application/epub+zip",
+ ".etx": "text/x-setext",
+ ".exe": "application/octet-stream",
+ ".gif": "image/gif",
+ ".gtar": "application/x-gtar",
+ ".gz": "application/gzip",
+ ".gzip": "application/gzip",
+ ".h": "text/plain",
+ ".h5": "application/x-hdf5",
+ ".hdf": "application/x-hdf",
+ ".heic": "image/heic",
+ ".heif": "image/heif",
+ ".htm": "text/html",
+ ".html": "text/html",
+ ".ico": "image/vnd.microsoft.icon",
+ ".ics": "text/calendar",
+ ".ief": "image/ief",
+ ".jar": "application/java-archive",
+ ".jpe": "image/jpeg",
+ ".jpeg": "image/jpeg",
+ ".jpg": "image/jpeg",
+ ".js": "application/javascript",
+ ".json": "application/json",
+ ".jsonld": "application/ld+json",
+ ".ksh": "text/plain",
+ ".latex": "application/x-latex",
+ ".loas": "audio/aac",
+ ".m1v": "video/mpeg",
+ ".m3u": "application/vnd.apple.mpegurl",
+ ".m3u8": "application/vnd.apple.mpegurl",
+ ".man": "application/x-troff-man",
+ ".me": "application/x-troff-me",
+ ".mht": "message/rfc822",
+ ".mhtml": "message/rfc822",
+ ".mid": "audio/x-midi",
+ ".midi": "audio/x-midi",
+ ".mif": "application/x-mif",
+ ".mjs": "application/javascript",
+ ".mov": "video/quicktime",
+ ".movie": "video/x-sgi-movie",
+ ".mp2": "audio/mpeg",
+ ".mp3": "audio/mpeg",
+ ".mp4": "video/mp4",
+ ".mpa": "video/mpeg",
+ ".mpe": "video/mpeg",
+ ".mpeg": "video/mpeg",
+ ".mpg": "video/mpeg",
+ ".mpkg": "application/vnd.apple.installer+xml",
+ ".ms": "application/x-troff-ms",
+ ".nc": "application/x-netcdf",
+ ".nws": "message/rfc822",
+ ".o": "application/octet-stream",
+ ".obj": "application/octet-stream",
+ ".oda": "application/oda",
+ ".odp": "application/vnd.oasis.opendocument.presentation",
+ ".ods": "application/vnd.oasis.opendocument.spreadsheet",
+ ".odt": "application/vnd.oasis.opendocument.text",
+ ".oga": "audio/ogg",
+ ".ogv": "video/ogg",
+ ".ogx": "application/ogg",
+ ".opus": "audio/opus",
+ ".otf": "font/otf",
+ ".p12": "application/x-pkcs12",
+ ".p7c": "application/pkcs7-mime",
+ ".pbm": "image/x-portable-bitmap",
+ ".pdf": "application/pdf",
+ ".pfx": "application/x-pkcs12",
+ ".pgm": "image/x-portable-graymap",
+ ".php": "application/x-httpd-php",
+ ".pl": "text/plain",
+ ".png": "image/png",
+ ".pnm": "image/x-portable-anymap",
+ ".pot": "application/vnd.ms-powerpoint",
+ ".ppa": "application/vnd.ms-powerpoint",
+ ".ppm": "image/x-portable-pixmap",
+ ".pps": "application/vnd.ms-powerpoint",
+ ".ppt": "application/vnd.ms-powerpoint",
+ ".pptx": "application/vnd.openxmlformats-officedocument.presentationml.presentation",
+ ".ps": "application/postscript",
+ ".pwz": "application/vnd.ms-powerpoint",
+ ".py": "text/x-python",
+ ".pyc": "application/x-python-code",
+ ".pyo": "application/x-python-code",
+ ".qt": "video/quicktime",
+ ".ra": "audio/x-pn-realaudio",
+ ".ram": "application/x-pn-realaudio",
+ ".rar": "application/vnd.rar",
+ ".ras": "image/x-cmu-raster",
+ ".rdf": "application/xml",
+ ".rgb": "image/x-rgb",
+ ".roff": "application/x-troff",
+ ".rtf": "application/rtf",
+ ".rtx": "text/richtext",
+ ".sgm": "text/x-sgml",
+ ".sgml": "text/x-sgml",
+ ".sh": "application/x-sh",
+ ".shar": "application/x-shar",
+ ".snd": "audio/basic",
+ ".so": "application/octet-stream",
+ ".src": "application/x-wais-source",
+ ".sv4cpio": "application/x-sv4cpio",
+ ".sv4crc": "application/x-sv4crc",
+ ".svg": "image/svg+xml",
+ ".swf": "application/x-shockwave-flash",
+ ".t": "application/x-troff",
+ ".tar": "application/x-tar",
+ ".tcl": "application/x-tcl",
+ ".tex": "application/x-tex",
+ ".texi": "application/x-texinfo",
+ ".texinfo": "application/x-texinfo",
+ ".tif": "image/tiff",
+ ".tiff": "image/tiff",
+ ".tr": "application/x-troff",
+ ".ts": "video/mp2t",
+ ".tsv": "text/tab-separated-values",
+ ".ttf": "font/ttf",
+ ".txt": "text/plain",
+ ".ustar": "application/x-ustar",
+ ".vcf": "text/x-vcard",
+ ".vsd": "application/vnd.visio",
+ ".wasm": "application/wasm",
+ ".wav": "audio/x-wav",
+ ".weba": "audio/webm",
+ ".webm": "video/webm",
+ ".webmanifest": "application/manifest+json",
+ ".webp": "image/webp",
+ ".wiz": "application/msword",
+ ".woff": "font/woff",
+ ".woff2": "font/woff2",
+ ".wsdl": "application/xml",
+ ".xbm": "image/x-xbitmap",
+ ".xhtml": "application/xhtml+xml",
+ ".xlb": "application/vnd.ms-excel",
+ ".xls": "application/vnd.ms-excel",
+ ".xlsx": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
+ ".xml": "text/xml",
+ ".xpdl": "application/xml",
+ ".xpm": "image/x-xpixmap",
+ ".xsl": "application/xml",
+ ".xul": "application/vnd.mozilla.xul+xml",
+ ".xwd": "image/x-xwindowdump",
+ ".xz": "application/x-xz",
+ ".zip": "application/zip",
+ ".zst": "application/zstd",
+ ".zstd": "application/zstd",
+}
+
+
+def get_type(url):
+ """Return the mime-type of a file specified by its URL."""
+ _, ext = os.path.splitext(url)
+ return TYPES_MAP.get(ext.lower())
diff --git a/pulpcore/content/handler.py b/pulpcore/content/handler.py
--- a/pulpcore/content/handler.py
+++ b/pulpcore/content/handler.py
@@ -1,6 +1,5 @@
import asyncio
import logging
-import mimetypes
import os
import re
from gettext import gettext as _
@@ -36,6 +35,8 @@
Remote,
RemoteArtifact,
)
+from pulpcore.app import mime_types # noqa: E402: module level not at top of file
+
from pulpcore.exceptions import UnsupportedDigestValidationError # noqa: E402
from jinja2 import Template # noqa: E402: module level not at top of file
@@ -308,7 +309,7 @@ def response_headers(path):
Returns:
headers (dict): A dictionary of response headers.
"""
- content_type, encoding = mimetypes.guess_type(path)
+ content_type = mime_types.get_type(path)
headers = {}
if content_type:
headers["Content-Type"] = content_type
| Wrong ContentType header returned for some files via content app
**Version**
All versions
**Describe the bug**
.xml.gz files being served by the content app have a `Content-Type` header of `text/xml` instead of `application/gzip`.
**To Reproduce**
Sync and publish any RPM repo, use HTTPie to request one of the files, observe the headers
**Expected behavior**
`Content-Type` should be `application/gzip`, the type of the outer file rather than that of the decompressed contents.
**Additional context**
https://bugzilla.redhat.com/show_bug.cgi?id=2064092
| This is caused by an invalid value returned from `mimetypes.guess_type` (https://docs.python.org/3/library/mimetypes.html#mimetypes.guess_type).
Seems like the `mimetypes` module fetches types from the system (https://stackoverflow.com/a/40540381/3907906).
Should the mimetype not be determined by the content being served? Or maybe by the published artifact? Guessing the mimetype from the artifact is probably not a good idea. Also, when on cloud storage, it would mean loading the artifact into Pulp just to guess. That is even worse.
There is also a good discussion about the similar problem of serving `.xml.gz` files in pulp2: https://pulp.plan.io/issues/1781#note-24. Adjacent comments reference the PR that resolves the issue.
Their workflow, however, does not work in our use case. In addition to that, I cannot simply add a new mime type to the existing database of types like this: `mimetypes.add_type("application/gzip", ".xml.gz")`. The type is still not detected correctly; it returns `('text/xml', 'gzip')` all the time.
Loading artifacts from cloud storage is also a no-go for me.
Actually, it looks like we are guessing the mime-type just by means of the requested filename, so it is not a cloud-data issue. But still, we are guessing where the content object could tell us the right answer.
> Should the mimetype not be determined by the content being served?
Currently, the type is determined by a file extension: https://github.com/pulp/pulpcore/blob/01557ca70f0863ec006977fb9bca6f8af8285dd6/pulpcore/content/handler.py#L337
| 2022-09-01T19:11:17 |
|
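Why the lookup table fixes the reported `.xml.gz` case (this row and its two backports below): `os.path.splitext()` returns only the final suffix, so a double extension resolves to `.gz` and maps to `application/gzip`, whereas the stdlib `mimetypes` module treats `.gz` as a content *encoding* and reports the inner type. A self-contained check, with a trimmed two-entry copy of the table:

```python
import mimetypes
import os

TYPES_MAP = {".gz": "application/gzip", ".xml": "text/xml"}  # trimmed copy


def get_type(url):
    """Return the mime-type of a file specified by its URL."""
    _, ext = os.path.splitext(url)  # keeps only the last suffix: ".gz"
    return TYPES_MAP.get(ext.lower())


print(get_type("repodata/primary.xml.gz"))     # application/gzip
print(get_type("repodata/primary.xml"))        # text/xml
print(mimetypes.guess_type("primary.xml.gz"))  # ('text/xml', 'gzip')
```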
pulp/pulpcore | 3160 | pulp__pulpcore-3160 | [
"2811"
] | 60918b7e6291a0732b9347d05028f5cd49aec282 | diff --git a/pulpcore/app/mime_types.py b/pulpcore/app/mime_types.py
new file mode 100644
--- /dev/null
+++ b/pulpcore/app/mime_types.py
@@ -0,0 +1,197 @@
+import os
+
+# The mapping was retrieved from the following sources:
+# 1. https://docs.python.org/3/library/mimetypes.html#mimetypes.types_map
+# 2. https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types
+TYPES_MAP = {
+ ".3g2": "audio/3gpp2",
+ ".3gp": "audio/3gpp",
+ ".3gpp": "audio/3gpp",
+ ".3gpp2": "audio/3gpp2",
+ ".7z": "application/x-7z-compressed",
+ ".a": "application/octet-stream",
+ ".aac": "audio/aac",
+ ".abw": "application/x-abiword",
+ ".adts": "audio/aac",
+ ".ai": "application/postscript",
+ ".aif": "audio/x-aiff",
+ ".aifc": "audio/x-aiff",
+ ".aiff": "audio/x-aiff",
+ ".arc": "application/x-freearc",
+ ".ass": "audio/aac",
+ ".au": "audio/basic",
+ ".avif": "image/avif",
+ ".azw": "application/vnd.amazon.ebook",
+ ".bat": "text/plain",
+ ".bcpio": "application/x-bcpio",
+ ".bin": "application/octet-stream",
+ ".bmp": "image/x-ms-bmp",
+ ".bz": "application/x-bzip",
+ ".bz2": "application/x-bzip2",
+ ".c": "text/plain",
+ ".cda": "application/x-cdf",
+ ".cdf": "application/x-netcdf",
+ ".cpio": "application/x-cpio",
+ ".csh": "application/x-csh",
+ ".css": "text/css",
+ ".csv": "text/csv",
+ ".dll": "application/octet-stream",
+ ".doc": "application/msword",
+ ".docx": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
+ ".dot": "application/msword",
+ ".dvi": "application/x-dvi",
+ ".eml": "message/rfc822",
+ ".eot": "application/vnd.ms-fontobject",
+ ".eps": "application/postscript",
+ ".epub": "application/epub+zip",
+ ".etx": "text/x-setext",
+ ".exe": "application/octet-stream",
+ ".gif": "image/gif",
+ ".gtar": "application/x-gtar",
+ ".gz": "application/gzip",
+ ".gzip": "application/gzip",
+ ".h": "text/plain",
+ ".h5": "application/x-hdf5",
+ ".hdf": "application/x-hdf",
+ ".heic": "image/heic",
+ ".heif": "image/heif",
+ ".htm": "text/html",
+ ".html": "text/html",
+ ".ico": "image/vnd.microsoft.icon",
+ ".ics": "text/calendar",
+ ".ief": "image/ief",
+ ".jar": "application/java-archive",
+ ".jpe": "image/jpeg",
+ ".jpeg": "image/jpeg",
+ ".jpg": "image/jpeg",
+ ".js": "application/javascript",
+ ".json": "application/json",
+ ".jsonld": "application/ld+json",
+ ".ksh": "text/plain",
+ ".latex": "application/x-latex",
+ ".loas": "audio/aac",
+ ".m1v": "video/mpeg",
+ ".m3u": "application/vnd.apple.mpegurl",
+ ".m3u8": "application/vnd.apple.mpegurl",
+ ".man": "application/x-troff-man",
+ ".me": "application/x-troff-me",
+ ".mht": "message/rfc822",
+ ".mhtml": "message/rfc822",
+ ".mid": "audio/x-midi",
+ ".midi": "audio/x-midi",
+ ".mif": "application/x-mif",
+ ".mjs": "application/javascript",
+ ".mov": "video/quicktime",
+ ".movie": "video/x-sgi-movie",
+ ".mp2": "audio/mpeg",
+ ".mp3": "audio/mpeg",
+ ".mp4": "video/mp4",
+ ".mpa": "video/mpeg",
+ ".mpe": "video/mpeg",
+ ".mpeg": "video/mpeg",
+ ".mpg": "video/mpeg",
+ ".mpkg": "application/vnd.apple.installer+xml",
+ ".ms": "application/x-troff-ms",
+ ".nc": "application/x-netcdf",
+ ".nws": "message/rfc822",
+ ".o": "application/octet-stream",
+ ".obj": "application/octet-stream",
+ ".oda": "application/oda",
+ ".odp": "application/vnd.oasis.opendocument.presentation",
+ ".ods": "application/vnd.oasis.opendocument.spreadsheet",
+ ".odt": "application/vnd.oasis.opendocument.text",
+ ".oga": "audio/ogg",
+ ".ogv": "video/ogg",
+ ".ogx": "application/ogg",
+ ".opus": "audio/opus",
+ ".otf": "font/otf",
+ ".p12": "application/x-pkcs12",
+ ".p7c": "application/pkcs7-mime",
+ ".pbm": "image/x-portable-bitmap",
+ ".pdf": "application/pdf",
+ ".pfx": "application/x-pkcs12",
+ ".pgm": "image/x-portable-graymap",
+ ".php": "application/x-httpd-php",
+ ".pl": "text/plain",
+ ".png": "image/png",
+ ".pnm": "image/x-portable-anymap",
+ ".pot": "application/vnd.ms-powerpoint",
+ ".ppa": "application/vnd.ms-powerpoint",
+ ".ppm": "image/x-portable-pixmap",
+ ".pps": "application/vnd.ms-powerpoint",
+ ".ppt": "application/vnd.ms-powerpoint",
+ ".pptx": "application/vnd.openxmlformats-officedocument.presentationml.presentation",
+ ".ps": "application/postscript",
+ ".pwz": "application/vnd.ms-powerpoint",
+ ".py": "text/x-python",
+ ".pyc": "application/x-python-code",
+ ".pyo": "application/x-python-code",
+ ".qt": "video/quicktime",
+ ".ra": "audio/x-pn-realaudio",
+ ".ram": "application/x-pn-realaudio",
+ ".rar": "application/vnd.rar",
+ ".ras": "image/x-cmu-raster",
+ ".rdf": "application/xml",
+ ".rgb": "image/x-rgb",
+ ".roff": "application/x-troff",
+ ".rtf": "application/rtf",
+ ".rtx": "text/richtext",
+ ".sgm": "text/x-sgml",
+ ".sgml": "text/x-sgml",
+ ".sh": "application/x-sh",
+ ".shar": "application/x-shar",
+ ".snd": "audio/basic",
+ ".so": "application/octet-stream",
+ ".src": "application/x-wais-source",
+ ".sv4cpio": "application/x-sv4cpio",
+ ".sv4crc": "application/x-sv4crc",
+ ".svg": "image/svg+xml",
+ ".swf": "application/x-shockwave-flash",
+ ".t": "application/x-troff",
+ ".tar": "application/x-tar",
+ ".tcl": "application/x-tcl",
+ ".tex": "application/x-tex",
+ ".texi": "application/x-texinfo",
+ ".texinfo": "application/x-texinfo",
+ ".tif": "image/tiff",
+ ".tiff": "image/tiff",
+ ".tr": "application/x-troff",
+ ".ts": "video/mp2t",
+ ".tsv": "text/tab-separated-values",
+ ".ttf": "font/ttf",
+ ".txt": "text/plain",
+ ".ustar": "application/x-ustar",
+ ".vcf": "text/x-vcard",
+ ".vsd": "application/vnd.visio",
+ ".wasm": "application/wasm",
+ ".wav": "audio/x-wav",
+ ".weba": "audio/webm",
+ ".webm": "video/webm",
+ ".webmanifest": "application/manifest+json",
+ ".webp": "image/webp",
+ ".wiz": "application/msword",
+ ".woff": "font/woff",
+ ".woff2": "font/woff2",
+ ".wsdl": "application/xml",
+ ".xbm": "image/x-xbitmap",
+ ".xhtml": "application/xhtml+xml",
+ ".xlb": "application/vnd.ms-excel",
+ ".xls": "application/vnd.ms-excel",
+ ".xlsx": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
+ ".xml": "text/xml",
+ ".xpdl": "application/xml",
+ ".xpm": "image/x-xpixmap",
+ ".xsl": "application/xml",
+ ".xul": "application/vnd.mozilla.xul+xml",
+ ".xwd": "image/x-xwindowdump",
+ ".xz": "application/x-xz",
+ ".zip": "application/zip",
+ ".zst": "application/zstd",
+ ".zstd": "application/zstd",
+}
+
+
+def get_type(url):
+ """Return the mime-type of a file specified by its URL."""
+ _, ext = os.path.splitext(url)
+ return TYPES_MAP.get(ext.lower())
diff --git a/pulpcore/content/handler.py b/pulpcore/content/handler.py
--- a/pulpcore/content/handler.py
+++ b/pulpcore/content/handler.py
@@ -1,6 +1,5 @@
import asyncio
import logging
-import mimetypes
import os
import re
from gettext import gettext as _
@@ -36,6 +35,8 @@
Remote,
RemoteArtifact,
)
+from pulpcore.app import mime_types # noqa: E402: module level not at top of file
+
from pulpcore.exceptions import UnsupportedDigestValidationError # noqa: E402
from jinja2 import Template # noqa: E402: module level not at top of file
@@ -307,7 +308,7 @@ def response_headers(path):
Returns:
headers (dict): A dictionary of response headers.
"""
- content_type, encoding = mimetypes.guess_type(path)
+ content_type = mime_types.get_type(path)
headers = {}
if content_type:
headers["Content-Type"] = content_type
| Wrong ContentType header returned for some files via content app
**Version**
All versions
**Describe the bug**
.xml.gz files being served by the content app have a `Content-Type` header of `text/xml` instead of `application/gzip`.
**To Reproduce**
Sync and publish any RPM repo, use HTTPie to request one of the files, observe the headers
**Expected behavior**
`Content-Type` should be `application/gzip`, the type of the outer file rather than that of the decompressed contents.
**Additional context**
https://bugzilla.redhat.com/show_bug.cgi?id=2064092
| This is caused by an invalid value returned from `mimetypes.guess_type` (https://docs.python.org/3/library/mimetypes.html#mimetypes.guess_type).
Seems like the `mimetypes` module fetches types from the system (https://stackoverflow.com/a/40540381/3907906).
Should the mimetype not be determined by the content being served? Or maybe by the published artifact? Guessing the mimetype from the artifact is probably not a good idea. Also, when on cloud storage, it would mean loading the artifact into Pulp just to guess. That is even worse.
There is also a good discussion about the similar problem of serving `.xml.gz` files in pulp2: https://pulp.plan.io/issues/1781#note-24. Adjacent comments reference the PR that resolves the issue.
Their workflow, however, does not work in our use case. In addition to that, I cannot simply add a new mime type to the existing database of types like this: `mimetypes.add_type("application/gzip", ".xml.gz")`. The type is still not detected correctly; it returns `('text/xml', 'gzip')` all the time.
Loading artifacts from cloud storage is also a no-go for me.
Actually, it looks like we are guessing the mime-type just by means of the requested filename, so it is not a cloud-data issue. But still, we are guessing where the content object could tell us the right answer.
> Should the mimetype not be determined by the content being served?
Currently, the type is determined by a file extension: https://github.com/pulp/pulpcore/blob/01557ca70f0863ec006977fb9bca6f8af8285dd6/pulpcore/content/handler.py#L337
| 2022-09-01T19:11:17 |
|
pulp/pulpcore | 3161 | pulp__pulpcore-3161 | [
"2811"
] | 4486558546a6029057c95ad1aeca958f6e0496a3 | diff --git a/pulpcore/app/mime_types.py b/pulpcore/app/mime_types.py
new file mode 100644
--- /dev/null
+++ b/pulpcore/app/mime_types.py
@@ -0,0 +1,197 @@
+import os
+
+# The mapping was retrieved from the following sources:
+# 1. https://docs.python.org/3/library/mimetypes.html#mimetypes.types_map
+# 2. https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types
+TYPES_MAP = {
+ ".3g2": "audio/3gpp2",
+ ".3gp": "audio/3gpp",
+ ".3gpp": "audio/3gpp",
+ ".3gpp2": "audio/3gpp2",
+ ".7z": "application/x-7z-compressed",
+ ".a": "application/octet-stream",
+ ".aac": "audio/aac",
+ ".abw": "application/x-abiword",
+ ".adts": "audio/aac",
+ ".ai": "application/postscript",
+ ".aif": "audio/x-aiff",
+ ".aifc": "audio/x-aiff",
+ ".aiff": "audio/x-aiff",
+ ".arc": "application/x-freearc",
+ ".ass": "audio/aac",
+ ".au": "audio/basic",
+ ".avif": "image/avif",
+ ".azw": "application/vnd.amazon.ebook",
+ ".bat": "text/plain",
+ ".bcpio": "application/x-bcpio",
+ ".bin": "application/octet-stream",
+ ".bmp": "image/x-ms-bmp",
+ ".bz": "application/x-bzip",
+ ".bz2": "application/x-bzip2",
+ ".c": "text/plain",
+ ".cda": "application/x-cdf",
+ ".cdf": "application/x-netcdf",
+ ".cpio": "application/x-cpio",
+ ".csh": "application/x-csh",
+ ".css": "text/css",
+ ".csv": "text/csv",
+ ".dll": "application/octet-stream",
+ ".doc": "application/msword",
+ ".docx": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
+ ".dot": "application/msword",
+ ".dvi": "application/x-dvi",
+ ".eml": "message/rfc822",
+ ".eot": "application/vnd.ms-fontobject",
+ ".eps": "application/postscript",
+ ".epub": "application/epub+zip",
+ ".etx": "text/x-setext",
+ ".exe": "application/octet-stream",
+ ".gif": "image/gif",
+ ".gtar": "application/x-gtar",
+ ".gz": "application/gzip",
+ ".gzip": "application/gzip",
+ ".h": "text/plain",
+ ".h5": "application/x-hdf5",
+ ".hdf": "application/x-hdf",
+ ".heic": "image/heic",
+ ".heif": "image/heif",
+ ".htm": "text/html",
+ ".html": "text/html",
+ ".ico": "image/vnd.microsoft.icon",
+ ".ics": "text/calendar",
+ ".ief": "image/ief",
+ ".jar": "application/java-archive",
+ ".jpe": "image/jpeg",
+ ".jpeg": "image/jpeg",
+ ".jpg": "image/jpeg",
+ ".js": "application/javascript",
+ ".json": "application/json",
+ ".jsonld": "application/ld+json",
+ ".ksh": "text/plain",
+ ".latex": "application/x-latex",
+ ".loas": "audio/aac",
+ ".m1v": "video/mpeg",
+ ".m3u": "application/vnd.apple.mpegurl",
+ ".m3u8": "application/vnd.apple.mpegurl",
+ ".man": "application/x-troff-man",
+ ".me": "application/x-troff-me",
+ ".mht": "message/rfc822",
+ ".mhtml": "message/rfc822",
+ ".mid": "audio/x-midi",
+ ".midi": "audio/x-midi",
+ ".mif": "application/x-mif",
+ ".mjs": "application/javascript",
+ ".mov": "video/quicktime",
+ ".movie": "video/x-sgi-movie",
+ ".mp2": "audio/mpeg",
+ ".mp3": "audio/mpeg",
+ ".mp4": "video/mp4",
+ ".mpa": "video/mpeg",
+ ".mpe": "video/mpeg",
+ ".mpeg": "video/mpeg",
+ ".mpg": "video/mpeg",
+ ".mpkg": "application/vnd.apple.installer+xml",
+ ".ms": "application/x-troff-ms",
+ ".nc": "application/x-netcdf",
+ ".nws": "message/rfc822",
+ ".o": "application/octet-stream",
+ ".obj": "application/octet-stream",
+ ".oda": "application/oda",
+ ".odp": "application/vnd.oasis.opendocument.presentation",
+ ".ods": "application/vnd.oasis.opendocument.spreadsheet",
+ ".odt": "application/vnd.oasis.opendocument.text",
+ ".oga": "audio/ogg",
+ ".ogv": "video/ogg",
+ ".ogx": "application/ogg",
+ ".opus": "audio/opus",
+ ".otf": "font/otf",
+ ".p12": "application/x-pkcs12",
+ ".p7c": "application/pkcs7-mime",
+ ".pbm": "image/x-portable-bitmap",
+ ".pdf": "application/pdf",
+ ".pfx": "application/x-pkcs12",
+ ".pgm": "image/x-portable-graymap",
+ ".php": "application/x-httpd-php",
+ ".pl": "text/plain",
+ ".png": "image/png",
+ ".pnm": "image/x-portable-anymap",
+ ".pot": "application/vnd.ms-powerpoint",
+ ".ppa": "application/vnd.ms-powerpoint",
+ ".ppm": "image/x-portable-pixmap",
+ ".pps": "application/vnd.ms-powerpoint",
+ ".ppt": "application/vnd.ms-powerpoint",
+ ".pptx": "application/vnd.openxmlformats-officedocument.presentationml.presentation",
+ ".ps": "application/postscript",
+ ".pwz": "application/vnd.ms-powerpoint",
+ ".py": "text/x-python",
+ ".pyc": "application/x-python-code",
+ ".pyo": "application/x-python-code",
+ ".qt": "video/quicktime",
+ ".ra": "audio/x-pn-realaudio",
+ ".ram": "application/x-pn-realaudio",
+ ".rar": "application/vnd.rar",
+ ".ras": "image/x-cmu-raster",
+ ".rdf": "application/xml",
+ ".rgb": "image/x-rgb",
+ ".roff": "application/x-troff",
+ ".rtf": "application/rtf",
+ ".rtx": "text/richtext",
+ ".sgm": "text/x-sgml",
+ ".sgml": "text/x-sgml",
+ ".sh": "application/x-sh",
+ ".shar": "application/x-shar",
+ ".snd": "audio/basic",
+ ".so": "application/octet-stream",
+ ".src": "application/x-wais-source",
+ ".sv4cpio": "application/x-sv4cpio",
+ ".sv4crc": "application/x-sv4crc",
+ ".svg": "image/svg+xml",
+ ".swf": "application/x-shockwave-flash",
+ ".t": "application/x-troff",
+ ".tar": "application/x-tar",
+ ".tcl": "application/x-tcl",
+ ".tex": "application/x-tex",
+ ".texi": "application/x-texinfo",
+ ".texinfo": "application/x-texinfo",
+ ".tif": "image/tiff",
+ ".tiff": "image/tiff",
+ ".tr": "application/x-troff",
+ ".ts": "video/mp2t",
+ ".tsv": "text/tab-separated-values",
+ ".ttf": "font/ttf",
+ ".txt": "text/plain",
+ ".ustar": "application/x-ustar",
+ ".vcf": "text/x-vcard",
+ ".vsd": "application/vnd.visio",
+ ".wasm": "application/wasm",
+ ".wav": "audio/x-wav",
+ ".weba": "audio/webm",
+ ".webm": "video/webm",
+ ".webmanifest": "application/manifest+json",
+ ".webp": "image/webp",
+ ".wiz": "application/msword",
+ ".woff": "font/woff",
+ ".woff2": "font/woff2",
+ ".wsdl": "application/xml",
+ ".xbm": "image/x-xbitmap",
+ ".xhtml": "application/xhtml+xml",
+ ".xlb": "application/vnd.ms-excel",
+ ".xls": "application/vnd.ms-excel",
+ ".xlsx": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
+ ".xml": "text/xml",
+ ".xpdl": "application/xml",
+ ".xpm": "image/x-xpixmap",
+ ".xsl": "application/xml",
+ ".xul": "application/vnd.mozilla.xul+xml",
+ ".xwd": "image/x-xwindowdump",
+ ".xz": "application/x-xz",
+ ".zip": "application/zip",
+ ".zst": "application/zstd",
+ ".zstd": "application/zstd",
+}
+
+
+def get_type(url):
+ """Return the mime-type of a file specified by its URL."""
+ _, ext = os.path.splitext(url)
+ return TYPES_MAP.get(ext.lower())
diff --git a/pulpcore/content/handler.py b/pulpcore/content/handler.py
--- a/pulpcore/content/handler.py
+++ b/pulpcore/content/handler.py
@@ -1,6 +1,5 @@
import asyncio
import logging
-import mimetypes
import os
import re
from gettext import gettext as _
@@ -36,6 +35,8 @@
Remote,
RemoteArtifact,
)
+from pulpcore.app import mime_types # noqa: E402: module level not at top of file
+
from pulpcore.exceptions import UnsupportedDigestValidationError # noqa: E402
from jinja2 import Template # noqa: E402: module level not at top of file
@@ -308,7 +309,7 @@ def response_headers(path):
Returns:
headers (dict): A dictionary of response headers.
"""
- content_type, encoding = mimetypes.guess_type(path)
+ content_type = mime_types.get_type(path)
headers = {}
if content_type:
headers["Content-Type"] = content_type
| Wrong ContentType header returned for some files via content app
**Version**
All versions
**Describe the bug**
.xml.gz files being served by the content app have a `Content-Type` header of `text/xml` instead of `application/gzip`.
**To Reproduce**
Sync and publish any RPM repo, use HTTPie to request one of the files, observe the headers
**Expected behavior**
`Content-Type` should be `application/gzip`, the type of the outer file rather than that of the decompressed contents.
**Additional context**
https://bugzilla.redhat.com/show_bug.cgi?id=2064092
| This is caused by an invalid value returned from `mimetypes.guess_type` (https://docs.python.org/3/library/mimetypes.html#mimetypes.guess_type).
Seems like the `mimetypes` module fetches types from the system (https://stackoverflow.com/a/40540381/3907906).
Should the mimetype not be determined by the content being served? Or maybe by the published artifact? Guessing the mimetype from the artifact is probably not a good idea. Also, when on cloud storage, it would mean loading the artifact into Pulp just to guess. That is even worse.
There is also a good discussion about the similar problem of serving `.xml.gz` files in pulp2: https://pulp.plan.io/issues/1781#note-24. Adjacent comments reference the PR that resolves the issue.
Their workflow, however, does not work in our use case. In addition to that, I cannot simply add a new mime type to the existing database of types like this: `mimetypes.add_type("application/gzip", ".xml.gz")`. The type is still not detected correctly; it returns `('text/xml', 'gzip')` all the time.
Loading artifacts from cloud storage is also a no-go for me.
Actually, it looks like we are guessing the mime-type just by means of the requested filename, so it is not a cloud-data issue. But still, we are guessing where the content object could tell us the right answer.
> Should the mimetype not be determined by the content being served?
Currently, the type is determined by a file extension: https://github.com/pulp/pulpcore/blob/01557ca70f0863ec006977fb9bca6f8af8285dd6/pulpcore/content/handler.py#L337
| 2022-09-01T19:11:37 |
|
pulp/pulpcore | 3165 | pulp__pulpcore-3165 | [
"3111"
] | f82517d1554529298a3c2255a17566923bd03895 | diff --git a/pulpcore/plugin/stages/content_stages.py b/pulpcore/plugin/stages/content_stages.py
--- a/pulpcore/plugin/stages/content_stages.py
+++ b/pulpcore/plugin/stages/content_stages.py
@@ -165,23 +165,26 @@ def process_batch():
# on select-for-update. So, we select-for-update, in pulp_id order, the
# rows we're about to update as one db-call, and then do the update in a
# second.
+ #
+ # NOTE: select-for-update requires being in an atomic-transaction. We are
+ # **already in an atomic transaction** at this point as a result of the
+ # "with transaction.atomic():", above.
ids = [k.pulp_id for k in to_update_ca_bulk]
- with transaction.atomic():
- # "len()" forces the QA to be evaluated. Using exist() or count() won't
- # work for us - Django is smart enough to either not-order, or even
- # not-emit, a select-for-update in these cases.
- #
- # To maximize performance, we make sure to only ask for pulp_ids, and
- # avoid instantiating a python-object for the affected CAs by using
- # values_list()
- len(
- ContentArtifact.objects.filter(pulp_id__in=ids)
- .only("pulp_id")
- .order_by("pulp_id")
- .select_for_update()
- .values_list()
- )
- ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
+ # "len()" forces the QuerySet to be evaluated. Using exist() or count() won't
+ # work for us - Django is smart enough to either not-order, or even
+ # not-emit, a select-for-update in these cases.
+ #
+ # To maximize performance, we make sure to only ask for pulp_ids, and
+ # avoid instantiating a python-object for the affected CAs by using
+ # values_list()
+ subq = (
+ ContentArtifact.objects.filter(pulp_id__in=ids)
+ .only("pulp_id")
+ .order_by("pulp_id")
+ .select_for_update()
+ )
+ len(subq.values_list())
+ ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
# To avoid a similar deadlock issue when calling get_or_create, we sort the
# "new" CAs to make sure inserts happen in a defined order. Since we can't
| bulk_update() can still deadlock in content_stages
**Version**
pulpcore/3.14.16 (but problem exists in main)
**Describe the bug**
Syncing multiple repositories with high content overlap under high concurrency continues to hit an occasional deadlock.
Traceback and postgres log for the particular failure:
```
2022-08-10 15:51:29 EDT ERROR: deadlock detected
2022-08-10 15:51:29 EDT DETAIL: Process 55740 waits for ShareLock on transaction 61803; blocked by process 55746.
Process 55746 waits for ShareLock on transaction 61805; blocked by process 55740.
Process 55740: SELECT ....
Process 55746: COMMIT
2022-08-10 15:51:29 EDT HINT: See server log for query details.
2022-08-10 15:51:29 EDT CONTEXT: while locking tuple (209,51) in relation "core_contentartifact"
```
```
pulpcore-worker-5[54158]: pulp [88649f1c-3393-4693-aab7-d73b62eeda62]: pulpcore.tasking.pulpcore_worker:INFO: Task 407bb67b-65d0-4d65-b9c8-b1aa1f2c87fd failed (deadlock detected
pulpcore-worker-5[54158]: DETAIL: Process 55740 waits for ShareLock on transaction 61803; blocked by process 55746.
pulpcore-worker-5[54158]: Process 55746 waits for ShareLock on transaction 61805; blocked by process 55740.
pulpcore-worker-5[54158]: HINT: See server log for query details.
pulpcore-worker-5[54158]: CONTEXT: while locking tuple (209,51) in relation "core_contentartifact"
pulpcore-worker-5[54158]: )
pulpcore-worker-5[54158]: pulp [88649f1c-3393-4693-aab7-d73b62eeda62]: pulpcore.tasking.pulpcore_worker:INFO: File "/usr/lib/python3.6/site-packages/pulpcore/tasking/pulpcore_worker.py", line 342, in _perform_task
pulpcore-worker-5[54158]: result = func(*args, **kwargs)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/pulp_rpm/app/tasks/synchronizing.py", line 494, in synchronize
pulpcore-worker-5[54158]: version = dv.create()
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/declarative_version.py", line 151, in create
pulpcore-worker-5[54158]: loop.run_until_complete(pipeline)
pulpcore-worker-5[54158]: File "/usr/lib64/python3.6/asyncio/base_events.py", line 484, in run_until_complete
pulpcore-worker-5[54158]: return future.result()
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/api.py", line 225, in create_pipeline
pulpcore-worker-5[54158]: await asyncio.gather(*futures)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/api.py", line 43, in __call__
pulpcore-worker-5[54158]: await self.run()
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/content_stages.py", line 178, in run
pulpcore-worker-5[54158]: .order_by("pulp_id")
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/models/query.py", line 256, in __len__
pulpcore-worker-5[54158]: self._fetch_all()
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/models/query.py", line 1242, in _fetch_all
pulpcore-worker-5[54158]: self._result_cache = list(self._iterable_class(self))
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/models/query.py", line 144, in __iter__
pulpcore-worker-5[54158]: return compiler.results_iter(tuple_expected=True, chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1094, in results_iter
pulpcore-worker-5[54158]: results = self.execute_sql(MULTI, chunked_fetch=chunked_fetch, chunk_size=chunk_size)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/models/sql/compiler.py", line 1142, in execute_sql
pulpcore-worker-5[54158]: cursor.execute(sql, params)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/backends/utils.py", line 67, in execute
pulpcore-worker-5[54158]: return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers
pulpcore-worker-5[54158]: return executor(sql, params, many, context)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/backends/utils.py", line 84, in _execute
pulpcore-worker-5[54158]: return self.cursor.execute(sql, params)
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/utils.py", line 89, in __exit__
pulpcore-worker-5[54158]: raise dj_exc_value.with_traceback(traceback) from exc_value
pulpcore-worker-5[54158]: File "/usr/lib/python3.6/site-packages/django/db/backends/utils.py", line 84, in _execute
pulpcore-worker-5[54158]: return self.cursor.execute(sql, params)
```
**To Reproduce**
See the QE test-setup from https://bugzilla.redhat.com/show_bug.cgi?id=2062526. We have not been able to force the problem with a synthetic test case.
**Expected behavior**
All the repositories should sync all their content without any of the sync processes failing due to detected deadlocks.
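For context, a minimal sketch of the ordering guardrail the patch relies on (generic names; the follow-up backports below adjust the exact query shape):
```python
from django.db import transaction

def locked_bulk_update(model, rows, fields):
    # Two workers that take row locks in the same defined order cannot
    # deadlock: the later one blocks on its first contended row instead of
    # holding a partial lock set while waiting for the rest.
    ids = sorted(row.pk for row in rows)
    with transaction.atomic():
        subq = model.objects.filter(pk__in=ids).order_by("pk").select_for_update()
        len(model.objects.filter(pk__in=subq).values_list())  # force the locks
        model.objects.bulk_update(rows, fields)
```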
| https://bugzilla.redhat.com/show_bug.cgi?id=2082209 | 2022-09-01T21:45:36 |
|
pulp/pulpcore | 3,179 | pulp__pulpcore-3179 | [
"3061"
] | 1d851852f6f4cf01f4cac2d710dd47e1cca1e3f0 | diff --git a/pulpcore/app/settings.py b/pulpcore/app/settings.py
--- a/pulpcore/app/settings.py
+++ b/pulpcore/app/settings.py
@@ -291,6 +291,8 @@
TELEMETRY = True
+HIDE_GUARDED_DISTRIBUTIONS = False
+
# HERE STARTS DYNACONF EXTENSION LOAD (Keep at the very bottom of settings.py)
# Read more at https://dynaconf.readthedocs.io/en/latest/guides/django.html
from dynaconf import DjangoDynaconf, Validator # noqa
diff --git a/pulpcore/content/handler.py b/pulpcore/content/handler.py
--- a/pulpcore/content/handler.py
+++ b/pulpcore/content/handler.py
@@ -77,6 +77,8 @@ class DistroListings(HTTPOk):
def __init__(self, path, distros):
"""Create the HTML response."""
+ if settings.HIDE_GUARDED_DISTRIBUTIONS:
+ distros = distros.filter(content_guard__isnull=True)
base_paths = (
distros.annotate(rel_path=models.functions.Substr("base_path", 1 + len(path)))
.annotate(
| Add the ability to hide certificate guarded repositories from the listing present at `/pulp/content`
**Is your feature request related to a problem? Please describe.**
For repositories that are protected by client certificates, users want a way to keep them from appearing on the listing page that is present when visiting `/pulp/content`. This is how Pulp 2 behaved, and users have expressed a desire for that behavior to return.
My understanding is that users view these as protected, and more sensitive than repositories not guarded by client certificates, and would prefer they do not appear as browsable in any way to users who do not have the proper client certificates to present to the server.
**Additional context**
There is a Satellite BZ that provides additional context and the originating request: https://bugzilla.redhat.com/show_bug.cgi?id=2088559
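With the patch above, hiding guarded distributions is opt-in via a setting (sketch; the flag defaults to `False`):
```python
# settings.py — when enabled, content-guarded distributions are filtered
# out of the /pulp/content/ directory listing.
HIDE_GUARDED_DISTRIBUTIONS = True
```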
| Can this be handled if that page was rendered as a (jinja2-) template?
Should users with the proper access (through the content guard) be able to see the paths again?
This:
> For repositories that are protected by client certificates, users desire a way for these to not appear on the listing page that is present when visiting /pulp/content
Would be quite simple to do, a one line change in fact [0]
But this:
> do not appear as browsable in any way to users who do not have the proper client certificates to present to the server.
Would be a bit more complicated. @ehelms Can you clarify which behavior you're looking for?
[0]
```diff
Author: Daniel Alley <[email protected]>
Date: Wed Aug 17 09:38:03 2022 -0400
Hide contentguard-protected repositories from the directory listing
closes #3061
diff --git a/CHANGES/3061.bugfix b/CHANGES/3061.bugfix
new file mode 100644
index 000000000..aedf06027
--- /dev/null
+++ b/CHANGES/3061.bugfix
@@ -0,0 +1 @@
+Repositories that are protected by a content guard should be hidden from the directory listing in the content app.
diff --git a/pulpcore/content/handler.py b/pulpcore/content/handler.py
index 8c864f819..ee858d7cf 100644
--- a/pulpcore/content/handler.py
+++ b/pulpcore/content/handler.py
@@ -162,7 +162,7 @@ class Handler:
def get_base_paths_blocking():
distro_model = self.distribution_model or Distribution
- raise DistroListings(path="", distros=distro_model.objects.all())
+ raise DistroListings(path="", distros=distro_model.objects.filter(content_guard__isnull=True))
if request.method.lower() == "head":
raise HTTPOk(headers={"Content-Type": "text/html"})
```
The way it seems to work today is that all repositories within Pulp show up in a list when browsing to `/pulp/content/`. The repositories which are content-guarded shouldn't appear in that list. That would be the simple blanket fix to the problem, I think.
Users have expressed that if they present valid client certificates when visiting `/pulp/content/`, they would see those content-guarded repositories listed. I think this is a nice-to-have, not a must-have, for almost all users.
Accessing an individual repository at its fully qualified path is already protected and to my knowledge needs no additional work; e.g., if I visit a content-guarded repository I see:
```
403: [('PEM routines', 'get_name', 'no start line')]
```
(One could argue that should be a 404 rather than a 403.)
>The way it seems to work today is that all repositories within Pulp show up in a list when browsing to /pulp/content/. The repositories which are content-guarded shouldn't appear in that list. That would be the simple blanket fix to the problem, I think.
>Users have expressed that if they present valid client certificates when visiting /pulp/content/, they would see those content-guarded repositories listed. I think this is a nice-to-have, not a must-have, for almost all users.
I'm pretty certain we can do this in an hour or two in that case. Plus some margin for edge cases I haven't thought of and writing some automated tests.
> Accessing an individual repository at it's fully qualified path is already protected and to my knowledge needs no additional work, e.g. if visit a content guarded repository I see:
Great news, glad there's no issue there
> (One could argue that should be a 404 rather than a 403.)
For obscuration purposes I presume? (If you were spraying requests you could theoretically uncover which protected directories exist based on getting not-found or access-denied).
> For obscuration purposes I presume? (If you were spraying requests you could theoretically uncover which protected directories exist based on getting not-found or access-denied).
Correct.
> > For obscuration purposes I presume? (If you were spraying requests you could theoretically uncover which protected directories exist based on getting not-found or access-denied).
>
> Correct.
I agree that this may be a valid concern, but I think that would be a separate issue. Should we have one filed?
I believe so, yes. | 2022-09-08T10:13:09 |
|
pulp/pulpcore | 3,182 | pulp__pulpcore-3182 | [
"2967"
] | ed113dce4c18a7b3a49bc676c7bbbf2722190a5b | diff --git a/pulpcore/app/management/commands/add-signing-service.py b/pulpcore/app/management/commands/add-signing-service.py
--- a/pulpcore/app/management/commands/add-signing-service.py
+++ b/pulpcore/app/management/commands/add-signing-service.py
@@ -11,6 +11,8 @@
from django.apps import apps
from django.db.utils import IntegrityError
+from pulpcore.app.models.content import SigningService as BaseSigningService
+
class Command(BaseCommand):
"""
@@ -65,6 +67,12 @@ def handle(self, *args, **options):
SigningService = apps.get_model(app_label, service_class)
except LookupError as e:
raise CommandError(str(e))
+ if not issubclass(SigningService, BaseSigningService):
+ raise CommandError(
+ _("Class '{}' is not a subclass of the base 'core:SigningService' class.").format(
+ options["class"]
+ )
+ )
gpg = gnupg.GPG(gnupghome=options["gnupghome"], keyring=options["keyring"])
diff --git a/pulpcore/app/management/commands/remove-signing-service.py b/pulpcore/app/management/commands/remove-signing-service.py
new file mode 100644
--- /dev/null
+++ b/pulpcore/app/management/commands/remove-signing-service.py
@@ -0,0 +1,66 @@
+from gettext import gettext as _
+
+from django.core.management import BaseCommand, CommandError
+
+from django.core.exceptions import ObjectDoesNotExist
+
+from django.apps import apps
+from django.db.utils import IntegrityError
+
+from pulpcore.app.models.content import SigningService as BaseSigningService
+
+
+class Command(BaseCommand):
+ """
+ Django management command for removing a signing service.
+
+ This command is in tech-preview.
+ """
+
+ help = "Removes an existing AsciiArmoredDetachedSigningService. [tech-preview]"
+
+ def add_arguments(self, parser):
+ parser.add_argument(
+ "name",
+ help=_("Name that the signing_service has in the database."),
+ )
+ parser.add_argument(
+ "--class",
+ default="core:AsciiArmoredDetachedSigningService",
+ required=False,
+ help=_("Signing service class prefixed by the app label separated by a colon."),
+ )
+
+ def handle(self, *args, **options):
+ name = options["name"]
+ if ":" not in options["class"]:
+ raise CommandError(_("The signing service class was not provided in a proper format."))
+ app_label, service_class = options["class"].split(":")
+
+ try:
+ SigningService = apps.get_model(app_label, service_class)
+ except LookupError as e:
+ raise CommandError(str(e))
+ if not issubclass(SigningService, BaseSigningService):
+ raise CommandError(
+ _("Class '{}' is not a subclass of the base 'core:SigningService' class.").format(
+ options["class"]
+ )
+ )
+
+ try:
+ SigningService.objects.get(name=name).delete()
+ except IntegrityError:
+ raise CommandError(
+ _("Signing service '{}' could not be removed because it's still in use.").format(
+ name
+ )
+ )
+ except ObjectDoesNotExist:
+ raise CommandError(
+ _("Signing service '{}' of class '{}' does not exists.").format(
+ name, options["class"]
+ )
+ )
+ else:
+ self.stdout.write(_("Signing service '{}' has been successfully removed.").format(name))
| diff --git a/pulpcore/tests/functional/api/test_signing_service.py b/pulpcore/tests/functional/api/test_signing_service.py
new file mode 100644
--- /dev/null
+++ b/pulpcore/tests/functional/api/test_signing_service.py
@@ -0,0 +1,7 @@
+import pytest
+
+
[email protected]
+def test_crud_signing_service(ascii_armored_detached_signing_service):
+ service = ascii_armored_detached_signing_service
+ assert "/api/v3/signing-services/" in service.pulp_href
diff --git a/pulpcore/tests/functional/gpg_ascii_armor_signing_service.py b/pulpcore/tests/functional/gpg_ascii_armor_signing_service.py
--- a/pulpcore/tests/functional/gpg_ascii_armor_signing_service.py
+++ b/pulpcore/tests/functional/gpg_ascii_armor_signing_service.py
@@ -157,9 +157,10 @@ def _ascii_armored_detached_signing_service_name(
cmd = (
"pulpcore-manager",
- "shell",
- "-c",
- f'from pulpcore.app.models import AsciiArmoredDetachedSigningService;print(AsciiArmoredDetachedSigningService.objects.get(name="{service_name}").delete())', # noqa: E501
+ "remove-signing-service",
+ service_name,
+ "--class",
+ "core:AsciiArmoredDetachedSigningService",
)
subprocess.run(
cmd,
| Facilitate removal of signing services in functional tests
Allowing testers to call a function for removing a signing service would be useful. Currently, we remove the signing service in the following manner: https://github.com/pulp/pulpcore/blob/6287ef36d2fc20ca316e1e889296fc55072ff477/pulpcore/tests/functional/gpg_ascii_armor_signing_service.py#L158-L168
Having a fixture `remove_signing_service(service_name, service_type)` accessible globally would suffice.
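With the patch above, the cleanup can shell out to the new management command; the invocation below mirrors the test patch (`$SERVICE_NAME` is a placeholder):
```bash
pulpcore-manager remove-signing-service "$SERVICE_NAME" \
    --class core:AsciiArmoredDetachedSigningService
```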
| Actually, I think we should have a management command for admins too.
Also, reading the original issue, I think we should not provide a `remove_signing_service` fixture per se, but call the management command in the cleanup code of the signing service fixture. | 2022-09-08T12:05:14 |
pulp/pulpcore | 3,188 | pulp__pulpcore-3188 | [
"3187",
"3187"
] | 6857606f5d6136281747317a5699ebaa08cb1699 | diff --git a/pulpcore/app/tasks/export.py b/pulpcore/app/tasks/export.py
--- a/pulpcore/app/tasks/export.py
+++ b/pulpcore/app/tasks/export.py
@@ -2,6 +2,7 @@
import json
import logging
import os
+import os.path
import subprocess
import tarfile
@@ -75,6 +76,15 @@ def _export_to_file_system(path, relative_paths_to_artifacts, method=FS_EXPORT_M
and method != FS_EXPORT_METHODS.WRITE
):
raise RuntimeError(_("Only write is supported for non-filesystem storage."))
+ os.makedirs(path)
+ export_not_on_same_filesystem = (
+ settings.DEFAULT_FILE_STORAGE == "pulpcore.app.models.storage.FileSystem"
+ and os.stat(settings.MEDIA_ROOT).st_dev != os.stat(path).st_dev
+ )
+
+ if method == FS_EXPORT_METHODS.HARDLINK and export_not_on_same_filesystem:
+ log.info(_("Hard link cannot be created, file will be copied."))
+ method = FS_EXPORT_METHODS.WRITE
for relative_path, artifact in relative_paths_to_artifacts.items():
dest = os.path.join(path, relative_path)
@@ -82,9 +92,11 @@ def _export_to_file_system(path, relative_paths_to_artifacts, method=FS_EXPORT_M
if method == FS_EXPORT_METHODS.SYMLINK:
src = os.path.join(settings.MEDIA_ROOT, artifact.file.name)
+ os.path.lexists(dest) and os.unlink(dest)
os.symlink(src, dest)
elif method == FS_EXPORT_METHODS.HARDLINK:
src = os.path.join(settings.MEDIA_ROOT, artifact.file.name)
+ os.path.lexists(dest) and os.unlink(dest)
os.link(src, dest)
elif method == FS_EXPORT_METHODS.WRITE:
with open(dest, "wb") as f, artifact.file as af:
| FS Exporter should copy if it can't hard link
**Version**
Pulpcore 3.16 and above
**Describe the bug**
The FS Exporter works well when exporting with hardlinks, but does not handle the case where the linking might fail because pulp and exports directory might be on different mounted volumes.
**To Reproduce**
Steps to reproduce the behavior:
- Setup an nfs or sshfs mount
- add the mount directory to `ALLOWED_EXPORT_PATHS` in `/etc/pulp/settings.py`
- Export a repo to this mount point via fs exporter
**Actual Behavior**
```
Error: [Errno 18] Invalid cross-device link: '/var/lib/pulp/media/artifact/7a/831f9f90bf4d21027572cb503d20b702de8e8785b02c0397445c2e481d81b3' -> '/exports/repo/Packages/b/bear-4.1-1.noarch.rpm'
```
**Expected behavior**
If it can't hard link, it should instead fall back to making a copy, along the lines of -> https://github.com/pulp/pulp-2to3-migration/blob/main/pulp_2to3_migration/app/plugin/content.py#L163-L170 (see the sketch below).
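A generic sketch of that fallback (the patch above instead compares `st_dev` up front rather than catching the error):
```python
import errno
import os
import shutil

def link_or_copy(src, dest):
    # Hard links cannot cross filesystem boundaries; fall back to a plain
    # copy when the OS reports EXDEV ("Invalid cross-device link").
    try:
        os.link(src, dest)
    except OSError as e:
        if e.errno != errno.EXDEV:
            raise
        shutil.copy2(src, dest)
```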
**Additional context**
It was discussed during the https://github.com/pulp/pulpcore/pull/2951 PR but got missed in that commit.
BZ: https://bugzilla.redhat.com/show_bug.cgi?id=2125444
| 2022-09-09T03:34:14 |
||
pulp/pulpcore | 3,193 | pulp__pulpcore-3193 | [
"3192"
] | b1207f30c722da17db0c8d04b98f997a10bb26f0 | diff --git a/pulpcore/plugin/stages/content_stages.py b/pulpcore/plugin/stages/content_stages.py
--- a/pulpcore/plugin/stages/content_stages.py
+++ b/pulpcore/plugin/stages/content_stages.py
@@ -183,7 +183,10 @@ def process_batch():
.order_by("pulp_id")
.select_for_update()
)
- len(subq.values_list())
+ # NOTE: it might look like you can "safely" make this request
+ # "more efficient". You'd be wrong, and would only be removing an
+ # ordering-guardrail preventing deadlock. Don't touch.
+ len(ContentArtifact.objects.filter(pk__in=subq).values_list())
ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
# To avoid a similar deadlock issue when calling get_or_create, we sort the
| content_stages deadlock - Once More Unto The Breach
version: core/3.16+
content-stage deadlock still occurs in QE test case - see https://bugzilla.redhat.com/show_bug.cgi?id=2082209#c27
| 2022-09-13T13:28:02 |
||
pulp/pulpcore | 3,195 | pulp__pulpcore-3195 | [
"3192"
] | 9d734071c4a71e82d68c540ac28c0c3a34c320fb | diff --git a/pulpcore/plugin/stages/content_stages.py b/pulpcore/plugin/stages/content_stages.py
--- a/pulpcore/plugin/stages/content_stages.py
+++ b/pulpcore/plugin/stages/content_stages.py
@@ -183,7 +183,10 @@ def process_batch():
.order_by("pulp_id")
.select_for_update()
)
- len(subq.values_list())
+ # NOTE: it might look like you can "safely" make this request
+ # "more efficient". You'd be wrong, and would only be removing an
+ # ordering-guardrail preventing deadlock. Don't touch.
+ len(ContentArtifact.objects.filter(pk__in=subq).values_list())
ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
# To avoid a similar deadlock issue when calling get_or_create, we sort the
| content_stages deadlock - Once More Unto The Breach
version: core/3.16+
content-stage deadlock still occurs in QE test case - see https://bugzilla.redhat.com/show_bug.cgi?id=2082209#c27
| 2022-09-13T15:18:43 |
||
pulp/pulpcore | 3,196 | pulp__pulpcore-3196 | [
"3192"
] | 8d78c126c4d574de5fe866e8a5c1e7a47e1c5411 | diff --git a/pulpcore/plugin/stages/content_stages.py b/pulpcore/plugin/stages/content_stages.py
--- a/pulpcore/plugin/stages/content_stages.py
+++ b/pulpcore/plugin/stages/content_stages.py
@@ -183,7 +183,10 @@ def process_batch():
.order_by("pulp_id")
.select_for_update()
)
- len(subq.values_list())
+ # NOTE: it might look like you can "safely" make this request
+ # "more efficient". You'd be wrong, and would only be removing an
+ # ordering-guardrail preventing deadlock. Don't touch.
+ len(ContentArtifact.objects.filter(pk__in=subq).values_list())
ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
# To avoid a similar deadlock issue when calling get_or_create, we sort the
| content_stages deadlock - Once More Unto The Breach
version: core/3.16+
content-stage deadlock still occurs in QE test case - see https://bugzilla.redhat.com/show_bug.cgi?id=2082209#c27
| 2022-09-13T15:18:59 |
||
pulp/pulpcore | 3,197 | pulp__pulpcore-3197 | [
"3192"
] | 9e302d26059a6ebb2cd2a26fb00bbdf6d4e270c3 | diff --git a/pulpcore/plugin/stages/content_stages.py b/pulpcore/plugin/stages/content_stages.py
--- a/pulpcore/plugin/stages/content_stages.py
+++ b/pulpcore/plugin/stages/content_stages.py
@@ -183,7 +183,10 @@ def process_batch():
.order_by("pulp_id")
.select_for_update()
)
- len(subq.values_list())
+ # NOTE: it might look like you can "safely" make this request
+ # "more efficient". You'd be wrong, and would only be removing an
+ # ordering-guardrail preventing deadlock. Don't touch.
+ len(ContentArtifact.objects.filter(pk__in=subq).values_list())
ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
# To avoid a similar deadlock issue when calling get_or_create, we sort the
| content_stages deadlock - Once More Unto The Breach
version: core/3.16+
content-stage deadlock still occurs in QE test case - see https://bugzilla.redhat.com/show_bug.cgi?id=2082209#c27
| 2022-09-13T15:19:16 |
||
pulp/pulpcore | 3,198 | pulp__pulpcore-3198 | [
"3192"
] | 4e865f76c3750e10693d75284807f7279997dc93 | diff --git a/pulpcore/plugin/stages/content_stages.py b/pulpcore/plugin/stages/content_stages.py
--- a/pulpcore/plugin/stages/content_stages.py
+++ b/pulpcore/plugin/stages/content_stages.py
@@ -183,7 +183,10 @@ def process_batch():
.order_by("pulp_id")
.select_for_update()
)
- len(subq.values_list())
+ # NOTE: it might look like you can "safely" make this request
+ # "more efficient". You'd be wrong, and would only be removing an
+ # ordering-guardrail preventing deadlock. Don't touch.
+ len(ContentArtifact.objects.filter(pk__in=subq).values_list())
ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
# To avoid a similar deadlock issue when calling get_or_create, we sort the
| content_stages deadlock - Once More Unto The Breach
version: core/3.16+
content-stage deadlock still occurs in QE test case - see https://bugzilla.redhat.com/show_bug.cgi?id=2082209#c27
| 2022-09-13T15:19:30 |
||
pulp/pulpcore | 3,199 | pulp__pulpcore-3199 | [
"3192"
] | 5736e76ea6033b97385ef5b8052817b1380b30a4 | diff --git a/pulpcore/plugin/stages/content_stages.py b/pulpcore/plugin/stages/content_stages.py
--- a/pulpcore/plugin/stages/content_stages.py
+++ b/pulpcore/plugin/stages/content_stages.py
@@ -183,7 +183,10 @@ def process_batch():
.order_by("pulp_id")
.select_for_update()
)
- len(subq.values_list())
+ # NOTE: it might look like you can "safely" make this request
+ # "more efficient". You'd be wrong, and would only be removing an
+ # ordering-guardrail preventing deadlock. Don't touch.
+ len(ContentArtifact.objects.filter(pk__in=subq).values_list())
ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
# To avoid a similar deadlock issue when calling get_or_create, we sort the
| content_stages deadlock - Once More Unto The Breach
version: core/3.16+
content-stage deadlock still occurs in QE test case - see https://bugzilla.redhat.com/show_bug.cgi?id=2082209#c27
| 2022-09-13T15:20:00 |
||
pulp/pulpcore | 3,211 | pulp__pulpcore-3211 | [
"3138"
] | da6d82064c4f096cd2a96ede5a9ba98dc78e8272 | diff --git a/pulpcore/app/serializers/status.py b/pulpcore/app/serializers/status.py
--- a/pulpcore/app/serializers/status.py
+++ b/pulpcore/app/serializers/status.py
@@ -49,6 +49,19 @@ class StorageSerializer(serializers.Serializer):
free = serializers.IntegerField(min_value=0, help_text=_("Number of free bytes"))
+class ContentSettingsSerializer(serializers.Serializer):
+ """
+ Serializer for information about content-app-settings for the pulp instance
+ """
+
+ content_origin = serializers.CharField(
+ help_text=_("The CONTENT_ORIGIN setting for this Pulp instance"),
+ )
+ content_path_prefix = serializers.CharField(
+ help_text=_("The CONTENT_PATH_PREFIX setting for this Pulp instance"),
+ )
+
+
class StatusSerializer(serializers.Serializer):
"""
Serializer for the status information of the app
@@ -82,3 +95,5 @@ class StatusSerializer(serializers.Serializer):
)
storage = StorageSerializer(required=False, help_text=_("Storage information"))
+
+ content_settings = ContentSettingsSerializer(help_text=_("Content-app settings"))
diff --git a/pulpcore/app/views/status.py b/pulpcore/app/views/status.py
--- a/pulpcore/app/views/status.py
+++ b/pulpcore/app/views/status.py
@@ -76,6 +76,11 @@ def get(self, request):
except Exception:
online_content_apps = None
+ content_settings = {
+ "content_origin": settings.CONTENT_ORIGIN,
+ "content_path_prefix": settings.CONTENT_PATH_PREFIX,
+ }
+
data = {
"versions": versions,
"online_workers": online_workers,
@@ -83,6 +88,7 @@ def get(self, request):
"database_connection": db_status,
"redis_connection": redis_status,
"storage": _disk_usage(),
+ "content_settings": content_settings,
}
context = {"request": request}
| diff --git a/pulpcore/tests/functional/api/test_status.py b/pulpcore/tests/functional/api/test_status.py
--- a/pulpcore/tests/functional/api/test_status.py
+++ b/pulpcore/tests/functional/api/test_status.py
@@ -43,7 +43,22 @@
"free": {"type": "integer"},
},
},
+ "content_settings": {
+ "type": "object",
+ "properties": {
+ "content_origin": {"type": "string"},
+ "content_path_prefix": {"type": "string"},
+ },
+ "required": ["content_origin", "content_path_prefix"],
+ },
},
+ "required": [
+ "content_settings",
+ "database_connection",
+ "online_workers",
+ "storage",
+ "versions",
+ ],
}
@@ -60,9 +75,10 @@ class StatusTestCase(unittest.TestCase):
def setUp(self):
"""Make an API client."""
- self.client = api.Client(config.get_config(), api.json_handler)
+ self.cfg = config.get_config()
+ self.client = api.Client(self.cfg, api.json_handler)
self.status_response = STATUS
- cli_client = cli.Client(config.get_config())
+ cli_client = cli.Client(self.cfg)
self.storage = utils.get_pulp_setting(cli_client, "DEFAULT_FILE_STORAGE")
if self.storage != "pulpcore.app.models.storage.FileSystem":
@@ -113,6 +129,10 @@ def verify_get_response(self, status):
else:
warnings.warn("Could not connect to the Redis server")
+ self.assertIsNotNone(status["content_settings"])
+ self.assertIsNotNone(status["content_settings"]["content_origin"])
+ self.assertIsNotNone(status["content_settings"]["content_path_prefix"])
+
@override_settings(CACHE_ENABLED=False)
def verify_get_response_without_redis(self, status):
"""Verify the response to an HTTP GET call when Redis is not used.
| Add the URL of `content-app` to Pulp's status endpoint
Users who execute `pulp status` should see the URL of the content app where all resources are served by a distribution. This might be useful for determining the URL of the Pulp Registry when using the container plugin.
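With the patch above, the status payload gains a stanza along these lines (example values hypothetical):
```json
{
  "content_settings": {
    "content_origin": "https://pulp.example.com",
    "content_path_prefix": "/pulp/content/"
  }
}
```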
| @lubosmj what do you think about a stanza like this: `"content_path": "http://localhost:5001/pulp/content/"`? | 2022-09-15T15:09:55 |
pulp/pulpcore | 3,224 | pulp__pulpcore-3224 | [
"3213"
] | 3e6d68e502f5b17957339ae0f005783d9d7049e6 | diff --git a/pulpcore/app/tasks/telemetry.py b/pulpcore/app/tasks/telemetry.py
--- a/pulpcore/app/tasks/telemetry.py
+++ b/pulpcore/app/tasks/telemetry.py
@@ -23,10 +23,24 @@
def get_telemetry_posting_url():
+ """
+ Return either the dev or production telemetry FQDN url.
+
+ Production version string examples: ["3.21.1", "1.11.0"]
+ Developer version string example: ["3.20.3.dev", "2.0.0a6"]
+
+ Returns:
+ The FQDN string of either the dev or production telemetry site.
+ """
for app in pulp_plugin_configs():
- if ".dev" in app.version:
+ if not app.version.count(".") == 2: # Only two periods allowed in prod version strings
return DEV_URL
+ x, y, z = app.version.split(".")
+ for item in [x, y, z]:
+ if not item.isdigit(): # Only numbers should be in the prod version string components
+ return DEV_URL
+
return PRODUCTION_URL
| Telemetry of dev installs submitted to production site (https://analytics.pulpproject.org/)
**Version**
3.20.z and 3.21.z
**Describe the bug**
Telemetry of a non-GA component is being submitted to analytics.pulpproject.org. This is unexpected and causes summarization to stop working with this exception:
```
Traceback (most recent call last):
File "/srv/app/./manage.py", line 22, in <module>
main()
File "/srv/app/./manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "/opt/app-root/lib64/python3.9/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
utility.execute()
File "/opt/app-root/lib64/python3.9/site-packages/django/core/management/__init__.py", line 440, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/opt/app-root/lib64/python3.9/site-packages/django/core/management/base.py", line 414, in run_from_argv
self.execute(*args, **cmd_options)
File "/opt/app-root/lib64/python3.9/site-packages/django/core/management/base.py", line 460, in execute
output = self.handle(*args, **options)
File "/srv/app/pulpanalytics/management/commands/summarize.py", line 112, in handle
self._handle_components(systems, summary)
File "/srv/app/pulpanalytics/management/commands/summarize.py", line 74, in _handle_components
semver_version = semver.parse(component.version)
File "/opt/app-root/lib64/python3.9/site-packages/semver.py", line 159, in wrapper
return func(*args, **kwargs)
File "/opt/app-root/lib64/python3.9/site-packages/semver.py", line 190, in parse
return VersionInfo.parse(version).to_dict()
File "/opt/app-root/lib64/python3.9/site-packages/semver.py", line 726, in parse
raise ValueError("%s is not valid SemVer string" % version)
ValueError: 2.0.0a6 is not valid SemVer string
```
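A standalone sketch of the version gate the patch introduces (helper name assumed):
```python
def is_release_version(version: str) -> bool:
    # Production telemetry only for strictly numeric "x.y.z" versions.
    parts = version.split(".")
    return len(parts) == 3 and all(p.isdigit() for p in parts)

assert is_release_version("3.21.1")
assert not is_release_version("3.20.3.dev")
assert not is_release_version("2.0.0a6")
```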
| @pulp/core I added prio-list to this for two reasons. Please let me know if you'd like me to do something differently.
* The production telemetry site is incorrectly receiving data from developer installations, which is a significant data quality problem.
* There are no other unassigned issues with prio-list. | 2022-09-19T19:57:43 |
|
pulp/pulpcore | 3,225 | pulp__pulpcore-3225 | [
"3213"
] | f785fbb9dc1afb2831c2ab6f24b8b4c279c72823 | diff --git a/pulpcore/app/tasks/telemetry.py b/pulpcore/app/tasks/telemetry.py
--- a/pulpcore/app/tasks/telemetry.py
+++ b/pulpcore/app/tasks/telemetry.py
@@ -23,10 +23,24 @@
def get_telemetry_posting_url():
+ """
+ Return either the dev or production telemetry FQDN url.
+
+ Production version string examples: ["3.21.1", "1.11.0"]
+ Developer version string example: ["3.20.3.dev", "2.0.0a6"]
+
+ Returns:
+ The FQDN string of either the dev or production telemetry site.
+ """
for app in pulp_plugin_configs():
- if ".dev" in app.version:
+ if not app.version.count(".") == 2: # Only two periods allowed in prod version strings
return DEV_URL
+ x, y, z = app.version.split(".")
+ for item in [x, y, z]:
+ if not item.isdigit(): # Only numbers should be in the prod version string components
+ return DEV_URL
+
return PRODUCTION_URL
| Telemetry of dev installs submitted to production site (https://analytics.pulpproject.org/)
**Version**
3.20.z and 3.21.z
**Describe the bug**
Telemetry of a non-GA component is being submitted to analytics.pulpproject.org. This is unexpected and causes summarization to stop working with this exception:
```
Traceback (most recent call last):
File "/srv/app/./manage.py", line 22, in <module>
main()
File "/srv/app/./manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "/opt/app-root/lib64/python3.9/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
utility.execute()
File "/opt/app-root/lib64/python3.9/site-packages/django/core/management/__init__.py", line 440, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/opt/app-root/lib64/python3.9/site-packages/django/core/management/base.py", line 414, in run_from_argv
self.execute(*args, **cmd_options)
File "/opt/app-root/lib64/python3.9/site-packages/django/core/management/base.py", line 460, in execute
output = self.handle(*args, **options)
File "/srv/app/pulpanalytics/management/commands/summarize.py", line 112, in handle
self._handle_components(systems, summary)
File "/srv/app/pulpanalytics/management/commands/summarize.py", line 74, in _handle_components
semver_version = semver.parse(component.version)
File "/opt/app-root/lib64/python3.9/site-packages/semver.py", line 159, in wrapper
return func(*args, **kwargs)
File "/opt/app-root/lib64/python3.9/site-packages/semver.py", line 190, in parse
return VersionInfo.parse(version).to_dict()
File "/opt/app-root/lib64/python3.9/site-packages/semver.py", line 726, in parse
raise ValueError("%s is not valid SemVer string" % version)
ValueError: 2.0.0a6 is not valid SemVer string
```
| @pulp/core I added prio-list to this for two reasons. Please let me know if you'd like me to do something differently.
* The production telemetry site is incorrectly receiving data from developer installations, which is a significant data quality problem.
* There are no other unassigned issues with prio-list. | 2022-09-19T23:38:34 |
|
pulp/pulpcore | 3,233 | pulp__pulpcore-3233 | [
"3213"
] | 677687f3ac236562834d2fc38b2e9ed62aa97623 | diff --git a/pulpcore/app/util.py b/pulpcore/app/util.py
--- a/pulpcore/app/util.py
+++ b/pulpcore/app/util.py
@@ -211,10 +211,24 @@ def verify_signature(filepath, public_key, detached_data=None):
def get_telemetry_posting_url():
+ """
+ Return either the dev or production telemetry FQDN url.
+
+ Production version string examples: ["3.21.1", "1.11.0"]
+ Developer version string example: ["3.20.3.dev", "2.0.0a6"]
+
+ Returns:
+ The FQDN string of either the dev or production telemetry site.
+ """
for app in pulp_plugin_configs():
- if ".dev" in app.version:
+ if not app.version.count(".") == 2: # Only two periods allowed in prod version strings
return DEV_URL
+ x, y, z = app.version.split(".")
+ for item in [x, y, z]:
+ if not item.isdigit(): # Only numbers should be in the prod version string components
+ return DEV_URL
+
return PRODUCTION_URL
| Telemetry of dev installs submitted to production site (https://analytics.pulpproject.org/)
**Version**
3.20.z and 3.21.z
**Describe the bug**
Telemetry of a non-GA component is being submitted to analytics.pulpproject.org. This is unexpected and causes summarization to stop working with this exception:
```
Traceback (most recent call last):
File "/srv/app/./manage.py", line 22, in <module>
main()
File "/srv/app/./manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "/opt/app-root/lib64/python3.9/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
utility.execute()
File "/opt/app-root/lib64/python3.9/site-packages/django/core/management/__init__.py", line 440, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/opt/app-root/lib64/python3.9/site-packages/django/core/management/base.py", line 414, in run_from_argv
self.execute(*args, **cmd_options)
File "/opt/app-root/lib64/python3.9/site-packages/django/core/management/base.py", line 460, in execute
output = self.handle(*args, **options)
File "/srv/app/pulpanalytics/management/commands/summarize.py", line 112, in handle
self._handle_components(systems, summary)
File "/srv/app/pulpanalytics/management/commands/summarize.py", line 74, in _handle_components
semver_version = semver.parse(component.version)
File "/opt/app-root/lib64/python3.9/site-packages/semver.py", line 159, in wrapper
return func(*args, **kwargs)
File "/opt/app-root/lib64/python3.9/site-packages/semver.py", line 190, in parse
return VersionInfo.parse(version).to_dict()
File "/opt/app-root/lib64/python3.9/site-packages/semver.py", line 726, in parse
raise ValueError("%s is not valid SemVer string" % version)
ValueError: 2.0.0a6 is not valid SemVer string
```
| @pulp/core I added prio-list to this for two reasons. Please let me know if you'd like me to do something differently.
* The production telemetry site is incorrectly receiving data from developer installations, which is a significant data quality problem.
* There are no other unassigned issues with prio-list. | 2022-09-20T18:59:14 |
|
pulp/pulpcore | 3,243 | pulp__pulpcore-3243 | [
"3238"
] | 241ca8586b9909a50b2e913197d0823bc968429b | diff --git a/pulpcore/app/models/publication.py b/pulpcore/app/models/publication.py
--- a/pulpcore/app/models/publication.py
+++ b/pulpcore/app/models/publication.py
@@ -1,12 +1,14 @@
import hashlib
import os
import re
+from datetime import timedelta
from url_normalize import url_normalize
from urllib.parse import urlparse, urljoin
from aiohttp.web_exceptions import HTTPNotFound
from django.db import IntegrityError, models, transaction
+from django.utils import timezone
from django_lifecycle import hook, AFTER_UPDATE, BEFORE_DELETE
from .base import MasterModel, BaseModel
@@ -385,6 +387,8 @@ class ContentRedirectContentGuard(ContentGuard, AutoAddObjPermsMixin):
TYPE = "content_redirect"
+ EXPIRETIME = timedelta(hours=1)
+
shared_secret = models.BinaryField(max_length=32, default=_gen_secret)
def permit(self, request):
@@ -392,12 +396,15 @@ def permit(self, request):
Permit preauthenticated redirects from pulp-api.
"""
try:
+ expires = request.query["expires"]
+ if int(timezone.now().timestamp()) > int(expires):
+ raise PermissionError("Authentication expired")
signed_url = request.url
validate_token = request.query["validate_token"]
hex_salt, hex_digest = validate_token.split(":", 1)
salt = bytes.fromhex(hex_salt)
digest = bytes.fromhex(hex_digest)
- url = re.sub(r"\?validate_token=.*$", "", str(signed_url))
+ url = re.sub(r"\&validate_token=.*$", "", str(signed_url))
if not digest == self._get_digest(salt, url):
raise PermissionError("Access not authenticated")
except (KeyError, ValueError):
@@ -410,9 +417,14 @@ def preauthenticate_url(self, url, salt=None):
if not salt:
salt = _gen_secret()
hex_salt = salt.hex()
- digest = self._get_digest(salt, url).hex()
- url = url + f"?validate_token={hex_salt}:{digest}"
- return url
+ expiretime = int((timezone.now() + self.EXPIRETIME).timestamp())
+ if "?" in url:
+ separator = "&"
+ else:
+ separator = "?"
+ url_to_sign = url + separator + f"expires={expiretime}"
+ digest = self._get_digest(salt, url_to_sign).hex()
+ return url_to_sign + f"&validate_token={hex_salt}:{digest}"
def _get_digest(self, salt, url):
url_parts = urlparse(url_normalize(url))
| diff --git a/pulpcore/tests/unit/test_content_guard.py b/pulpcore/tests/unit/test_content_guard.py
--- a/pulpcore/tests/unit/test_content_guard.py
+++ b/pulpcore/tests/unit/test_content_guard.py
@@ -1,69 +1,75 @@
+import pytest
import re
-from unittest import TestCase
from unittest.mock import Mock
from pulpcore.app.models import ContentRedirectContentGuard
-class RedirectingContentGuardTestCase(TestCase):
- """Tests that the redirecting content guard can produce url to the content app."""
+def test_preauthenticate_urls():
+ """Test that the redirecting content guard can produce url to the content app."""
- def test_preauthenticate_urls(self):
- """Test that the redirecting content guard can produce url to the content app."""
+ original_url = "http://localhost:8080/pulp/content/dist/"
+ content_guard = ContentRedirectContentGuard(name="test")
+ content_guard2 = ContentRedirectContentGuard(name="test2")
+ signed_url = content_guard.preauthenticate_url(original_url)
+ signed_url2 = content_guard2.preauthenticate_url(original_url)
- original_url = "http://localhost:8080/pulp/content/dist/"
- content_guard = ContentRedirectContentGuard(name="test")
- content_guard2 = ContentRedirectContentGuard(name="test2")
- signed_url = content_guard.preauthenticate_url(original_url)
- signed_url2 = content_guard2.preauthenticate_url(original_url)
+ # analyse signed url
+ pattern = re.compile(
+ r"^(?P<url>.*)\?expires=(?P<expires>\d*)&validate_token=(?P<salt>.*):(?P<digest>.*)$"
+ )
+ url_match = pattern.match(signed_url)
+ assert bool(url_match)
+ assert url_match.group("url") == original_url
+ salt = url_match.group("salt")
+ digest = url_match.group("digest")
+ expires = url_match.group("expires")
- # analyse signed url
- pattern = re.compile(r"^(?P<url>.*)\?validate_token=(?P<salt>.*):(?P<digest>.*)$")
- url_match = pattern.match(signed_url)
- self.assertTrue(bool(url_match))
- self.assertEqual(url_match.group("url"), original_url)
- salt = url_match.group("salt")
- digest = url_match.group("digest")
+ url_match2 = pattern.match(signed_url2)
+ assert bool(url_match2)
+ assert url_match2.group("url") == original_url
+ salt2 = url_match2.group("salt")
+ digest2 = url_match2.group("digest")
- url_match2 = pattern.match(signed_url2)
- self.assertTrue(bool(url_match2))
- self.assertEqual(url_match2.group("url"), original_url)
- salt2 = url_match2.group("salt")
- digest2 = url_match2.group("digest")
+ request = Mock()
- request = Mock()
+ # Try unsigned url
+ request.url = original_url
+ request.query = {}
+ with pytest.raises(PermissionError):
+ content_guard.permit(request)
- # Try unsigned url
- request.url = original_url
- request.query = {}
- with self.assertRaises(PermissionError):
- content_guard.permit(request)
+ # Try valid url
+ request.url = signed_url
+ request.query = {"expires": expires, "validate_token": ":".join((salt, digest))}
+ content_guard.permit(request)
- # Try valid url
- request.url = signed_url
- request.query = {"validate_token": ":".join((salt, digest))}
- content_guard.permit(request)
+ # Try changed hostname
+ request.url = signed_url.replace("localhost", "localnest")
+ request.query = {"expires": expires, "validate_token": ":".join((salt, digest))}
+ content_guard.permit(request)
- # Try changed hostname
- request.url = signed_url.replace("localhost", "localnest")
- request.query = {"validate_token": ":".join((salt, digest))}
+ # Try changed distribution
+ request.url = signed_url.replace("dist", "publication")
+ request.query = {"expires": expires, "validate_token": ":".join((salt, digest))}
+ with pytest.raises(PermissionError):
content_guard.permit(request)
- # Try changed distribution
- request.url = signed_url.replace("dist", "publication")
- request.query = {"validate_token": ":".join((salt, digest))}
- with self.assertRaises(PermissionError):
- content_guard.permit(request)
+ # Try tempered salt
+ request.url = signed_url.replace(salt, salt2)
+ request.query = {"expires": expires, "validate_token": ":".join((salt2, digest))}
+ with pytest.raises(PermissionError):
+ content_guard.permit(request)
+ # Try tampered salt
- request.url = signed_url.replace(salt, salt2)
- request.query = {"validate_token": ":".join((salt2, digest))}
- with self.assertRaises(PermissionError):
- content_guard.permit(request)
+ # Try tempered digest
+ request.url = signed_url.replace(digest, digest2)
+ request.query = {"expires": expires, "validate_token": ":".join((salt, digest2))}
+ with pytest.raises(PermissionError):
+ content_guard.permit(request)
+ # Try tampered digest
- request.url = signed_url.replace(digest, digest2)
- request.query = {"validate_token": ":".join((salt, digest2))}
- with self.assertRaises(PermissionError):
- content_guard.permit(request)
+ # Try tampered expiry
+ request.url = signed_url.replace(digest, digest2)
+ request.query = {"expires": str(int(expires) + 1), "validate_token": ":".join((salt, digest2))}
+ with pytest.raises(PermissionError):
+ content_guard.permit(request)
| Preauthenticated URLs should expire
The `ContentRedirectContentGuard` is able to emit preauthenticated URLs. They should have an expiry.
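With the patch, a preauthenticated URL carries both parameters, e.g. (values illustrative; the expiry is a Unix timestamp one hour out):
```
http://localhost:8080/pulp/content/dist/?expires=1664000000&validate_token=<hex-salt>:<hex-digest>
```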
| 2022-09-23T06:54:27 |
|
pulp/pulpcore | 3,245 | pulp__pulpcore-3245 | [
"3244"
] | 16650afc2f53203f52baf0d76a920fe4dbe56b57 | diff --git a/pulpcore/app/util.py b/pulpcore/app/util.py
--- a/pulpcore/app/util.py
+++ b/pulpcore/app/util.py
@@ -276,23 +276,24 @@ def get_artifact_url(artifact, headers=None, http_method=None):
or private cloud storage.
"""
artifact_file = artifact.file
+ content_disposition = f"attachment;filename={artifact.pk}"
if (
settings.DEFAULT_FILE_STORAGE == "pulpcore.app.models.storage.FileSystem"
or not settings.REDIRECT_TO_OBJECT_STORAGE
):
return _artifact_serving_distribution().artifact_url(artifact)
elif settings.DEFAULT_FILE_STORAGE == "storages.backends.s3boto3.S3Boto3Storage":
- parameters = {"ResponseContentDisposition": f"attachment%3Bfilename={artifact_file.name}"}
+ parameters = {"ResponseContentDisposition": content_disposition}
if headers and headers.get("Content-Type"):
parameters["ResponseContentType"] = headers.get("Content-Type")
url = artifact_file.storage.url(
artifact_file.name, parameters=parameters, http_method=http_method
)
elif settings.DEFAULT_FILE_STORAGE == "storages.backends.azure_storage.AzureStorage":
- parameters = {"content_disposition": f"attachment%3Bfilename={artifact_file.name}"}
+ parameters = {"content_disposition": content_disposition}
if headers and headers.get("Content-Type"):
parameters["content_type"] = headers.get("Content-Type")
- url = artifact_file.storage.url(artifact_file.name)
+ url = artifact_file.storage.url(artifact_file.name, parameters=parameters)
else:
raise NotImplementedError(
f"The value settings.DEFAULT_FILE_STORAGE={settings.DEFAULT_FILE_STORAGE} "
diff --git a/pulpcore/content/handler.py b/pulpcore/content/handler.py
--- a/pulpcore/content/handler.py
+++ b/pulpcore/content/handler.py
@@ -810,6 +810,7 @@ async def _serve_content_artifact(self, content_artifact, headers, request):
"""
artifact_file = content_artifact.artifact.file
artifact_name = artifact_file.name
+ content_disposition = f"attachment;filename={content_artifact.relative_path}"
if settings.DEFAULT_FILE_STORAGE == "pulpcore.app.models.storage.FileSystem":
path = os.path.join(settings.MEDIA_ROOT, artifact_name)
@@ -819,19 +820,17 @@ async def _serve_content_artifact(self, content_artifact, headers, request):
elif not settings.REDIRECT_TO_OBJECT_STORAGE:
return ArtifactResponse(content_artifact.artifact, headers=headers)
elif settings.DEFAULT_FILE_STORAGE == "storages.backends.s3boto3.S3Boto3Storage":
- content_disposition = f"attachment%3Bfilename={content_artifact.relative_path}"
parameters = {"ResponseContentDisposition": content_disposition}
if headers.get("Content-Type"):
parameters["ResponseContentType"] = headers.get("Content-Type")
url = URL(
artifact_file.storage.url(
- artifact_file.name, parameters=parameters, http_method=request.method
+ artifact_name, parameters=parameters, http_method=request.method
),
encoded=True,
)
raise HTTPFound(url)
elif settings.DEFAULT_FILE_STORAGE == "storages.backends.azure_storage.AzureStorage":
- content_disposition = f"attachment%3Bfilename={artifact_name}"
parameters = {"content_disposition": content_disposition}
if headers.get("Content-Type"):
parameters["content_type"] = headers.get("Content-Type")
| diff --git a/pulpcore/tests/functional/api/test_artifact_distribution.py b/pulpcore/tests/functional/api/test_artifact_distribution.py
--- a/pulpcore/tests/functional/api/test_artifact_distribution.py
+++ b/pulpcore/tests/functional/api/test_artifact_distribution.py
@@ -1,8 +1,17 @@
import requests
+
+from django.conf import settings
+
from hashlib import sha256
from pulp_smash import utils
+OBJECT_STORAGES = (
+ "storages.backends.s3boto3.S3Boto3Storage",
+ "storages.backends.azure_storage.AzureStorage",
+)
+
+
def test_artifact_distribution(cli_client, random_artifact):
artifact_uuid = random_artifact.pulp_href.split("/")[-2]
@@ -18,3 +27,8 @@ def test_artifact_distribution(cli_client, random_artifact):
hasher = sha256()
hasher.update(response.content)
assert hasher.hexdigest() == random_artifact.sha256
+ if settings.DEFAULT_FILE_STORAGE in OBJECT_STORAGES:
+ content_disposition = response.headers.get("Content-Disposition")
+ assert content_disposition is not None
+ filename = artifact_uuid
+ assert f"attachment;filename={filename}" == content_disposition
| Do not expose artifact digest via content-disposition header using Azure backend
**Version**
main and older
**Describe the bug**
When retrieving content from the Azure backend, the `Content-Disposition` header contains the artifact's digest in the filename.
**To Reproduce**
```
$ curl --head https://pulp3-source-fedora36.puffy.example.com/pulp/content/lukasku/Packages/t/tada.rpm -L
HTTP/1.1 302 Found
Server: nginx
Date: Fri, 23 Sep 2022 10:18:43 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 10
Connection: keep-alive
Location: https://pulp-azurite:10000/devstoreaccount1/pulp-test/pulp3/artifact/de/b341b9743f09c938277d645ec9f50d90143734e695cf58d14191c279ff6128?se=2022-09-23T10%3A20%3A43Z&sp=r&sv=2021-08-06&sr=b&rscd=attachment%253Bfilename%3Dartifact/de/b341b9743f09c938277d645ec9f50d90143734e695cf58d14191c279ff6128&sig=TYou/u4cjMat3kab1kCkynRruhGiYJrBDatXIB0cmaQ%3D
HTTP/1.1 200 OK
Server: Azurite-Blob/3.19.0
last-modified: Fri, 23 Sep 2022 10:10:38 GMT
x-ms-creation-time: Fri, 23 Sep 2022 10:10:38 GMT
x-ms-blob-type: BlockBlob
x-ms-lease-state: available
x-ms-lease-status: unlocked
content-length: 2494
content-type: application/octet-stream
etag: "0x21B8870A8293B60"
content-md5: PJ1thLs2HRCKNB4OsNW92A==
content-disposition: attachment%3Bfilename=artifact/de/b341b9743f09c938277d645ec9f50d90143734e695cf58d14191c279ff6128
x-ms-request-id: 6b3e8e1f-abb0-4902-8fc5-43cf81a2cd86
x-ms-version: 2021-10-04
date: Fri, 23 Sep 2022 10:18:43 GMT
accept-ranges: bytes
x-ms-server-encrypted: true
x-ms-access-tier: Hot
x-ms-access-tier-inferred: true
x-ms-access-tier-change-time: Fri, 23 Sep 2022 10:10:38 GMT
Connection: keep-alive
Keep-Alive: timeout=5
```
**Expected behavior**
The filename is user-friendly and readable, and does not expose the artifact digest.
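Roughly, the patch changes the header from the digest-based form in the repro above to the content artifact's relative path (illustrative; digest truncated):
```
Before: content-disposition: attachment%3Bfilename=artifact/de/b341b9...6128
After:  Content-Disposition: attachment;filename=Packages/t/tada.rpm
```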
| 2022-09-23T12:03:24 |
|
pulp/pulpcore | 3,258 | pulp__pulpcore-3258 | [
"3016"
] | f17ec84156d81ae92226030922f3d352fc320abc | diff --git a/pulpcore/app/migrations/0096_alter_task_logging_cid.py b/pulpcore/app/migrations/0096_alter_task_logging_cid.py
new file mode 100644
--- /dev/null
+++ b/pulpcore/app/migrations/0096_alter_task_logging_cid.py
@@ -0,0 +1,31 @@
+# Generated by Django 3.2.15 on 2022-10-18 09:51
+
+from django.db import migrations, models
+from django_guid.utils import generate_guid
+
+
+def migrate_empty_string_logging_cid(apps, schema_editor):
+
+ Task = apps.get_model("core", "Task")
+
+ for task in Task.objects.filter(logging_cid=""):
+ task.logging_cid = generate_guid()
+ task.save()
+
+
+class Migration(migrations.Migration):
+
+ dependencies = [
+ ('core', '0095_artifactdistribution'),
+ ]
+
+ operations = [
+ migrations.RunPython(
+ migrate_empty_string_logging_cid, reverse_code=migrations.RunPython.noop
+ ),
+ migrations.AlterField(
+ model_name='task',
+ name='logging_cid',
+ field=models.TextField(db_index=True),
+ ),
+ ]
diff --git a/pulpcore/app/models/task.py b/pulpcore/app/models/task.py
--- a/pulpcore/app/models/task.py
+++ b/pulpcore/app/models/task.py
@@ -176,7 +176,7 @@ class Task(BaseModel, AutoAddObjPermsMixin):
state = models.TextField(choices=TASK_CHOICES)
name = models.TextField()
- logging_cid = models.TextField(db_index=True, default="")
+ logging_cid = models.TextField(db_index=True)
started_at = models.DateTimeField(null=True)
finished_at = models.DateTimeField(null=True)
diff --git a/pulpcore/tasking/tasks.py b/pulpcore/tasking/tasks.py
--- a/pulpcore/tasking/tasks.py
+++ b/pulpcore/tasking/tasks.py
@@ -4,7 +4,8 @@
from django.db import transaction, connection
from django.db.models import Model, Q
from django.utils import timezone
-from django_guid import get_guid
+from django_guid import get_guid, set_guid
+from django_guid.utils import generate_guid
from pulpcore.app.models import Task, TaskSchedule
from pulpcore.app.util import get_url
@@ -87,7 +88,7 @@ def dispatch(
with transaction.atomic():
task = Task.objects.create(
state=TASK_STATES.WAITING,
- logging_cid=(get_guid() or ""),
+ logging_cid=(get_guid()),
task_group=task_group,
name=func,
args=args,
@@ -116,6 +117,7 @@ def dispatch_scheduled_tasks():
while task_schedule.next_dispatch < now:
# Do not schedule in the past
task_schedule.next_dispatch += task_schedule.dispatch_interval
+ set_guid(generate_guid())
with transaction.atomic():
task_schedule.last_task = dispatch(
task_schedule.task_name,
| logging_cid in task response is an empty string
Sometimes the logging_cid in the task response is an empty string:
```
{
"child_tasks": [],
"created_resources": [],
"error": null,
"finished_at": "2022-07-25T21:32:09.435330Z",
"logging_cid": "",
"name": "pulpcore.app.tasks.telemetry.post_telemetry",
"parent_task": null,
"progress_reports": [],
"pulp_created": "2022-07-25T21:32:09.383372Z",
"pulp_href": "/pulp/api/v3/tasks/7f4c54d6-fcda-4006-9da1-3c6910a18171/",
"reserved_resources_record": [],
"started_at": "2022-07-25T21:32:09.419785Z",
"state": "completed",
"task_group": null,
"worker": "/pulp/api/v3/workers/c609a0d5-45cc-4756-af3c-4b50cd7520b1/"
}
```
Since logging_cid represents a uuid, I'd expect it to be null or a uuid. I don't think an empty string makes sense. It also makes our code that parses out the uuid more complex, because we have to handle the possibility of empty strings.
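A minimal sketch of how the patch guarantees a cid for scheduled dispatches (the calls are taken from the diff above):
```python
from django_guid import get_guid, set_guid
from django_guid.utils import generate_guid

# Scheduled dispatches run outside a request, so no correlation id is set;
# seed one explicitly before dispatching:
set_guid(generate_guid())
assert get_guid()  # no longer falls back to ""
```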
| 2022-09-28T11:51:36 |
||
pulp/pulpcore | 3,267 | pulp__pulpcore-3267 | [
"3263"
] | a5ad0dc4a542bac90ceb6564a54c5aff18a279e0 | diff --git a/pulpcore/app/tasks/purge.py b/pulpcore/app/tasks/purge.py
--- a/pulpcore/app/tasks/purge.py
+++ b/pulpcore/app/tasks/purge.py
@@ -1,4 +1,5 @@
from gettext import gettext as _
+
from django_currentuser.middleware import get_current_authenticated_user
from pulpcore.app.models import (
ProgressReport,
@@ -71,7 +72,6 @@ def purge(finished_before, states):
# Tasks, prior to the specified date, in the specified state, owned by the current-user
tasks_qs = Task.objects.filter(finished_at__lt=finished_before, state__in=states)
candidate_qs = get_objects_for_user(current_user, "core.delete_task", qs=tasks_qs)
- delete_qs = get_objects_for_user(current_user, "core.delete_task", qs=tasks_qs[:DELETE_LIMIT])
# Progress bar reporting total-units
totals_pb = ProgressReport(
message=_("Purged task-related-objects total"),
@@ -101,7 +101,7 @@ def purge(finished_before, states):
# Until our query returns "No tasks deleted", add results into totals and Do It Again
while units_deleted > 0:
units_deleted, details = Task.objects.filter(
- pk__in=delete_qs.values_list("pk", flat=True)
+ pk__in=candidate_qs[:DELETE_LIMIT].values_list("pk", flat=True)
).delete()
_details_reporting(details_reports, details, totals_pb)
| Cannot purge tasks with a new user
A runtime error is raised when a newly created, non-admin user tries to purge tasks. Everything works correctly for an admin user.
These tests do not properly test the workflow: https://github.com/pulp/pulp_file/blob/189da4a6902a4339a8b7f47ea89ae53692b7c81a/pulp_file/tests/functional/api/from_pulpcore/test_purge.py (We are sending requests with admin credentials: https://github.com/pulp/pulp_file/blob/189da4a6902a4339a8b7f47ea89ae53692b7c81a/pulp_file/tests/functional/api/from_pulpcore/test_purge.py#L264, https://github.com/pulp/pulp_file/blob/189da4a6902a4339a8b7f47ea89ae53692b7c81a/pulp_file/tests/functional/api/from_pulpcore/test_purge.py#L296).
```
(pulp) [vagrant@pulp3-source-fedora35 backup]$ pulp user create --username test --password characters
{
"pulp_href": "/pulp/api/v3/users/2/",
"id": 2,
"username": "test",
"first_name": "",
"last_name": "",
"email": "",
"is_staff": false,
"is_active": true,
"date_joined": "2022-09-29T18:13:59.604621Z",
"groups": []
}
(pulp) [vagrant@pulp3-source-fedora35 backup]$ pulp --username test --password characters task purge
Started background task /pulp/api/v3/tasks/16d5b9c8-92aa-4e2d-a381-96195c199f5f/
Error: Task /pulp/api/v3/tasks/16d5b9c8-92aa-4e2d-a381-96195c199f5f/ failed: 'Cannot filter a query once a slice has been taken.'
(pulp) [vagrant@pulp3-source-fedora35 backup]$ http :24817/pulp/api/v3/tasks/16d5b9c8-92aa-4e2d-a381-96195c199f5f/
HTTP/1.1 200 OK
Access-Control-Expose-Headers: Correlation-ID
Allow: GET, PATCH, DELETE, HEAD, OPTIONS
Connection: close
Content-Length: 1595
Content-Type: application/json
Correlation-ID: 691f71bf781f4cca9bbeec7ac22ea9eb
Date: Thu, 29 Sep 2022 18:14:16 GMT
Referrer-Policy: same-origin
Server: gunicorn
Vary: Accept, Cookie
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
{
"child_tasks": [],
"created_resources": [],
"error": {
"description": "Cannot filter a query once a slice has been taken.",
"traceback": " File \"/home/vagrant/devel/pulpcore/pulpcore/tasking/pulpcore_worker.py\", line 452, in _perform_task\n result = func(*args, **kwargs)\n File \"/home/vagrant/devel/pulpcore/pulpcore/app/tasks/purge.py\", line 74, in purge\n delete_qs = get_objects_for_user(current_user, \"core.delete_task\", qs=tasks_qs[:DELETE_LIMIT])\n File \"/home/vagrant/devel/pulpcore/pulpcore/app/role_util.py\", line 125, in get_objects_for_user\n new_qs |= get_objects_for_user_roles(\n File \"/home/vagrant/devel/pulpcore/pulpcore/app/role_util.py\", line 108, in get_objects_for_user_roles\n return qs.annotate(pk_str=Cast(\"pk\", output_field=CharField())).filter(\n File \"/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/models/query.py\", line 941, in filter\n return self._filter_or_exclude(False, args, kwargs)\n File \"/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/models/query.py\", line 953, in _filter_or_exclude\n assert not self.query.is_sliced, \\\n"
},
"finished_at": "2022-09-29T18:14:05.194220Z",
"logging_cid": "25aa82231d8b44939f07d3110fb82f8f",
"name": "pulpcore.app.tasks.purge.purge",
"parent_task": null,
"progress_reports": [],
"pulp_created": "2022-09-29T18:14:05.119070Z",
"pulp_href": "/pulp/api/v3/tasks/16d5b9c8-92aa-4e2d-a381-96195c199f5f/",
"reserved_resources_record": [],
"started_at": "2022-09-29T18:14:05.156556Z",
"state": "failed",
"task_group": null,
"worker": "/pulp/api/v3/workers/d4dfe511-5c98-40de-954b-b594c5b42d97/"
}
(pulp) [vagrant@pulp3-source-fedora35 backup]$ pulp task purge
Started background task /pulp/api/v3/tasks/91423959-cfe1-4b33-b0bf-4b92365fbd32/
Done.
```
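For reference, a minimal sketch of the Django rule behind the traceback, assuming any model queryset; the fix above simply applies the `DELETE_LIMIT` slice after all filtering:
```python
sliced = Task.objects.all()[:1000]   # slice taken first
sliced.filter(state="completed")     # AssertionError: Cannot filter a query
                                     # once a slice has been taken.

# Working order, mirroring the patch: filter first, slice last.
candidate_qs = Task.objects.filter(state="completed")
Task.objects.filter(
    pk__in=candidate_qs[:1000].values_list("pk", flat=True)
).delete()
```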
| 2022-09-30T12:24:46 |
||
pulp/pulpcore | 3,273 | pulp__pulpcore-3273 | [
"3244"
] | 9a4b56e5b6b3c757a505c56d06065a1f78af478c | diff --git a/pulpcore/content/handler.py b/pulpcore/content/handler.py
--- a/pulpcore/content/handler.py
+++ b/pulpcore/content/handler.py
@@ -806,6 +806,7 @@ async def _serve_content_artifact(self, content_artifact, headers, request):
"""
artifact_file = content_artifact.artifact.file
artifact_name = artifact_file.name
+ content_disposition = f"attachment;filename={content_artifact.relative_path}"
if settings.DEFAULT_FILE_STORAGE == "pulpcore.app.models.storage.FileSystem":
path = os.path.join(settings.MEDIA_ROOT, artifact_name)
@@ -815,19 +816,17 @@ async def _serve_content_artifact(self, content_artifact, headers, request):
elif not settings.REDIRECT_TO_OBJECT_STORAGE:
return ArtifactResponse(content_artifact.artifact, headers=headers)
elif settings.DEFAULT_FILE_STORAGE == "storages.backends.s3boto3.S3Boto3Storage":
- content_disposition = f"attachment%3Bfilename={content_artifact.relative_path}"
parameters = {"ResponseContentDisposition": content_disposition}
if headers.get("Content-Type"):
parameters["ResponseContentType"] = headers.get("Content-Type")
url = URL(
artifact_file.storage.url(
- artifact_file.name, parameters=parameters, http_method=request.method
+ artifact_name, parameters=parameters, http_method=request.method
),
encoded=True,
)
raise HTTPFound(url)
elif settings.DEFAULT_FILE_STORAGE == "storages.backends.azure_storage.AzureStorage":
- content_disposition = f"attachment%3Bfilename={artifact_name}"
parameters = {"content_disposition": content_disposition}
if headers.get("Content-Type"):
parameters["content_type"] = headers.get("Content-Type")
| Do not expose artifact digest via content-disposition header using Azure backend
**Version**
main and older
**Describe the bug**
When retrieving content from the Azure backend, the `Content-Disposition` header contains the artifact's digest in the filename.
**To Reproduce**
```
$ curl --head https://pulp3-source-fedora36.puffy.example.com/pulp/content/lukasku/Packages/t/tada.rpm -L
HTTP/1.1 302 Found
Server: nginx
Date: Fri, 23 Sep 2022 10:18:43 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 10
Connection: keep-alive
Location: https://pulp-azurite:10000/devstoreaccount1/pulp-test/pulp3/artifact/de/b341b9743f09c938277d645ec9f50d90143734e695cf58d14191c279ff6128?se=2022-09-23T10%3A20%3A43Z&sp=r&sv=2021-08-06&sr=b&rscd=attachment%253Bfilename%3Dartifact/de/b341b9743f09c938277d645ec9f50d90143734e695cf58d14191c279ff6128&sig=TYou/u4cjMat3kab1kCkynRruhGiYJrBDatXIB0cmaQ%3D
HTTP/1.1 200 OK
Server: Azurite-Blob/3.19.0
last-modified: Fri, 23 Sep 2022 10:10:38 GMT
x-ms-creation-time: Fri, 23 Sep 2022 10:10:38 GMT
x-ms-blob-type: BlockBlob
x-ms-lease-state: available
x-ms-lease-status: unlocked
content-length: 2494
content-type: application/octet-stream
etag: "0x21B8870A8293B60"
content-md5: PJ1thLs2HRCKNB4OsNW92A==
content-disposition: attachment%3Bfilename=artifact/de/b341b9743f09c938277d645ec9f50d90143734e695cf58d14191c279ff6128
x-ms-request-id: 6b3e8e1f-abb0-4902-8fc5-43cf81a2cd86
x-ms-version: 2021-10-04
date: Fri, 23 Sep 2022 10:18:43 GMT
accept-ranges: bytes
x-ms-server-encrypted: true
x-ms-access-tier: Hot
x-ms-access-tier-inferred: true
x-ms-access-tier-change-time: Fri, 23 Sep 2022 10:10:38 GMT
Connection: keep-alive
Keep-Alive: timeout=5
```
**Expected behavior**
The filename is user-friendly and readable and does not expose the artifact digest.
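A minimal sketch of what that could look like (and what the diff above implements): derive the header from the content artifact's user-facing relative path instead of the digest-named storage path:
```python
# `content_artifact` is assumed to be a pulpcore ContentArtifact.
content_disposition = f"attachment;filename={content_artifact.relative_path}"
# e.g. "attachment;filename=Packages/t/tada.rpm" instead of
# "attachment%3Bfilename=artifact/de/b341b9743f09c938..."
```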
| 2022-10-03T15:56:12 |
||
pulp/pulpcore | 3,282 | pulp__pulpcore-3282 | [
"3280"
] | d2f4762f28792e925d42966c3c52dcd22743c106 | diff --git a/pulpcore/app/viewsets/custom_filters.py b/pulpcore/app/viewsets/custom_filters.py
--- a/pulpcore/app/viewsets/custom_filters.py
+++ b/pulpcore/app/viewsets/custom_filters.py
@@ -22,7 +22,58 @@
class ReservedResourcesFilter(Filter):
"""
- Enables a user to filter tasks by a reserved resource href
+ Enables a user to filter tasks by a reserved resource href.
+ """
+
+ def __init__(self, *args, exclusive=True, shared=True, **kwargs):
+ self.exclusive = exclusive
+ self.shared = shared
+ assert (
+ exclusive or shared
+ ), "ReservedResourceFilter must have either exclusive or shared set."
+ super().__init__(*args, **kwargs)
+
+ def filter(self, qs, value):
+ """
+ Callback to filter the query set based on the provided filter value.
+
+ Args:
+ qs (django.db.models.query.QuerySet): The Queryset to filter
+ value (string|List[str]): href to a reference to a reserved resource or a list thereof
+
+ Returns:
+ django.db.models.query.QuerySet: Queryset filtered by the reserved resource
+ """
+
+ if value is not None:
+ if isinstance(value, str):
+ value = [value]
+ if self.exclusive:
+ if self.shared:
+ for item in value:
+ qs = qs.filter(reserved_resources_record__overlap=[item, "shared:" + item])
+ else:
+ qs = qs.filter(reserved_resources_record__contains=value)
+ else: # self.shared
+ qs = qs.filter(
+ reserved_resources_record__contains=["shared:" + item for item in value]
+ )
+
+ return qs
+
+
+class ReservedResourcesInFilter(BaseInFilter, ReservedResourcesFilter):
+ """
+ Enables a user to filter tasks by a list of reserved resource hrefs.
+ """
+
+
+class ReservedResourcesRecordFilter(Filter):
+ """
+ Enables a user to filter tasks by a reserved resource href.
+
+ Warning: This filter is badly documented and not fully functional, but we need to keep it for
+ compatibility reasons. Use ``ReservedResourcesFilter`` instead.
"""
def filter(self, qs, value):
diff --git a/pulpcore/app/viewsets/task.py b/pulpcore/app/viewsets/task.py
--- a/pulpcore/app/viewsets/task.py
+++ b/pulpcore/app/viewsets/task.py
@@ -26,6 +26,8 @@
HyperlinkRelatedFilter,
IsoDateTimeFilter,
ReservedResourcesFilter,
+ ReservedResourcesInFilter,
+ ReservedResourcesRecordFilter,
CreatedResourcesFilter,
)
from pulpcore.constants import TASK_INCOMPLETE_STATES, TASK_STATES, TASK_CHOICES
@@ -43,8 +45,17 @@ class TaskFilter(BaseFilterSet):
parent_task = HyperlinkRelatedFilter()
child_tasks = HyperlinkRelatedFilter()
task_group = HyperlinkRelatedFilter()
- reserved_resources_record = ReservedResourcesFilter()
+ # This filter is deprecated and badly documented, but we need to keep it for compatibility
+ # reasons
+ reserved_resources_record = ReservedResourcesRecordFilter()
created_resources = CreatedResourcesFilter()
+ # Non model field filters
+ reserved_resources = ReservedResourcesFilter(exclusive=True, shared=True)
+ reserved_resources__in = ReservedResourcesInFilter(exclusive=True, shared=True)
+ exclusive_resources = ReservedResourcesFilter(exclusive=True, shared=False)
+ exclusive_resources__in = ReservedResourcesInFilter(exclusive=True, shared=False)
+ shared_resources = ReservedResourcesFilter(exclusive=False, shared=True)
+ shared_resources__in = ReservedResourcesInFilter(exclusive=False, shared=True)
class Meta:
model = Task
| task filter reserved_resources_record does not work as expected
The task filter named `reserved_resources_record` is declared to be of type `array` in the OpenAPI spec, but it only filters by the last entry in the list. Instead it should filter tasks that lock all of the given resources.
Additionally, the filter is not aware of the `shared:` prefix.
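A minimal sketch of the intended semantics, assuming `reserved_resources_record` is a PostgreSQL ArrayField on `Task`; it ANDs across all given values and matches either lock flavor per resource:
```python
def filter_reserved(qs, hrefs):
    for href in hrefs:
        # Each resource must appear, either exclusively or shared.
        qs = qs.filter(reserved_resources_record__overlap=[href, "shared:" + href])
    return qs
```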
| 2022-10-05T13:28:14 |
||
pulp/pulpcore | 3,286 | pulp__pulpcore-3286 | [
"3284"
] | 46019850fd7917bae3573552694e85c8d1733209 | diff --git a/pulpcore/plugin/stages/content_stages.py b/pulpcore/plugin/stages/content_stages.py
--- a/pulpcore/plugin/stages/content_stages.py
+++ b/pulpcore/plugin/stages/content_stages.py
@@ -148,53 +148,51 @@ def process_batch():
)
key = (d_content.content.pk, d_artifact.relative_path)
to_update_ca_artifact[key] = d_artifact.artifact
+
# Query db once and update each object in memory for bulk_update call
for content_artifact in to_update_ca_query.iterator():
key = (content_artifact.content_id, content_artifact.relative_path)
- # Maybe remove dict elements after to reduce memory?
- content_artifact.artifact = to_update_ca_artifact[key]
- to_update_ca_bulk.append(content_artifact)
+ # Same content/relpath/artifact-sha means no change to the
+ # contentartifact, ignore. This prevents us from colliding with any
+ # concurrent syncs with overlapping identical content. "Someone" updated
+ # the contentartifacts to match what we would be doing, so we don't need
+ # to do an (unnecessary) db-update, which was opening us up for a variety
+ # of potential deadlock scenarios.
+ #
+ # We start knowing that we're comparing CAs with same content/rel-path,
+ # because that's what we're using for the key to look up the incoming CA.
+ # So now let's compare artifacts, incoming vs current.
+ #
+ # Are we changing from no-artifact to having one or vice-versa?
+ artifact_state_change = bool(content_artifact.artifact) ^ bool(
+ to_update_ca_artifact[key]
+ )
+ # Do both current and incoming have an artifact?
+ both_have_artifact = (
+ content_artifact.artifact and to_update_ca_artifact[key]
+ )
+ # If both sides have an artifact, do they have the same sha256?
+ same_artifact_hash = both_have_artifact and (
+ content_artifact.artifact.sha256 == to_update_ca_artifact[key].sha256
+ )
+ # Only update if there was an actual change
+ if artifact_state_change or (both_have_artifact and not same_artifact_hash):
+ content_artifact.artifact = to_update_ca_artifact[key]
+ to_update_ca_bulk.append(content_artifact)
# to_update_ca_bulk are the CAs that we know are already persisted.
# We need to update their artifact_ids, and wish to do it in bulk to
# avoid hundreds of round-trips to the database.
- #
- # To avoid deadlocks in high-concurrency environments with overlapping
- # content, we need to update the rows in some defined order. Unfortunately,
- # postgres doesn't support order-on-update - but it *does* support ordering
- # on select-for-update. So, we select-for-update, in pulp_id order, the
- # rows we're about to update as one db-call, and then do the update in a
- # second.
- #
- # NOTE: select-for-update requires being in an atomic-transaction. We are
- # **already in an atomic transaction** at this point as a result of the
- # "with transaction.atomic():", above.
- ids = [k.pulp_id for k in to_update_ca_bulk]
- # "len()" forces the QuerySet to be evaluated. Using exist() or count() won't
- # work for us - Django is smart enough to either not-order, or even
- # not-emit, a select-for-update in these cases.
- #
- # To maximize performance, we make sure to only ask for pulp_ids, and
- # avoid instantiating a python-object for the affected CAs by using
- # values_list()
- subq = (
- ContentArtifact.objects.filter(pulp_id__in=ids)
- .only("pulp_id")
- .order_by("pulp_id")
- .select_for_update()
- )
- # NOTE: it might look like you can "safely" make this request
- # "more efficient". You'd be wrong, and would only be removing an
- # ordering-guardrail preventing deadlock. Don't touch.
- len(ContentArtifact.objects.filter(pk__in=subq).values_list())
- ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
+ if to_update_ca_bulk:
+ ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
- # To avoid a similar deadlock issue when calling get_or_create, we sort the
+ # To avoid a deadlock issue when calling get_or_create, we sort the
# "new" CAs to make sure inserts happen in a defined order. Since we can't
# trust the pulp_id (by the time we go to create a CA, it may already exist,
# and be replaced by the 'real' one), we sort by their "natural key".
content_artifact_bulk.sort(key=lambda x: ContentArtifact.sort_key(x))
ContentArtifact.objects.bulk_get_or_create(content_artifact_bulk)
+
self._post_save(batch)
await sync_to_async(process_batch)()
| contentartifact bulk_update can still deadlock
**Version**
all
**Describe the bug**
There have been several attempts to address deadlocks that occur under high-load, high-concurrency sync operations with overlapping content. The deadlock stems from ContentArtifact being deduplicated across repositories that contain the same Contents. At this point, the remaining issue arises when parties to the deadlock are attempting to bulk_update the 'artifact' field of any existing (already-persisted) ContentArtifacts for the Content-batch they are syncing.
**To Reproduce**
Here is a script using pulp-cli that can reproduce the behavior at will. It requires use of the Very Large pulp-file performance fixture. It assumes you have the 'jq' package available.
```
#!/bin/bash
URLS=(\
https://fixtures.pulpproject.org/file-perf/PULP_MANIFEST \
)
NAMES=(\
file-perf \
)
# Make sure we're concurrent-enough
num_workers=`sudo systemctl status pulpcore-worker* | grep "service - Pulp Worker" | wc -l`
echo "Current num-workers ${num_workers}"
if [ ${num_workers} -lt 10 ]
then
for (( i=${num_workers}+1; i<=10; i++ ))
do
echo "Starting worker ${i}"
sudo systemctl start pulpcore-worker@${i}
done
fi
echo "CLEANUP"
for n in ${!NAMES[@]}
do
for i in {1..9}
do
pulp file remote destroy --name ${NAMES[$n]}-${i}
pulp file repository destroy --name ${NAMES[$n]}-${i}
done
done
pulp orphan cleanup --protection-time 0
echo "SETUP URLS AND REMOTES"
for n in ${!NAMES[@]}
do
for i in {1..9}
do
pulp file remote create --name ${NAMES[$n]}-${i} --url ${URLS[$n]} | jq .pulp_href
pulp file repository create --name ${NAMES[$n]}-${i} --remote ${NAMES[$n]}-${i} | jq .pulp_href
done
done
starting_failed=`pulp task list --limit 10000 --state failed | jq length`
echo "SYNCING..."
for i in {1..9}
do
for n in ${!NAMES[@]}
do
pulp -b file repository sync --name ${NAMES[$n]}-${i}
done
done
sleep 5
echo "WAIT FOR COMPLETION...."
while true
do
running=`pulp task list --limit 10000 --state running | jq length`
echo -n "."
sleep 5
if [ ${running} -eq 0 ]
then
echo "DONE"
break
fi
done
failed=`pulp task list --limit 10000 --state failed | jq length`
echo "FAILURES : ${failed}"
if [ ${failed} -gt ${starting_failed} ]
then
echo "FAILED: " ${failed} - ${starting_failed}
exit
fi
```
**Expected behavior**
All syncs should happen without deadlocks.
**Additional context**
This is the latest step in fixing the codepath that has attempted to be addressed by issues: #3192 #3111 #2430
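For context, a sketch of the deadlock shape with two hypothetical deduplicated ContentArtifact rows A and B touched by two concurrent syncs in opposite order:
```python
# worker 1 (txn 1)                    worker 2 (txn 2)
# UPDATE core_contentartifact         UPDATE core_contentartifact
#   SET artifact_id=... WHERE pk=A      SET artifact_id=... WHERE pk=B
# UPDATE ... WHERE pk=B  (waits)      UPDATE ... WHERE pk=A  (waits)
# -> each holds the row lock the other needs: deadlock.
#
# The fix above sidesteps this by skipping UPDATEs that would not
# change anything, so identical concurrent syncs stop contending.
```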
| 2022-10-06T20:30:59 |
||
pulp/pulpcore | 3,306 | pulp__pulpcore-3306 | [
"3187",
"3187"
] | 9a4b56e5b6b3c757a505c56d06065a1f78af478c | diff --git a/pulpcore/app/tasks/export.py b/pulpcore/app/tasks/export.py
--- a/pulpcore/app/tasks/export.py
+++ b/pulpcore/app/tasks/export.py
@@ -2,6 +2,7 @@
import json
import logging
import os
+import os.path
import subprocess
import tarfile
@@ -75,6 +76,15 @@ def _export_to_file_system(path, relative_paths_to_artifacts, method=FS_EXPORT_M
and method != FS_EXPORT_METHODS.WRITE
):
raise RuntimeError(_("Only write is supported for non-filesystem storage."))
+ os.makedirs(path)
+ export_not_on_same_filesystem = (
+ settings.DEFAULT_FILE_STORAGE == "pulpcore.app.models.storage.FileSystem"
+ and os.stat(settings.MEDIA_ROOT).st_dev != os.stat(path).st_dev
+ )
+
+ if method == FS_EXPORT_METHODS.HARDLINK and export_not_on_same_filesystem:
+ log.info(_("Hard link cannot be created, file will be copied."))
+ method = FS_EXPORT_METHODS.WRITE
for relative_path, artifact in relative_paths_to_artifacts.items():
dest = os.path.join(path, relative_path)
@@ -82,9 +92,11 @@ def _export_to_file_system(path, relative_paths_to_artifacts, method=FS_EXPORT_M
if method == FS_EXPORT_METHODS.SYMLINK:
src = os.path.join(settings.MEDIA_ROOT, artifact.file.name)
+ os.path.lexists(dest) and os.unlink(dest)
os.symlink(src, dest)
elif method == FS_EXPORT_METHODS.HARDLINK:
src = os.path.join(settings.MEDIA_ROOT, artifact.file.name)
+ os.path.lexists(dest) and os.unlink(dest)
os.link(src, dest)
elif method == FS_EXPORT_METHODS.WRITE:
with open(dest, "wb") as f, artifact.file as af:
| FS Exporter should copy if it can't hard link
**Version**
Pulpcore 3.16 and above
**Describe the bug**
The FS Exporter works well when exporting with hardlinks, but does not handle the case where the linking might fail because the pulp storage and the exports directory might be on different mounted volumes.
**To Reproduce**
Steps to reproduce the behavior:
- Setup an nfs or sshfs mount
- add the mount directory to `ALLOWED_EXPORT_PATHS` in `/etc/pulp/settings.py`
- Export a repo to this mount point via fs exporter
**Actual Behavior**
```
Error: [Errno 18] Invalid cross-device link: '/var/lib/pulp/media/artifact/7a/831f9f90bf4d21027572cb503d20b702de8e8785b02c0397445c2e481d81b3' -> '/exports/repo/Packages/b/bear-4.1-1.noarch.rpm'
```
**Expected behavior**
If it can't hard link, it should instead fall back to a plain write (copy), along the lines of -> https://github.com/pulp/pulp-2to3-migration/blob/main/pulp_2to3_migration/app/plugin/content.py#L163-L170
**Additional context**
It was discussed during the https://github.com/pulp/pulpcore/pull/2951 PR but got missed in that commit.
BZ: https://bugzilla.redhat.com/show_bug.cgi?id=2125444
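A minimal sketch of the catch-and-copy variant the linked migration code uses; note the merged diff above instead pre-checks `st_dev` before choosing the method:
```python
import errno
import os
import shutil

def link_or_copy(src, dest):
    # Try a hard link first; on a cross-device link error (EXDEV,
    # errno 18 as in the traceback above) fall back to copying.
    try:
        os.link(src, dest)
    except OSError as exc:
        if exc.errno != errno.EXDEV:
            raise
        shutil.copy2(src, dest)
```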
| 2022-10-12T19:31:15 |
||
pulp/pulpcore | 3,312 | pulp__pulpcore-3312 | [
"3313"
] | 351bd8653820195ffd3b16c5fbab0f14fd0061fe | diff --git a/pulpcore/app/tasks/export.py b/pulpcore/app/tasks/export.py
--- a/pulpcore/app/tasks/export.py
+++ b/pulpcore/app/tasks/export.py
@@ -54,9 +54,7 @@ def _validate_fs_export(content_artifacts):
RuntimeError: If Artifacts are not downloaded or when trying to link non-fs files
"""
if content_artifacts.filter(artifact=None).exists():
- raise UnexportableArtifactException(
- _("Cannot export artifacts that haven't been downloaded.")
- )
+ raise UnexportableArtifactException()
def _export_to_file_system(path, relative_paths_to_artifacts, method=FS_EXPORT_METHODS.WRITE):
| `UnexportableArtifactException` does not expect any arguments during initialization
`UnexportableArtifactException.__init__() takes 1 positional argument but 2 were given`
https://github.com/pulp/pulpcore/blob/351bd8653820195ffd3b16c5fbab0f14fd0061fe/pulpcore/app/tasks/export.py#L41-L59
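A minimal sketch of the shape that triggers this; the fixed-message exception class here is illustrative, not the exact pulpcore definition:
```python
class UnexportableArtifactException(Exception):
    def __init__(self):  # accepts no extra arguments
        super().__init__("Cannot export artifacts that haven't been downloaded.")

UnexportableArtifactException("some message")
# TypeError: __init__() takes 1 positional argument but 2 were given
```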
| 2022-10-14T07:35:28 |
||
pulp/pulpcore | 3,335 | pulp__pulpcore-3335 | [
"3323"
] | cb68fe59ae8f6da4eb3c95c6d05870483cac863e | diff --git a/pulpcore/app/tasks/importer.py b/pulpcore/app/tasks/importer.py
--- a/pulpcore/app/tasks/importer.py
+++ b/pulpcore/app/tasks/importer.py
@@ -447,7 +447,26 @@ def validate_and_assemble(toc_filename):
with tempfile.TemporaryDirectory(dir=".") as temp_dir:
with tarfile.open(path, "r:gz") as tar:
- tar.extractall(path=temp_dir)
+
+ def is_within_directory(directory, target):
+
+ abs_directory = os.path.abspath(directory)
+ abs_target = os.path.abspath(target)
+
+ prefix = os.path.commonprefix([abs_directory, abs_target])
+
+ return prefix == abs_directory
+
+ def safe_extract(tar, path=".", members=None, *, numeric_owner=False):
+
+ for member in tar.getmembers():
+ member_path = os.path.join(path, member.name)
+ if not is_within_directory(path, member_path):
+ raise Exception("Attempted Path Traversal in Tar File")
+
+ tar.extractall(path, members, numeric_owner=numeric_owner)
+
+ safe_extract(tar, path=temp_dir)
# Check version info
with open(os.path.join(temp_dir, VERSIONS_FILE)) as version_file:
| CVE-2007-4559 fix tar.extractall() use in pulp-import
**Version**
all
**Describe the bug**
Python's tar.extractall() allows malicious file overwrites on the executing system; see CVE-2007-4559 for details. PulpImport uses extractall() (although the workflow is such that the attack window is small and requires social-engineering access to admins of importing systems).
**Additional context**
TrellixVulnTeam submitted PR #3287 to address this; that work needs to be picked up and moved through the Pulp submission process.
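For illustration, a minimal sketch of the traversal vector the guard in the diff rejects, using a hypothetical member name:
```python
import io
import tarfile

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    info = tarfile.TarInfo(name="../../outside.txt")  # escapes the target dir
    payload = b"overwritten"
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))
# A plain tar.extractall(path=temp_dir) would write outside temp_dir;
# the safe_extract() above raises "Attempted Path Traversal in Tar File".
```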
| 2022-10-19T17:36:52 |
||
pulp/pulpcore | 3,337 | pulp__pulpcore-3337 | [
"3319"
] | 532273a3c5e0ed39d3ecc0884fd24184518c855e | diff --git a/pulpcore/app/models/task.py b/pulpcore/app/models/task.py
--- a/pulpcore/app/models/task.py
+++ b/pulpcore/app/models/task.py
@@ -4,6 +4,7 @@
import logging
import traceback
import os
+from contextlib import suppress
from datetime import timedelta
from gettext import gettext as _
@@ -18,7 +19,7 @@
BaseModel,
GenericRelationModel,
)
-from pulpcore.constants import TASK_CHOICES, TASK_FINAL_STATES, TASK_STATES
+from pulpcore.constants import TASK_CHOICES, TASK_INCOMPLETE_STATES, TASK_STATES
from pulpcore.exceptions import AdvisoryLockError, exception_to_dict
from pulpcore.tasking.constants import TASKING_CONSTANTS
@@ -239,8 +240,13 @@ def set_running(self):
state=TASK_STATES.RUNNING, started_at=timezone.now()
)
if rows != 1:
- _logger.warning(_("Task __call__() occurred but Task %s is not at WAITING") % self.pk)
- self.refresh_from_db()
+ raise RuntimeError(
+ _("Task set_running() occurred but Task {} is not WAITING").format(self.pk)
+ )
+ with suppress(AttributeError):
+ del self.state
+ with suppress(AttributeError):
+ del self.started_at
def set_completed(self):
"""
@@ -248,18 +254,19 @@ def set_completed(self):
This updates the :attr:`finished_at` and sets the :attr:`state` to :attr:`COMPLETED`.
"""
- # Only set the state to finished if it's not already in a complete state. This is
- # important for when the task has been canceled, so we don't move the task from canceled
- # to finished.
- rows = (
- Task.objects.filter(pk=self.pk)
- .exclude(state__in=TASK_FINAL_STATES)
- .update(state=TASK_STATES.COMPLETED, finished_at=timezone.now())
+ # Only set the state to finished if it's running. This is important for when the task has
+ # been canceled, so we don't move the task from canceled to finished.
+ rows = Task.objects.filter(pk=self.pk, state=TASK_STATES.RUNNING).update(
+ state=TASK_STATES.COMPLETED, finished_at=timezone.now()
)
if rows != 1:
- msg = "Task set_completed() occurred but Task %s is already in final state"
- _logger.warning(msg % self.pk)
- self.refresh_from_db()
+ raise RuntimeError(
+ _("Task set_completed() occurred but Task {} is not RUNNING.").format(self.pk)
+ )
+ with suppress(AttributeError):
+ del self.state
+ with suppress(AttributeError):
+ del self.finished_at
def set_failed(self, exc, tb):
"""
@@ -273,18 +280,70 @@ def set_failed(self, exc, tb):
tb (traceback): Traceback instance for the current exception.
"""
tb_str = "".join(traceback.format_tb(tb))
- rows = (
- Task.objects.filter(pk=self.pk)
- .exclude(state__in=TASK_FINAL_STATES)
- .update(
- state=TASK_STATES.FAILED,
- finished_at=timezone.now(),
- error=exception_to_dict(exc, tb_str),
- )
+ rows = Task.objects.filter(pk=self.pk, state=TASK_STATES.RUNNING).update(
+ state=TASK_STATES.FAILED,
+ finished_at=timezone.now(),
+ error=exception_to_dict(exc, tb_str),
+ )
+ if rows != 1:
+ raise RuntimeError(_("Attempt to set a not running task to failed."))
+ with suppress(AttributeError):
+ del self.state
+ with suppress(AttributeError):
+ del self.finished_at
+ with suppress(AttributeError):
+ del self.error
+
+ def set_canceling(self):
+ """
+ Set this task to canceling from either waiting, running or canceling.
+
+ This is the only valid transition without holding the task lock.
+ """
+ rows = Task.objects.filter(pk=self.pk, state__in=TASK_INCOMPLETE_STATES).update(
+ state=TASK_STATES.CANCELING,
+ )
+ if rows != 1:
+ raise RuntimeError(_("Attempt to cancel a finished task."))
+ with suppress(AttributeError):
+ del self.state
+
+ def set_canceled(self, final_state=TASK_STATES.CANCELED, reason=None):
+ """
+ Set this task to canceled or failed from canceling.
+ """
+ # Make sure this function was called with a proper final state
+ assert final_state in [TASK_STATES.CANCELED, TASK_STATES.FAILED]
+ task_data = {}
+ if reason:
+ task_data["error"] = {"reason": reason}
+ rows = Task.objects.filter(pk=self.pk, state=TASK_STATES.CANCELING).update(
+ state=final_state,
+ finished_at=timezone.now(),
+ **task_data,
)
if rows != 1:
- raise RuntimeError("Attempt to set a finished task to failed.")
- self.refresh_from_db()
+ raise RuntimeError(_("Attempt to mark a task canceled that is not in canceling state."))
+ with suppress(AttributeError):
+ del self.state
+ with suppress(AttributeError):
+ del self.finished_at
+ with suppress(AttributeError):
+ del self.error
+
+ # Example taken from here:
+ # https://docs.djangoproject.com/en/3.2/ref/models/instances/#refreshing-objects-from-database
+ def refresh_from_db(self, using=None, fields=None, **kwargs):
+ # fields contains the name of the deferred field to be
+ # loaded.
+ if fields is not None:
+ fields = set(fields)
+ deferred_fields = self.get_deferred_fields()
+ # If any deferred field is going to be loaded
+ if fields.intersection(deferred_fields):
+ # then load all of them
+ fields = fields.union(deferred_fields)
+ super().refresh_from_db(using, fields, **kwargs)
class Meta:
indexes = [models.Index(fields=["pulp_created"])]
diff --git a/pulpcore/tasking/pulpcore_worker.py b/pulpcore/tasking/pulpcore_worker.py
--- a/pulpcore/tasking/pulpcore_worker.py
+++ b/pulpcore/tasking/pulpcore_worker.py
@@ -184,32 +184,24 @@ def cancel_abandoned_task(self, task, final_state, reason=None):
Return ``True`` if the task was actually canceled, ``False`` otherwise.
"""
# A task is considered abandoned when in running state, but no worker holds its lock
- Task.objects.filter(pk=task.pk, state=TASK_STATES.RUNNING).update(
- state=TASK_STATES.CANCELING
- )
- task.refresh_from_db()
- if task.state == TASK_STATES.CANCELING:
- if reason:
- _logger.info(
- "Cleaning up task %s and marking as %s. Reason: %s",
- task.pk,
- final_state,
- reason,
- )
- else:
- _logger.info(_("Cleaning up task %s and marking as %s."), task.pk, final_state)
- _delete_incomplete_resources(task)
- if task.reserved_resources_record:
- self.notify_workers()
- task_data = {
- "state": final_state,
- "finished_at": timezone.now(),
- }
- if reason:
- task_data["error"] = {"reason": reason}
- Task.objects.filter(pk=task.pk, state=TASK_STATES.CANCELING).update(**task_data)
- return True
- return False
+ try:
+ task.set_canceling()
+ except RuntimeError:
+ return False
+ if reason:
+ _logger.info(
+ "Cleaning up task %s and marking as %s. Reason: %s",
+ task.pk,
+ final_state,
+ reason,
+ )
+ else:
+ _logger.info(_("Cleaning up task %s and marking as %s."), task.pk, final_state)
+ _delete_incomplete_resources(task)
+ task.set_canceled(final_state=final_state, reason=reason)
+ if task.reserved_resources_record:
+ self.notify_workers()
+ return True
def iter_tasks(self):
"""Iterate over ready tasks and yield each task while holding the lock."""
diff --git a/pulpcore/tasking/util.py b/pulpcore/tasking/util.py
--- a/pulpcore/tasking/util.py
+++ b/pulpcore/tasking/util.py
@@ -5,7 +5,7 @@
from django.db import connection
from pulpcore.app.models import Task
-from pulpcore.constants import TASK_FINAL_STATES, TASK_INCOMPLETE_STATES, TASK_STATES
+from pulpcore.constants import TASK_FINAL_STATES, TASK_STATES
_logger = logging.getLogger(__name__)
@@ -15,7 +15,7 @@ def cancel(task_id):
Cancel the task that is represented by the given task_id.
This method cancels only the task with given task_id, not the spawned tasks. This also updates
- task's state to either 'canceled' or 'canceling'.
+ task's state to 'canceling'.
Args:
task_id (str): The ID of the task you wish to cancel
@@ -23,30 +23,25 @@ def cancel(task_id):
Raises:
rest_framework.exceptions.NotFound: If a task with given task_id does not exist
"""
- task_status = Task.objects.get(pk=task_id)
+ task = Task.objects.get(pk=task_id)
- if task_status.state in TASK_FINAL_STATES:
+ if task.state in TASK_FINAL_STATES:
# If the task is already done, just stop
_logger.debug(
"Task [{task_id}] already in a final state: {state}".format(
- task_id=task_id, state=task_status.state
+ task_id=task_id, state=task.state
)
)
- return task_status
+ return task
_logger.info(_("Canceling task: {id}").format(id=task_id))
- task = task_status
# This is the only valid transition without holding the task lock
- rows = Task.objects.filter(pk=task.pk, state__in=TASK_INCOMPLETE_STATES).update(
- state=TASK_STATES.CANCELING
- )
+ task.set_canceling()
# Notify the worker that might be running that task and other workers to clean up
with connection.cursor() as cursor:
cursor.execute("SELECT pg_notify('pulp_worker_cancel', %s)", (str(task.pk),))
cursor.execute("NOTIFY pulp_worker_wakeup")
- if rows == 1:
- task.refresh_from_db()
return task
@@ -57,8 +52,8 @@ def _delete_incomplete_resources(task):
Args:
task (Task): A task.
"""
- if task.state not in [TASK_STATES.CANCELED, TASK_STATES.CANCELING]:
- raise RuntimeError(_("Task must be canceled."))
+ if task.state != TASK_STATES.CANCELING:
+ raise RuntimeError(_("Task must be canceling."))
for model in (r.content_object for r in task.created_resources.all()):
try:
if model.complete:
| diff --git a/pulpcore/tests/functional/api/test_tasking.py b/pulpcore/tests/functional/api/test_tasking.py
--- a/pulpcore/tests/functional/api/test_tasking.py
+++ b/pulpcore/tests/functional/api/test_tasking.py
@@ -91,28 +91,27 @@ def test_delete_cancel_waiting_task(dispatch_task, tasks_api_client):
# Now cancel the task
task = tasks_api_client.tasks_cancel(task_href, {"state": "canceled"})
-
- assert task.started_at is None
- assert task.finished_at is None
- assert task.state == "canceling"
-
# cancel the blocking task
- task = tasks_api_client.tasks_cancel(blocking_task_href, {"state": "canceled"})
+ tasks_api_client.tasks_cancel(blocking_task_href, {"state": "canceled"})
- for i in range(10):
- task = tasks_api_client.read(task_href)
- if task.state != "canceling":
- break
- time.sleep(1)
+ if task.state == "canceling":
+ assert task.started_at is None
+ assert task.finished_at is None
+
+ for i in range(10):
+ if task.state != "canceling":
+ break
+ time.sleep(1)
+ task = tasks_api_client.read(task_href)
assert task.state == "canceled"
+ assert task.started_at is None
+ assert task.finished_at is not None
[email protected](
- reason="This is a flaky test that fails from time to time. Probably gonna be fixed by 3319"
-)
[email protected]
def test_delete_cancel_running_task(dispatch_task, tasks_api_client):
- task_href = dispatch_task("time.sleep", args=(6000,))
+ task_href = dispatch_task("time.sleep", args=(600,))
for i in range(10):
task = tasks_api_client.read(task_href)
@@ -130,17 +129,19 @@ def test_delete_cancel_running_task(dispatch_task, tasks_api_client):
# Now cancel the task
task = tasks_api_client.tasks_cancel(task_href, {"state": "canceled"})
- assert task.started_at is not None
- assert task.finished_at is None
- assert task.state == "canceling"
+ if task.state == "canceling":
+ assert task.started_at is not None
+ assert task.finished_at is None
- for i in range(10):
- task = tasks_api_client.read(task_href)
- if task.state != "canceling":
- break
- time.sleep(1)
+ for i in range(10):
+ if task.state != "canceling":
+ break
+ time.sleep(1)
+ task = tasks_api_client.read(task_href)
assert task.state == "canceled"
+ assert task.started_at is not None
+ assert task.finished_at is not None
@pytest.mark.parallel
| Tasks API should report a `finished_at` date/time for cancelled tasks
Cancelled tasks should have `finished_at` set at cancel time.
Ideally, any attempt to set `finished_at` when it already holds something other than `None` in the database should be reported as a `RuntimeError` (an audit of all `Task.save()` and `Task.objects.update()` calls).
Original wording: Tasks API should return a `cancelled_at` date/time for tasks that were cancelled. The `finished_at` should be set to `null` when `cancelled_at` is not `null`.
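A minimal sketch of the audit idiom this implies (and which the diff above adopts): treat the state column as a compare-and-swap by checking the row count of a filtered `UPDATE`; `task` and the literal states are illustrative:
```python
from django.utils import timezone

rows = Task.objects.filter(pk=task.pk, state="running").update(
    state="completed", finished_at=timezone.now()
)
if rows != 1:
    raise RuntimeError(f"Task {task.pk} is not RUNNING")
```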
| We have tasks in multiple final states - completed, failed, cancelled, skipped. Regardless of which final state the task ends up in, `finished_at` gets populated.
While your request makes sense, to make everything consistent we'd need to introduce, besides `cancelled_at`, also `failed_at` and `skipped_at`.
My preference would be to keep things as they are so we don't need to manage additional fields. The combination of the `state` and `finished_at` fields makes it pretty self-explanatory what happened to the task.
So I read that canceling would set the `finished_at` field as well.
I agree with @ipanova, I'm not sure I've seen justification for adding a new field just yet. Is there a gap that `state: canceled, finished_at: $date` doesn't cover?
The thing is that we seem to have canceled tasks with `finished_at: null` and `finished_at: <date>`. That should probably not be the case. `started_at` should then tell us whether the task was aborted or canceled before being started.
> The thing is that we seem to have canceled tasks with `finished_at: null` and `finished_at: <date>`. That should probably not be the case. `started_at` should then tell us whether the task was aborted or canceled before being started.
I was not clear on this issue from the original report. It seems like we always set `finished_at` for canceled tasks https://github.com/pulp/pulpcore/blob/main/pulpcore/tasking/pulpcore_worker.py#L206 am I looking in the wrong place?
@mdellweg For the sake of readability and preserving history - can you avoid editing original messages and rather add a comment?
>The thing is that we seem to have canceled tasks with finished_at: null and finished_at: <date>. That should probably not be the case. started_at should then tell us whether the task was aborted or canceled before being started.
Definitely agree that should not be the case, but that's a bug that can be fixed; I'm not sure I understand why a new field is required here.
> > The thing is that we seem to have canceled tasks with `finished_at: null` and `finished_at: <date>`. That should probably not be the case. `started_at` should then tell us whether the task was aborted or canceled before being started.
>
> I was not clear on this issue from the original report. It seems like we always set `finished_at` for canceled tasks https://github.com/pulp/pulpcore/blob/main/pulpcore/tasking/pulpcore_worker.py#L206 am I looking in the wrong place?
>
> @mdellweg For the sake of readability and preserving history - can you avoid editing original messages and rather add a comment?
I recall that we edited the plan of an issue after the discussion took it in a different direction in the first message back on plan.io. And it made sense to me. So you didn't get confused reading the original request and then need to extract what to actually do from a lengthy discussion. For history purposes I kept the original text. Sorry if I confused you. | 2022-10-20T09:47:03 |
pulp/pulpcore | 3,344 | pulp__pulpcore-3344 | [
"3284"
] | 1105239f669b8128ba1ff7f63391f58e8fdee06f | diff --git a/pulpcore/plugin/stages/content_stages.py b/pulpcore/plugin/stages/content_stages.py
--- a/pulpcore/plugin/stages/content_stages.py
+++ b/pulpcore/plugin/stages/content_stages.py
@@ -148,53 +148,51 @@ def process_batch():
)
key = (d_content.content.pk, d_artifact.relative_path)
to_update_ca_artifact[key] = d_artifact.artifact
+
# Query db once and update each object in memory for bulk_update call
for content_artifact in to_update_ca_query.iterator():
key = (content_artifact.content_id, content_artifact.relative_path)
- # Maybe remove dict elements after to reduce memory?
- content_artifact.artifact = to_update_ca_artifact[key]
- to_update_ca_bulk.append(content_artifact)
+ # Same content/relpath/artifact-sha means no change to the
+ # contentartifact, ignore. This prevents us from colliding with any
+ # concurrent syncs with overlapping identical content. "Someone" updated
+ # the contentartifacts to match what we would be doing, so we don't need
+ # to do an (unnecessary) db-update, which was opening us up for a variety
+ # of potential deadlock scenarios.
+ #
+ # We start knowing that we're comparing CAs with same content/rel-path,
+ # because that's what we're using for the key to look up the incoming CA.
+ # So now let's compare artifacts, incoming vs current.
+ #
+ # Are we changing from no-artifact to having one or vice-versa?
+ artifact_state_change = bool(content_artifact.artifact) ^ bool(
+ to_update_ca_artifact[key]
+ )
+ # Do both current and incoming have an artifact?
+ both_have_artifact = (
+ content_artifact.artifact and to_update_ca_artifact[key]
+ )
+ # If both sides have an artifact, do they have the same sha256?
+ same_artifact_hash = both_have_artifact and (
+ content_artifact.artifact.sha256 == to_update_ca_artifact[key].sha256
+ )
+ # Only update if there was an actual change
+ if artifact_state_change or (both_have_artifact and not same_artifact_hash):
+ content_artifact.artifact = to_update_ca_artifact[key]
+ to_update_ca_bulk.append(content_artifact)
# to_update_ca_bulk are the CAs that we know are already persisted.
# We need to update their artifact_ids, and wish to do it in bulk to
# avoid hundreds of round-trips to the database.
- #
- # To avoid deadlocks in high-concurrency environments with overlapping
- # content, we need to update the rows in some defined order. Unfortunately,
- # postgres doesn't support order-on-update - but it *does* support ordering
- # on select-for-update. So, we select-for-update, in pulp_id order, the
- # rows we're about to update as one db-call, and then do the update in a
- # second.
- #
- # NOTE: select-for-update requires being in an atomic-transaction. We are
- # **already in an atomic transaction** at this point as a result of the
- # "with transaction.atomic():", above.
- ids = [k.pulp_id for k in to_update_ca_bulk]
- # "len()" forces the QuerySet to be evaluated. Using exist() or count() won't
- # work for us - Django is smart enough to either not-order, or even
- # not-emit, a select-for-update in these cases.
- #
- # To maximize performance, we make sure to only ask for pulp_ids, and
- # avoid instantiating a python-object for the affected CAs by using
- # values_list()
- subq = (
- ContentArtifact.objects.filter(pulp_id__in=ids)
- .only("pulp_id")
- .order_by("pulp_id")
- .select_for_update()
- )
- # NOTE: it might look like you can "safely" make this request
- # "more efficient". You'd be wrong, and would only be removing an
- # ordering-guardrail preventing deadlock. Don't touch.
- len(ContentArtifact.objects.filter(pk__in=subq).values_list())
- ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
+ if to_update_ca_bulk:
+ ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
- # To avoid a similar deadlock issue when calling get_or_create, we sort the
+ # To avoid a deadlock issue when calling get_or_create, we sort the
# "new" CAs to make sure inserts happen in a defined order. Since we can't
# trust the pulp_id (by the time we go to create a CA, it may already exist,
# and be replaced by the 'real' one), we sort by their "natural key".
content_artifact_bulk.sort(key=lambda x: ContentArtifact.sort_key(x))
ContentArtifact.objects.bulk_get_or_create(content_artifact_bulk)
+
self._post_save(batch)
await sync_to_async(process_batch)()
| contentartifact bulk_update can still deadlock
**Version**
all
**Describe the bug**
There have been several attempts to address deadlocks that occur under high-load, high-concurrency sync operations with overlapping content. The deadlock stems from ContentArtifact being deduplicated across repositories that contain the same Contents. At this point, the remaining issue arises when parties to the deadlock are attempting to bulk_update the 'artifact' field of any existing (already-persisted) ContentArtifacts for the Content-batch they are syncing.
**To Reproduce**
Here is a script using pulp-cli that can reproduce the behavior at will. It requires use of the Very Large pulp-file performance fixture. It assumes you have the 'jq' package available.
```
#!/bin/bash
URLS=(\
https://fixtures.pulpproject.org/file-perf/PULP_MANIFEST \
)
NAMES=(\
file-perf \
)
# Make sure we're concurrent-enough
num_workers=`sudo systemctl status pulpcore-worker* | grep "service - Pulp Worker" | wc -l`
echo "Current num-workers ${num_workers}"
if [ ${num_workers} -lt 10 ]
then
for (( i=${num_workers}+1; i<=10; i++ ))
do
echo "Starting worker ${i}"
sudo systemctl start pulpcore-worker@${i}
done
fi
echo "CLEANUP"
for n in ${!NAMES[@]}
do
for i in {1..9}
do
pulp file remote destroy --name ${NAMES[$n]}-${i}
pulp file repository destroy --name ${NAMES[$n]}-${i}
done
done
pulp orphan cleanup --protection-time 0
echo "SETUP URLS AND REMOTES"
for n in ${!NAMES[@]}
do
for i in {1..9}
do
pulp file remote create --name ${NAMES[$n]}-${i} --url ${URLS[$n]} | jq .pulp_href
pulp file repository create --name ${NAMES[$n]}-${i} --remote ${NAMES[$n]}-${i} | jq .pulp_href
done
done
starting_failed=`pulp task list --limit 10000 --state failed | jq length`
echo "SYNCING..."
for i in {1..9}
do
for n in ${!NAMES[@]}
do
pulp -b file repository sync --name ${NAMES[$n]}-${i}
done
done
sleep 5
echo "WAIT FOR COMPLETION...."
while true
do
running=`pulp task list --limit 10000 --state running | jq length`
echo -n "."
sleep 5
if [ ${running} -eq 0 ]
then
echo "DONE"
break
fi
done
failed=`pulp task list --limit 10000 --state failed | jq length`
echo "FAILURES : ${failed}"
if [ ${failed} -gt ${starting_failed} ]
then
echo "FAILED: " ${failed} - ${starting_failed}
exit
fi
```
**Expected behavior**
All syncs should happen without deadlocks.
**Additional context**
This is the latest step in fixing the codepath that has attempted to be addressed by issues: #3192 #3111 #2430
| 2022-10-20T22:34:59 |
||
pulp/pulpcore | 3,345 | pulp__pulpcore-3345 | [
"3284"
] | c2bb80090efe2705c8b123d2175a4754c62e104a | diff --git a/pulpcore/plugin/stages/content_stages.py b/pulpcore/plugin/stages/content_stages.py
--- a/pulpcore/plugin/stages/content_stages.py
+++ b/pulpcore/plugin/stages/content_stages.py
@@ -148,53 +148,51 @@ def process_batch():
)
key = (d_content.content.pk, d_artifact.relative_path)
to_update_ca_artifact[key] = d_artifact.artifact
+
# Query db once and update each object in memory for bulk_update call
for content_artifact in to_update_ca_query.iterator():
key = (content_artifact.content_id, content_artifact.relative_path)
- # Maybe remove dict elements after to reduce memory?
- content_artifact.artifact = to_update_ca_artifact[key]
- to_update_ca_bulk.append(content_artifact)
+ # Same content/relpath/artifact-sha means no change to the
+ # contentartifact, ignore. This prevents us from colliding with any
+ # concurrent syncs with overlapping identical content. "Someone" updated
+ # the contentartifacts to match what we would be doing, so we don't need
+ # to do an (unnecessary) db-update, which was opening us up for a variety
+ # of potential deadlock scenarios.
+ #
+ # We start knowing that we're comparing CAs with same content/rel-path,
+ # because that's what we're using for the key to look up the incoming CA.
+ # So now let's compare artifacts, incoming vs current.
+ #
+ # Are we changing from no-artifact to having one or vice-versa?
+ artifact_state_change = bool(content_artifact.artifact) ^ bool(
+ to_update_ca_artifact[key]
+ )
+ # Do both current and incoming have an artifact?
+ both_have_artifact = (
+ content_artifact.artifact and to_update_ca_artifact[key]
+ )
+ # If both sides have an artifact, do they have the same sha256?
+ same_artifact_hash = both_have_artifact and (
+ content_artifact.artifact.sha256 == to_update_ca_artifact[key].sha256
+ )
+ # Only update if there was an actual change
+ if artifact_state_change or (both_have_artifact and not same_artifact_hash):
+ content_artifact.artifact = to_update_ca_artifact[key]
+ to_update_ca_bulk.append(content_artifact)
# to_update_ca_bulk are the CAs that we know are already persisted.
# We need to update their artifact_ids, and wish to do it in bulk to
# avoid hundreds of round-trips to the database.
- #
- # To avoid deadlocks in high-concurrency environments with overlapping
- # content, we need to update the rows in some defined order. Unfortunately,
- # postgres doesn't support order-on-update - but it *does* support ordering
- # on select-for-update. So, we select-for-update, in pulp_id order, the
- # rows we're about to update as one db-call, and then do the update in a
- # second.
- #
- # NOTE: select-for-update requires being in an atomic-transaction. We are
- # **already in an atomic transaction** at this point as a result of the
- # "with transaction.atomic():", above.
- ids = [k.pulp_id for k in to_update_ca_bulk]
- # "len()" forces the QuerySet to be evaluated. Using exist() or count() won't
- # work for us - Django is smart enough to either not-order, or even
- # not-emit, a select-for-update in these cases.
- #
- # To maximize performance, we make sure to only ask for pulp_ids, and
- # avoid instantiating a python-object for the affected CAs by using
- # values_list()
- subq = (
- ContentArtifact.objects.filter(pulp_id__in=ids)
- .only("pulp_id")
- .order_by("pulp_id")
- .select_for_update()
- )
- # NOTE: it might look like you can "safely" make this request
- # "more efficient". You'd be wrong, and would only be removing an
- # ordering-guardrail preventing deadlock. Don't touch.
- len(ContentArtifact.objects.filter(pk__in=subq).values_list())
- ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
+ if to_update_ca_bulk:
+ ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
- # To avoid a similar deadlock issue when calling get_or_create, we sort the
+ # To avoid a deadlock issue when calling get_or_create, we sort the
# "new" CAs to make sure inserts happen in a defined order. Since we can't
# trust the pulp_id (by the time we go to create a CA, it may already exist,
# and be replaced by the 'real' one), we sort by their "natural key".
content_artifact_bulk.sort(key=lambda x: ContentArtifact.sort_key(x))
ContentArtifact.objects.bulk_get_or_create(content_artifact_bulk)
+
self._post_save(batch)
await sync_to_async(process_batch)()
| contentartifact bulk_update can still deadlock
**Version**
all
**Describe the bug**
There have been several attempts to address deadlocks that occur under high-load, high-concurrency sync operations with overlapping content. The deadlock stems from ContentArtifact being deduplicated across repositories that contain the same Contents. At this point, the remaining issue arises when parties to the deadlock are attempting to bulk_update the 'artifact' field of any existing (already-persisted) ContentArtifacts for the Content-batch they are syncing.
**To Reproduce**
Here is a script using pulp-cli that can reproduce the behavior at will. It requires use of the Very Large pulp-file performance fixture. It assumes you have the 'jq' package available.
```
#!/bin/bash
URLS=(\
https://fixtures.pulpproject.org/file-perf/PULP_MANIFEST \
)
NAMES=(\
file-perf \
)
# Make sure we're concurrent-enough
num_workers=`sudo systemctl status pulpcore-worker* | grep "service - Pulp Worker" | wc -l`
echo "Current num-workers ${num_workers}"
if [ ${num_workers} -lt 10 ]
then
for (( i=${num_workers}+1; i<=10; i++ ))
do
echo "Starting worker ${i}"
sudo systemctl start pulpcore-worker@${i}
done
fi
echo "CLEANUP"
for n in ${!NAMES[@]}
do
for i in {1..9}
do
pulp file remote destroy --name ${NAMES[$n]}-${i}
pulp file repository destroy --name ${NAMES[$n]}-${i}
done
done
pulp orphan cleanup --protection-time 0
echo "SETUP URLS AND REMOTES"
for n in ${!NAMES[@]}
do
for i in {1..9}
do
pulp file remote create --name ${NAMES[$n]}-${i} --url ${URLS[$n]} | jq .pulp_href
pulp file repository create --name ${NAMES[$n]}-${i} --remote ${NAMES[$n]}-${i} | jq .pulp_href
done
done
starting_failed=`pulp task list --limit 10000 --state failed | jq length`
echo "SYNCING..."
for i in {1..9}
do
for n in ${!NAMES[@]}
do
pulp -b file repository sync --name ${NAMES[$n]}-${i}
done
done
sleep 5
echo "WAIT FOR COMPLETION...."
while true
do
running=`pulp task list --limit 10000 --state running | jq length`
echo -n "."
sleep 5
if [ ${running} -eq 0 ]
then
echo "DONE"
break
fi
done
failed=`pulp task list --limit 10000 --state failed | jq length`
echo "FAILURES : ${failed}"
if [ ${failed} -gt ${starting_failed} ]
then
echo "FAILED: " ${failed} - ${starting_failed}
exit
fi
```
**Expected behavior**
All syncs should happen without deadlocks.
**Additional context**
This is the latest step in fixing the codepath that has attempted to be addressed by issues: #3192 #3111 #2430
| 2022-10-20T22:35:17 |
||
pulp/pulpcore | 3,347 | pulp__pulpcore-3347 | [
"3235"
] | 2054f5022c24d0a4bfc4515d4967d64fff2b109d | diff --git a/pulpcore/app/viewsets/custom_filters.py b/pulpcore/app/viewsets/custom_filters.py
--- a/pulpcore/app/viewsets/custom_filters.py
+++ b/pulpcore/app/viewsets/custom_filters.py
@@ -12,6 +12,7 @@
from django.urls import Resolver404, resolve
from django.db.models import ObjectDoesNotExist
from django_filters import BaseInFilter, CharFilter, DateTimeFilter, Filter
+from django_filters.constants import EMPTY_VALUES
from django_filters.fields import IsoDateTimeField
from rest_framework import serializers
from rest_framework.serializers import ValidationError as DRFValidationError
@@ -19,6 +20,8 @@
from pulpcore.app.models import ContentArtifact, Label, RepositoryVersion, Publication
from pulpcore.app.viewsets import NamedModelViewSet
+EMPTY_VALUES = (*EMPTY_VALUES, "null")
+
class ReservedResourcesFilter(Filter):
"""
@@ -142,15 +145,37 @@ def __init__(self, *args, **kwargs):
self.allow_null = kwargs.pop("allow_null", False)
super().__init__(*args, **kwargs)
- def filter(self, qs, value):
- """
- Args:
- qs (django.db.models.query.QuerySet): The Queryset to filter
- value (string): href containing pk for the foreign key instance
+ def _resolve_uri(self, uri):
+ try:
+ return resolve(urlparse(uri).path)
+ except Resolver404:
+ raise serializers.ValidationError(
+ detail=_("URI couldn't be resolved: {uri}".format(uri=uri))
+ )
- Returns:
- django.db.models.query.QuerySet: Queryset filtered by the foreign key pk
- """
+ def _check_subclass(self, qs, uri, match):
+ fields_model = getattr(qs.model, self.field_name).get_queryset().model
+ lookups_model = match.func.cls.queryset.model
+ if not issubclass(lookups_model, fields_model):
+ raise serializers.ValidationError(
+ detail=_("URI is not a valid href for {field_name} model: {uri}").format(
+ field_name=self.field_name, uri=uri
+ )
+ )
+
+ def _check_valid_uuid(self, uuid):
+ if not uuid:
+ return True
+ try:
+ UUID(uuid, version=4)
+ except ValueError:
+ raise serializers.ValidationError(detail=_("UUID invalid: {uuid}").format(uuid=uuid))
+
+ def _validations(self, *args, **kwargs):
+ self._check_valid_uuid(kwargs["match"].kwargs.get("pk"))
+ self._check_subclass(*args, **kwargs)
+
+ def filter(self, qs, value):
if value is None:
# value was not supplied by the user
@@ -160,33 +185,27 @@ def filter(self, qs, value):
raise serializers.ValidationError(
detail=_("No value supplied for {name} filter.").format(name=self.field_name)
)
- elif self.allow_null and (value == "null" or value == ""):
- key = f"{self.field_name}__isnull"
- lookup = True
- else:
- try:
- match = resolve(urlparse(value).path)
- except Resolver404:
- raise serializers.ValidationError(detail=_("URI not valid: {u}").format(u=value))
- fields_model = getattr(qs.model, self.field_name).get_queryset().model
- lookups_model = match.func.cls.queryset.model
- if not issubclass(lookups_model, fields_model):
- raise serializers.ValidationError(detail=_("URI not valid: {u}").format(u=value))
+ if self.allow_null and value in EMPTY_VALUES:
+ return qs.filter(**{f"{self.field_name}__isnull": True})
+ if self.lookup_expr == "in":
+ matches = {uri: self._resolve_uri(uri) for uri in value}
+ [self._validations(qs, uri=uri, match=matches[uri]) for uri in matches]
+ value = [pk if (pk := matches[match].kwargs.get("pk")) else match for match in matches]
+ else:
+ match = self._resolve_uri(value)
+ self._validations(qs, uri=value, match=match)
if pk := match.kwargs.get("pk"):
- try:
- UUID(pk, version=4)
- except ValueError:
- raise serializers.ValidationError(detail=_("UUID invalid: {u}").format(u=pk))
-
- key = "{}__pk".format(self.field_name)
- lookup = pk
+ value = pk
else:
- key = f"{self.field_name}__in"
- lookup = match.func.cls.queryset
+ return qs.filter(**{f"{self.field_name}__in": match.func.cls.queryset})
- return qs.filter(**{key: lookup})
+ return super().filter(qs, value)
+
+
+class HyperlinkRelatedInFilter(BaseInFilter, HyperlinkRelatedFilter):
+ pass
class IsoDateTimeFilter(DateTimeFilter):
diff --git a/pulpcore/app/viewsets/task.py b/pulpcore/app/viewsets/task.py
--- a/pulpcore/app/viewsets/task.py
+++ b/pulpcore/app/viewsets/task.py
@@ -25,6 +25,7 @@
from pulpcore.app.viewsets.base import DATETIME_FILTER_OPTIONS, NAME_FILTER_OPTIONS
from pulpcore.app.viewsets.custom_filters import (
HyperlinkRelatedFilter,
+ HyperlinkRelatedInFilter,
IsoDateTimeFilter,
ReservedResourcesFilter,
ReservedResourcesInFilter,
@@ -39,6 +40,7 @@
class TaskFilter(BaseFilterSet):
state = filters.ChoiceFilter(choices=TASK_CHOICES)
worker = HyperlinkRelatedFilter()
+ worker__in = HyperlinkRelatedInFilter(field_name="worker")
name = filters.CharFilter()
logging_cid = filters.CharFilter()
started_at = IsoDateTimeFilter(field_name="started_at")
@@ -62,7 +64,6 @@ class Meta:
model = Task
fields = {
"state": ["exact", "in"],
- "worker": ["exact", "in"],
"name": ["contains"],
"logging_cid": ["exact", "contains"],
"started_at": DATETIME_FILTER_OPTIONS,
| diff --git a/pulpcore/tests/functional/api/test_tasking.py b/pulpcore/tests/functional/api/test_tasking.py
--- a/pulpcore/tests/functional/api/test_tasking.py
+++ b/pulpcore/tests/functional/api/test_tasking.py
@@ -269,3 +269,20 @@ def test_search_task_using_an_invalid_name(tasks_api_client):
search_results = tasks_api_client.list(name=str(uuid4()))
assert not search_results.results and not search_results.count
+
+
[email protected]
+def test_filter_tasks_using_worker__in_filter(tasks_api_client, dispatch_task):
+
+ task1_href = dispatch_task("time.sleep", (0,))
+ task2_href = dispatch_task("time.sleep", (0,))
+
+ task1 = monitor_task(task1_href)
+ task2 = monitor_task(task2_href)
+
+ search_results = tasks_api_client.list(worker__in=(task1.worker, task2.worker))
+
+ tasks_hrefs = [task.pulp_href for task in search_results.results]
+
+ assert task1_href in tasks_hrefs
+ assert task2_href in tasks_hrefs
| task filter `worker__in` does not accept hrefs
**Version**
main branch, probably everything back to at least 3.21
**Describe the bug**
The filter complains about the parameter not being a UUID.
**To Reproduce**
List tasks with a `worker__in` query string.
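For illustration, a minimal sketch of the request that should work once fixed, written with `requests`; the base URL, credentials, and worker UUIDs are placeholders, and django-filter's `BaseInFilter` expects the values as a comma-separated list:
```python
import requests

# Hypothetical deployment and worker hrefs -- substitute real values.
BASE_URL = "https://pulp.example.com"
worker_hrefs = [
    "/pulp/api/v3/workers/11111111-1111-4111-8111-111111111111/",
    "/pulp/api/v3/workers/22222222-2222-4222-8222-222222222222/",
]

# Before the fix this returned HTTP 400 ("UUID invalid") because each
# href was parsed as a bare UUID instead of being resolved to a worker.
response = requests.get(
    f"{BASE_URL}/pulp/api/v3/tasks/",
    params={"worker__in": ",".join(worker_hrefs)},
    auth=("admin", "password"),
)
print(response.status_code)
```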
| 2022-10-21T19:42:31 |
|
pulp/pulpcore | 3,375 | pulp__pulpcore-3375 | [
"3187",
"3187"
] | 72bfdcf6a7969ee8ffe3bce2ee4c96bd7bbe592c | diff --git a/pulpcore/app/tasks/export.py b/pulpcore/app/tasks/export.py
--- a/pulpcore/app/tasks/export.py
+++ b/pulpcore/app/tasks/export.py
@@ -2,6 +2,7 @@
import json
import logging
import os
+import os.path
import subprocess
import tarfile
@@ -66,6 +67,15 @@ def _export_to_file_system(path, relative_paths_to_artifacts, method=FS_EXPORT_M
and method != FS_EXPORT_METHODS.WRITE
):
raise RuntimeError(_("Only write is supported for non-filesystem storage."))
+ os.makedirs(path)
+ export_not_on_same_filesystem = (
+ settings.DEFAULT_FILE_STORAGE == "pulpcore.app.models.storage.FileSystem"
+ and os.stat(settings.MEDIA_ROOT).st_dev != os.stat(path).st_dev
+ )
+
+ if method == FS_EXPORT_METHODS.HARDLINK and export_not_on_same_filesystem:
+ log.info(_("Hard link cannot be created, file will be copied."))
+ method = FS_EXPORT_METHODS.WRITE
for relative_path, artifact in relative_paths_to_artifacts.items():
dest = os.path.join(path, relative_path)
@@ -73,9 +83,11 @@ def _export_to_file_system(path, relative_paths_to_artifacts, method=FS_EXPORT_M
if method == FS_EXPORT_METHODS.SYMLINK:
src = os.path.join(settings.MEDIA_ROOT, artifact.file.name)
+ os.path.lexists(dest) and os.unlink(dest)
os.symlink(src, dest)
elif method == FS_EXPORT_METHODS.HARDLINK:
src = os.path.join(settings.MEDIA_ROOT, artifact.file.name)
+ os.path.lexists(dest) and os.unlink(dest)
os.link(src, dest)
elif method == FS_EXPORT_METHODS.WRITE:
with open(dest, "wb") as f, artifact.file as af:
| FS Exporter should copy if it can't hard link
**Version**
Pulpcore 3.16 and above
**Describe the bug**
The FS Exporter works well when exporting with hardlinks, but it does not handle the case where linking fails because the pulp storage and the export directory are on different mounted volumes.
**To Reproduce**
Steps to reproduce the behavior:
- Setup an nfs or sshfs mount
- Add the mount directory to `ALLOWED_EXPORT_PATHS` in `/etc/pulp/settings.py`
- Export a repo to this mount point via fs exporter
**Actual Behavior**
```
Error: [Errno 18] Invalid cross-device link: '/var/lib/pulp/media/artifact/7a/831f9f90bf4d21027572cb503d20b702de8e8785b02c0397445c2e481d81b3' -> '/exports/repo/Packages/b/bear-4.1-1.noarch.rpm'
```
**Expected behavior**
If it can't hard link, it should instead fall back to making a copy, along the lines of -> https://github.com/pulp/pulp-2to3-migration/blob/main/pulp_2to3_migration/app/plugin/content.py#L163-L170
**Additional context**
It was discussed during the https://github.com/pulp/pulpcore/pull/2951 PR but got missed in that commit.
BZ: https://bugzilla.redhat.com/show_bug.cgi?id=2125444
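For reference, a minimal sketch (not the shipped implementation) of the link-or-copy fallback idea; the merged fix instead compares `os.stat(...).st_dev` of the media root and the export path up front and switches the whole export to the write method:
```python
import errno
import os
import shutil

def link_or_copy(src, dest):
    # Try the cheap hard link first; fall back to a real copy when the
    # source and destination live on different filesystems (EXDEV is
    # the "Invalid cross-device link" errno 18 from the traceback).
    try:
        os.link(src, dest)
    except OSError as exc:
        if exc.errno != errno.EXDEV:
            raise
        shutil.copy2(src, dest)
```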
| 2022-11-01T02:04:09 |
||
pulp/pulpcore | 3,376 | pulp__pulpcore-3376 | [
"3187",
"3187"
] | 10663c477d62ec821f042637201cacd0b562426a | diff --git a/pulpcore/app/tasks/export.py b/pulpcore/app/tasks/export.py
--- a/pulpcore/app/tasks/export.py
+++ b/pulpcore/app/tasks/export.py
@@ -2,6 +2,7 @@
import json
import logging
import os
+import os.path
import subprocess
import tarfile
@@ -66,6 +67,15 @@ def _export_to_file_system(path, relative_paths_to_artifacts, method=FS_EXPORT_M
and method != FS_EXPORT_METHODS.WRITE
):
raise RuntimeError(_("Only write is supported for non-filesystem storage."))
+ os.makedirs(path)
+ export_not_on_same_filesystem = (
+ settings.DEFAULT_FILE_STORAGE == "pulpcore.app.models.storage.FileSystem"
+ and os.stat(settings.MEDIA_ROOT).st_dev != os.stat(path).st_dev
+ )
+
+ if method == FS_EXPORT_METHODS.HARDLINK and export_not_on_same_filesystem:
+ log.info(_("Hard link cannot be created, file will be copied."))
+ method = FS_EXPORT_METHODS.WRITE
for relative_path, artifact in relative_paths_to_artifacts.items():
dest = os.path.join(path, relative_path)
@@ -73,9 +83,11 @@ def _export_to_file_system(path, relative_paths_to_artifacts, method=FS_EXPORT_M
if method == FS_EXPORT_METHODS.SYMLINK:
src = os.path.join(settings.MEDIA_ROOT, artifact.file.name)
+ os.path.lexists(dest) and os.unlink(dest)
os.symlink(src, dest)
elif method == FS_EXPORT_METHODS.HARDLINK:
src = os.path.join(settings.MEDIA_ROOT, artifact.file.name)
+ os.path.lexists(dest) and os.unlink(dest)
os.link(src, dest)
elif method == FS_EXPORT_METHODS.WRITE:
with open(dest, "wb") as f, artifact.file as af:
| FS Exporter should copy if it can't hard link
**Version**
Pulpcore 3.16 and above
**Describe the bug**
The FS Exporter works well when exporting with hardlinks, but it does not handle the case where linking fails because the pulp storage and the export directory are on different mounted volumes.
**To Reproduce**
Steps to reproduce the behavior:
- Setup an nfs or sshfs mount
- Add the mount directory to `ALLOWED_EXPORT_PATHS` in `/etc/pulp/settings.py`
- Export a repo to this mount point via fs exporter
**Actual Behavior**
```
Error: [Errno 18] Invalid cross-device link: '/var/lib/pulp/media/artifact/7a/831f9f90bf4d21027572cb503d20b702de8e8785b02c0397445c2e481d81b3' -> '/exports/repo/Packages/b/bear-4.1-1.noarch.rpm'
```
**Expected behavior**
If it can't hard link, it should instead fall back to making a copy, along the lines of -> https://github.com/pulp/pulp-2to3-migration/blob/main/pulp_2to3_migration/app/plugin/content.py#L163-L170
**Additional context**
It was discussed during the https://github.com/pulp/pulpcore/pull/2951 PR but got missed in that commit.
BZ: https://bugzilla.redhat.com/show_bug.cgi?id=2125444
| 2022-11-01T02:04:10 |
||
pulp/pulpcore | 3,377 | pulp__pulpcore-3377 | [
"3284"
] | 72bfdcf6a7969ee8ffe3bce2ee4c96bd7bbe592c | diff --git a/pulpcore/plugin/stages/content_stages.py b/pulpcore/plugin/stages/content_stages.py
--- a/pulpcore/plugin/stages/content_stages.py
+++ b/pulpcore/plugin/stages/content_stages.py
@@ -148,53 +148,51 @@ def process_batch():
)
key = (d_content.content.pk, d_artifact.relative_path)
to_update_ca_artifact[key] = d_artifact.artifact
+
# Query db once and update each object in memory for bulk_update call
for content_artifact in to_update_ca_query.iterator():
key = (content_artifact.content_id, content_artifact.relative_path)
- # Maybe remove dict elements after to reduce memory?
- content_artifact.artifact = to_update_ca_artifact[key]
- to_update_ca_bulk.append(content_artifact)
+ # Same content/relpath/artifact-sha means no change to the
+ # contentartifact, ignore. This prevents us from colliding with any
+ # concurrent syncs with overlapping identical content. "Someone" updated
+ # the contentartifacts to match what we would be doing, so we don't need
+ # to do an (unnecessary) db-update, which was opening us up for a variety
+ # of potential deadlock scenarios.
+ #
+ # We start knowing that we're comparing CAs with same content/rel-path,
+ # because that's what we're using for the key to look up the incoming CA.
+ # So now let's compare artifacts, incoming vs current.
+ #
+ # Are we changing from no-artifact to having one or vice-versa?
+ artifact_state_change = bool(content_artifact.artifact) ^ bool(
+ to_update_ca_artifact[key]
+ )
+ # Do both current and incoming have an artifact?
+ both_have_artifact = (
+ content_artifact.artifact and to_update_ca_artifact[key]
+ )
+ # If both sides have an artifact, do they have the same sha256?
+ same_artifact_hash = both_have_artifact and (
+ content_artifact.artifact.sha256 == to_update_ca_artifact[key].sha256
+ )
+ # Only update if there was an actual change
+ if artifact_state_change or (both_have_artifact and not same_artifact_hash):
+ content_artifact.artifact = to_update_ca_artifact[key]
+ to_update_ca_bulk.append(content_artifact)
# to_update_ca_bulk are the CAs that we know are already persisted.
# We need to update their artifact_ids, and wish to do it in bulk to
# avoid hundreds of round-trips to the database.
- #
- # To avoid deadlocks in high-concurrency environments with overlapping
- # content, we need to update the rows in some defined order. Unfortunately,
- # postgres doesn't support order-on-update - but it *does* support ordering
- # on select-for-update. So, we select-for-update, in pulp_id order, the
- # rows we're about to update as one db-call, and then do the update in a
- # second.
- #
- # NOTE: select-for-update requires being in an atomic-transaction. We are
- # **already in an atomic transaction** at this point as a result of the
- # "with transaction.atomic():", above.
- ids = [k.pulp_id for k in to_update_ca_bulk]
- # "len()" forces the QuerySet to be evaluated. Using exist() or count() won't
- # work for us - Django is smart enough to either not-order, or even
- # not-emit, a select-for-update in these cases.
- #
- # To maximize performance, we make sure to only ask for pulp_ids, and
- # avoid instantiating a python-object for the affected CAs by using
- # values_list()
- subq = (
- ContentArtifact.objects.filter(pulp_id__in=ids)
- .only("pulp_id")
- .order_by("pulp_id")
- .select_for_update()
- )
- # NOTE: it might look like you can "safely" make this request
- # "more efficient". You'd be wrong, and would only be removing an
- # ordering-guardrail preventing deadlock. Don't touch.
- len(ContentArtifact.objects.filter(pk__in=subq).values_list())
- ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
+ if to_update_ca_bulk:
+ ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
- # To avoid a similar deadlock issue when calling get_or_create, we sort the
+ # To avoid a deadlock issue when calling get_or_create, we sort the
# "new" CAs to make sure inserts happen in a defined order. Since we can't
# trust the pulp_id (by the time we go to create a CA, it may already exist,
# and be replaced by the 'real' one), we sort by their "natural key".
content_artifact_bulk.sort(key=lambda x: ContentArtifact.sort_key(x))
ContentArtifact.objects.bulk_get_or_create(content_artifact_bulk)
+
self._post_save(batch)
await sync_to_async(process_batch)()
| contentartifact bulk_update can still deadlock
**Version**
all
**Describe the bug**
There have been several attempts to address deadlocks that occur under high-load, high-concurrency sync operations with overlapping content. The deadlock stems from ContentArtifact being deduplicated across repositories that contain the same Contents. At this point, the remaining issue arises when parties to the deadlock are attempting to bulk_update the 'artifact' field of any existing (already-persisted) ContentArtifacts for the Content-batch they are syncing.
**To Reproduce**
Here is a script using pulp-cli that can reproduce the behavior at will. It requires use of the Very Large pulp-file performance fixture. It assumes you have the 'jq' package available.
```
#!/bin/bash
URLS=(\
https://fixtures.pulpproject.org/file-perf/PULP_MANIFEST \
)
NAMES=(\
file-perf \
)
# Make sure we're concurent-enough
num_workers=`sudo systemctl status pulpcore-worker* | grep "service - Pulp Worker" | wc -l`
echo "Current num-workers ${num_workers}"
if [ ${num_workers} -lt 10 ]
then
for (( i=${num_workers}+1; i<=10; i++ ))
do
echo "Starting worker ${i}"
sudo systemctl start pulpcore-worker@${i}
done
fi
echo "CLEANUP"
for n in ${!NAMES[@]}
do
for i in {1..9}
do
pulp file remote destroy --name ${NAMES[$n]}-${i}
pulp file repository destroy --name ${NAMES[$n]}-${i}
done
done
pulp orphan cleanup --protection-time 0
echo "SETUP URLS AND REMOTES"
for n in ${!NAMES[@]}
do
for i in {1..9}
do
pulp file remote create --name ${NAMES[$n]}-${i} --url ${URLS[$n]} | jq .pulp_href
pulp file repository create --name ${NAMES[$n]}-${i} --remote ${NAMES[$n]}-${i} | jq .pulp_href
done
done
starting_failed=`pulp task list --limit 10000 --state failed | jq length`
echo "SYNCING..."
for i in {1..9}
do
for n in ${!NAMES[@]}
do
pulp -b file repository sync --name ${NAMES[$n]}-${i}
done
done
sleep 5
echo "WAIT FOR COMPLETION...."
while true
do
running=`pulp task list --limit 10000 --state running | jq length`
echo -n "."
sleep 5
if [ ${running} -eq 0 ]
then
echo "DONE"
break
fi
done
failed=`pulp task list --limit 10000 --state failed | jq length`
echo "FAILURES : ${failed}"
if [ ${failed} -gt ${starting_failed} ]
then
echo "FAILED: " ${failed} - ${starting_failed}
exit
fi
```
**Expected behavior**
All syncs should happen without deadlocks.
**Additional context**
This is the latest step in fixing the codepath that previous issues attempted to address: #3192 #3111 #2430
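The fix above boils down to skipping no-op updates. A minimal sketch of that change-detection rule, with `current` and `incoming` standing in for the stored and freshly-declared artifact of one ContentArtifact (either may be None):
```python
def artifact_changed(current, incoming):
    # One side has an artifact and the other does not.
    state_change = bool(current) ^ bool(incoming)
    # Both sides have one, but they point at different files.
    both = current is not None and incoming is not None
    different_hash = both and current.sha256 != incoming.sha256
    return state_change or different_hash
```
Rows for which this returns False are left out of `bulk_update()`, so concurrent syncs of identical content no longer contend for the same ContentArtifact rows.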
| 2022-11-01T02:05:51 |
||
pulp/pulpcore | 3,379 | pulp__pulpcore-3379 | [
"3370"
] | 2054f5022c24d0a4bfc4515d4967d64fff2b109d | diff --git a/pulpcore/app/viewsets/exporter.py b/pulpcore/app/viewsets/exporter.py
--- a/pulpcore/app/viewsets/exporter.py
+++ b/pulpcore/app/viewsets/exporter.py
@@ -146,6 +146,7 @@ def create(self, request, exporter_pk):
task = dispatch(
pulp_export,
exclusive_resources=[exporter],
+ shared_resources=exporter.repositories.all(),
kwargs={"exporter_pk": str(exporter.pk), "params": request.data},
)
| Export is not locking on the exported repositories
SSIA (subject says it all).
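For context, a sketch of the dispatch call after the fix, lifted from the patch above, with `exporter` and `request` as bound in the viewset's create() method (the import paths are assumptions and vary across pulpcore releases):
```python
from pulpcore.app.tasks.export import pulp_export  # assumed import path
from pulpcore.tasking.tasks import dispatch  # assumed import path

# The exporter itself is locked exclusively, while the exported
# repositories are locked in shared mode: exports can run alongside
# other readers, but a task writing to one of those repositories now
# has to wait for the export (and vice versa).
task = dispatch(
    pulp_export,
    exclusive_resources=[exporter],
    shared_resources=exporter.repositories.all(),
    kwargs={"exporter_pk": str(exporter.pk), "params": request.data},
)
```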
| 2022-11-02T17:17:43 |
||
pulp/pulpcore | 3,381 | pulp__pulpcore-3381 | [
"3370"
] | bf15d4b7279339704ed5b7b22414e2363e3d6b44 | diff --git a/pulpcore/app/viewsets/exporter.py b/pulpcore/app/viewsets/exporter.py
--- a/pulpcore/app/viewsets/exporter.py
+++ b/pulpcore/app/viewsets/exporter.py
@@ -146,6 +146,7 @@ def create(self, request, exporter_pk):
task = dispatch(
pulp_export,
exclusive_resources=[exporter],
+ shared_resources=exporter.repositories.all(),
kwargs={"exporter_pk": str(exporter.pk), "params": request.data},
)
| Export is not locking on the exported repositories
SSIA (subject says it all).
| 2022-11-03T09:15:15 |
||
pulp/pulpcore | 3,382 | pulp__pulpcore-3382 | [
"3370"
] | 3d7058c7b0a966a301de54aa50d3de6ab7466856 | diff --git a/pulpcore/app/viewsets/exporter.py b/pulpcore/app/viewsets/exporter.py
--- a/pulpcore/app/viewsets/exporter.py
+++ b/pulpcore/app/viewsets/exporter.py
@@ -146,6 +146,7 @@ def create(self, request, exporter_pk):
task = dispatch(
pulp_export,
exclusive_resources=[exporter],
+ shared_resources=exporter.repositories.all(),
kwargs={"exporter_pk": str(exporter.pk), "params": request.data},
)
| Export is not locking on the exported repositories
SSIA (subject says it all).
| 2022-11-03T09:15:37 |
||
pulp/pulpcore | 3,383 | pulp__pulpcore-3383 | [
"3370"
] | 9d26a0488a01cf91cd14f8664babaf28ef321204 | diff --git a/pulpcore/app/viewsets/exporter.py b/pulpcore/app/viewsets/exporter.py
--- a/pulpcore/app/viewsets/exporter.py
+++ b/pulpcore/app/viewsets/exporter.py
@@ -146,6 +146,7 @@ def create(self, request, exporter_pk):
task = dispatch(
pulp_export,
exclusive_resources=[exporter],
+ shared_resources=exporter.repositories.all(),
kwargs={"exporter_pk": str(exporter.pk), "params": request.data},
)
| Export is not locking on the exported repositories
SSIA (subject says it all).
| 2022-11-03T09:16:02 |
||
pulp/pulpcore | 3,392 | pulp__pulpcore-3392 | [
"3391"
] | 993a9247f0c065c996f1c4fe941e40de4e946b78 | diff --git a/pulpcore/app/apps.py b/pulpcore/app/apps.py
--- a/pulpcore/app/apps.py
+++ b/pulpcore/app/apps.py
@@ -250,9 +250,10 @@ def _populate_access_policies(sender, apps, verbosity, **kwargs):
viewset_name=viewset_name
)
)
- if not created and not db_access_policy.customized:
+ elif not db_access_policy.customized:
dirty = False
- for key, value in access_policy.items():
+ for key in ["statements", "creation_hooks", "queryset_scoping"]:
+ value = access_policy.get(key)
if getattr(db_access_policy, key, None) != value:
setattr(db_access_policy, key, value)
dirty = True
| After pulp-container upgrade 2.9 --> main, push to new repos fails
**Describe the bug**
**To Reproduce**
1. install 2.9
2. push repo ipanova/busybox
3. upgrade to main
4. push new content to ipanova/busybox. Succeed
5. push to new repo ipanova/test
6. fail with 500
**Push operations fail both with the ipanova user (who has the appropriate perms) and with admin**
For some reason the traceback is swallowed; I could extract it only from the remote debugger:
`Out[2]: rest_framework.exceptions.APIException("Creation hook 'add_perms_to_distribution_group' was not registered for this view set.")`
**Additional context**
```python
$ http https://PULP3-SOURCE-FEDORA36.puffy.example.com/pulp/api/v3/access_policies/?viewset_name=repositories/container/container-push --auth admin:password
HTTP/1.1 200 OK
Access-Control-Expose-Headers: Correlation-ID
Allow: GET, HEAD, OPTIONS
Connection: keep-alive
Content-Length: 2722
Content-Type: application/json
Correlation-ID: f71cf0881d7d47c4885d248d7fe3ef82
Date: Mon, 31 Oct 2022 18:00:46 GMT
Referrer-Policy: same-origin
Server: nginx
Vary: Accept, Cookie
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
{
"count": 1,
"next": null,
"previous": null,
"results": [
{
"creation_hooks": [
{
"function": "add_perms_to_distribution_group",
"parameters": {
"add_user_to_group": true,
"group_type": "owners"
},
"permissions": [
"container.view_containerpushrepository",
"container.modify_content_containerpushrepository",
"container.change_containerpushrepository"
]
},
{
"function": "add_perms_to_distribution_group",
"parameters": {
"add_user_to_group": false,
"group_type": "collaborators"
},
"permissions": [
"container.view_containerpushrepository",
"container.modify_content_containerpushrepository",
"container.change_containerpushrepository"
]
},
{
"function": "add_perms_to_distribution_group",
"parameters": {
"add_user_to_group": false,
"group_type": "consumers"
},
"permissions": [
"container.view_containerpushrepository"
]
}
],
"customized": false,
"permissions_assignment": [
{
"function": "add_perms_to_distribution_group",
"parameters": {
"add_user_to_group": true,
"group_type": "owners"
},
"permissions": [
"container.view_containerpushrepository",
"container.modify_content_containerpushrepository",
"container.change_containerpushrepository"
]
},
{
"function": "add_perms_to_distribution_group",
"parameters": {
"add_user_to_group": false,
"group_type": "collaborators"
},
"permissions": [
"container.view_containerpushrepository",
"container.modify_content_containerpushrepository",
"container.change_containerpushrepository"
]
},
{
"function": "add_perms_to_distribution_group",
"parameters": {
"add_user_to_group": false,
"group_type": "consumers"
},
"permissions": [
"container.view_containerpushrepository"
]
}
],
"pulp_created": "2022-10-31T17:21:58.412317Z",
"pulp_href": "/pulp/api/v3/access_policies/219b492a-1334-4f68-b0a7-2244beb205ac/",
"queryset_scoping": {
"function": "get_push_repos_qs",
"parameters": {
"dist_perm": "container.view_containerdistribution",
"ns_perm": "container.view_containernamespace"
}
},
"statements": [
{
"action": [
"list"
],
"effect": "allow",
"principal": "authenticated"
},
{
"action": [
"retrieve"
],
"condition_expression": [
"has_namespace_obj_perms:container.namespace_view_containerpush_repository or has_distribution_perms:container.view_containerdistribution"
],
"effect": "allow",
"principal": "authenticated"
},
{
"action": [
"tag",
"untag",
"remove_image",
"sign",
"remove_signatures"
],
"condition_expression": [
"has_namespace_obj_perms:container.namespace_modify_content_containerpushrepository or has_distribution_perms:container.modify_content_containerpushrepository"
],
"effect": "allow",
"principal": "authenticated"
},
{
"action": [
"update",
"partial_update"
],
"condition_expression": [
"has_namespace_obj_perms:container.namespace_change_containerpushrepository or has_distribution_perms:container.change_containerdistribution"
],
"effect": "allow",
"principal": "authenticated"
}
],
"viewset_name": "repositories/container/container-push"
}
]
}
```
| Here are the steps that can be taken via the API (also available via the CLI) for deployments that have an uncustomized access policy.
1. Find the href for the uncustomized container push viewset access policy
```python
http https://PULP3-SOURCE-FEDORA36.puffy.example.com/pulp/api/v3/access_policies/?viewset_name=repositories/container/container-push --auth admin:password | jq -r '.results[] | .customized'
false
http https://PULP3-SOURCE-FEDORA36.puffy.example.com/pulp/api/v3/access_policies/?viewset_name=repositories/container/container-push --auth admin:password | jq -r '.results[] | .pulp_href'
/pulp/api/v3/access_policies/219b492a-1334-4f68-b0a7-2244beb205ac/
```
2. Reset the access policy
```python
http POST https://PULP3-SOURCE-FEDORA36.puffy.example.com/pulp/api/v3/access_policies/219b492a-1334-4f68-b0a7-2244beb205ac/reset/ --auth admin:password
```
3. Restart services
4. Re-try the push operation; it should succeed.
There's a shortcut for resetting an access policy in the cli:
`pulp access-policy reset --help`. | 2022-11-10T09:59:20 |
|
pulp/pulpcore | 3,397 | pulp__pulpcore-3397 | [
"3396"
] | 0099278c82d912aaed73eed78c924eb6c76cd817 | diff --git a/pulpcore/app/protobuf/analytics_pb2.py b/pulpcore/app/protobuf/analytics_pb2.py
new file mode 100644
--- /dev/null
+++ b/pulpcore/app/protobuf/analytics_pb2.py
@@ -0,0 +1,77 @@
+# -*- coding: utf-8 -*-
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: analytics.proto
+"""Generated protocol buffer code."""
+from google.protobuf import descriptor as _descriptor
+from google.protobuf import descriptor_pool as _descriptor_pool
+from google.protobuf import message as _message
+from google.protobuf import reflection as _reflection
+from google.protobuf import symbol_database as _symbol_database
+
+# @@protoc_insertion_point(imports)
+
+_sym_db = _symbol_database.Default()
+
+
+DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(
+ b'\n\x0f\x61nalytics.proto"\xe7\x02\n\tAnalytics\x12\x11\n\tsystem_id\x18\x01 \x02(\t\x12\x39\n\x13online_content_apps\x18\x02 \x01(\x0b\x32\x1c.Analytics.OnlineContentApps\x12\x30\n\x0eonline_workers\x18\x03 \x01(\x0b\x32\x18.Analytics.OnlineWorkers\x12(\n\ncomponents\x18\x04 \x03(\x0b\x32\x14.Analytics.Component\x12\x1a\n\x12postgresql_version\x18\x05 \x01(\r\x1a\x35\n\x11OnlineContentApps\x12\x11\n\tprocesses\x18\x01 \x01(\r\x12\r\n\x05hosts\x18\x02 \x01(\r\x1a\x31\n\rOnlineWorkers\x12\x11\n\tprocesses\x18\x01 \x01(\r\x12\r\n\x05hosts\x18\x02 \x01(\r\x1a*\n\tComponent\x12\x0c\n\x04name\x18\x01 \x02(\t\x12\x0f\n\x07version\x18\x02 \x02(\t'
+)
+
+
+_ANALYTICS = DESCRIPTOR.message_types_by_name["Analytics"]
+_ANALYTICS_ONLINECONTENTAPPS = _ANALYTICS.nested_types_by_name["OnlineContentApps"]
+_ANALYTICS_ONLINEWORKERS = _ANALYTICS.nested_types_by_name["OnlineWorkers"]
+_ANALYTICS_COMPONENT = _ANALYTICS.nested_types_by_name["Component"]
+Analytics = _reflection.GeneratedProtocolMessageType(
+ "Analytics",
+ (_message.Message,),
+ {
+ "OnlineContentApps": _reflection.GeneratedProtocolMessageType(
+ "OnlineContentApps",
+ (_message.Message,),
+ {
+ "DESCRIPTOR": _ANALYTICS_ONLINECONTENTAPPS,
+ "__module__": "analytics_pb2"
+ # @@protoc_insertion_point(class_scope:Analytics.OnlineContentApps)
+ },
+ ),
+ "OnlineWorkers": _reflection.GeneratedProtocolMessageType(
+ "OnlineWorkers",
+ (_message.Message,),
+ {
+ "DESCRIPTOR": _ANALYTICS_ONLINEWORKERS,
+ "__module__": "analytics_pb2"
+ # @@protoc_insertion_point(class_scope:Analytics.OnlineWorkers)
+ },
+ ),
+ "Component": _reflection.GeneratedProtocolMessageType(
+ "Component",
+ (_message.Message,),
+ {
+ "DESCRIPTOR": _ANALYTICS_COMPONENT,
+ "__module__": "analytics_pb2"
+ # @@protoc_insertion_point(class_scope:Analytics.Component)
+ },
+ ),
+ "DESCRIPTOR": _ANALYTICS,
+ "__module__": "analytics_pb2"
+ # @@protoc_insertion_point(class_scope:Analytics)
+ },
+)
+_sym_db.RegisterMessage(Analytics)
+_sym_db.RegisterMessage(Analytics.OnlineContentApps)
+_sym_db.RegisterMessage(Analytics.OnlineWorkers)
+_sym_db.RegisterMessage(Analytics.Component)
+
+if _descriptor._USE_C_DESCRIPTORS == False:
+
+ DESCRIPTOR._options = None
+ _ANALYTICS._serialized_start = 20
+ _ANALYTICS._serialized_end = 379
+ _ANALYTICS_ONLINECONTENTAPPS._serialized_start = 231
+ _ANALYTICS_ONLINECONTENTAPPS._serialized_end = 284
+ _ANALYTICS_ONLINEWORKERS._serialized_start = 286
+ _ANALYTICS_ONLINEWORKERS._serialized_end = 335
+ _ANALYTICS_COMPONENT._serialized_start = 337
+ _ANALYTICS_COMPONENT._serialized_end = 379
+# @@protoc_insertion_point(module_scope)
diff --git a/pulpcore/app/protobuf/telemetry_pb2.py b/pulpcore/app/protobuf/telemetry_pb2.py
deleted file mode 100644
--- a/pulpcore/app/protobuf/telemetry_pb2.py
+++ /dev/null
@@ -1,67 +0,0 @@
-# -*- coding: utf-8 -*-
-# Generated by the protocol buffer compiler. DO NOT EDIT!
-# source: telemetry.proto
-"""Generated protocol buffer code."""
-from google.protobuf import descriptor as _descriptor
-from google.protobuf import descriptor_pool as _descriptor_pool
-from google.protobuf import message as _message
-from google.protobuf import reflection as _reflection
-from google.protobuf import symbol_database as _symbol_database
-# @@protoc_insertion_point(imports)
-
-_sym_db = _symbol_database.Default()
-
-
-
-
-DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x0ftelemetry.proto\"\xcb\x02\n\tTelemetry\x12\x11\n\tsystem_id\x18\x01 \x02(\t\x12\x39\n\x13online_content_apps\x18\x02 \x01(\x0b\x32\x1c.Telemetry.OnlineContentApps\x12\x30\n\x0eonline_workers\x18\x03 \x01(\x0b\x32\x18.Telemetry.OnlineWorkers\x12(\n\ncomponents\x18\x04 \x03(\x0b\x32\x14.Telemetry.Component\x1a\x35\n\x11OnlineContentApps\x12\x11\n\tprocesses\x18\x01 \x01(\r\x12\r\n\x05hosts\x18\x02 \x01(\r\x1a\x31\n\rOnlineWorkers\x12\x11\n\tprocesses\x18\x01 \x01(\r\x12\r\n\x05hosts\x18\x02 \x01(\r\x1a*\n\tComponent\x12\x0c\n\x04name\x18\x01 \x02(\t\x12\x0f\n\x07version\x18\x02 \x02(\t')
-
-
-
-_TELEMETRY = DESCRIPTOR.message_types_by_name['Telemetry']
-_TELEMETRY_ONLINECONTENTAPPS = _TELEMETRY.nested_types_by_name['OnlineContentApps']
-_TELEMETRY_ONLINEWORKERS = _TELEMETRY.nested_types_by_name['OnlineWorkers']
-_TELEMETRY_COMPONENT = _TELEMETRY.nested_types_by_name['Component']
-Telemetry = _reflection.GeneratedProtocolMessageType('Telemetry', (_message.Message,), {
-
- 'OnlineContentApps' : _reflection.GeneratedProtocolMessageType('OnlineContentApps', (_message.Message,), {
- 'DESCRIPTOR' : _TELEMETRY_ONLINECONTENTAPPS,
- '__module__' : 'telemetry_pb2'
- # @@protoc_insertion_point(class_scope:Telemetry.OnlineContentApps)
- })
- ,
-
- 'OnlineWorkers' : _reflection.GeneratedProtocolMessageType('OnlineWorkers', (_message.Message,), {
- 'DESCRIPTOR' : _TELEMETRY_ONLINEWORKERS,
- '__module__' : 'telemetry_pb2'
- # @@protoc_insertion_point(class_scope:Telemetry.OnlineWorkers)
- })
- ,
-
- 'Component' : _reflection.GeneratedProtocolMessageType('Component', (_message.Message,), {
- 'DESCRIPTOR' : _TELEMETRY_COMPONENT,
- '__module__' : 'telemetry_pb2'
- # @@protoc_insertion_point(class_scope:Telemetry.Component)
- })
- ,
- 'DESCRIPTOR' : _TELEMETRY,
- '__module__' : 'telemetry_pb2'
- # @@protoc_insertion_point(class_scope:Telemetry)
- })
-_sym_db.RegisterMessage(Telemetry)
-_sym_db.RegisterMessage(Telemetry.OnlineContentApps)
-_sym_db.RegisterMessage(Telemetry.OnlineWorkers)
-_sym_db.RegisterMessage(Telemetry.Component)
-
-if _descriptor._USE_C_DESCRIPTORS == False:
-
- DESCRIPTOR._options = None
- _TELEMETRY._serialized_start=20
- _TELEMETRY._serialized_end=351
- _TELEMETRY_ONLINECONTENTAPPS._serialized_start=203
- _TELEMETRY_ONLINECONTENTAPPS._serialized_end=256
- _TELEMETRY_ONLINEWORKERS._serialized_start=258
- _TELEMETRY_ONLINEWORKERS._serialized_end=307
- _TELEMETRY_COMPONENT._serialized_start=309
- _TELEMETRY_COMPONENT._serialized_end=351
-# @@protoc_insertion_point(module_scope)
diff --git a/pulpcore/app/tasks/telemetry.py b/pulpcore/app/tasks/telemetry.py
--- a/pulpcore/app/tasks/telemetry.py
+++ b/pulpcore/app/tasks/telemetry.py
@@ -6,13 +6,14 @@
import async_timeout
from asgiref.sync import sync_to_async
+from django.db import connection
from google.protobuf.json_format import MessageToJson
from pulpcore.app.apps import pulp_plugin_configs
from pulpcore.app.models import SystemID
from pulpcore.app.models.status import ContentAppStatus
from pulpcore.app.models.task import Worker
-from pulpcore.app.protobuf.telemetry_pb2 import Telemetry
+from pulpcore.app.protobuf.analytics_pb2 import Analytics
logger = logging.getLogger(__name__)
@@ -22,15 +23,15 @@
DEV_URL = "https://dev.analytics.pulpproject.org/"
-def get_telemetry_posting_url():
+def get_analytics_posting_url():
"""
- Return either the dev or production telemetry FQDN url.
+ Return either the dev or production analytics FQDN url.
Production version string examples: ["3.21.1", "1.11.0"]
Developer version string example: ["3.20.3.dev", "2.0.0a6"]
Returns:
- The FQDN string of either the dev or production telemetry site.
+ The FQDN string of either the dev or production analytics site.
"""
for app in pulp_plugin_configs():
if not app.version.count(".") == 2: # Only two periods allowed in prod version strings
@@ -44,6 +45,14 @@ def get_telemetry_posting_url():
return PRODUCTION_URL
+def _get_postgresql_version_string():
+ return connection.cursor().connection.server_version
+
+
+async def _postgresql_version(analytics):
+ analytics.postgresql_version = await sync_to_async(_get_postgresql_version_string)()
+
+
async def _num_hosts(qs):
hosts = set()
items = await sync_to_async(list)(qs.all())
@@ -52,40 +61,41 @@ async def _num_hosts(qs):
return len(hosts)
-async def _versions_data(telemetry):
+async def _versions_data(analytics):
for app in pulp_plugin_configs():
- new_component = telemetry.components.add()
+ new_component = analytics.components.add()
new_component.name = app.label
new_component.version = app.version
-async def _online_content_apps_data(telemetry):
+async def _online_content_apps_data(analytics):
online_content_apps_qs = ContentAppStatus.objects.online()
- telemetry.online_content_apps.processes = await sync_to_async(online_content_apps_qs.count)()
- telemetry.online_content_apps.hosts = await _num_hosts(online_content_apps_qs)
+ analytics.online_content_apps.processes = await sync_to_async(online_content_apps_qs.count)()
+ analytics.online_content_apps.hosts = await _num_hosts(online_content_apps_qs)
-async def _online_workers_data(telemetry):
+async def _online_workers_data(analytics):
online_workers_qs = Worker.objects.online_workers()
- telemetry.online_workers.processes = await sync_to_async(online_workers_qs.count)()
- telemetry.online_workers.hosts = await _num_hosts(online_workers_qs)
+ analytics.online_workers.processes = await sync_to_async(online_workers_qs.count)()
+ analytics.online_workers.hosts = await _num_hosts(online_workers_qs)
-async def _system_id(telemetry):
+async def _system_id(analytics):
system_id_obj = await sync_to_async(SystemID.objects.get)()
- telemetry.system_id = str(system_id_obj.pk)
+ analytics.system_id = str(system_id_obj.pk)
async def post_telemetry():
- url = get_telemetry_posting_url()
+ url = get_analytics_posting_url()
- telemetry = Telemetry()
+ analytics = Analytics()
awaitables = (
- _system_id(telemetry),
- _versions_data(telemetry),
- _online_content_apps_data(telemetry),
- _online_workers_data(telemetry),
+ _system_id(analytics),
+ _versions_data(analytics),
+ _online_content_apps_data(analytics),
+ _online_workers_data(analytics),
+ _postgresql_version(analytics),
)
await asyncio.gather(*awaitables)
@@ -93,20 +103,20 @@ async def post_telemetry():
try:
async with aiohttp.ClientSession() as session:
async with async_timeout.timeout(300):
- async with session.post(url, data=telemetry.SerializeToString()) as resp:
+ async with session.post(url, data=analytics.SerializeToString()) as resp:
if resp.status == 200:
logger.info(
- ("Submitted telemetry to %s. " "Information submitted includes %s"),
+ ("Submitted analytics to %s. " "Information submitted includes %s"),
url,
- json.loads(MessageToJson(telemetry)),
+ json.loads(MessageToJson(analytics)),
)
else:
logger.warning(
- "Sending telemetry failed with statuscode %s from %s",
+ "Sending analytics failed with statuscode %s from %s",
resp.status,
url,
)
except asyncio.TimeoutError:
- logger.error("Timed out while sending telemetry to %s", url)
+ logger.error("Timed out while sending analytics to %s", url)
except aiohttp.ClientError as err:
- logger.error("Error sending telemetry to %s: %r", url, err)
+ logger.error("Error sending analytics to %s: %r", url, err)
| Gather postgresql version metric
As a pulp user, have the postgresql server version gathered, consistent with [this proposal](https://hackmd.io/zJ1dJe8qQtmzr0JiM1jptw).
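For illustration, the value the patch reads comes from the DB-API connection Django wraps; with psycopg2 the `server_version` attribute is an integer encoding (e.g. 140005 for PostgreSQL 14.5). A minimal sketch, assuming a configured Django environment:
```python
from django.db import connection

# Opening a cursor forces a connection; server_version lives on the
# underlying psycopg2 connection object.
with connection.cursor() as cursor:
    print(cursor.connection.server_version)  # e.g. 140005
```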
| 2022-11-11T15:01:42 |
||
pulp/pulpcore | 3,399 | pulp__pulpcore-3399 | [
"3398"
] | 993a9247f0c065c996f1c4fe941e40de4e946b78 | diff --git a/pulpcore/openapi/__init__.py b/pulpcore/openapi/__init__.py
--- a/pulpcore/openapi/__init__.py
+++ b/pulpcore/openapi/__init__.py
@@ -428,20 +428,20 @@ def parse(self, input_request, public):
# Adding query parameters
if "parameters" in operation and schema.method.lower() == "get":
- fields_paramenter = build_parameter_type(
+ fields_parameter = build_parameter_type(
name="fields",
- schema={"type": "string"},
+ schema={"type": "array", "items": {"type": "string"}},
location=OpenApiParameter.QUERY,
description="A list of fields to include in the response.",
)
- operation["parameters"].append(fields_paramenter)
- not_fields_paramenter = build_parameter_type(
+ operation["parameters"].append(fields_parameter)
+ exclude_fields_parameter = build_parameter_type(
name="exclude_fields",
- schema={"type": "string"},
+ schema={"type": "array", "items": {"type": "string"}},
location=OpenApiParameter.QUERY,
description="A list of fields to exclude from the response.",
)
- operation["parameters"].append(not_fields_paramenter)
+ operation["parameters"].append(exclude_fields_parameter)
# Normalise path for any provided mount url.
if path.startswith("/"):
| fields and exclude_fields querystring parameters should specify array-of-strings in openapi
`pulp debug openapi operation --id tasks_read | jq '.operation.parameters[]|select(.name=="fields")'`
current behaviour:
```json
{
"in": "query",
"name": "fields",
"schema": {
"type": "string"
},
"description": "A list of fields to include in the response."
}
```
expected behaviour:
```json
{
"in": "query",
"name": "fields",
"schema": {
"type": "array",
"items": {
"type": "string"
}
},
"description": "A list of fields to include in the response."
}
```
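For reference, the expected schema corresponds directly to the `build_parameter_type` call in the patch; a standalone sketch using the same drf-spectacular helpers pulpcore imports:
```python
from drf_spectacular.plumbing import build_parameter_type
from drf_spectacular.utils import OpenApiParameter

# Declares ?fields=... as an array-of-strings query parameter, so
# generated clients accept a list instead of one opaque string.
fields_parameter = build_parameter_type(
    name="fields",
    schema={"type": "array", "items": {"type": "string"}},
    location=OpenApiParameter.QUERY,
    description="A list of fields to include in the response.",
)
print(fields_parameter)
```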
| 2022-11-14T09:41:59 |
||
pulp/pulpcore | 3,402 | pulp__pulpcore-3402 | [
"3391"
] | c2f77824efaad59ca20c85e8919944033d4d9fba | diff --git a/pulpcore/app/apps.py b/pulpcore/app/apps.py
--- a/pulpcore/app/apps.py
+++ b/pulpcore/app/apps.py
@@ -241,9 +241,10 @@ def _populate_access_policies(sender, apps, verbosity, **kwargs):
viewset_name=viewset_name
)
)
- if not created and not db_access_policy.customized:
+ elif not db_access_policy.customized:
dirty = False
- for key, value in access_policy.items():
+ for key in ["statements", "creation_hooks", "queryset_scoping"]:
+ value = access_policy.get(key)
if getattr(db_access_policy, key, None) != value:
setattr(db_access_policy, key, value)
dirty = True
| After pulp-container upgrade 2.9 --> main, push to new repos fails
**Describe the bug**
**To Reproduce**
1. install 2.9
2. push repo ipanova/busybox
3. upgrade to main
4. push new content to ipanova/busybox. Succeed
5. push to new repo ipanova/test
6. fail with 500
**Push operations fail both with the ipanova user (who has the appropriate perms) and with admin**
For some reason the traceback is swallowed; I could extract it only from the remote debugger:
`Out[2]: rest_framework.exceptions.APIException("Creation hook 'add_perms_to_distribution_group' was not registered for this view set.")`
**Additional context**
```python
$ http https://PULP3-SOURCE-FEDORA36.puffy.example.com/pulp/api/v3/access_policies/?viewset_name=repositories/container/container-push --auth admin:password
HTTP/1.1 200 OK
Access-Control-Expose-Headers: Correlation-ID
Allow: GET, HEAD, OPTIONS
Connection: keep-alive
Content-Length: 2722
Content-Type: application/json
Correlation-ID: f71cf0881d7d47c4885d248d7fe3ef82
Date: Mon, 31 Oct 2022 18:00:46 GMT
Referrer-Policy: same-origin
Server: nginx
Vary: Accept, Cookie
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
{
"count": 1,
"next": null,
"previous": null,
"results": [
{
"creation_hooks": [
{
"function": "add_perms_to_distribution_group",
"parameters": {
"add_user_to_group": true,
"group_type": "owners"
},
"permissions": [
"container.view_containerpushrepository",
"container.modify_content_containerpushrepository",
"container.change_containerpushrepository"
]
},
{
"function": "add_perms_to_distribution_group",
"parameters": {
"add_user_to_group": false,
"group_type": "collaborators"
},
"permissions": [
"container.view_containerpushrepository",
"container.modify_content_containerpushrepository",
"container.change_containerpushrepository"
]
},
{
"function": "add_perms_to_distribution_group",
"parameters": {
"add_user_to_group": false,
"group_type": "consumers"
},
"permissions": [
"container.view_containerpushrepository"
]
}
],
"customized": false,
"permissions_assignment": [
{
"function": "add_perms_to_distribution_group",
"parameters": {
"add_user_to_group": true,
"group_type": "owners"
},
"permissions": [
"container.view_containerpushrepository",
"container.modify_content_containerpushrepository",
"container.change_containerpushrepository"
]
},
{
"function": "add_perms_to_distribution_group",
"parameters": {
"add_user_to_group": false,
"group_type": "collaborators"
},
"permissions": [
"container.view_containerpushrepository",
"container.modify_content_containerpushrepository",
"container.change_containerpushrepository"
]
},
{
"function": "add_perms_to_distribution_group",
"parameters": {
"add_user_to_group": false,
"group_type": "consumers"
},
"permissions": [
"container.view_containerpushrepository"
]
}
],
"pulp_created": "2022-10-31T17:21:58.412317Z",
"pulp_href": "/pulp/api/v3/access_policies/219b492a-1334-4f68-b0a7-2244beb205ac/",
"queryset_scoping": {
"function": "get_push_repos_qs",
"parameters": {
"dist_perm": "container.view_containerdistribution",
"ns_perm": "container.view_containernamespace"
}
},
"statements": [
{
"action": [
"list"
],
"effect": "allow",
"principal": "authenticated"
},
{
"action": [
"retrieve"
],
"condition_expression": [
"has_namespace_obj_perms:container.namespace_view_containerpush_repository or has_distribution_perms:container.view_containerdistribution"
],
"effect": "allow",
"principal": "authenticated"
},
{
"action": [
"tag",
"untag",
"remove_image",
"sign",
"remove_signatures"
],
"condition_expression": [
"has_namespace_obj_perms:container.namespace_modify_content_containerpushrepository or has_distribution_perms:container.modify_content_containerpushrepository"
],
"effect": "allow",
"principal": "authenticated"
},
{
"action": [
"update",
"partial_update"
],
"condition_expression": [
"has_namespace_obj_perms:container.namespace_change_containerpushrepository or has_distribution_perms:container.change_containerdistribution"
],
"effect": "allow",
"principal": "authenticated"
}
],
"viewset_name": "repositories/container/container-push"
}
]
}
```
| Here are the steps that can be taken via the API (also available via the CLI) for deployments that have an uncustomized access policy.
1. Find the href for the uncustomized container push viewset access policy
```python
http https://PULP3-SOURCE-FEDORA36.puffy.example.com/pulp/api/v3/access_policies/?viewset_name=repositories/container/container-push --auth admin:password | jq -r '.results[] | .customized'
false
http https://PULP3-SOURCE-FEDORA36.puffy.example.com/pulp/api/v3/access_policies/?viewset_name=repositories/container/container-push --auth admin:password | jq -r '.results[] | .pulp_href'
/pulp/api/v3/access_policies/219b492a-1334-4f68-b0a7-2244beb205ac/
```
2. Reset the access policy
```python
http POST https://PULP3-SOURCE-FEDORA36.puffy.example.com/pulp/api/v3/access_policies/219b492a-1334-4f68-b0a7-2244beb205ac/reset/ --auth admin:password
```
3. Restart services
4. Re-try the push operation; it should succeed.
There's a shortcut for resetting an access policy in the cli:
`pulp access-policy reset --help`. | 2022-11-15T11:01:09 |
|
pulp/pulpcore | 3,410 | pulp__pulpcore-3410 | [
"3409"
] | ab7f4b7c7e1939ae605a8d7cf996c6498c83ddd7 | diff --git a/pulpcore/app/migrations/0077_move_remote_url_credentials.py b/pulpcore/app/migrations/0077_move_remote_url_credentials.py
--- a/pulpcore/app/migrations/0077_move_remote_url_credentials.py
+++ b/pulpcore/app/migrations/0077_move_remote_url_credentials.py
@@ -11,6 +11,11 @@ def move_remote_url_credentials(apps, schema_editor):
for remote in Remote.objects.filter(url__contains="@").iterator():
url = urlparse(remote.url)
+ if '@' not in url.netloc:
+ # URLs can have an @ in other places than the netloc,
+ # but those do not indicate credentials
+ continue
+
if not remote.username:
remote.username = url.username
if not remote.password:
| diff --git a/pulpcore/tests/unit/migrations/test_0077_move_remote_url_credentials.py b/pulpcore/tests/unit/migrations/test_0077_move_remote_url_credentials.py
--- a/pulpcore/tests/unit/migrations/test_0077_move_remote_url_credentials.py
+++ b/pulpcore/tests/unit/migrations/test_0077_move_remote_url_credentials.py
@@ -13,9 +13,13 @@ def setUp(self):
self.remote = Remote.objects.create(
name="test-url", url="http://elladan:[email protected]", pulp_type="file"
)
+ self.remote_without_credentials = Remote.objects.create(
+ name="test-url-no-credentials", url="https://download.copr.fedorainfracloud.org/results/@caddy/caddy/epel-8-x86_64/", pulp_type="file"
+ )
def tearDown(self):
self.remote.delete()
+ self.remote_without_credentials.delete()
def test_move_remote_url_credentials(self):
apps_mock = Mock()
@@ -29,3 +33,14 @@ def test_move_remote_url_credentials(self):
self.assertEqual(self.remote.url, "http://rivendell.org")
self.assertEqual(self.remote.username, "elladan")
self.assertEqual(self.remote.password, "lembas")
+
+ def test_accept_remote_without_credentials_but_with_at(self):
+ apps_mock = Mock()
+ apps_mock.get_model = Mock(return_value=Remote)
+
+ # use import_module due to underscores
+ migration = import_module("pulpcore.app.migrations.0077_move_remote_url_credentials")
+ migration.move_remote_url_credentials(apps_mock, None)
+
+ self.remote_without_credentials = Remote.objects.get(name="test-url-no-credentials")
+ self.assertEqual(self.remote_without_credentials.url, "https://download.copr.fedorainfracloud.org/results/@caddy/caddy/epel-8-x86_64/")
| 0077_move_remote_url_credentials.py fails on Remotes that have @ in path, not netloc
**Version**
3.18.10
**Describe the bug**
Migration 0077 fails when you have a remote whose URL has an @ somewhere in the path.
```
Applying core.0077_move_remote_url_credentials...Traceback (most recent call last):
File "/usr/bin/pulpcore-manager", line 33, in <module>
sys.exit(load_entry_point('pulpcore==3.18.10', 'console_scripts', 'pulpcore-manager')())
File "/usr/lib/python3.9/site-packages/pulpcore/app/manage.py", line 11, in manage
execute_from_command_line(sys.argv)
File "/usr/lib/python3.9/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
utility.execute()
File "/usr/lib/python3.9/site-packages/django/core/management/__init__.py", line 413, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/lib/python3.9/site-packages/django/core/management/base.py", line 354, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/lib/python3.9/site-packages/django/core/management/base.py", line 398, in execute
output = self.handle(*args, **options)
File "/usr/lib/python3.9/site-packages/django/core/management/base.py", line 89, in wrapped
res = handle_func(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/django/core/management/commands/migrate.py", line 244, in handle
post_migrate_state = executor.migrate(
File "/usr/lib/python3.9/site-packages/django/db/migrations/executor.py", line 117, in migrate
state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
File "/usr/lib/python3.9/site-packages/django/db/migrations/executor.py", line 147, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "/usr/lib/python3.9/site-packages/django/db/migrations/executor.py", line 227, in apply_migration
state = migration.apply(state, schema_editor)
File "/usr/lib/python3.9/site-packages/django/db/migrations/migration.py", line 126, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "/usr/lib/python3.9/site-packages/django/db/migrations/operations/special.py", line 190, in database_forwards
self.code(from_state.apps, schema_editor)
File "/usr/lib/python3.9/site-packages/pulpcore/app/migrations/0077_move_remote_url_credentials.py", line 19, in move_remote_url_credentials
_, url_split = url.netloc.rsplit("@", maxsplit=1)
ValueError: not enough values to unpack (expected 2, got 1)
```
**To Reproduce**
Steps to reproduce the behavior:
* Have a remote `https://download.copr.fedorainfracloud.org/results/@caddy/caddy/epel-8-x86_64/`
* Try to migrate 0077
**Expected behavior**
The migration applies.
**Additional context**
https://community.theforeman.org/t/foreman-3-3-katello-4-5-upgrade-failed-pulpcore-manager-migrate-noinput/31088
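To make the failure mode concrete: `urlparse` only treats an `@` inside the netloc as a credentials separator, while an `@` in the path stays in the path, which is exactly what the guard added by the patch checks. A quick sketch:
```python
from urllib.parse import urlparse

url = urlparse(
    "https://download.copr.fedorainfracloud.org/results/@caddy/caddy/epel-8-x86_64/"
)
print(url.netloc)    # 'download.copr.fedorainfracloud.org' -- no '@' here
print(url.username)  # None -- there are no credentials to migrate
print(url.path)      # '/results/@caddy/caddy/epel-8-x86_64/'
```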
| 2022-11-17T11:50:41 |
|
pulp/pulpcore | 3,411 | pulp__pulpcore-3411 | [
"3409"
] | 055d5a664fae77e5fd0037451968423156a2a611 | diff --git a/pulpcore/app/migrations/0077_move_remote_url_credentials.py b/pulpcore/app/migrations/0077_move_remote_url_credentials.py
--- a/pulpcore/app/migrations/0077_move_remote_url_credentials.py
+++ b/pulpcore/app/migrations/0077_move_remote_url_credentials.py
@@ -11,6 +11,11 @@ def move_remote_url_credentials(apps, schema_editor):
for remote in Remote.objects.filter(url__contains="@").iterator():
url = urlparse(remote.url)
+ if '@' not in url.netloc:
+ # URLs can have an @ in other places than the netloc,
+ # but those do not indicate credentials
+ continue
+
if not remote.username:
remote.username = url.username
if not remote.password:
| diff --git a/pulpcore/tests/unit/migrations/test_0077_move_remote_url_credentials.py b/pulpcore/tests/unit/migrations/test_0077_move_remote_url_credentials.py
--- a/pulpcore/tests/unit/migrations/test_0077_move_remote_url_credentials.py
+++ b/pulpcore/tests/unit/migrations/test_0077_move_remote_url_credentials.py
@@ -13,9 +13,13 @@ def setUp(self):
self.remote = Remote.objects.create(
name="test-url", url="http://elladan:[email protected]", pulp_type="file"
)
+ self.remote_without_credentials = Remote.objects.create(
+ name="test-url-no-credentials", url="https://download.copr.fedorainfracloud.org/results/@caddy/caddy/epel-8-x86_64/", pulp_type="file"
+ )
def tearDown(self):
self.remote.delete()
+ self.remote_without_credentials.delete()
def test_move_remote_url_credentials(self):
apps_mock = Mock()
@@ -29,3 +33,14 @@ def test_move_remote_url_credentials(self):
self.assertEqual(self.remote.url, "http://rivendell.org")
self.assertEqual(self.remote.username, "elladan")
self.assertEqual(self.remote.password, "lembas")
+
+ def test_accept_remote_without_credentials_but_with_at(self):
+ apps_mock = Mock()
+ apps_mock.get_model = Mock(return_value=Remote)
+
+ # use import_module due to underscores
+ migration = import_module("pulpcore.app.migrations.0077_move_remote_url_credentials")
+ migration.move_remote_url_credentials(apps_mock, None)
+
+ self.remote_without_credentials = Remote.objects.get(name="test-url-no-credentials")
+ self.assertEqual(self.remote_without_credentials.url, "https://download.copr.fedorainfracloud.org/results/@caddy/caddy/epel-8-x86_64/")
| 0077_move_remote_url_credentials.py fails on Remotes that have @ in path, not netloc
**Version**
3.18.10
**Describe the bug**
Migration 0077 fails when you have a remote whose URL has an @ somewhere in the path.
```
Applying core.0077_move_remote_url_credentials...Traceback (most recent call last):
File "/usr/bin/pulpcore-manager", line 33, in <module>
sys.exit(load_entry_point('pulpcore==3.18.10', 'console_scripts', 'pulpcore-manager')())
File "/usr/lib/python3.9/site-packages/pulpcore/app/manage.py", line 11, in manage
execute_from_command_line(sys.argv)
File "/usr/lib/python3.9/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
utility.execute()
File "/usr/lib/python3.9/site-packages/django/core/management/__init__.py", line 413, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/lib/python3.9/site-packages/django/core/management/base.py", line 354, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/lib/python3.9/site-packages/django/core/management/base.py", line 398, in execute
output = self.handle(*args, **options)
File "/usr/lib/python3.9/site-packages/django/core/management/base.py", line 89, in wrapped
res = handle_func(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/django/core/management/commands/migrate.py", line 244, in handle
post_migrate_state = executor.migrate(
File "/usr/lib/python3.9/site-packages/django/db/migrations/executor.py", line 117, in migrate
state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
File "/usr/lib/python3.9/site-packages/django/db/migrations/executor.py", line 147, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "/usr/lib/python3.9/site-packages/django/db/migrations/executor.py", line 227, in apply_migration
state = migration.apply(state, schema_editor)
File "/usr/lib/python3.9/site-packages/django/db/migrations/migration.py", line 126, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "/usr/lib/python3.9/site-packages/django/db/migrations/operations/special.py", line 190, in database_forwards
self.code(from_state.apps, schema_editor)
File "/usr/lib/python3.9/site-packages/pulpcore/app/migrations/0077_move_remote_url_credentials.py", line 19, in move_remote_url_credentials
_, url_split = url.netloc.rsplit("@", maxsplit=1)
ValueError: not enough values to unpack (expected 2, got 1)
```
**To Reproduce**
Steps to reproduce the behavior:
* Have a remote `https://download.copr.fedorainfracloud.org/results/@caddy/caddy/epel-8-x86_64/`
* Try to migrate 0077
**Expected behavior**
The migration applies.
**Additional context**
https://community.theforeman.org/t/foreman-3-3-katello-4-5-upgrade-failed-pulpcore-manager-migrate-noinput/31088
| 2022-11-17T15:57:37 |
|
pulp/pulpcore | 3,412 | pulp__pulpcore-3412 | [
"3409"
] | cdc773503890c4416fbe7e6158e1d30709ef90de | diff --git a/pulpcore/app/migrations/0077_move_remote_url_credentials.py b/pulpcore/app/migrations/0077_move_remote_url_credentials.py
--- a/pulpcore/app/migrations/0077_move_remote_url_credentials.py
+++ b/pulpcore/app/migrations/0077_move_remote_url_credentials.py
@@ -11,6 +11,11 @@ def move_remote_url_credentials(apps, schema_editor):
for remote in Remote.objects.filter(url__contains="@").iterator():
url = urlparse(remote.url)
+ if '@' not in url.netloc:
+ # URLs can have an @ in other places than the netloc,
+ # but those do not indicate credentials
+ continue
+
if not remote.username:
remote.username = url.username
if not remote.password:
| diff --git a/pulpcore/tests/unit/migrations/test_0077_move_remote_url_credentials.py b/pulpcore/tests/unit/migrations/test_0077_move_remote_url_credentials.py
--- a/pulpcore/tests/unit/migrations/test_0077_move_remote_url_credentials.py
+++ b/pulpcore/tests/unit/migrations/test_0077_move_remote_url_credentials.py
@@ -13,9 +13,13 @@ def setUp(self):
self.remote = Remote.objects.create(
name="test-url", url="http://elladan:[email protected]", pulp_type="file"
)
+ self.remote_without_credentials = Remote.objects.create(
+ name="test-url-no-credentials", url="https://download.copr.fedorainfracloud.org/results/@caddy/caddy/epel-8-x86_64/", pulp_type="file"
+ )
def tearDown(self):
self.remote.delete()
+ self.remote_without_credentials.delete()
def test_move_remote_url_credentials(self):
apps_mock = Mock()
@@ -29,3 +33,14 @@ def test_move_remote_url_credentials(self):
self.assertEqual(self.remote.url, "http://rivendell.org")
self.assertEqual(self.remote.username, "elladan")
self.assertEqual(self.remote.password, "lembas")
+
+ def test_accept_remote_without_credentials_but_with_at(self):
+ apps_mock = Mock()
+ apps_mock.get_model = Mock(return_value=Remote)
+
+ # use import_module due to underscores
+ migration = import_module("pulpcore.app.migrations.0077_move_remote_url_credentials")
+ migration.move_remote_url_credentials(apps_mock, None)
+
+ self.remote_without_credentials = Remote.objects.get(name="test-url-no-credentials")
+ self.assertEqual(self.remote_without_credentials.url, "https://download.copr.fedorainfracloud.org/results/@caddy/caddy/epel-8-x86_64/")
| 0077_move_remote_url_credentials.py fails on Remotes that have @ in path, not netloc
**Version**
3.18.10
**Describe the bug**
Migration 0077 fails when you have a remote with an @ somewhere in the path:
```
Applying core.0077_move_remote_url_credentials...Traceback (most recent call last):
File "/usr/bin/pulpcore-manager", line 33, in <module>
sys.exit(load_entry_point('pulpcore==3.18.10', 'console_scripts', 'pulpcore-manager')())
File "/usr/lib/python3.9/site-packages/pulpcore/app/manage.py", line 11, in manage
execute_from_command_line(sys.argv)
File "/usr/lib/python3.9/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
utility.execute()
File "/usr/lib/python3.9/site-packages/django/core/management/__init__.py", line 413, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/lib/python3.9/site-packages/django/core/management/base.py", line 354, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/lib/python3.9/site-packages/django/core/management/base.py", line 398, in execute
output = self.handle(*args, **options)
File "/usr/lib/python3.9/site-packages/django/core/management/base.py", line 89, in wrapped
res = handle_func(*args, **kwargs)
File "/usr/lib/python3.9/site-packages/django/core/management/commands/migrate.py", line 244, in handle
post_migrate_state = executor.migrate(
File "/usr/lib/python3.9/site-packages/django/db/migrations/executor.py", line 117, in migrate
state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
File "/usr/lib/python3.9/site-packages/django/db/migrations/executor.py", line 147, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "/usr/lib/python3.9/site-packages/django/db/migrations/executor.py", line 227, in apply_migration
state = migration.apply(state, schema_editor)
File "/usr/lib/python3.9/site-packages/django/db/migrations/migration.py", line 126, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "/usr/lib/python3.9/site-packages/django/db/migrations/operations/special.py", line 190, in database_forwards
self.code(from_state.apps, schema_editor)
File "/usr/lib/python3.9/site-packages/pulpcore/app/migrations/0077_move_remote_url_credentials.py", line 19, in move_remote_url_credentials
_, url_split = url.netloc.rsplit("@", maxsplit=1)
ValueError: not enough values to unpack (expected 2, got 1)
```
**To Reproduce**
Steps to reproduce the behavior:
* Have a remote `https://download.copr.fedorainfracloud.org/results/@caddy/caddy/epel-8-x86_64/`
* Try to migrate 0077
**Expected behavior**
The migration applies.
**Additional context**
https://community.theforeman.org/t/foreman-3-3-katello-4-5-upgrade-failed-pulpcore-manager-migrate-noinput/31088
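For illustration, a minimal sketch of why the old unpacking failed, using only the standard library's `urlparse` and the URLs from this report:
```
from urllib.parse import urlparse

# credentials live in the netloc, so it contains an "@"
with_creds = urlparse("http://elladan:[email protected]")
assert "@" in with_creds.netloc  # rsplit("@", maxsplit=1) yields two parts

# here the "@" sits in the path; the netloc has none, so the old
# `_, url_split = url.netloc.rsplit("@", maxsplit=1)` raised ValueError
no_creds = urlparse(
    "https://download.copr.fedorainfracloud.org/results/@caddy/caddy/epel-8-x86_64/"
)
assert "@" not in no_creds.netloc
```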
| 2022-11-17T15:57:38 |
|
pulp/pulpcore | 3,462 | pulp__pulpcore-3462 | [
"3461"
] | 4e25949176d72c5dbe1c7623a9c47d253a18b085 | diff --git a/pulpcore/app/modelresource.py b/pulpcore/app/modelresource.py
--- a/pulpcore/app/modelresource.py
+++ b/pulpcore/app/modelresource.py
@@ -59,6 +59,7 @@ class Meta:
"next_version",
"repository_ptr",
"remote",
+ "pulp_labels",
)
| Database errors raised when importing content
**Version**
Main pulpcore branch. The issue arose after merging the labels refactor work (https://github.com/pulp/pulpcore/commit/4e25949176d72c5dbe1c7623a9c47d253a18b085).
Reproducible in pulp_file and pulp_rpm.
**Describe the bug**
```
pulp [d32341b1-78b2-44da-b43d-e51121df9e95]: pulpcore.tasking.pulpcore_worker:INFO: Task 4c2b456b-d9a8-4238-bb45-7b63f403229c failed (Unexpected end of string
LINE 1: ...le.file', '365f08db-ac00-4e21-8abf-af0f047064cd', '{}', '', ...
^
)
pulp [d32341b1-78b2-44da-b43d-e51121df9e95]: pulpcore.tasking.pulpcore_worker:INFO: File "/home/vagrant/devel/pulpcore/pulpcore/tasking/pulpcore_worker.py", line 444, in _perform_task
result = func(*args, **kwargs)
File "/home/vagrant/devel/pulpcore/pulpcore/app/tasks/importer.py", line 236, in import_repository_version
for a_result in _import_file(os.path.join(rv_path, filename), res_class, retry=True):
File "/home/vagrant/devel/pulpcore/pulpcore/app/tasks/importer.py", line 138, in _import_file
a_result = resource.import_data(data, raise_errors=True)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/import_export/resources.py", line 819, in import_data
return self.import_data_inner(
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/import_export/resources.py", line 871, in import_data_inner
raise row_result.errors[-1].error
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/import_export/resources.py", line 743, in import_row
self.save_instance(instance, new, using_transactions, dry_run)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/import_export/resources.py", line 500, in save_instance
instance.save()
File "/home/vagrant/devel/pulpcore/pulpcore/app/models/repository.py", line 95, in save
super().save(*args, **kwargs)
File "/home/vagrant/devel/pulpcore/pulpcore/app/models/base.py", line 203, in save
return super().save(*args, **kwargs)
File "/usr/lib64/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django_lifecycle/mixins.py", line 169, in save
save(*args, **kwargs)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/models/base.py", line 739, in save
self.save_base(using=using, force_insert=force_insert,
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/models/base.py", line 775, in save_base
parent_inserted = self._save_parents(cls, using, update_fields)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/models/base.py", line 804, in _save_parents
updated = self._save_table(
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/models/base.py", line 881, in _save_table
results = self._do_insert(cls._base_manager, using, fields, returning_fields, raw)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/models/base.py", line 919, in _do_insert
return manager._insert(
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/models/query.py", line 1270, in _insert
return query.get_compiler(using=using).execute_sql(returning_fields)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/models/sql/compiler.py", line 1416, in execute_sql
cursor.execute(sql, params)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/backends/utils.py", line 66, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/backends/utils.py", line 75, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/backends/utils.py", line 79, in _execute
with self.db.wrap_database_errors:
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
```
| Looks like django-import-export does not support hstore:
https://github.com/django-import-export/django-import-export/issues/752
That issue was closed as "stale" in 2019.
We can start by marking pulp_labels as excluded in modelresource, to avoid the problem right now, and wait for import/export stakeholders to tell us if they actually care going forward.
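A minimal sketch of that stop-gap, assuming the tuple shown in the patch is the resource's `Meta.exclude` (django-import-export's documented knob for skipping fields):
```
from import_export import resources

from pulpcore.app.models import Repository  # import path assumed


class RepositoryResource(resources.ModelResource):
    class Meta:
        model = Repository
        # hstore-backed fields choke django-import-export, so leave
        # pulp_labels out of imports/exports until upstream supports hstore
        exclude = ("pulp_labels",)
```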
(NB: we weren't exporting Labels prior to the HStore change, so that doesn't change the semantics of import/export) | 2022-12-13T21:25:38 |
|
pulp/pulpcore | 3,469 | pulp__pulpcore-3469 | [
"3468"
] | da28503be201cc558837e32ee13b05439693c894 | diff --git a/pulpcore/plugin/util.py b/pulpcore/plugin/util.py
--- a/pulpcore/plugin/util.py
+++ b/pulpcore/plugin/util.py
@@ -12,4 +12,4 @@
remove_role,
)
-from pulpcore.app.util import get_artifact_url, gpg_verify, verify_signature # noqa
+from pulpcore.app.util import get_artifact_url, get_url, gpg_verify, verify_signature # noqa
| Expose "get_url" via the plugin interface
| 2022-12-14T21:02:35 |
||
pulp/pulpcore | 3,479 | pulp__pulpcore-3479 | [
"3446"
] | 4b5c5c0b91aea140094430ed12cdf6a14bc49189 | diff --git a/pulpcore/app/models/importer.py b/pulpcore/app/models/importer.py
--- a/pulpcore/app/models/importer.py
+++ b/pulpcore/app/models/importer.py
@@ -1,3 +1,4 @@
+from django.core.exceptions import ObjectDoesNotExist
from django.db import models
from pulpcore.app.models import (
@@ -51,8 +52,24 @@ def repo_mapping(self):
@repo_mapping.setter
def repo_mapping(self, mapping):
self.repo_map.all().delete()
+ failed_repos = []
+ # Process a mapping to find the destinations that map to the sources
+ # Record all failing destinations to report to the user.
+ # Only create the mappings if *everything* worked - otherweise, fail with error.
+ the_map = {}
for source, repo_name in mapping.items():
- repo = Repository.objects.get(name=repo_name)
+ try:
+ repo = Repository.objects.get(name=repo_name)
+ the_map[source] = repo
+ except ObjectDoesNotExist:
+ failed_repos.append(repo_name)
+ continue
+
+ if failed_repos: # We had a failure - report and leave
+ raise ObjectDoesNotExist(f"names: {str(failed_repos)}")
+
+ # Everything worked - create the mapping
+ for source, repo in the_map.items():
self.repo_map.create(source_repo=source, repository=repo)
class Meta:
diff --git a/pulpcore/app/serializers/importer.py b/pulpcore/app/serializers/importer.py
--- a/pulpcore/app/serializers/importer.py
+++ b/pulpcore/app/serializers/importer.py
@@ -1,6 +1,7 @@
import os
from gettext import gettext as _
+from django.core.exceptions import ObjectDoesNotExist
from rest_framework import serializers
from rest_framework.validators import UniqueValidator
@@ -78,9 +79,11 @@ def create(self, validated_data):
importer = super().create(validated_data)
try:
importer.repo_mapping = repo_mapping
- except Exception as err:
+ except ObjectDoesNotExist as err:
importer.delete()
- raise serializers.ValidationError(_("Bad repo mapping: {}").format(err))
+ raise serializers.ValidationError(
+ _("Failed to find repositories from repo_mapping: {}").format(err)
+ )
else:
return importer
diff --git a/pulpcore/app/viewsets/user.py b/pulpcore/app/viewsets/user.py
--- a/pulpcore/app/viewsets/user.py
+++ b/pulpcore/app/viewsets/user.py
@@ -217,8 +217,10 @@ def create(self, request, group_pk):
)
try:
user = User.objects.get(**request.data)
- except (User.DoesNotExist, FieldError) as exc:
- raise ValidationError(str(exc))
+ except (User.DoesNotExist, FieldError):
+ raise ValidationError(
+ _("Could not find user {}").format(request.data.get("username", None))
+ )
group.user_set.add(user)
group.save()
serializer = GroupUserSerializer(user, context={"request": request})
| diff --git a/pulpcore/tests/functional/api/test_users_groups.py b/pulpcore/tests/functional/api/test_users_groups.py
--- a/pulpcore/tests/functional/api/test_users_groups.py
+++ b/pulpcore/tests/functional/api/test_users_groups.py
@@ -1,5 +1,6 @@
import pytest
import uuid
+from pulpcore.client.pulpcore.exceptions import ApiException
@pytest.mark.parallel
@@ -58,3 +59,13 @@ def test_filter_groups(groups_api_client, gen_object_with_cleanup):
assert len(groups.results) == 1
groups = groups_api_client.list(name=f"{prefix}_newbees")
assert len(groups.results) == 1
+
+
[email protected]
+def test_groups_add_bad_user(groups_api_client, groups_users_api_client, gen_object_with_cleanup):
+ """Test that adding a nonexistent user to a group fails."""
+
+ prefix = str(uuid.uuid4())
+ group_href = gen_object_with_cleanup(groups_api_client, {"name": f"{prefix}_newbees"}).pulp_href
+ with pytest.raises(ApiException, match="foo"):
+ groups_users_api_client.create(group_href, group_user={"username": "foo"})
| Better exception handling
I believe it is good practice not to send the exception/stack trace itself to the end user when handling errors.
Currently we're doing it in two places:
1. https://github.com/pulp/pulpcore/blob/2c573c4d03514e743b57357165d001c43acd90c2/pulpcore/app/viewsets/user.py#L221-L221
and
2. https://github.com/pulp/pulpcore/blob/2c573c4d03514e743b57357165d001c43acd90c2/pulpcore/app/serializers/importer.py#L83-L83
Also, the second place catches the broad `Exception` class directly, where it could catch a more specific exception.
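A minimal sketch of the shape the second place could take (`importer` and `repo_mapping` are the locals from the serializer's `create()`): catch the narrowest exception and surface a curated message instead of forwarding `str(exc)`:
```
from django.core.exceptions import ObjectDoesNotExist
from rest_framework import serializers

try:
    importer.repo_mapping = repo_mapping
except ObjectDoesNotExist as err:
    importer.delete()
    # curated message for the user; the stack trace stays in the server logs
    raise serializers.ValidationError(
        f"Failed to find repositories from repo_mapping: {err}"
    )
```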
| 2022-12-20T19:56:27 |
|
pulp/pulpcore | 3,500 | pulp__pulpcore-3500 | [
"3495"
] | 772eea8c9eb61291ca309771ea59a786deaeaf59 | diff --git a/pulpcore/app/apps.py b/pulpcore/app/apps.py
--- a/pulpcore/app/apps.py
+++ b/pulpcore/app/apps.py
@@ -408,7 +408,9 @@ def _migrate_remaining_labels(sender, apps, verbosity, **kwargs):
.annotate(label_data=RawSQL("hstore(array_agg(key), array_agg(value))", []))
.values("label_data")
)
- model.objects.update(pulp_labels=label_subq)
+ model.objects.annotate(old_labels=label_subq).exclude(old_labels={}).update(
+ pulp_labels=label_subq
+ )
Label.objects.filter(content_type=ctype).delete()
if Label.objects.count():
diff --git a/pulpcore/app/management/commands/datarepair-labels.py b/pulpcore/app/management/commands/datarepair-labels.py
--- a/pulpcore/app/management/commands/datarepair-labels.py
+++ b/pulpcore/app/management/commands/datarepair-labels.py
@@ -69,7 +69,9 @@ def handle(self, *args, **options):
.annotate(label_data=RawSQL("hstore(array_agg(key), array_agg(value))", []))
.values("label_data")
)
- model.objects.update(pulp_labels=label_subq)
+ model.objects.annotate(old_labels=label_subq).exclude(old_labels={}).update(
+ pulp_labels=label_subq
+ )
Label.objects.filter(content_type=ctype).delete()
if Label.objects.count() and purge and not dry_run:
| django.db.utils.IntegrityError when upgrading to pulpcore 3.22
**Version**
pulpcore 3.22.0
**Describe the bug**
When upgrading to pulpcore 3.22, the pulp_labels migration fails with:
```
django.db.utils.IntegrityError: null value in column "pulp_labels" of relation "core_repository" violates not-null constraint
DETAIL: Failing row contains (bbb38cf3-c632-4d51-a91f-94b23c7d3544, 2023-01-10 14:06:50.628502+00, 2023-01-10 14:06:50.632373+00, test, null, 1, file.file, null, null, f, null).
```
**To Reproduce**
1. In pulpcore < 3.22, create 2 repos: one with labels, one without.
2. Upgrade to pulpcore 3.22
**Expected behavior**
No migration failure.
| Full stacktrace below.
I believe the issue is that the subquery in this line will return NULL if there are no labels for a model, which violates the non-null constraint on `pulp_labels`:
https://github.com/pulp/pulpcore/commit/3d67cc1be8a332cc4af9f36e9c52fbb10916ebd6#diff-1a15d2d8642c07f47172cac10d8c48b6a9229b6081510f66f9e32ba95659bf5bR411
```
psycopg2.errors.NotNullViolation: null value in column "pulp_labels" of relation "core_repository" violates not-null constraint
DETAIL: Failing row contains (bbb38cf3-c632-4d51-a91f-94b23c7d3544, 2023-01-10 14:06:50.628502+00, 2023-01-10 14:06:50.632373+00, tester2, null, 1, file.file, null, null, f, null).
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/pulp/bin/pulpcore-manager", line 33, in <module>
sys.exit(load_entry_point('pulpcore', 'console_scripts', 'pulpcore-manager')())
File "/home/vagrant/devel/pulpcore/pulpcore/app/manage.py", line 11, in manage
execute_from_command_line(sys.argv)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
utility.execute()
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/core/management/__init__.py", line 413, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/core/management/base.py", line 354, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/core/management/base.py", line 398, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/core/management/base.py", line 89, in wrapped
res = handle_func(*args, **kwargs)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/core/management/commands/migrate.py", line 268, in handle
emit_post_migrate_signal(
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/core/management/sql.py", line 42, in emit_post_migrate_signal
models.signals.post_migrate.send(
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/dispatch/dispatcher.py", line 180, in send
return [
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/dispatch/dispatcher.py", line 181, in <listcomp>
(receiver, receiver(signal=self, sender=sender, **named))
File "/home/vagrant/devel/pulpcore/pulpcore/app/apps.py", line 411, in _migrate_remaining_labels
model.objects.update(pulp_labels=label_subq)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/models/query.py", line 783, in update
rows = query.get_compiler(self.db).execute_sql(CURSOR)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/models/sql/compiler.py", line 1567, in execute_sql
aux_rows = query.get_compiler(self.using).execute_sql(result_type)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/models/sql/compiler.py", line 1559, in execute_sql
cursor = super().execute_sql(result_type)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/models/sql/compiler.py", line 1175, in execute_sql
cursor.execute(sql, params)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/backends/utils.py", line 66, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/backends/utils.py", line 75, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/backends/utils.py", line 79, in _execute
with self.db.wrap_database_errors:
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
django.db.utils.IntegrityError: null value in column "pulp_labels" of relation "core_repository" violates not-null constraint
DETAIL: Failing row contains (bbb38cf3-c632-4d51-a91f-94b23c7d3544, 2023-01-10 14:06:50.628502+00, 2023-01-10 14:06:50.632373+00, tester2, null, 1, file.file, null, null, f, null).
```
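For reference, the guarded form of the bulk update (this is what the patch above applies; `label_subq` is the hstore aggregation subquery from `apps.py`):
```
# unguarded: objects without Label rows receive NULL from the subquery
# and trip the NOT NULL constraint on pulp_labels
# model.objects.update(pulp_labels=label_subq)

# guarded: only update objects whose aggregated labels are non-empty
model.objects.annotate(old_labels=label_subq).exclude(old_labels={}).update(
    pulp_labels=label_subq
)
```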
Thank you for finding this!
It's probably better to limit the update to all objects that have actual labels assigned. | 2023-01-11T09:48:48 |
|
pulp/pulpcore | 3,502 | pulp__pulpcore-3502 | [
"3495"
] | 6d048666897e02491ea2bc704ccb5fc0887e714b | diff --git a/pulpcore/app/apps.py b/pulpcore/app/apps.py
--- a/pulpcore/app/apps.py
+++ b/pulpcore/app/apps.py
@@ -408,7 +408,9 @@ def _migrate_remaining_labels(sender, apps, verbosity, **kwargs):
.annotate(label_data=RawSQL("hstore(array_agg(key), array_agg(value))", []))
.values("label_data")
)
- model.objects.update(pulp_labels=label_subq)
+ model.objects.annotate(old_labels=label_subq).exclude(old_labels={}).update(
+ pulp_labels=label_subq
+ )
Label.objects.filter(content_type=ctype).delete()
if Label.objects.count():
diff --git a/pulpcore/app/management/commands/datarepair-labels.py b/pulpcore/app/management/commands/datarepair-labels.py
--- a/pulpcore/app/management/commands/datarepair-labels.py
+++ b/pulpcore/app/management/commands/datarepair-labels.py
@@ -69,7 +69,9 @@ def handle(self, *args, **options):
.annotate(label_data=RawSQL("hstore(array_agg(key), array_agg(value))", []))
.values("label_data")
)
- model.objects.update(pulp_labels=label_subq)
+ model.objects.annotate(old_labels=label_subq).exclude(old_labels={}).update(
+ pulp_labels=label_subq
+ )
Label.objects.filter(content_type=ctype).delete()
if Label.objects.count() and purge and not dry_run:
| django.db.utils.IntegrityError when upgrading to pulpcore 3.22
**Version**
pulpcore 3.22.0
**Describe the bug**
When upgrading to pulpcore 3.22, the pulp_labels migration fails with:
```
django.db.utils.IntegrityError: null value in column "pulp_labels" of relation "core_repository" violates not-null constraint
DETAIL: Failing row contains (bbb38cf3-c632-4d51-a91f-94b23c7d3544, 2023-01-10 14:06:50.628502+00, 2023-01-10 14:06:50.632373+00, test, null, 1, file.file, null, null, f, null).
```
**To Reproduce**
1. In pulpcore < 3.22, create 2 repos: one with labels, one without.
2. Upgrade to pulpcore 3.22
**Expected behavior**
No migration failure.
| Full stacktrace below.
I believe the issue is that the subquery in this line will return NULL if there are no labels for a model, which violates the non-null constraint on `pulp_labels`:
https://github.com/pulp/pulpcore/commit/3d67cc1be8a332cc4af9f36e9c52fbb10916ebd6#diff-1a15d2d8642c07f47172cac10d8c48b6a9229b6081510f66f9e32ba95659bf5bR411
```
psycopg2.errors.NotNullViolation: null value in column "pulp_labels" of relation "core_repository" violates not-null constraint
DETAIL: Failing row contains (bbb38cf3-c632-4d51-a91f-94b23c7d3544, 2023-01-10 14:06:50.628502+00, 2023-01-10 14:06:50.632373+00, tester2, null, 1, file.file, null, null, f, null).
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/pulp/bin/pulpcore-manager", line 33, in <module>
sys.exit(load_entry_point('pulpcore', 'console_scripts', 'pulpcore-manager')())
File "/home/vagrant/devel/pulpcore/pulpcore/app/manage.py", line 11, in manage
execute_from_command_line(sys.argv)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
utility.execute()
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/core/management/__init__.py", line 413, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/core/management/base.py", line 354, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/core/management/base.py", line 398, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/core/management/base.py", line 89, in wrapped
res = handle_func(*args, **kwargs)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/core/management/commands/migrate.py", line 268, in handle
emit_post_migrate_signal(
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/core/management/sql.py", line 42, in emit_post_migrate_signal
models.signals.post_migrate.send(
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/dispatch/dispatcher.py", line 180, in send
return [
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/dispatch/dispatcher.py", line 181, in <listcomp>
(receiver, receiver(signal=self, sender=sender, **named))
File "/home/vagrant/devel/pulpcore/pulpcore/app/apps.py", line 411, in _migrate_remaining_labels
model.objects.update(pulp_labels=label_subq)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/models/query.py", line 783, in update
rows = query.get_compiler(self.db).execute_sql(CURSOR)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/models/sql/compiler.py", line 1567, in execute_sql
aux_rows = query.get_compiler(self.using).execute_sql(result_type)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/models/sql/compiler.py", line 1559, in execute_sql
cursor = super().execute_sql(result_type)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/models/sql/compiler.py", line 1175, in execute_sql
cursor.execute(sql, params)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/backends/utils.py", line 66, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/backends/utils.py", line 75, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/backends/utils.py", line 79, in _execute
with self.db.wrap_database_errors:
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/pulp/lib64/python3.10/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
django.db.utils.IntegrityError: null value in column "pulp_labels" of relation "core_repository" violates not-null constraint
DETAIL: Failing row contains (bbb38cf3-c632-4d51-a91f-94b23c7d3544, 2023-01-10 14:06:50.628502+00, 2023-01-10 14:06:50.632373+00, tester2, null, 1, file.file, null, null, f, null).
```
Thank you for finding this!
It's probably better to limit the update to all objects that have actual labels assigned. | 2023-01-11T15:20:43 |
|
pulp/pulpcore | 3,511 | pulp__pulpcore-3511 | [
"3413"
] | 77cb2fcf9edaf026ed07d007b4513a47cb826951 | diff --git a/pulpcore/app/serializers/exporter.py b/pulpcore/app/serializers/exporter.py
--- a/pulpcore/app/serializers/exporter.py
+++ b/pulpcore/app/serializers/exporter.py
@@ -294,6 +294,12 @@ class FilesystemExportSerializer(ExportSerializer):
write_only=True,
)
+ start_repository_version = RepositoryVersionRelatedField(
+ help_text=_("The URI of the last-exported-repo-version."),
+ required=False,
+ write_only=True,
+ )
+
def validate(self, data):
if ("publication" not in data and "repository_version" not in data) or (
"publication" in data and "repository_version" in data
@@ -305,7 +311,11 @@ def validate(self, data):
class Meta:
model = models.FilesystemExport
- fields = ExportSerializer.Meta.fields + ("publication", "repository_version")
+ fields = ExportSerializer.Meta.fields + (
+ "publication",
+ "repository_version",
+ "start_repository_version",
+ )
class FilesystemExporterSerializer(ExporterSerializer):
diff --git a/pulpcore/app/tasks/export.py b/pulpcore/app/tasks/export.py
--- a/pulpcore/app/tasks/export.py
+++ b/pulpcore/app/tasks/export.py
@@ -105,7 +105,7 @@ def _export_to_file_system(path, relative_paths_to_artifacts, method=FS_EXPORT_M
def _export_publication_to_file_system(
- path, publication, method=FS_EXPORT_METHODS.WRITE, allow_missing=False
+ path, publication, start_repo_version=None, method=FS_EXPORT_METHODS.WRITE, allow_missing=False
):
"""
Export a publication to the file system.
@@ -114,9 +114,19 @@ def _export_publication_to_file_system(
path (str): Path to place the exported data
publication_pk (str): Publication pk
"""
+ difference_content_artifacts = []
content_artifacts = ContentArtifact.objects.filter(
pk__in=publication.published_artifact.values_list("content_artifact__pk", flat=True)
)
+ if start_repo_version:
+ start_version_content_artifacts = ContentArtifact.objects.filter(
+ artifact__in=start_repo_version.artifacts
+ )
+ difference_content_artifacts = set(
+ content_artifacts.difference(start_version_content_artifacts).values_list(
+ "pk", flat=True
+ )
+ )
if publication.pass_through:
content_artifacts |= ContentArtifact.objects.filter(
@@ -138,8 +148,11 @@ def _export_publication_to_file_system(
"content_artifact", "content_artifact__artifact"
).iterator():
# Artifact isn't guaranteed to be present
- if pa.content_artifact.artifact:
+ if pa.content_artifact.artifact and (
+ start_repo_version is None or pa.content_artifact.pk in difference_content_artifacts
+ ):
relative_path_to_artifacts[pa.relative_path] = pa.content_artifact.artifact
+
_export_to_file_system(path, relative_path_to_artifacts, method)
@@ -160,7 +173,7 @@ def _export_location_is_clean(path):
return True
-def fs_publication_export(exporter_pk, publication_pk):
+def fs_publication_export(exporter_pk, publication_pk, start_repo_version_pk=None):
"""
Export a publication to the file system using an exporter.
@@ -170,25 +183,41 @@ def fs_publication_export(exporter_pk, publication_pk):
"""
exporter = Exporter.objects.get(pk=exporter_pk).cast()
publication = Publication.objects.get(pk=publication_pk).cast()
+
+ start_repo_version = None
+ if start_repo_version_pk:
+ start_repo_version = RepositoryVersion.objects.get(pk=start_repo_version_pk)
+
+ params = {"publication": publication_pk}
+ if start_repo_version:
+ params["start_repository_version"] = start_repo_version_pk
+
export = FilesystemExport.objects.create(
exporter=exporter,
- params={"publication": publication_pk},
+ params=params,
task=Task.current(),
)
ExportedResource.objects.create(export=export, content_object=publication)
CreatedResource.objects.create(content_object=export)
-
log.info(
- "Exporting: file_system_exporter={exporter}, publication={publication}, path={path}".format(
- exporter=exporter.name, publication=publication.pk, path=exporter.path
+ "Exporting: file_system_exporter={exporter}, publication={publication}, "
+ "start_repo_version={start_repo_version}, path={path}".format(
+ exporter=exporter.name,
+ publication=publication.pk,
+ start_repo_version=start_repo_version_pk,
+ path=exporter.path,
)
)
+
if not _export_location_is_clean(exporter.path):
raise RuntimeError(_("Cannot export to directories that contain existing data."))
- _export_publication_to_file_system(exporter.path, publication, exporter.method)
+
+ _export_publication_to_file_system(
+ exporter.path, publication, start_repo_version=start_repo_version, method=exporter.method
+ )
-def fs_repo_version_export(exporter_pk, repo_version_pk):
+def fs_repo_version_export(exporter_pk, repo_version_pk, start_repo_version_pk=None):
"""
Export a repository version to the file system using an exporter.
@@ -198,9 +227,17 @@ def fs_repo_version_export(exporter_pk, repo_version_pk):
"""
exporter = Exporter.objects.get(pk=exporter_pk).cast()
repo_version = RepositoryVersion.objects.get(pk=repo_version_pk)
+ start_repo_version = None
+ if start_repo_version_pk:
+ start_repo_version = RepositoryVersion.objects.get(pk=start_repo_version_pk)
+
+ params = {"repository_version": repo_version_pk}
+ if start_repo_version:
+ params["start_repository_version"] = start_repo_version_pk
+
export = FilesystemExport.objects.create(
exporter=exporter,
- params={"repository_version": repo_version_pk},
+ params=params,
task=Task.current(),
)
ExportedResource.objects.create(export=export, content_object=repo_version)
@@ -208,18 +245,31 @@ def fs_repo_version_export(exporter_pk, repo_version_pk):
log.info(
"Exporting: file_system_exporter={exporter}, repo_version={repo_version}, "
- "path={path}".format(
- exporter=exporter.name, repo_version=repo_version.pk, path=exporter.path
+ "start_repo_version={start_repo_version}, path={path}".format(
+ exporter=exporter.name,
+ repo_version=repo_version.pk,
+ start_repo_version=start_repo_version_pk,
+ path=exporter.path,
)
)
content_artifacts = ContentArtifact.objects.filter(content__in=repo_version.content)
_validate_fs_export(content_artifacts)
+ difference_content_artifacts = []
+ if start_repo_version:
+ start_version_content_artifacts = ContentArtifact.objects.filter(
+ artifact__in=start_repo_version.artifacts
+ )
+ difference_content_artifacts = set(
+ content_artifacts.difference(start_version_content_artifacts).values_list(
+ "pk", flat=True
+ )
+ )
- relative_path_to_artifacts = {
- ca.relative_path: ca.artifact
- for ca in content_artifacts.select_related("artifact").iterator()
- }
+ relative_path_to_artifacts = {}
+ for ca in content_artifacts.select_related("artifact").iterator():
+ if start_repo_version is None or ca.pk in difference_content_artifacts:
+ relative_path_to_artifacts[ca.relative_path] = ca.artifact
_export_to_file_system(exporter.path, relative_path_to_artifacts, exporter.method)
diff --git a/pulpcore/app/viewsets/exporter.py b/pulpcore/app/viewsets/exporter.py
--- a/pulpcore/app/viewsets/exporter.py
+++ b/pulpcore/app/viewsets/exporter.py
@@ -159,13 +159,22 @@ def create(self, request, exporter_pk):
serializer = FilesystemExportSerializer(data=request.data, context={"exporter": exporter})
serializer.is_valid(raise_exception=True)
+ start_repository_version_pk = None
+ if request.data.get("start_repository_version"):
+ start_repository_version_pk = self.get_resource(
+ request.data["start_repository_version"], RepositoryVersion
+ ).pk
+
if request.data.get("publication"):
publication = self.get_resource(request.data["publication"], Publication)
-
task = dispatch(
fs_publication_export,
exclusive_resources=[exporter],
- kwargs={"exporter_pk": exporter.pk, "publication_pk": publication.pk},
+ kwargs={
+ "exporter_pk": exporter.pk,
+ "publication_pk": publication.pk,
+ "start_repo_version_pk": start_repository_version_pk,
+ },
)
else:
repo_version = self.get_resource(request.data["repository_version"], RepositoryVersion)
@@ -173,7 +182,11 @@ def create(self, request, exporter_pk):
task = dispatch(
fs_repo_version_export,
exclusive_resources=[exporter],
- kwargs={"exporter_pk": str(exporter.pk), "repo_version_pk": repo_version.pk},
+ kwargs={
+ "exporter_pk": str(exporter.pk),
+ "repo_version_pk": repo_version.pk,
+ "start_repo_version_pk": start_repository_version_pk,
+ },
)
return OperationPostponedResponse(task, request)
| Provide start_version and end version for fs exporter.
**Is your feature request related to a problem? Please describe.**
Pulp today can export a full yum repository with the fs-exporter: you pass it a publication and it will export it. However, customers want an 'incremental export' option for this exporter, similar to the way the regular exporter functions.
Notice how the regular pulp exporter takes `versions` and `start_versions` parameters here => https://docs.pulpproject.org/pulpcore/restapi.html#tag/Exporters:-Pulp-Exports/operation/exporters_core_pulp_exports_create
Pulp exports everything from there. We would need something similar for fs exporters too, and it will be simpler because an fs export concentrates on a single repo instead of multiple repos as the regular one does.
**Describe the solution you'd like**
Check out https://docs.pulpproject.org/pulpcore/restapi.html#tag/Exporters:-Filesystem-Exports/operation/exporters_core_filesystem_exports_create
Make the export take a publication, as it does today, plus a start repository version.
Ideally, whatever is exported:
1. Will be the delta between `publication_version - start_version` in terms of rpms (mostly additions)
2. The metadata of publication_version will be copied verbatim (no comparison necessary)
So this repository export is not 'complete': it contains only the added/changed rpms, but the metadata of the suggested publication.
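A minimal sketch of the implied delta, assuming pulpcore's `ContentArtifact` model; `publication_cas` and `start_version` are illustrative names:
```
from pulpcore.app.models import ContentArtifact  # import path assumed

# content-artifacts already shipped in the start version
start_cas = ContentArtifact.objects.filter(artifact__in=start_version.artifacts)
# everything the publication's version references that the start version lacks
delta_pks = set(publication_cas.difference(start_cas).values_list("pk", flat=True))
# export only the artifacts behind delta_pks; copy the publication metadata verbatim
```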
The idea is:
* User has already exported RHEL 7 (using the fs exporter) - a full export
* User consumes content from the export
* Several days later the user wants the new content for RHEL 7. Instead of exporting 50GB worth of content again, the export should consist only of the rpms added after the start version, plus the latest metadata
* User should be able to copy the contents of this export over the regular RHEL export directory and use that to sync repodata.
**Describe alternatives you've considered**
I thought about other approaches, but providing a start_version is the least invasive one that adequately serves us.
**Additional context**
The popularity of the fs export approach has made adding this functionality necessary. Recommend giving it a high priority.
https://bugzilla.redhat.com/show_bug.cgi?id=2143497
| 2023-01-13T23:13:32 |
||
pulp/pulpcore | 3,521 | pulp__pulpcore-3521 | [
"3584"
] | 8b05c805857bf37725b1cb2cb563b16752946ee0 | diff --git a/pulpcore/app/serializers/base.py b/pulpcore/app/serializers/base.py
--- a/pulpcore/app/serializers/base.py
+++ b/pulpcore/app/serializers/base.py
@@ -26,6 +26,188 @@
log = getLogger(__name__)
+# Field mixins
+
+
+class HrefFieldMixin:
+ """A mixin to configure related fields to generate relative hrefs."""
+
+ @property
+ def context(self):
+ # Removes the request from the context to display relative hrefs.
+ res = dict(super().context)
+ res["request"] = None
+ return res
+
+
+class _MatchingRegexViewName(object):
+ """This is a helper class to help defining object matching rules for master-detail.
+
+ If you can be specific, please specify the `view_name`, but if you cannot, this allows
+ you to specify a regular expression like .e.g. `r"repositories(-.*/.*)?-detail"` to
+ identify whether the provided resources view name belongs to any repository type.
+ """
+
+ __slots__ = ("pattern",)
+
+ def __init__(self, pattern):
+ self.pattern = pattern
+
+ def __repr__(self):
+ return f'{self.__class__.__name__}(r"{self.pattern}")'
+
+ def __eq__(self, other):
+ return re.fullmatch(self.pattern, other) is not None
+
+
+class _DetailFieldMixin(HrefFieldMixin):
+ """Mixin class containing code common to DetailIdentityField and DetailRelatedField"""
+
+ def __init__(self, view_name=None, view_name_pattern=None, **kwargs):
+ if view_name is None:
+ # set view name to prevent a DRF assertion that view_name is not None
+ # Anything that accesses self.view_name after __init__
+ # needs to have it set before being called. Unfortunately, a model instance
+ # is required to derive this value, so we can't make a view_name property.
+ if view_name_pattern:
+ view_name = _MatchingRegexViewName(view_name_pattern)
+ else:
+ log.warn(
+ _(
+ "Please provide either 'view_name' or 'view_name_pattern' for {} on {}."
+ ).format(self.__class__.__name__, traceback.extract_stack()[-4][2])
+ )
+ view_name = _MatchingRegexViewName(r".*")
+ super().__init__(view_name, **kwargs)
+
+ def _view_name(self, obj):
+ # this is probably memoizeable based on the model class if we want to get cachey
+ try:
+ obj = obj.cast()
+ except AttributeError:
+ # The normal message that comes up here is unhelpful, so do like other DRF
+ # fails do and be a little more helpful in the exception message.
+ msg = (
+ 'Expected a detail model instance, not {}. Do you need to add "many=True" to '
+ "this field definition in its serializer?"
+ ).format(type(obj))
+ raise ValueError(msg)
+ return get_view_name_for_model(obj, "detail")
+
+ def get_url(self, obj, view_name, request, *args, **kwargs):
+ # ignore the passed in view name and return the url to the cast unit, not the generic unit
+ view_name = self._view_name(obj)
+ return super().get_url(obj, view_name, request, *args, **kwargs)
+
+
+# Fields
+
+
+class IdentityField(
+ HrefFieldMixin,
+ serializers.HyperlinkedIdentityField,
+):
+ """IdentityField for use in the pulp_href field of non-Master/Detail Serializers.
+
+ When using this field on a serializer, it will serialize the related resource as a relative URL.
+ """
+
+
+class RelatedField(
+ HrefFieldMixin,
+ serializers.HyperlinkedRelatedField,
+):
+ """RelatedField when relating to non-Master/Detail models
+
+ When using this field on a serializer, it will serialize the related resource as a relative URL.
+ """
+
+
+class RelatedResourceField(RelatedField):
+ """RelatedResourceField when relating a Resource object models.
+
+ This field should be used to relate a list of non-homogeneous resources. e.g.:
+ CreatedResource and ExportedResource models that store relationships to arbitrary
+ resources.
+
+ Specific implementation requires the model to be defined in the Meta:.
+ """
+
+ def to_representation(self, data):
+ # If the content object was deleted
+ if data.content_object is None:
+ return None
+ try:
+ if not data.content_object.complete:
+ return None
+ except AttributeError:
+ pass
+
+ # query parameters can be ignored because we are looking just for 'pulp_href'; still,
+ # we need to use the request object due to contextual references required by some
+ # serializers
+ request = get_request_without_query_params(self.context)
+
+ viewset = get_viewset_for_model(data.content_object)
+ serializer = viewset.serializer_class(data.content_object, context={"request": request})
+ return serializer.data.get("pulp_href")
+
+
+class DetailIdentityField(_DetailFieldMixin, serializers.HyperlinkedIdentityField):
+ """IdentityField for use in the pulp_href field of Master/Detail Serializers
+
+ When using this field on a Serializer, it will automatically cast objects to their Detail type
+ base on the Serializer's Model before generating URLs for them.
+
+ Subclasses must indicate the Master model they represent by declaring a queryset
+ in their class body, usually <MasterModelImplementation>.objects.all().
+ """
+
+
+class DetailRelatedField(_DetailFieldMixin, serializers.HyperlinkedRelatedField):
+ """RelatedField for use when relating to Master/Detail models
+
+ When using this field on a Serializer, relate it to the Master model in a
+ Master/Detail relationship, and it will automatically cast objects to their Detail type
+ before generating URLs for them.
+
+ Subclasses must indicate the Master model they represent by declaring a queryset
+ in their class body, usually <MasterModelImplementation>.objects.all().
+ """
+
+ def get_object(self, *args, **kwargs):
+ # return the cast object, not the generic contentunit
+ return super().get_object(*args, **kwargs).cast()
+
+ def use_pk_only_optimization(self):
+ """
+ If the lookup field is `pk`, DRF substitutes a PKOnlyObject as an optimization. This
+ optimization breaks with Detail fields like this one which need access to their Meta
+ class to get the relevant `view_name`.
+ """
+ return False
+
+
+class NestedIdentityField(HrefFieldMixin, NestedHyperlinkedIdentityField):
+ """NestedIdentityField for use with nested resources.
+
+ When using this field in a serializer, it serializes the resource as a relative URL.
+ """
+
+
+class NestedRelatedField(
+ HrefFieldMixin,
+ NestedHyperlinkedRelatedField,
+):
+ """NestedRelatedField for use when relating to nested resources.
+
+ When using this field in a serializer, it serializes the related resource as a relative URL.
+ """
+
+
+# Serializer mixins
+
+
def validate_unknown_fields(initial_data, defined_fields):
"""
This will raise a `ValidationError` if a serializer is passed fields that are unknown.
@@ -49,6 +231,36 @@ def validate(self, data):
return data
+class HiddenFieldsMixin(serializers.Serializer):
+ """
+ Adds a list field of hidden (write only) fields and whether their values are set
+ so clients can tell if they are overwriting an existing value.
+ For example this could be any sensitive information such as a password, name or token.
+ The list contains dictionaries with keys `name` and `is_set`.
+ """
+
+ hidden_fields = serializers.SerializerMethodField(
+ help_text=_("List of hidden (write only) fields")
+ )
+
+ def get_hidden_fields(
+ self, obj
+ ) -> List[TypedDict("hidden_fields", {"name": str, "is_set": bool})]:
+ hidden_fields = []
+
+ # returns false if field is "" or None
+ def _is_set(field_name):
+ field_value = getattr(obj, field_name)
+ return field_value != "" and field_value is not None
+
+ fields = self.get_fields()
+ for field_name in fields:
+ if fields[field_name].write_only:
+ hidden_fields.append({"name": field_name, "is_set": _is_set(field_name)})
+
+ return hidden_fields
+
+
class GetOrCreateSerializerMixin:
"""A mixin that provides a get_or_create with validation in the serializer"""
@@ -71,6 +283,9 @@ def get_or_create(cls, natural_key, default_values=None):
return result
+# Serializers
+
+
class ModelSerializer(
ValidateFieldsMixin, QueryFieldsMixin, serializers.HyperlinkedModelSerializer
):
@@ -214,180 +429,6 @@ def update(self, instance, validated_data):
return instance
-class _MatchingRegexViewName(object):
- """This is a helper class to help defining object matching rules for master-detail.
-
- If you can be specific, please specify the `view_name`, but if you cannot, this allows
- you to specify a regular expression like .e.g. `r"repositories(-.*/.*)?-detail"` to
- identify whether the provided resources viewn name belongs to any repository type.
- """
-
- __slots__ = ("pattern",)
-
- def __init__(self, pattern):
- self.pattern = pattern
-
- def __repr__(self):
- return f'{self.__class__.__name__}(r"{self.pattern}")'
-
- def __eq__(self, other):
- return re.fullmatch(self.pattern, other) is not None
-
-
-class _DetailFieldMixin:
- """Mixin class containing code common to DetailIdentityField and DetailRelatedField"""
-
- def __init__(self, view_name=None, view_name_pattern=None, **kwargs):
- if view_name is None:
- # set view name to prevent a DRF assertion that view_name is not None
- # Anything that accesses self.view_name after __init__
- # needs to have it set before being called. Unfortunately, a model instance
- # is required to derive this value, so we can't make a view_name property.
- if view_name_pattern:
- view_name = _MatchingRegexViewName(view_name_pattern)
- else:
- log.warn(
- _(
- "Please provide either 'view_name' or 'view_name_pattern' for {} on {}."
- ).format(self.__class__.__name__, traceback.extract_stack()[-4][2])
- )
- view_name = _MatchingRegexViewName(r".*")
- super().__init__(view_name, **kwargs)
-
- def _view_name(self, obj):
- # this is probably memoizeable based on the model class if we want to get cachey
- try:
- obj = obj.cast()
- except AttributeError:
- # The normal message that comes up here is unhelpful, so do like other DRF
- # fails do and be a little more helpful in the exception message.
- msg = (
- 'Expected a detail model instance, not {}. Do you need to add "many=True" to '
- "this field definition in its serializer?"
- ).format(type(obj))
- raise ValueError(msg)
- return get_view_name_for_model(obj, "detail")
-
- def get_url(self, obj, view_name, request, *args, **kwargs):
- # ignore the passed in view name and return the url to the cast unit, not the generic unit
- request = None
- view_name = self._view_name(obj)
- return super().get_url(obj, view_name, request, *args, **kwargs)
-
-
-class IdentityField(serializers.HyperlinkedIdentityField):
- """IdentityField for use in the pulp_href field of non-Master/Detail Serializers.
-
- The get_url method is overriden so relative URLs are returned.
- """
-
- def get_url(self, obj, view_name, request, *args, **kwargs):
- # ignore the passed in view name and return the url to the cast unit, not the generic unit
- request = None
- return super().get_url(obj, view_name, request, *args, **kwargs)
-
-
-class RelatedField(serializers.HyperlinkedRelatedField):
- """RelatedField when relating to non-Master/Detail models
-
- When using this field on a serializer, it will serialize the related resource as a relative URL.
- """
-
- def get_url(self, obj, view_name, request, *args, **kwargs):
- # ignore the passed in view name and return the url to the cast unit, not the generic unit
- request = None
- return super().get_url(obj, view_name, request, *args, **kwargs)
-
-
-class RelatedResourceField(RelatedField):
- """RelatedResourceField when relating a Resource object models.
-
- This field should be used to relate a list of non-homogeneous resources. e.g.:
- CreatedResource and ExportedResource models that store relationships to arbitrary
- resources.
-
- Specific implementation requires the model to be defined in the Meta:.
- """
-
- def to_representation(self, data):
- # If the content object was deleted
- if data.content_object is None:
- return None
- try:
- if not data.content_object.complete:
- return None
- except AttributeError:
- pass
-
- # query parameters can be ignored because we are looking just for 'pulp_href'; still,
- # we need to use the request object due to contextual references required by some
- # serializers
- request = get_request_without_query_params(self.context)
-
- viewset = get_viewset_for_model(data.content_object)
- serializer = viewset.serializer_class(data.content_object, context={"request": request})
- return serializer.data.get("pulp_href")
-
-
-class DetailIdentityField(_DetailFieldMixin, serializers.HyperlinkedIdentityField):
- """IdentityField for use in the pulp_href field of Master/Detail Serializers
-
- When using this field on a Serializer, it will automatically cast objects to their Detail type
- base on the Serializer's Model before generating URLs for them.
-
- Subclasses must indicate the Master model they represent by declaring a queryset
- in their class body, usually <MasterModelImplementation>.objects.all().
- """
-
-
-class DetailRelatedField(_DetailFieldMixin, serializers.HyperlinkedRelatedField):
- """RelatedField for use when relating to Master/Detail models
-
- When using this field on a Serializer, relate it to the Master model in a
- Master/Detail relationship, and it will automatically cast objects to their Detail type
- before generating URLs for them.
-
- Subclasses must indicate the Master model they represent by declaring a queryset
- in their class body, usually <MasterModelImplementation>.objects.all().
- """
-
- def get_object(self, *args, **kwargs):
- # return the cast object, not the generic contentunit
- return super().get_object(*args, **kwargs).cast()
-
- def use_pk_only_optimization(self):
- """
- If the lookup field is `pk`, DRF substitutes a PKOnlyObject as an optimization. This
- optimization breaks with Detail fields like this one which need access to their Meta
- class to get the relevant `view_name`.
- """
- return False
-
-
-class NestedIdentityField(NestedHyperlinkedIdentityField):
- """NestedIdentityField for use with nested resources.
-
- When using this field in a serializer, it serializes the resource as a relative URL.
- """
-
- def get_url(self, obj, view_name, request, *args, **kwargs):
- # ignore the passed in view name and return the url to the cast unit, not the generic unit
- request = None
- return super().get_url(obj, view_name, request, *args, **kwargs)
-
-
-class NestedRelatedField(NestedHyperlinkedRelatedField):
- """NestedRelatedField for use when relating to nested resources.
-
- When using this field in a serializer, it serializes the related resource as a relative URL.
- """
-
- def get_url(self, obj, view_name, request, *args, **kwargs):
- # ignore the passed in view name and return the url to the cast unit, not the generic unit
- request = None
- return super().get_url(obj, view_name, request, *args, **kwargs)
-
-
class AsyncOperationResponseSerializer(serializers.Serializer):
"""
Serializer for asynchronous operations.
@@ -414,33 +455,3 @@ class TaskGroupOperationResponseSerializer(serializers.Serializer):
view_name="task-groups-detail",
allow_null=False,
)
-
-
-class HiddenFieldsMixin(serializers.Serializer):
- """
- Adds a list field of hidden (write only) fields and whether their values are set
- so clients can tell if they are overwriting an existing value.
- For example this could be any sensitive information such as a password, name or token.
- The list contains dictionaries with keys `name` and `is_set`.
- """
-
- hidden_fields = serializers.SerializerMethodField(
- help_text=_("List of hidden (write only) fields")
- )
-
- def get_hidden_fields(
- self, obj
- ) -> List[TypedDict("hidden_fields", {"name": str, "is_set": bool})]:
- hidden_fields = []
-
- # returns false if field is "" or None
- def _is_set(field_name):
- field_value = getattr(obj, field_name)
- return field_value != "" and field_value is not None
-
- fields = self.get_fields()
- for field_name in fields:
- if fields[field_name].write_only:
- hidden_fields.append({"name": field_name, "is_set": _is_set(field_name)})
-
- return hidden_fields
| [task] Refactor linked serializer fields
| 2023-01-23T13:48:30 |
||
pulp/pulpcore | 3,523 | pulp__pulpcore-3523 | [
"3522"
] | 9e42d069313b7fb1f852ca5ed4d0aca3452cfe7a | diff --git a/pulpcore/app/models/content.py b/pulpcore/app/models/content.py
--- a/pulpcore/app/models/content.py
+++ b/pulpcore/app/models/content.py
@@ -20,6 +20,7 @@
from django.db import IntegrityError, models, transaction
from django.forms.models import model_to_dict
from django.utils.timezone import now
+from django_guid import get_guid
from django_lifecycle import BEFORE_UPDATE, BEFORE_SAVE, hook
from pulpcore.constants import ALL_KNOWN_CONTENT_CHECKSUMS
@@ -734,6 +735,16 @@ class SigningService(BaseModel):
pubkey_fingerprint = models.TextField()
script = models.TextField()
+ def _env_variables(self, env_vars=None):
+ guid = get_guid()
+ env = {
+ "PULP_SIGNING_KEY_FINGERPRINT": self.pubkey_fingerprint,
+ "CORRELATION_ID": guid if guid else "",
+ }
+ if env_vars:
+ env.update(env_vars)
+ return env
+
def sign(self, filename, env_vars=None):
"""
Signs the file provided via 'filename' by invoking an external script (or executable).
@@ -752,12 +763,9 @@ def sign(self, filename, env_vars=None):
Returns:
A dictionary as validated by the validate() method.
"""
- env = {"PULP_SIGNING_KEY_FINGERPRINT": self.pubkey_fingerprint}
- if env_vars:
- env.update(env_vars)
completed_process = subprocess.run(
[self.script, filename],
- env=env,
+ env=self._env_variables(env_vars),
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
@@ -774,13 +782,10 @@ def sign(self, filename, env_vars=None):
async def asign(self, filename, env_vars=None):
"""Async version of sign."""
- env = {"PULP_SIGNING_KEY_FINGERPRINT": self.pubkey_fingerprint}
- if env_vars:
- env.update(env_vars)
process = await asyncio.create_subprocess_exec(
self.script,
filename,
- env=env,
+ env=self._env_variables(env_vars),
stdout=asyncio.subprocess.PIPE,
stderr=asyncio.subprocess.PIPE,
)
| Correlation ID should be available to signing cli
The Correlation ID / guid is used throughout Pulp to correlate logs with a particular request; however, it is not made available to the signing CLI script. It should be, in case that script wants to log things too.
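A minimal sketch of a signing script consuming it (the env var names `PULP_SIGNING_KEY_FINGERPRINT` and `CORRELATION_ID` match the patch; everything else is illustrative):
```
#!/usr/bin/env python3
# illustrative signing-script skeleton; the actual gpg invocation is elided
import logging
import os
import sys

logging.basicConfig(level=logging.INFO)
cid = os.environ.get("CORRELATION_ID", "")
key = os.environ["PULP_SIGNING_KEY_FINGERPRINT"]
logging.info("signing %s [correlation_id=%s key=%s]", sys.argv[1], cid, key)
```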
| 2023-01-24T16:23:21 |
||
pulp/pulpcore | 3,537 | pulp__pulpcore-3537 | [
"3541"
] | 77cb2fcf9edaf026ed07d007b4513a47cb826951 | diff --git a/pulpcore/plugin/stages/artifact_stages.py b/pulpcore/plugin/stages/artifact_stages.py
--- a/pulpcore/plugin/stages/artifact_stages.py
+++ b/pulpcore/plugin/stages/artifact_stages.py
@@ -465,7 +465,8 @@ async def run(self):
for ra in existing_ras:
for c_type in Artifact.COMMON_DIGEST_FIELDS:
checksum = await sync_to_async(getattr)(ra, c_type)
- if checksum:
+ # pick the first occurrence of RA from ACS
+ if checksum and checksum not in existing_ras_dict:
existing_ras_dict[checksum] = {
"remote": ra.remote,
"url": ra.url,
| Improve the logic in the ACSHandleStage
**Version**
main
**Describe the bug**
If there are multiple ACS that point to the same content, pick the first RA instead of the last.
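A minimal sketch of the first-wins population the patch above switches to:
```
# keep the RemoteArtifact from the first ACS that provides this checksum;
# later duplicates no longer overwrite the stored entry
if checksum and checksum not in existing_ras_dict:
    existing_ras_dict[checksum] = {"remote": ra.remote, "url": ra.url}
```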
| 2023-01-30T15:52:06 |
||
pulp/pulpcore | 3,540 | pulp__pulpcore-3540 | [
"3407"
] | 70b1eb04a4349bedfa6781c8598b53a99f9a63a1 | diff --git a/pulpcore/tasking/pulpcore_worker.py b/pulpcore/tasking/pulpcore_worker.py
--- a/pulpcore/tasking/pulpcore_worker.py
+++ b/pulpcore/tasking/pulpcore_worker.py
@@ -54,7 +54,11 @@
_logger = logging.getLogger(__name__)
random.seed()
+# Number of heartbeats for a task to finish on graceful worker shutdown (approx)
TASK_GRACE_INTERVAL = 3
+# Number of heartbeats between attempts to kill the subprocess (approx)
+TASK_KILL_INTERVAL = 1
+# Number of heartbeats between cleaning up worker processes (approx)
WORKER_CLEANUP_INTERVAL = 100
# Randomly chosen
TASK_SCHEDULING_LOCK = 42
@@ -121,6 +125,7 @@ def _signal_handler(self, thesignal, frame):
_logger.info(_("Worker %s was requested to shut down."), self.name)
+ self.task_grace_timeout = TASK_GRACE_INTERVAL
self.shutdown_requested = True
def handle_worker_heartbeat(self):
@@ -166,7 +171,7 @@ def worker_cleanup(self):
def beat(self):
if self.worker.last_heartbeat < timezone.now() - timedelta(seconds=self.heartbeat_period):
self.worker = self.handle_worker_heartbeat()
- if self.shutdown_requested:
+ if self.task_grace_timeout > 0:
self.task_grace_timeout -= 1
self.worker_cleanup_countdown -= 1
if self.worker_cleanup_countdown <= 0:
@@ -303,7 +308,6 @@ def supervise_task(self, task):
This function must only be called while holding the lock for that task."""
- self.task_grace_timeout = TASK_GRACE_INTERVAL
task.worker = self.worker
task.save(update_fields=["worker"])
cancel_state = None
@@ -317,11 +321,15 @@ def supervise_task(self, task):
item = connection.connection.notifies.pop(0)
if item.channel == "pulp_worker_cancel" and item.payload == str(task.pk):
_logger.info(_("Received signal to cancel current task %s."), task.pk)
- os.kill(task_process.pid, signal.SIGUSR1)
cancel_state = TASK_STATES.CANCELED
# ignore all other notifications
if cancel_state:
- break
+ if self.task_grace_timeout > 0:
+ _logger.info("Wait for canceled task to abort.")
+ else:
+ self.task_grace_timeout = TASK_KILL_INTERVAL
+ _logger.info("Aborting current task %s due to cancelation.", task.pk)
+ os.kill(task_process.pid, signal.SIGUSR1)
r, w, x = select.select(
[self.sentinel, connection.connection, task_process.sentinel],
@@ -344,10 +352,8 @@ def supervise_task(self, task):
)
else:
_logger.info("Aborting current task %s due to worker shutdown.", task.pk)
- os.kill(task_process.pid, signal.SIGUSR1)
cancel_state = TASK_STATES.FAILED
cancel_reason = "Aborted during worker shutdown."
- break
task_process.join()
if not cancel_state and task_process.exitcode != 0:
_logger.warning(
| diff --git a/pulpcore/app/tasks/test.py b/pulpcore/app/tasks/test.py
--- a/pulpcore/app/tasks/test.py
+++ b/pulpcore/app/tasks/test.py
@@ -1,3 +1,14 @@
+import backoff
+
+
def dummy_task():
"""Dummy task, that can be used in tests."""
pass
+
+
[email protected]_exception(backoff.expo, BaseException)
+def gooey_task(interval):
+ """A sleep task that tries to avoid being killed by ignoring all exceptions."""
+ from time import sleep
+
+ sleep(interval)
diff --git a/pulpcore/tests/functional/api/test_tasking.py b/pulpcore/tests/functional/api/test_tasking.py
--- a/pulpcore/tests/functional/api/test_tasking.py
+++ b/pulpcore/tests/functional/api/test_tasking.py
@@ -105,7 +105,7 @@ def test_delete_cancel_running_task(dispatch_task, tasks_api_client):
for i in range(10):
task = tasks_api_client.read(task_href)
- if task.state != "running":
+ if task.state == "running":
break
time.sleep(1)
@@ -275,3 +275,23 @@ def test_filter_tasks_using_worker__in_filter(tasks_api_client, dispatch_task, m
assert task1_href in tasks_hrefs
assert task2_href in tasks_hrefs
+
+
+def test_cancel_gooey_task(tasks_api_client, dispatch_task, monitor_task):
+ task_href = dispatch_task("pulpcore.app.tasks.test.gooey_task", (60,))
+ for i in range(10):
+ task = tasks_api_client.read(task_href)
+ if task.state == "running":
+ break
+ time.sleep(1)
+
+ task = tasks_api_client.tasks_cancel(task_href, {"state": "canceled"})
+
+ if task.state == "canceling":
+ for i in range(30):
+ if task.state != "canceling":
+ break
+ time.sleep(1)
+ task = tasks_api_client.read(task_href)
+
+ assert task.state == "canceled"
diff --git a/pulpcore/tests/functional/utils.py b/pulpcore/tests/functional/utils.py
--- a/pulpcore/tests/functional/utils.py
+++ b/pulpcore/tests/functional/utils.py
@@ -23,7 +23,7 @@ class PulpTaskError(Exception):
def __init__(self, task):
"""Provide task info to exception."""
- description = task.to_dict()["error"]["description"]
+ description = task.to_dict()["error"].get("description")
super().__init__(self, f"Pulp task failed ({description})")
self.task = task
| Task cancellation gets stuck
**Version**
Pulpcore 3.16 via Satellite 6.11
**Describe the bug**
If "Reclaim Space" (Actions::Pulp3::CapsuleContent::ReclaimSpace) action is executed and canceled after that task will get stuck in canceling not only in foreman tasks but also in pulp3.
The task was apparently stuck in "canceling" state for more than 6 days.
**To Reproduce**
How reproducible:
Always
Steps to Reproduce:
1. Execute "Reclaim Space" on Satellite
2. Cancel task (Actions::Pulp3::CapsuleContent::ReclaimSpace)
**Expected results:**
Task will get canceled
**Actual results:**
Task is stuck in foreman-tasks and in pulp3
**Additional context**
https://bugzilla.redhat.com/show_bug.cgi?id=2143290
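To make the failure mode concrete, here is a sketch of a task that can survive a single cancellation attempt, assuming the worker surfaces SIGUSR1 as an exception inside the task process (roughly what the `gooey_task` fixture in the test patch above does):
```python
import time

def stubborn_task(interval=60):
    # Swallows every exception, so one SIGUSR1 (raised as an exception in
    # the task process) is not enough -- the worker has to keep re-sending
    # the signal, which is what this change implements.
    while True:
        try:
            time.sleep(interval)
            return
        except BaseException:
            continue
```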
| This was independently reproduced on a lab system
Removing triage tag because it was reproduced, adding prio tag because it can block up the tasking system.
As far as I heard, rebooting _all_ workers unblocks the tasking system. I wonder if rebooting just the worker stuck on that task would be sufficient too.
Can we add logging from the worker when the cancel happens?
I haven't been able to recreate this in a just-pulp env yet. What would be *really* useful is the journalctl output from the 60 seconds around the time the cancel was *issued*. Looking at the sys.exit() call that ends a worker that's been cancelled, I can imagine a series of Unfortunate Events that would result in the worker-process not actually exiting - but I'd think they'd all leave traces of unusual exceptions in the logs.
Investigation continues. | 2023-01-31T10:54:53 |
pulp/pulpcore | 3,557 | pulp__pulpcore-3557 | [
"3404"
] | 334ad3e8b6849542d1b81b3ec526ab6077785849 | diff --git a/pulpcore/app/tasks/reclaim_space.py b/pulpcore/app/tasks/reclaim_space.py
--- a/pulpcore/app/tasks/reclaim_space.py
+++ b/pulpcore/app/tasks/reclaim_space.py
@@ -47,7 +47,7 @@ def reclaim_space(repo_pks, keeplist_rv_pks=None, force=False):
if not content.cast().PROTECTED_FROM_RECLAIM:
unprotected.append(content.pulp_type)
- ca_qs = ContentArtifact.objects.filter(
+ ca_qs = ContentArtifact.objects.select_related("content", "artifact").filter(
content__in=c_reclaim_qs.values("pk"), artifact__isnull=False
)
if not force:
| Long-running space reclamation task
**Version**
```
"versions": [
{
"component": "core",
"version": "3.21.0",
"package": "pulpcore"
},
{
"component": "container",
"version": "2.14.2",
"package": "pulp-container"
},
{
"component": "rpm",
"version": "3.18.5",
"package": "pulp-rpm"
},
{
"component": "python",
"version": "3.7.2",
"package": "pulp-python"
},
{
"component": "ostree",
"version": "2.0.0a6",
"package": "pulp-ostree"
},
{
"component": "file",
"version": "1.11.1",
"package": "pulp-file"
},
{
"component": "deb",
"version": "2.20.0",
"package": "pulp_deb"
},
{
"component": "certguard",
"version": "1.5.5",
"package": "pulp-certguard"
},
{
"component": "ansible",
"version": "0.15.0",
"package": "pulp-ansible"
}
],
```
Katello nightly (4.7)
**Describe the bug**
I noticed my reclaim space task was taking over 20 minutes in an environment with 63 repositories and 91485 rpm content units (to give some perspective). PostgreSQL was being heavily taxed and taking 100% of one CPU core. I tried to cancel it, but the cancellation was stuck, so I needed to restart Pulpcore to stop the space reclamation.
Here's the task output after it was canceled forcefully:
```
{
"pulp_href": "/pulp/api/v3/tasks/bce46114-a5d9-445a-a898-217210bf1975/",
"pulp_created": "2022-11-15T16:38:50.639518Z",
"state": "failed",
"name": "pulpcore.app.tasks.reclaim_space.reclaim_space",
"logging_cid": "c658f06c-3b76-49f6-a514-b19dd3bfbe52",
"started_at": "2022-11-15T16:38:50.688113Z",
"finished_at": "2022-11-15T17:09:06.918179Z",
"error": {
"reason": "Worker has gone missing."
},
"worker": "/pulp/api/v3/workers/80173b0a-f731-4c7b-b3ec-ed993369044e/",
"parent_task": null,
"child_tasks": [],
"task_group": null,
"progress_reports": [],
"created_resources": [],
"reserved_resources_record": [
"shared:/pulp/api/v3/repositories/rpm/rpm/4ad1fb8e-ef06-42e6-a83a-00da97551dce/",
"shared:/pulp/api/v3/repositories/rpm/rpm/3224e11b-ec85-4e3d-8d7b-fd44dcfd184d/",
"shared:/pulp/api/v3/repositories/rpm/rpm/d0f49692-31dd-4709-9e52-27be83167a3f/",
"shared:/pulp/api/v3/repositories/rpm/rpm/bef78a95-9555-467b-9fe6-66650c081757/",
"shared:/pulp/api/v3/repositories/rpm/rpm/e5838919-ba35-4497-b8a0-98c10af8941b/",
"shared:/pulp/api/v3/repositories/rpm/rpm/7987e671-61e6-4d07-9c9b-ca7a07367d91/",
"shared:/pulp/api/v3/repositories/rpm/rpm/acd01e87-640a-4584-b52f-c999e937b55f/",
"shared:/pulp/api/v3/repositories/rpm/rpm/b01a1f40-c195-48c0-a05c-77b7748d6338/",
"shared:/pulp/api/v3/repositories/rpm/rpm/504b40fe-5d7f-456e-bc95-683878609791/",
"shared:/pulp/api/v3/repositories/rpm/rpm/8a1a3998-ff6c-460c-b26b-010ac57023a9/",
"shared:/pulp/api/v3/repositories/rpm/rpm/a1a44856-a028-4a2e-a539-aa73d3ef9ff3/",
"shared:/pulp/api/v3/repositories/rpm/rpm/1cde5855-eab1-4ac3-ac2f-f02a22541619/",
"shared:/pulp/api/v3/repositories/deb/apt/509de38c-7ae7-4f7b-a37c-db8404488a51/",
"shared:/pulp/api/v3/repositories/rpm/rpm/cdd44804-8324-48ce-9e61-4ae6770d0427/",
"shared:/pulp/api/v3/repositories/rpm/rpm/dfe18547-f2bf-4c41-9b9e-32d6cb1e2f5e/",
"shared:/pulp/api/v3/repositories/rpm/rpm/d867837e-c35f-475d-9bb5-9c9bde465b19/",
"shared:/pulp/api/v3/repositories/rpm/rpm/a0bcd8d6-8e6d-4e05-83d1-8cbfbc28d8d9/",
"shared:/pulp/api/v3/repositories/rpm/rpm/b0169f69-55cc-4ce1-830c-f444152c6853/"
]
}
```
**To Reproduce**
Run space reclamation on an environment with a similar amount of repositories and content to what I posted above.
**Expected behavior**
After chatting with @dralley, it sounds like this may be slower performance than expected.
| Obviously we should attempt to reproduce this and do some profiling, but this part of the query stands out as being a potential N+1 query situation
https://github.com/pulp/pulpcore/blob/main/pulpcore/app/tasks/reclaim_space.py#L50-L58
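For context, a minimal sketch of the pattern (with a hypothetical `process()` consumer): without `select_related`, every access to a related object on each row issues its own query:
```python
from pulpcore.app.models import ContentArtifact

# N+1: one query for the ContentArtifacts, plus one query per row for
# each .content / .artifact attribute access.
for ca in ContentArtifact.objects.filter(artifact__isnull=False):
    process(ca.content, ca.artifact)

# One query with JOINs: the related rows are fetched up front.
qs = ContentArtifact.objects.select_related("content", "artifact").filter(
    artifact__isnull=False
)
for ca in qs:
    process(ca.content, ca.artifact)
```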
@ianballou Just for context, do you remember if this was on a system backed by SSD or HDD?
@dralley this was a Katello VM running on an SSD.
Also at the time, it seemed like the task was completely locked into some postgres query. I couldn't even cancel it.
Note: the cancellation issue is filed separately here https://github.com/pulp/pulpcore/issues/3407 and other users have hit it too, it's not a total one-off.
A quick update:
1 - We tested it locally by cloning the Fedora 37 repo, about 60-70GB, and then cloning it into 70 repos. After that, we just called the `reclaim_space_task`. We tried this a couple of times (downloading over 1TB during the week) and the issue did not trigger.
2 - @ianballou started a Katello VM with approx. 35GB of Pulp repos. We monitored the resource utilization and called the `reclaim_space_task`. Again, things ran smoothly.
After checking some user reports, we verified that this was possibly triggered by a low-memory situation.
Does it make sense to open a new issue, or to continue with this one, to find what this low-memory threshold was? @ianballou @dralley
@ianballou @decko Which user reports led to the conclusion it may be memory related? (Partly I just want to make sure they get linked up because I've only seen one or two that aren't easily visible from this issue and I don't recall seeing anything there. Not doubting the conclusion.)
Do the profiles of where the task is spending its time (regardless of whether it actually takes a long time) show anything interesting?
Oh, also: there is a setting you can enable to plot the memory usage of tasks over time. I am not sure if this was just a general memory usage issue or one related to this specific task, but it can be useful in cases where you think a task might be problematic.
https://github.com/pulp/pulpcore/blob/main/docs/configuration/settings.rst#task_diagnostics
Sidebar: maybe we could extend that to also log system memory consumption and perhaps even swap and plot them alongside the task memory consumption? That seems like a useful ability.
> @ianballou @decko Which user reports led to the conclusion it may be memory related? (Partly I just want to make sure they get linked up because I've only seen one or two that aren't easily visible from this issue and I don't recall seeing anything there. Not doubting the conclusion.)
@dralley it occurred in a private discussion, the gist of it was that increasing RAM solved the problem. PM me for more details if you'd like.
> @ianballou @decko Which user reports led to the conclusion it may be memory related? (Partly I just want to make sure they get linked up because I've only seen one or two that aren't easily visible from this issue and I don't recall seeing anything there. Not doubting the conclusion.)
>
> Do the profiles of where the task is spending its time (regardless of whether it actually takes a long time) show anything interesting?
Not so far. Also, I just changed a query to use a select_related statement to avoid an N+1 situation, but I didn't see any relevant change in the profiling.
|
pulp/pulpcore | 3,559 | pulp__pulpcore-3559 | [
"3284"
] | 547ff7f072e39e982deac7a2fb6bba46caface7f | diff --git a/pulpcore/plugin/stages/content_stages.py b/pulpcore/plugin/stages/content_stages.py
--- a/pulpcore/plugin/stages/content_stages.py
+++ b/pulpcore/plugin/stages/content_stages.py
@@ -149,42 +149,39 @@ async def run(self):
# Query db once and update each object in memory for bulk_update call
for content_artifact in to_update_ca_query.iterator():
key = (content_artifact.content_id, content_artifact.relative_path)
- # Maybe remove dict elements after to reduce memory?
- content_artifact.artifact = to_update_ca_artifact[key]
- to_update_ca_bulk.append(content_artifact)
+ # Same content/relpath/artifact-sha means no change to the
+ # contentartifact, ignore. This prevents us from colliding with any
+ # concurrent syncs with overlapping identical content. "Someone" updated
+ # the contentartifacts to match what we would be doing, so we don't need
+ # to do an (unnecessary) db-update, which was opening us up for a variety
+ # of potential deadlock scenarios.
+ #
+ # We start knowing that we're comparing CAs with same content/rel-path,
+ # because that's what we're using for the key to look up the incoming CA.
+ # So now let's compare artifacts, incoming vs current.
+ #
+ # Are we changing from no-artifact to having one or vice-versa?
+ artifact_state_change = bool(content_artifact.artifact) ^ bool(
+ to_update_ca_artifact[key]
+ )
+ # Do both current and incoming have an artifact?
+ both_have_artifact = content_artifact.artifact and to_update_ca_artifact[key]
+ # If both sides have an artifact, do they have the same sha256?
+ same_artifact_hash = both_have_artifact and (
+ content_artifact.artifact.sha256 == to_update_ca_artifact[key].sha256
+ )
+ # Only update if there was an actual change
+ if artifact_state_change or (both_have_artifact and not same_artifact_hash):
+ content_artifact.artifact = to_update_ca_artifact[key]
+ to_update_ca_bulk.append(content_artifact)
# to_update_ca_bulk are the CAs that we know are already persisted.
# We need to update their artifact_ids, and wish to do it in bulk to
# avoid hundreds of round-trips to the database.
- #
- # To avoid deadlocks in high-concurrency environments with overlapping
- # content, we need to update the rows in some defined order. Unfortunately,
- # postgres doesn't support order-on-update - but it *does* support ordering
- # on select-for-update. So, we select-for-update, in pulp_id order, the
- # rows we're about to update as one db-call, and then do the update in a
- # second.
- #
- # NOTE: select-for-update requires being in an atomic-transaction. We are
- # **already in an atomic transaction** at this point as a result of the
- # "with transaction.atomic():", above.
- ids = [k.pulp_id for k in to_update_ca_bulk]
- # "len()" forces the QuerySet to be evaluated. Using exist() or count() won't
- # work for us - Django is smart enough to either not-order, or even
- # not-emit, a select-for-update in these cases.
- #
- # To maximize performance, we make sure to only ask for pulp_ids, and
- # avoid instantiating a python-object for the affected CAs by using
- # values_list()
- subq = (
- ContentArtifact.objects.filter(pulp_id__in=ids)
- .only("pulp_id")
- .order_by("pulp_id")
- .select_for_update()
- )
- len(subq.values_list())
- ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
-
- # To avoid a similar deadlock issue when calling get_or_create, we sort the
+ if to_update_ca_bulk:
+ ContentArtifact.objects.bulk_update(to_update_ca_bulk, ["artifact"])
+
+ # To avoid a deadlock issue when calling get_or_create, we sort the
# "new" CAs to make sure inserts happen in a defined order. Since we can't
# trust the pulp_id (by the time we go to create a CA, it may already exist,
# and be replaced by the 'real' one), we sort by their "natural key".
| diff --git a/pulpcore/tests/functional/api/using_plugin/test_distributions.py b/pulpcore/tests/functional/api/using_plugin/test_distributions.py
--- a/pulpcore/tests/functional/api/using_plugin/test_distributions.py
+++ b/pulpcore/tests/functional/api/using_plugin/test_distributions.py
@@ -414,6 +414,7 @@ def test_content_served_immediate_with_range_request_start_value_larger_than_con
headers={"Range": "bytes=10485860-10485870"},
)
+ @unittest.skip("skip - CI weirdness unrelated to code-state")
def test_content_served_after_db_restart(self):
"""
Assert that content can be downloaded after the database has been restarted.
| contentartifact bulk_update can still deadlock
**Version**
all
**Describe the bug**
There have been several attempts to address deadlocks that occur under high-load, high-concurrency sync operations with overlapping content. The deadlock stems from ContentArtifact being deduplicated across repositories that contain the same Contents. At this point, the remaining issue arises when parties to the deadlock are attempting to bulk_update the 'artifact' field of any existing (already-persisted) ContentArtifacts for the Content-batch they are syncing.
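For anyone unfamiliar with the mechanics, a minimal illustration (a hypothetical helper, not the actual stage code) of how two transactions touching the same rows in different orders deadlock:
```python
from django.db import transaction
from pulpcore.app.models import ContentArtifact

def touch_in_order(pks, artifact):
    # Hypothetical: update rows one at a time inside a single transaction,
    # acquiring a row lock on each as we go.
    with transaction.atomic():
        for pk in pks:
            ContentArtifact.objects.filter(pk=pk).update(artifact=artifact)

# Worker 1 runs touch_in_order([a, b], x) while worker 2 runs
# touch_in_order([b, a], x): worker 1 holds the row lock on `a` and waits
# for `b`, worker 2 holds `b` and waits for `a` -> "deadlock detected".
# Skipping no-op updates removes this contention for identical content.
```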
**To Reproduce**
Here is a script using pulp-cli that can reproduce the behavior at will. It requires use of the Very Large pulp-file performance fixture. It assumes you have the 'jq' package available.
```
#!/bin/bash
URLS=(\
https://fixtures.pulpproject.org/file-perf/PULP_MANIFEST \
)
NAMES=(\
file-perf \
)
# Make sure we're concurrent enough
num_workers=`sudo systemctl status pulpcore-worker* | grep "service - Pulp Worker" | wc -l`
echo "Current num-workers ${num_workers}"
if [ ${num_workers} -lt 10 ]
then
for (( i=${num_workers}+1; i<=10; i++ ))
do
echo "Starting worker ${i}"
sudo systemctl start pulpcore-worker@${i}
done
fi
echo "CLEANUP"
for n in ${!NAMES[@]}
do
for i in {1..9}
do
pulp file remote destroy --name ${NAMES[$n]}-${i}
pulp file repository destroy --name ${NAMES[$n]}-${i}
done
done
pulp orphan cleanup --protection-time 0
echo "SETUP URLS AND REMOTES"
for n in ${!NAMES[@]}
do
for i in {1..9}
do
pulp file remote create --name ${NAMES[$n]}-${i} --url ${URLS[$n]} | jq .pulp_href
pulp file repository create --name ${NAMES[$n]}-${i} --remote ${NAMES[$n]}-${i} | jq .pulp_href
done
done
starting_failed=`pulp task list --limit 10000 --state failed | jq length`
echo "SYNCING..."
for i in {1..9}
do
for n in ${!NAMES[@]}
do
pulp -b file repository sync --name ${NAMES[$n]}-${i}
done
done
sleep 5
echo "WAIT FOR COMPLETION...."
while true
do
running=`pulp task list --limit 10000 --state running | jq length`
echo -n "."
sleep 5
if [ ${running} -eq 0 ]
then
echo "DONE"
break
fi
done
failed=`pulp task list --limit 10000 --state failed | jq length`
echo "FAILURES : ${failed}"
if [ ${failed} -gt ${starting_failed} ]
then
echo "FAILED: " ${failed} - ${starting_failed}
exit
fi
```
**Expected behavior**
All syncs should happen without deadlocks.
**Additional context**
This is the latest step in fixing the codepath that has attempted to be addressed by issues: #3192 #3111 #2430
| https://bugzilla.redhat.com/show_bug.cgi?id=2062526
https://bugzilla.redhat.com/show_bug.cgi?id=2082209 | 2023-02-09T16:13:06 |
pulp/pulpcore | 3,575 | pulp__pulpcore-3575 | [
"3574"
] | a3eb5051bea5dba6fd6a8ee43932713492fa8fa2 | diff --git a/pulpcore/app/redis_connection.py b/pulpcore/app/redis_connection.py
--- a/pulpcore/app/redis_connection.py
+++ b/pulpcore/app/redis_connection.py
@@ -1,5 +1,5 @@
from redis import Redis
-from aioredis import Redis as aRedis
+from redis.asyncio import Redis as aRedis
from pulpcore.app.settings import settings
diff --git a/pulpcore/cache/cache.py b/pulpcore/cache/cache.py
--- a/pulpcore/cache/cache.py
+++ b/pulpcore/cache/cache.py
@@ -11,7 +11,7 @@
from aiohttp.web_exceptions import HTTPFound
from redis import ConnectionError
-from aioredis import ConnectionError as AConnectionError
+from redis.asyncio import ConnectionError as AConnectionError
from pulpcore.app.settings import settings
from pulpcore.app.redis_connection import (
| Aioredis is no longer supported, replace the dependency with redis-py (which we are already using)
**Version**
3.16+
**Describe the bug**
pulpcore depends on aioredis, which is no longer supported, and has been subsumed by the standard redis library which we are already using in synchronous contexts
https://github.com/aio-libs/aioredis-py#-aioredis-is-now-in-redis-py-420rc1-
| 2023-02-15T18:05:47 |
||
pulp/pulpcore | 3,577 | pulp__pulpcore-3577 | [
"3570"
] | 929faafdf750111f0eced3094349f660db7e8ded | diff --git a/pulpcore/plugin/viewsets/__init__.py b/pulpcore/plugin/viewsets/__init__.py
--- a/pulpcore/plugin/viewsets/__init__.py
+++ b/pulpcore/plugin/viewsets/__init__.py
@@ -36,6 +36,7 @@
from pulpcore.app.viewsets.custom_filters import ( # noqa
CharInFilter,
+ LabelFilter,
LabelSelectFilter,
RepositoryVersionFilter,
)
| LabelFilter is not exported from the plugin directory.
**Version**
latest
**Describe the bug**
LabelSelectFilter is exported via the plugin/viewsets directory, but has been deprecated in favor of LabelFilter ... which is not exported via the plugin/viewsets directory.
**To Reproduce**
```from pulpcore.plugin.viewsets import LabelFilter``` produces an import error.
**Expected behavior**
```from pulpcore.plugin.viewsets import LabelFilter``` does not produce an import error.
| 2023-02-16T11:37:06 |
||
pulp/pulpcore | 3,586 | pulp__pulpcore-3586 | [
"3075"
] | 534336600a2dd219e80b1cb5a006d7864319535c | diff --git a/pulpcore/app/tasks/importer.py b/pulpcore/app/tasks/importer.py
--- a/pulpcore/app/tasks/importer.py
+++ b/pulpcore/app/tasks/importer.py
@@ -123,14 +123,17 @@ def _import_file(fpath, resource_class, retry=False):
curr_attempt=curr_attempt,
)
)
-
- # Last attempt, we raise an exception on any problem.
- # This will either succeed, or log a fatal error and fail.
- try:
- a_result = resource.import_data(data, raise_errors=True)
- except Exception as e: # noqa log on ANY exception and then re-raise
- log.error(f"FATAL import-failure importing {fpath}")
- raise
+ else:
+ break
+ else:
+ # The while condition is not fulfilled, so we proceed to the last attempt,
+ # we raise an exception on any problem. This will either succeed, or log a
+ # fatal error and fail.
+ try:
+ a_result = resource.import_data(data, raise_errors=True)
+ except Exception as e: # noqa log on ANY exception and then re-raise
+ log.error(f"FATAL import-failure importing {fpath}")
+ raise
else:
a_result = resource.import_data(data, raise_errors=True)
yield a_result
| The same content is imported multiple times in a row
Exported resources are imported `MAX_ATTEMPTS` times in a row, which is unnecessary:
https://github.com/pulp/pulpcore/blob/main/pulpcore/app/tasks/importer.py#L107-L133
Introducing a new `yield`ing `else` branch would resolve this problem.
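The proposed fix leans on Python's `while`/`else` semantics: the `else` branch runs only when the loop condition becomes false, i.e. when no `break` happened. A sketch with hypothetical `try_import()`/`MAX_ATTEMPTS` names:
```python
attempt = 1
while attempt < MAX_ATTEMPTS:
    if try_import(raise_errors=False):
        break  # success on a retryable attempt: the final attempt below is skipped
    attempt += 1
else:
    # Reached only when all retryable attempts were exhausted: one last
    # attempt that lets any failure propagate.
    try_import(raise_errors=True)
```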
| 2023-02-20T15:00:28 |
||
pulp/pulpcore | 3,587 | pulp__pulpcore-3587 | [
"3075"
] | 4ea7455b40be735d713139cd60edb13cfe004141 | diff --git a/pulpcore/app/tasks/importer.py b/pulpcore/app/tasks/importer.py
--- a/pulpcore/app/tasks/importer.py
+++ b/pulpcore/app/tasks/importer.py
@@ -123,14 +123,17 @@ def _import_file(fpath, resource_class, retry=False):
curr_attempt=curr_attempt,
)
)
-
- # Last attempt, we raise an exception on any problem.
- # This will either succeed, or log a fatal error and fail.
- try:
- a_result = resource.import_data(data, raise_errors=True)
- except Exception as e: # noqa log on ANY exception and then re-raise
- log.error(f"FATAL import-failure importing {fpath}")
- raise
+ else:
+ break
+ else:
+ # The while condition is not fulfilled, so we proceed to the last attempt,
+ # we raise an exception on any problem. This will either succeed, or log a
+ # fatal error and fail.
+ try:
+ a_result = resource.import_data(data, raise_errors=True)
+ except Exception as e: # noqa log on ANY exception and then re-raise
+ log.error(f"FATAL import-failure importing {fpath}")
+ raise
else:
a_result = resource.import_data(data, raise_errors=True)
yield a_result
| The same content is imported multiple times in a row
Exported resources are imported `MAX_ATTEMPTS` times in a row, which is unnecessary:
https://github.com/pulp/pulpcore/blob/main/pulpcore/app/tasks/importer.py#L107-L133
Introducing a new `yield`ing `else` branch would resolve this problem.
| 2023-02-20T15:00:29 |
||
pulp/pulpcore | 3,593 | pulp__pulpcore-3593 | [
"3590"
] | da675497e2bc173b7cd23bf5f4ca400cc25fb412 | diff --git a/pulpcore/openapi/__init__.py b/pulpcore/openapi/__init__.py
--- a/pulpcore/openapi/__init__.py
+++ b/pulpcore/openapi/__init__.py
@@ -20,13 +20,17 @@
)
from drf_spectacular.settings import spectacular_settings
from drf_spectacular.types import OpenApiTypes
-from drf_spectacular.utils import OpenApiParameter
+from drf_spectacular.utils import OpenApiParameter, extend_schema_field
from rest_framework import mixins, serializers
from rest_framework.schemas.utils import get_pk_description
from pulpcore.app.apps import pulp_plugin_configs
+# Python does not distinguish integer sizes. The safest assumption is that they are large.
+extend_schema_field(OpenApiTypes.INT64)(serializers.IntegerField)
+
+
class PulpAutoSchema(AutoSchema):
"""Pulp Auto Schema."""
| openapi spec does not specify integer format leading to unusable apis
**Version**
3.22.0
**Describe the bug**
Currently the OpenAPI spec just specifies an integer for the type, without adding the 'format'. (https://swagger.io/docs/specification/data-models/data-types/)
This works fine for most things, but for things related to filesize or storage, it tends to break when using lower-level languages like Go:
For example the status api errors when using generated go bindings:
```
json: cannot unmarshal number 1022488477696 into Go struct field StatusResponseStorage.storage.total of type int32
```
I'm sure there are other areas where this could be an issue; for example, reporting the size of a file greater than ~4GB would hit the same problem.
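The value from the traceback indeed overflows a signed 32-bit integer, which is why the generated Go bindings fail:
```python
total = 1022488477696     # bytes, from the unmarshal error above
print(total > 2**31 - 1)  # True: does not fit in int32, needs int64
```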
**To Reproduce**
Generate Go bindings and see that int32 is used for things like the storage amounts in the status API.
**Expected behavior**
format: int64 should be used for values that could be large (file sizes, disk sizes, anything else?)
**Additional context**
https://swagger.io/docs/specification/data-models/data-types/
| 2023-02-22T09:56:56 |
||
pulp/pulpcore | 3,605 | pulp__pulpcore-3605 | [
"3604"
] | d27b1d05f7be7bd12f7c3089a765c1a60a9b9e10 | diff --git a/pulpcore/plugin/util.py b/pulpcore/plugin/util.py
--- a/pulpcore/plugin/util.py
+++ b/pulpcore/plugin/util.py
@@ -12,4 +12,11 @@
remove_role,
)
-from pulpcore.app.util import get_artifact_url, get_url, gpg_verify, verify_signature # noqa
+from pulpcore.app.util import ( # noqa
+ extract_pk,
+ get_artifact_url,
+ get_url,
+ gpg_verify,
+ raise_for_unknown_content_units,
+ verify_signature,
+)
| Expose functions for parsing URIs and validating content units to plugin writers
| 2023-02-27T14:43:16 |
||
pulp/pulpcore | 3,622 | pulp__pulpcore-3622 | [
"3631"
] | c3bbdb211d5f62951bd83a8b8292cdd566c0eac4 | diff --git a/pulpcore/app/viewsets/custom_filters.py b/pulpcore/app/viewsets/custom_filters.py
--- a/pulpcore/app/viewsets/custom_filters.py
+++ b/pulpcore/app/viewsets/custom_filters.py
@@ -279,6 +279,10 @@ class LabelFilter(Filter):
def __init__(self, *args, **kwargs):
kwargs.setdefault("help_text", _("Filter labels by search string"))
+ if "label_field_name" in kwargs:
+ self.label_field_name = kwargs.pop("label_field_name")
+ else:
+ self.label_field_name = "pulp_labels"
super().__init__(*args, **kwargs)
def filter(self, qs, value):
@@ -293,6 +297,11 @@ def filter(self, qs, value):
Raises:
rest_framework.exceptions.ValidationError: on invalid search string
"""
+
+ # NOTE: can't use self.field_name because the default for that is the name
+ # of the method on the filter class (which is pulp_label_select on all of
+ # the pulp filtersets)
+ field_name = self.label_field_name
if value is None:
# user didn't supply a value
return qs
@@ -307,17 +316,19 @@ def filter(self, qs, value):
raise DRFValidationError(_("Cannot use an operator with '{}'.").format(key))
if op == "=":
- qs = qs.filter(**{f"pulp_labels__{key}": val})
+ qs = qs.filter(**{f"{field_name}__{key}": val})
elif op == "!=":
- qs = qs.filter(pulp_labels__has_key=key).exclude(**{f"pulp_labels__{key}": val})
+ qs = qs.filter(**{f"{field_name}__has_key": key}).exclude(
+ **{f"{field_name}__{key}": val}
+ )
elif op == "~":
- qs = qs.filter(**{f"pulp_labels__{key}__icontains": val})
+ qs = qs.filter(**{f"{field_name}__{key}__icontains": val})
else:
# 'foo', '!foo'
if key.startswith("!"):
- qs = qs.exclude(pulp_labels__has_key=key[1:])
+ qs = qs.exclude(**{f"{field_name}__has_key": key[1:]})
else:
- qs = qs.filter(pulp_labels__has_key=key)
+ qs = qs.filter(**{f"{field_name}__has_key": key})
return qs
| Allow customizing the field name on LabelFilter
**Is your feature request related to a problem? Please describe.**
As a plugin writer, I would like to be able to customize the field that LabelFilter filters the queryset on.
**Describe the solution you'd like**
LabelFilter should accept a parameter that allows it to search on a custom field name.
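A sketch of how a plugin filterset might use such a parameter once it exists (`label_field_name` matches this PR's diff; `my_labels` is an illustrative field name):
```python
import django_filters

from pulpcore.plugin.viewsets import LabelFilter

class MyModelFilter(django_filters.FilterSet):
    # Filter on a label field that is not named "pulp_labels".
    pulp_label_select = LabelFilter(label_field_name="my_labels")
```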
| 2023-03-02T15:54:23 |
||
pulp/pulpcore | 3,640 | pulp__pulpcore-3640 | [
"3639"
] | 4054dccca01574139add3efa3c67fa95fc13f54a | diff --git a/pulpcore/app/protobuf/analytics_pb2.py b/pulpcore/app/protobuf/analytics_pb2.py
--- a/pulpcore/app/protobuf/analytics_pb2.py
+++ b/pulpcore/app/protobuf/analytics_pb2.py
@@ -2,76 +2,32 @@
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: analytics.proto
"""Generated protocol buffer code."""
+from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
-from google.protobuf import message as _message
-from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
-
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
-DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(
- b'\n\x0f\x61nalytics.proto"\xe7\x02\n\tAnalytics\x12\x11\n\tsystem_id\x18\x01 \x02(\t\x12\x39\n\x13online_content_apps\x18\x02 \x01(\x0b\x32\x1c.Analytics.OnlineContentApps\x12\x30\n\x0eonline_workers\x18\x03 \x01(\x0b\x32\x18.Analytics.OnlineWorkers\x12(\n\ncomponents\x18\x04 \x03(\x0b\x32\x14.Analytics.Component\x12\x1a\n\x12postgresql_version\x18\x05 \x01(\r\x1a\x35\n\x11OnlineContentApps\x12\x11\n\tprocesses\x18\x01 \x01(\r\x12\r\n\x05hosts\x18\x02 \x01(\r\x1a\x31\n\rOnlineWorkers\x12\x11\n\tprocesses\x18\x01 \x01(\r\x12\r\n\x05hosts\x18\x02 \x01(\r\x1a*\n\tComponent\x12\x0c\n\x04name\x18\x01 \x02(\t\x12\x0f\n\x07version\x18\x02 \x02(\t'
-)
-_ANALYTICS = DESCRIPTOR.message_types_by_name["Analytics"]
-_ANALYTICS_ONLINECONTENTAPPS = _ANALYTICS.nested_types_by_name["OnlineContentApps"]
-_ANALYTICS_ONLINEWORKERS = _ANALYTICS.nested_types_by_name["OnlineWorkers"]
-_ANALYTICS_COMPONENT = _ANALYTICS.nested_types_by_name["Component"]
-Analytics = _reflection.GeneratedProtocolMessageType(
- "Analytics",
- (_message.Message,),
- {
- "OnlineContentApps": _reflection.GeneratedProtocolMessageType(
- "OnlineContentApps",
- (_message.Message,),
- {
- "DESCRIPTOR": _ANALYTICS_ONLINECONTENTAPPS,
- "__module__": "analytics_pb2"
- # @@protoc_insertion_point(class_scope:Analytics.OnlineContentApps)
- },
- ),
- "OnlineWorkers": _reflection.GeneratedProtocolMessageType(
- "OnlineWorkers",
- (_message.Message,),
- {
- "DESCRIPTOR": _ANALYTICS_ONLINEWORKERS,
- "__module__": "analytics_pb2"
- # @@protoc_insertion_point(class_scope:Analytics.OnlineWorkers)
- },
- ),
- "Component": _reflection.GeneratedProtocolMessageType(
- "Component",
- (_message.Message,),
- {
- "DESCRIPTOR": _ANALYTICS_COMPONENT,
- "__module__": "analytics_pb2"
- # @@protoc_insertion_point(class_scope:Analytics.Component)
- },
- ),
- "DESCRIPTOR": _ANALYTICS,
- "__module__": "analytics_pb2"
- # @@protoc_insertion_point(class_scope:Analytics)
- },
-)
-_sym_db.RegisterMessage(Analytics)
-_sym_db.RegisterMessage(Analytics.OnlineContentApps)
-_sym_db.RegisterMessage(Analytics.OnlineWorkers)
-_sym_db.RegisterMessage(Analytics.Component)
+DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x0f\x61nalytics.proto\"\x84\x04\n\tAnalytics\x12\x11\n\tsystem_id\x18\x01 \x02(\t\x12\x39\n\x13online_content_apps\x18\x02 \x01(\x0b\x32\x1c.Analytics.OnlineContentApps\x12\x30\n\x0eonline_workers\x18\x03 \x01(\x0b\x32\x18.Analytics.OnlineWorkers\x12(\n\ncomponents\x18\x04 \x03(\x0b\x32\x14.Analytics.Component\x12\x1a\n\x12postgresql_version\x18\x05 \x01(\r\x12(\n\nrbac_stats\x18\x06 \x01(\x0b\x32\x14.Analytics.RBACStats\x1a\x35\n\x11OnlineContentApps\x12\x11\n\tprocesses\x18\x01 \x01(\r\x12\r\n\x05hosts\x18\x02 \x01(\r\x1a\x31\n\rOnlineWorkers\x12\x11\n\tprocesses\x18\x01 \x01(\r\x12\r\n\x05hosts\x18\x02 \x01(\r\x1a*\n\tComponent\x12\x0c\n\x04name\x18\x01 \x02(\t\x12\x0f\n\x07version\x18\x02 \x02(\t\x1aq\n\tRBACStats\x12\r\n\x05users\x18\x01 \x01(\r\x12\x0e\n\x06groups\x18\x02 \x01(\r\x12\x0f\n\x07\x64omains\x18\x03 \x01(\r\x12\x1e\n\x16\x63ustom_access_policies\x18\x04 \x01(\r\x12\x14\n\x0c\x63ustom_roles\x18\x05 \x01(\r')
+_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
+_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'analytics_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
- DESCRIPTOR._options = None
- _ANALYTICS._serialized_start = 20
- _ANALYTICS._serialized_end = 379
- _ANALYTICS_ONLINECONTENTAPPS._serialized_start = 231
- _ANALYTICS_ONLINECONTENTAPPS._serialized_end = 284
- _ANALYTICS_ONLINEWORKERS._serialized_start = 286
- _ANALYTICS_ONLINEWORKERS._serialized_end = 335
- _ANALYTICS_COMPONENT._serialized_start = 337
- _ANALYTICS_COMPONENT._serialized_end = 379
+ DESCRIPTOR._options = None
+ _ANALYTICS._serialized_start=20
+ _ANALYTICS._serialized_end=536
+ _ANALYTICS_ONLINECONTENTAPPS._serialized_start=273
+ _ANALYTICS_ONLINECONTENTAPPS._serialized_end=326
+ _ANALYTICS_ONLINEWORKERS._serialized_start=328
+ _ANALYTICS_ONLINEWORKERS._serialized_end=377
+ _ANALYTICS_COMPONENT._serialized_start=379
+ _ANALYTICS_COMPONENT._serialized_end=421
+ _ANALYTICS_RBACSTATS._serialized_start=423
+ _ANALYTICS_RBACSTATS._serialized_end=536
# @@protoc_insertion_point(module_scope)
diff --git a/pulpcore/app/tasks/analytics.py b/pulpcore/app/tasks/analytics.py
--- a/pulpcore/app/tasks/analytics.py
+++ b/pulpcore/app/tasks/analytics.py
@@ -6,11 +6,14 @@
import async_timeout
from asgiref.sync import sync_to_async
+from django.conf import settings
from django.db import connection
+from django.contrib.auth import get_user_model
from google.protobuf.json_format import MessageToJson
from pulpcore.app.apps import pulp_plugin_configs
-from pulpcore.app.models import SystemID
+from pulpcore.app.models import SystemID, Group, Domain, AccessPolicy
+from pulpcore.app.models.role import Role
from pulpcore.app.models.status import ContentAppStatus
from pulpcore.app.models.task import Worker
from pulpcore.app.protobuf.analytics_pb2 import Analytics
@@ -23,6 +26,9 @@
DEV_URL = "https://dev.analytics.pulpproject.org/"
+User = get_user_model()
+
+
def get_analytics_posting_url():
"""
Return either the dev or production analytics FQDN url.
@@ -85,6 +91,21 @@ async def _system_id(analytics):
analytics.system_id = str(system_id_obj.pk)
+async def _rbac_stats(analytics):
+ analytics.rbac_stats.users = await sync_to_async(User.objects.count)()
+ analytics.rbac_stats.groups = await sync_to_async(Group.objects.count)()
+ if settings.DOMAIN_ENABLED:
+ analytics.rbac_stats.domains = await sync_to_async(Domain.objects.count)()
+ else:
+ analytics.rbac_stats.domains = 0
+ analytics.rbac_stats.custom_access_policies = await sync_to_async(
+ AccessPolicy.objects.filter(customized=True).count
+ )()
+ analytics.rbac_stats.custom_roles = await sync_to_async(
+ Role.objects.filter(locked=False).count
+ )()
+
+
async def post_analytics():
url = get_analytics_posting_url()
@@ -96,6 +117,7 @@ async def post_analytics():
_online_content_apps_data(analytics),
_online_workers_data(analytics),
_postgresql_version(analytics),
+ _rbac_stats(analytics),
)
await asyncio.gather(*awaitables)
| analytics.pulpproject.org should collect statistics about RBAC usage
The proposed metrics are outlined here[0].
[0] https://github.com/pulp/analytics.pulpproject.org/issues/84
| 2023-03-07T15:12:17 |
||
pulp/pulpcore | 3,642 | pulp__pulpcore-3642 | [
"3641"
] | 8ea3c803f534dfb8975d9b8549231d111bd56cbb | diff --git a/pulpcore/app/viewsets/content.py b/pulpcore/app/viewsets/content.py
--- a/pulpcore/app/viewsets/content.py
+++ b/pulpcore/app/viewsets/content.py
@@ -121,7 +121,12 @@ def scope_queryset(self, qs):
repo_viewset = get_viewset_for_model(repo)()
setattr(repo_viewset, "request", self.request)
scoped_repos.extend(repo_viewset.get_queryset().values_list("pk", flat=True))
- return qs.filter(repositories__in=scoped_repos)
+
+ # calling the distinct clause at end of the query ensures that no duplicates from
+ # joined tables will be returned to the end-user; this behaviour is documented at
+ # https://docs.djangoproject.com/en/3.2/topics/db/queries, in the section Spanning
+ # multi-valued relationships
+ return qs.filter(repositories__in=scoped_repos).distinct()
return qs
| Content endpoints return duplicates when using a non-superuser
**Version**
The main branch; parallelized pulp_rpm.tests.functional.api.test_rbac_crud
**Describe the bug**
After querying an endpoint hosting content referenced by multiple repositories, while logged in as a non-superuser, Pulp returns duplicated content.
**To Reproduce**
1. Create a user with `view_content` permissions.
2. Create/Sync two repositories with the same content.
3. Send a GET request with the query parameter `repository_version=one_of_the_repos` to the endpoint hosting the content.
4. Watch the returned duplicates (same pulp_hrefs).
**Expected behaviour**
No duplicates should be returned from the endpoint.
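The underlying Django behavior, sketched (assuming `scoped_repos` is the set of repositories the user may view): filtering across a many-to-many relation yields one row per matching join, and a trailing `distinct()` collapses them:
```python
from pulpcore.app.models import Content

# A content unit present in two of the scoped repositories appears twice:
dupes = Content.objects.filter(repositories__in=scoped_repos)

# distinct() at the end of the query removes the join-induced duplicates:
unique = Content.objects.filter(repositories__in=scoped_repos).distinct()
```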
| 2023-03-07T16:24:41 |
||
pulp/pulpcore | 3,644 | pulp__pulpcore-3644 | [
"3638"
] | 0b7519481317e922157f01b34b740aebc22901d2 | diff --git a/pulpcore/app/templatetags/pulp_urls.py b/pulpcore/app/templatetags/pulp_urls.py
--- a/pulpcore/app/templatetags/pulp_urls.py
+++ b/pulpcore/app/templatetags/pulp_urls.py
@@ -2,7 +2,7 @@
from django import template
from django.conf import settings
-from django.utils.encoding import force_text
+from django.utils.encoding import force_str
from django.utils.html import escape
from django.utils.safestring import SafeData, mark_safe
@@ -35,7 +35,7 @@ def trim_url(x, limit=trim_url_limit):
return limit is not None and (len(x) > limit and ("%s..." % x[: max(0, limit - 3)])) or x
safe_input = isinstance(text, SafeData)
- words = word_split_re.split(force_text(text))
+ words = word_split_re.split(force_str(text))
for i, word in enumerate(words):
if settings.V3_API_ROOT in word:
# Deal with punctuation.
| Refactor templatetags app to use force_str instead of deprecated force_text
Since we're testing and moving to Django 4.x (we expect to stay on the 4.2 LTS when it's released), we expect to find some code marked as deprecated as we move on.
This task is about moving from the deprecated `force_text` function to the new `force_str` function throughout our codebase.
This was spotted here -> https://github.com/pulp/pulpcore/pull/3618#issuecomment-1452483429
| 2023-03-07T21:44:10 |
||
pulp/pulpcore | 3,646 | pulp__pulpcore-3646 | [
"3645"
] | 0b7519481317e922157f01b34b740aebc22901d2 | diff --git a/pulpcore/app/views/repair.py b/pulpcore/app/views/repair.py
--- a/pulpcore/app/views/repair.py
+++ b/pulpcore/app/views/repair.py
@@ -22,7 +22,7 @@ def post(self, request):
Repair artifacts.
"""
serializer = RepairSerializer(data=request.data)
- serializer.is_valid()
+ serializer.is_valid(raise_exception=True)
verify_checksums = serializer.validated_data["verify_checksums"]
| The validation of input parameters for the repair endpoint is omitted
```
curl -X POST -H 'Content-Type: application/json' -H 'Authorization: Basic YWRtaW46cGFzc3dvcmQ=' -d '[]' http://localhost:5001/pulp/api/v3/repair/
```
```
pulp [804a07335b9f4417ad0c71dde478634e]: django.request:ERROR: Internal Server Error: /pulp/api/v3/repair/
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.8/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/views/generic/base.py", line 70, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
raise exc
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/src/pulpcore/pulpcore/app/views/repair.py", line 27, in post
verify_checksums = serializer.validated_data["verify_checksums"]
KeyError: 'verify_checksums'
```
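For reference, DRF's `is_valid()` without `raise_exception=True` just returns `False` and stores the errors, so the later `validated_data` lookup blows up with a `KeyError` (HTTP 500) instead of producing a clean validation response. A sketch using the serializer from the view above:
```python
serializer = RepairSerializer(data=[])  # the payload from the reproducer

# Old behavior: returns False and records errors; the subsequent
# serializer.validated_data["verify_checksums"] raises KeyError -> HTTP 500.
if not serializer.is_valid():
    print(serializer.errors)

# Fixed behavior: raises rest_framework.exceptions.ValidationError,
# which DRF turns into a proper HTTP 400 response.
serializer.is_valid(raise_exception=True)
```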
| 2023-03-08T20:29:07 |
||
pulp/pulpcore | 3,664 | pulp__pulpcore-3664 | [
"3663"
] | f87ffcad94f1372c1e83bb692c2cef81e1a46fa4 | diff --git a/pulpcore/app/modelresource.py b/pulpcore/app/modelresource.py
--- a/pulpcore/app/modelresource.py
+++ b/pulpcore/app/modelresource.py
@@ -31,6 +31,8 @@ def before_import_row(self, row, **kwargs):
kwargs: args passed along from the import() call.
"""
+ super().before_import_row(row, **kwargs)
+
# the export converts None to blank strings but sha384 and sha512 have unique constraints
# that get triggered if they are blank. convert checksums back into None if they are blank.
for checksum in ALL_KNOWN_CONTENT_CHECKSUMS:
@@ -44,13 +46,19 @@ class Meta:
"pulp_created",
"pulp_last_updated",
)
- import_id_fields = ("sha256",)
+ import_id_fields = (
+ "pulp_domain",
+ "sha256",
+ )
class RepositoryResource(QueryModelResource):
class Meta:
model = Repository
- import_id_fields = ("name",)
+ import_id_fields = (
+ "pulp_domain",
+ "name",
+ )
exclude = (
"pulp_id",
"pulp_created",
@@ -98,6 +106,7 @@ def before_import_row(self, row, **kwargs):
Returns:
(tablib.Dataset row): row that now points to the new downstream uuid for its content.
"""
+ super().before_import_row(row, **kwargs)
linked_content = Content.objects.get(upstream_id=row["content"])
row["content"] = str(linked_content.pulp_id)
diff --git a/pulpcore/plugin/importexport.py b/pulpcore/plugin/importexport.py
--- a/pulpcore/plugin/importexport.py
+++ b/pulpcore/plugin/importexport.py
@@ -1,4 +1,5 @@
from import_export import resources
+from pulpcore.app.util import get_domain_pk
class QueryModelResource(resources.ModelResource):
@@ -19,6 +20,21 @@ class QueryModelResource(resources.ModelResource):
(driven by repo_version)
"""
+ def before_import_row(self, row, **kwargs):
+ """
+ Sets pulp_domain/_pulp_domain to the current-domain on import.
+ Args:
+ row (tablib.Dataset row): incoming import-row representing a single Variant.
+ kwargs: args passed along from the import() call.
+ """
+ # There is probably a more pythonic/elegant way to do the following - but I am deliberately
+ # opting for "verbose but COMPLETELY CLEAR" here.
+ if "_pulp_domain" in row:
+ row["_pulp_domain"] = get_domain_pk()
+
+ if "pulp_domain" in row:
+ row["pulp_domain"] = get_domain_pk()
+
def set_up_queryset(self):
return None
| PulpImport fails post-Domains (whether domains are enabled or not)
**Version**
core/3.23
**Describe the bug**
Domains adds a field to content, "pulp_domain", which contains the pulp_id of the associated domain (or of the default domain). Export exports that field, so import tries to hook it back up on the downstream side. The downstream "default" domain has a different pulp_id, so the import fails with a traceback like this:
```
pulp [b2a311abd716444aada1d7ff310a5eb1]: pulpcore.tasking.pulpcore_worker:INFO: File "/src/pulpcore/pulpcore/tasking/pulpcore_worker.py", line 458, in _perform_task
result = func(*args, **kwargs)
File "/src/pulpcore/pulpcore/app/tasks/importer.py", line 482, in pulp_import
for ar_result in _import_file(os.path.join(temp_dir, ARTIFACT_FILE), ArtifactResource):
File "/src/pulpcore/pulpcore/app/tasks/importer.py", line 143, in _import_file
a_result = resource.import_data(data, raise_errors=True)
File "/usr/local/lib/python3.8/site-packages/import_export/resources.py", line 813, in import_data
result = self.import_data_inner(
File "/usr/local/lib/python3.8/site-packages/import_export/resources.py", line 882, in import_data_inner
raise row_result.errors[-1].error
File "/usr/local/lib/python3.8/site-packages/import_export/resources.py", line 737, in import_row
self.import_obj(instance, row, dry_run, **kwargs)
File "/usr/local/lib/python3.8/site-packages/import_export/resources.py", line 557, in import_obj
self.import_field(field, obj, data, **kwargs)
File "/usr/local/lib/python3.8/site-packages/import_export/resources.py", line 540, in import_field
field.save(obj, data, is_m2m, **kwargs)
File "/usr/local/lib/python3.8/site-packages/import_export/fields.py", line 119, in save
cleaned = self.clean(data, **kwargs)
File "/usr/local/lib/python3.8/site-packages/import_export/fields.py", line 75, in clean
value = self.widget.clean(value, row=data, **kwargs)
File "/usr/local/lib/python3.8/site-packages/import_export/widgets.py", line 423, in clean
return self.get_queryset(value, row, **kwargs).get(**{self.field: val})
File "/usr/local/lib/python3.8/site-packages/django/db/models/query.py", line 435, in get
raise self.model.DoesNotExist(
```
**To Reproduce**
* Sync a repo.
* Export it.
* Reset the pulp-database (or move the export to a different instance)
* Attempt to import it.
**Expected behavior**
Import succeeds.
| 2023-03-17T19:40:18 |
||
pulp/pulpcore | 3,668 | pulp__pulpcore-3668 | [
"3667"
] | f87ffcad94f1372c1e83bb692c2cef81e1a46fa4 | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -37,7 +37,8 @@
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.extlinks', 'sphinx.ext.autodoc', 'sphinx.ext.autosummary',
- 'napoleon_django', 'sphinx.ext.napoleon', 'sphinxcontrib.openapi']
+ 'napoleon_django', 'sphinx.ext.napoleon', 'sphinxcontrib.openapi',
+ 'sphinxcontrib.jquery']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
@@ -119,7 +120,7 @@
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
-html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] if sphinx_rtd_theme else []
+#html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] if sphinx_rtd_theme else []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
| The search bar issue in the Sphinx docs persists
https://github.com/pulp/pulpcore/issues/3569
The search feature bug in the docs was not resolved.
https://docs.pulpproject.org/pulpcore
| 2023-03-22T19:16:54 |
||
pulp/pulpcore | 3,670 | pulp__pulpcore-3670 | [
"3663"
] | 3f947117a41191e92892375db806a9a3195629bc | diff --git a/pulpcore/app/modelresource.py b/pulpcore/app/modelresource.py
--- a/pulpcore/app/modelresource.py
+++ b/pulpcore/app/modelresource.py
@@ -31,6 +31,8 @@ def before_import_row(self, row, **kwargs):
kwargs: args passed along from the import() call.
"""
+ super().before_import_row(row, **kwargs)
+
# the export converts None to blank strings but sha384 and sha512 have unique constraints
# that get triggered if they are blank. convert checksums back into None if they are blank.
for checksum in ALL_KNOWN_CONTENT_CHECKSUMS:
@@ -44,13 +46,19 @@ class Meta:
"pulp_created",
"pulp_last_updated",
)
- import_id_fields = ("sha256",)
+ import_id_fields = (
+ "pulp_domain",
+ "sha256",
+ )
class RepositoryResource(QueryModelResource):
class Meta:
model = Repository
- import_id_fields = ("name",)
+ import_id_fields = (
+ "pulp_domain",
+ "name",
+ )
exclude = (
"pulp_id",
"pulp_created",
@@ -98,6 +106,7 @@ def before_import_row(self, row, **kwargs):
Returns:
(tablib.Dataset row): row that now points to the new downstream uuid for its content.
"""
+ super().before_import_row(row, **kwargs)
linked_content = Content.objects.get(upstream_id=row["content"])
row["content"] = str(linked_content.pulp_id)
diff --git a/pulpcore/plugin/importexport.py b/pulpcore/plugin/importexport.py
--- a/pulpcore/plugin/importexport.py
+++ b/pulpcore/plugin/importexport.py
@@ -1,4 +1,5 @@
from import_export import resources
+from pulpcore.app.util import get_domain_pk
class QueryModelResource(resources.ModelResource):
@@ -19,6 +20,21 @@ class QueryModelResource(resources.ModelResource):
(driven by repo_version)
"""
+ def before_import_row(self, row, **kwargs):
+ """
+ Sets pulp_domain/_pulp_domain to the current-domain on import.
+ Args:
+ row (tablib.Dataset row): incoming import-row representing a single Variant.
+ kwargs: args passed along from the import() call.
+ """
+ # There is probably a more pythonic/elegant way to do the following - but I am deliberately
+ # opting for "verbose but COMPLETELY CLEAR" here.
+ if "_pulp_domain" in row:
+ row["_pulp_domain"] = get_domain_pk()
+
+ if "pulp_domain" in row:
+ row["pulp_domain"] = get_domain_pk()
+
def set_up_queryset(self):
return None
| PulpImport fails post-Domains (whether domains are enabled or not)
**Version**
core/3.23
**Describe the bug**
Domains adds a field to content, "pulp_domain", which contains the pulp_id of the associated domain (or of the default domain). Export exports that field, so import tries to hook it back up on the downstream side. The downstream "default" domain has a different pulp_id, so the import fails with a traceback like this:
```
pulp [b2a311abd716444aada1d7ff310a5eb1]: pulpcore.tasking.pulpcore_worker:INFO: File "/src/pulpcore/pulpcore/tasking/pulpcore_worker.py", line 458, in _perform_task
result = func(*args, **kwargs)
File "/src/pulpcore/pulpcore/app/tasks/importer.py", line 482, in pulp_import
for ar_result in _import_file(os.path.join(temp_dir, ARTIFACT_FILE), ArtifactResource):
File "/src/pulpcore/pulpcore/app/tasks/importer.py", line 143, in _import_file
a_result = resource.import_data(data, raise_errors=True)
File "/usr/local/lib/python3.8/site-packages/import_export/resources.py", line 813, in import_data
result = self.import_data_inner(
File "/usr/local/lib/python3.8/site-packages/import_export/resources.py", line 882, in import_data_inner
raise row_result.errors[-1].error
File "/usr/local/lib/python3.8/site-packages/import_export/resources.py", line 737, in import_row
self.import_obj(instance, row, dry_run, **kwargs)
File "/usr/local/lib/python3.8/site-packages/import_export/resources.py", line 557, in import_obj
self.import_field(field, obj, data, **kwargs)
File "/usr/local/lib/python3.8/site-packages/import_export/resources.py", line 540, in import_field
field.save(obj, data, is_m2m, **kwargs)
File "/usr/local/lib/python3.8/site-packages/import_export/fields.py", line 119, in save
cleaned = self.clean(data, **kwargs)
File "/usr/local/lib/python3.8/site-packages/import_export/fields.py", line 75, in clean
value = self.widget.clean(value, row=data, **kwargs)
File "/usr/local/lib/python3.8/site-packages/import_export/widgets.py", line 423, in clean
return self.get_queryset(value, row, **kwargs).get(**{self.field: val})
File "/usr/local/lib/python3.8/site-packages/django/db/models/query.py", line 435, in get
raise self.model.DoesNotExist(
```
**To Reproduce**
* Sync a repo.
* Export it.
* Reset the pulp-database (or move the export to a different instance)
* Attempt to import it.
**Expected behavior**
Import succeeds.
| 2023-03-23T14:02:32 |
||
pulp/pulpcore | 3,677 | pulp__pulpcore-3677 | [
"3673"
] | 3879ae14e20947d99029d77baab1b09414ded6b7 | diff --git a/pulpcore/content/handler.py b/pulpcore/content/handler.py
--- a/pulpcore/content/handler.py
+++ b/pulpcore/content/handler.py
@@ -412,7 +412,7 @@ async def list_directory(self, repo_version, publication, path):
"""
def file_or_directory_name(directory_path, relative_path):
- result = re.match(r"({})([^\/]*)(\/*)".format(directory_path), relative_path)
+ result = re.match(r"({})([^\/]*)(\/*)".format(re.escape(directory_path)), relative_path)
return "{}{}".format(result.groups()[1], result.groups()[2])
def list_directory_blocking():
| Special characters in the directory listing regex should be escaped
Steps to reproduce:
```
http://localhost:5001/pulp/content/43036ebe-5d0e-413c-881c-42c576b067b2/deltas/KK/LCzQ9nUs0sVU9+94E2ek3Dyc+ToKjMpe1pNqMthcg-CpWGufXYjIIlkgTANq_eDb9yzAgANtrjdjgp+Rw8UcU/
```
```
[2023-03-25 22:44:06 +0000] [5590] [ERROR] Error handling request
Traceback (most recent call last):
File "/usr/local/lib64/python3.8/site-packages/aiohttp/web_protocol.py", line 433, in _handle_request
resp = await request_handler(request)
File "/usr/local/lib64/python3.8/site-packages/aiohttp/web_app.py", line 504, in _handle
resp = await handler(request)
File "/usr/local/lib64/python3.8/site-packages/aiohttp/web_middlewares.py", line 117, in impl
return await handler(request)
File "/src/pulpcore/pulpcore/content/authentication.py", line 48, in authenticate
return await handler(request)
File "/src/pulpcore/pulpcore/content/handler.py", line 245, in stream_content
return await self._match_and_stream(path, request)
File "/src/pulpcore/pulpcore/content/handler.py", line 625, in _match_and_stream
dir_list, dates = await self.list_directory(repo_version, None, rel_path)
File "/src/pulpcore/pulpcore/content/handler.py", line 458, in list_directory
return await sync_to_async(list_directory_blocking)()
File "/usr/local/lib/python3.8/site-packages/asgiref/sync.py", line 448, in __call__
ret = await asyncio.wait_for(future, timeout=None)
File "/usr/lib64/python3.8/asyncio/tasks.py", line 455, in wait_for
return await fut
File "/usr/lib64/python3.8/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.8/site-packages/asgiref/sync.py", line 490, in thread_handler
return func(*args, **kwargs)
File "/src/pulpcore/pulpcore/content/handler.py", line 449, in list_directory_blocking
name = file_or_directory_name(path, ca.relative_path)
File "/src/pulpcore/pulpcore/content/handler.py", line 416, in file_or_directory_name
return "{}{}".format(result.groups()[1], result.groups()[2])
AttributeError: 'NoneType' object has no attribute 'groups'
```
Affected code:
https://github.com/pulp/pulpcore/blob/3879ae14e20947d99029d77baab1b09414ded6b7/pulpcore/content/handler.py#L415
pulp_ostree might expose directories which contain special characters, like this:
```
deltas/KK/LCzQ9nUs0sVU9+94E2ek3Dyc+ToKjMpe1pNqMthcg-CpWGufXYjIIlkgTANq_eDb9yzAgANtrjdjgp+Rw8UcU/
```
Resolution:
We should use something like `re.escape(directory_path)` to handle this scenario.
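A quick illustration with a shortened, hypothetical path: `+` is a regex quantifier, so the unescaped pattern never matches and `re.match` returns `None`:
```python
import re

directory_path = "deltas/KK/abc+def/"
relative_path = "deltas/KK/abc+def/superblock"

# Unescaped: "c+" means "one or more c", so the literal "+" breaks the match.
print(re.match(r"({})([^/]*)(/*)".format(directory_path), relative_path))  # None

# Escaped: "+" is matched literally and the path resolves as intended.
print(re.match(r"({})([^/]*)(/*)".format(re.escape(directory_path)), relative_path))
```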
| 2023-03-27T05:59:25 |
||
pulp/pulpcore | 3,678 | pulp__pulpcore-3678 | [
"3673"
] | 098496f9ddcafafa0a45ce48969be6a207ad300e | diff --git a/pulpcore/content/handler.py b/pulpcore/content/handler.py
--- a/pulpcore/content/handler.py
+++ b/pulpcore/content/handler.py
@@ -412,7 +412,7 @@ async def list_directory(self, repo_version, publication, path):
"""
def file_or_directory_name(directory_path, relative_path):
- result = re.match(r"({})([^\/]*)(\/*)".format(directory_path), relative_path)
+ result = re.match(r"({})([^\/]*)(\/*)".format(re.escape(directory_path)), relative_path)
return "{}{}".format(result.groups()[1], result.groups()[2])
def list_directory_blocking():
| Special characters in the directory listing regex should be escaped
Steps to reproduce:
```
http://localhost:5001/pulp/content/43036ebe-5d0e-413c-881c-42c576b067b2/deltas/KK/LCzQ9nUs0sVU9+94E2ek3Dyc+ToKjMpe1pNqMthcg-CpWGufXYjIIlkgTANq_eDb9yzAgANtrjdjgp+Rw8UcU/
```
```
[2023-03-25 22:44:06 +0000] [5590] [ERROR] Error handling request
Traceback (most recent call last):
File "/usr/local/lib64/python3.8/site-packages/aiohttp/web_protocol.py", line 433, in _handle_request
resp = await request_handler(request)
File "/usr/local/lib64/python3.8/site-packages/aiohttp/web_app.py", line 504, in _handle
resp = await handler(request)
File "/usr/local/lib64/python3.8/site-packages/aiohttp/web_middlewares.py", line 117, in impl
return await handler(request)
File "/src/pulpcore/pulpcore/content/authentication.py", line 48, in authenticate
return await handler(request)
File "/src/pulpcore/pulpcore/content/handler.py", line 245, in stream_content
return await self._match_and_stream(path, request)
File "/src/pulpcore/pulpcore/content/handler.py", line 625, in _match_and_stream
dir_list, dates = await self.list_directory(repo_version, None, rel_path)
File "/src/pulpcore/pulpcore/content/handler.py", line 458, in list_directory
return await sync_to_async(list_directory_blocking)()
File "/usr/local/lib/python3.8/site-packages/asgiref/sync.py", line 448, in __call__
ret = await asyncio.wait_for(future, timeout=None)
File "/usr/lib64/python3.8/asyncio/tasks.py", line 455, in wait_for
return await fut
File "/usr/lib64/python3.8/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.8/site-packages/asgiref/sync.py", line 490, in thread_handler
return func(*args, **kwargs)
File "/src/pulpcore/pulpcore/content/handler.py", line 449, in list_directory_blocking
name = file_or_directory_name(path, ca.relative_path)
File "/src/pulpcore/pulpcore/content/handler.py", line 416, in file_or_directory_name
return "{}{}".format(result.groups()[1], result.groups()[2])
AttributeError: 'NoneType' object has no attribute 'groups'
```
Affected code:
https://github.com/pulp/pulpcore/blob/3879ae14e20947d99029d77baab1b09414ded6b7/pulpcore/content/handler.py#L415
pulp_ostree might expose directories which contain special characters, like this:
```
deltas/KK/LCzQ9nUs0sVU9+94E2ek3Dyc+ToKjMpe1pNqMthcg-CpWGufXYjIIlkgTANq_eDb9yzAgANtrjdjgp+Rw8UcU/
```
Resolution:
We should use something like `re.escape(directory_path)` to handle this scenario.
| 2023-03-27T10:43:15 |
||
pulp/pulpcore | 3,680 | pulp__pulpcore-3680 | [
"3398"
] | 2f4604c84004a3227366a7c06483db363b04fa33 | diff --git a/pulpcore/openapi/__init__.py b/pulpcore/openapi/__init__.py
--- a/pulpcore/openapi/__init__.py
+++ b/pulpcore/openapi/__init__.py
@@ -426,20 +426,20 @@ def parse(self, input_request, public):
# Adding query parameters
if "parameters" in operation and schema.method.lower() == "get":
- fields_paramenter = build_parameter_type(
+ fields_parameter = build_parameter_type(
name="fields",
- schema={"type": "string"},
+ schema={"type": "array", "items": {"type": "string"}},
location=OpenApiParameter.QUERY,
description="A list of fields to include in the response.",
)
- operation["parameters"].append(fields_paramenter)
- not_fields_paramenter = build_parameter_type(
+ operation["parameters"].append(fields_parameter)
+ exclude_fields_parameter = build_parameter_type(
name="exclude_fields",
- schema={"type": "string"},
+ schema={"type": "array", "items": {"type": "string"}},
location=OpenApiParameter.QUERY,
description="A list of fields to exclude from the response.",
)
- operation["parameters"].append(not_fields_paramenter)
+ operation["parameters"].append(exclude_fields_parameter)
# Normalise path for any provided mount url.
if path.startswith("/"):
| fields and exclude_fields querystring parameters should specify array-of-strings in openapi
`pulp debug openapi operation --id tasks_read | jq '.operation.parameters[]|select(.name=="fields")'`
current behaviour:
```json
{
"in": "query",
"name": "fields",
"schema": {
"type": "string"
},
"description": "A list of fields to include in the response."
}
```
expected behaviour:
```json
{
"in": "query",
"name": "fields",
"schema": {
"type": "array",
"items": {
"type": "string"
}
},
"description": "A list of fields to include in the response."
}
```
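With the array schema in place, OpenAPI's default `form`/`explode=true` style for query parameters applies, so generated clients would typically serialize the list as repeated parameters, e.g. `GET /pulp/api/v3/tasks/?fields=pulp_href&fields=state` (the exact wire format depends on the client generator).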
| 2023-03-27T13:51:56 |
||
pulp/pulpcore | 3,692 | pulp__pulpcore-3692 | [
"3313"
] | be3d4a9f203371749d5706bccba5c63b3b7d3758 | diff --git a/pulpcore/app/tasks/export.py b/pulpcore/app/tasks/export.py
--- a/pulpcore/app/tasks/export.py
+++ b/pulpcore/app/tasks/export.py
@@ -54,9 +54,7 @@ def _validate_fs_export(content_artifacts):
RuntimeError: If Artifacts are not downloaded or when trying to link non-fs files
"""
if content_artifacts.filter(artifact=None).exists():
- raise UnexportableArtifactException(
- _("Cannot export artifacts that haven't been downloaded.")
- )
+ raise UnexportableArtifactException()
def _export_to_file_system(path, relative_paths_to_artifacts, method=FS_EXPORT_METHODS.WRITE):
| `UnexportableArtifactException` does not accept any arguments during initialization
`UnexportableArtifactException.__init__() takes 1 positional argument but 2 were given`
https://github.com/pulp/pulpcore/blob/351bd8653820195ffd3b16c5fbab0f14fd0061fe/pulpcore/app/tasks/export.py#L41-L59
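A minimal sketch of why the call fails, assuming the exception bakes its fixed message into `__init__` (which is what the patch implies by dropping the argument):
```python
class UnexportableArtifactException(Exception):
    # assumption: the message is fixed inside the class, so __init__
    # accepts no positional arguments beyond self
    def __init__(self):
        super().__init__("Cannot export artifacts that haven't been downloaded.")

UnexportableArtifactException("some message")
# TypeError: __init__() takes 1 positional argument but 2 were given
```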
| 2023-03-28T15:00:27 |
||
pulp/pulpcore | 3,693 | pulp__pulpcore-3693 | [
"3313"
] | 6d15aa1fbc0ef6852374601defb18ee6afb21cbe | diff --git a/pulpcore/app/tasks/export.py b/pulpcore/app/tasks/export.py
--- a/pulpcore/app/tasks/export.py
+++ b/pulpcore/app/tasks/export.py
@@ -54,9 +54,7 @@ def _validate_fs_export(content_artifacts):
RuntimeError: If Artifacts are not downloaded or when trying to link non-fs files
"""
if content_artifacts.filter(artifact=None).exists():
- raise UnexportableArtifactException(
- _("Cannot export artifacts that haven't been downloaded.")
- )
+ raise UnexportableArtifactException()
def _export_to_file_system(path, relative_paths_to_artifacts, method=FS_EXPORT_METHODS.WRITE):
| `UnexportableArtifactException` does not accept any arguments during initialization
`UnexportableArtifactException.__init__() takes 1 positional argument but 2 were given`
https://github.com/pulp/pulpcore/blob/351bd8653820195ffd3b16c5fbab0f14fd0061fe/pulpcore/app/tasks/export.py#L41-L59
| 2023-03-28T15:00:28 |
||
pulp/pulpcore | 3,716 | pulp__pulpcore-3716 | [
"3715"
] | 695182ad3c3d40098459ef8ad8bc9c90f8288f58 | diff --git a/pulpcore/download/base.py b/pulpcore/download/base.py
--- a/pulpcore/download/base.py
+++ b/pulpcore/download/base.py
@@ -5,6 +5,7 @@
import logging
import os
import tempfile
+from urllib.parse import urlsplit
from pulpcore.app import pulp_hashlib
from pulpcore.app.models import Artifact
@@ -116,7 +117,17 @@ def _ensure_writer_has_open_file(self):
allowing plugin writers to instantiate many downloaders in memory.
"""
if not self._writer:
- self._writer = tempfile.NamedTemporaryFile(dir=".", delete=False)
+ filename = urlsplit(self.url).path.split("/")[-1]
+ # linux allows any character except NUL or / in a filename and has a length limit of
+ # 255. Making it urlencoding-aware would be nice, but not critical, because urlencoded
+ # paths should be OK
+ is_legal_filename = filename and (len(filename) <= 243) # 255 - prefix length
+ # if the filename isn't legal then we just fall back to no suffix (random name)
+ suffix = "-" + filename if is_legal_filename else None
+ # write the file to the current working directory with a random prefix and the
+ # desired suffix. we always want the random prefix as it is possible to download
+ # the same filename from two different URLs, and the files may not be the same.
+ self._writer = tempfile.NamedTemporaryFile(dir=".", suffix=suffix, delete=False)
self.path = self._writer.name
self._digests = {n: pulp_hashlib.new(n) for n in Artifact.DIGEST_FIELDS}
self._size = 0
| Download machinery should attempt to keep the file extension of the downloaded URL intact, if one exists
**Version**
All
**Describe the bug**
Sometimes libraries will get upset when you pass into them a file without the file extension they expect for the type of file they are handling. It is best to keep the file extension intact to avoid those kinds of issues.
See https://bugzilla.redhat.com/show_bug.cgi?id=2176870
There is a deeper issue there with the `file` / `libmagic` library, but until that is fixed and shipped the best we can do is help it out by keeping the original filenames.
I'm sure it will also help with debugging on occasion.
**To Reproduce**
See https://bugzilla.redhat.com/show_bug.cgi?id=2176870
**Additional context**
BZ https://bugzilla.redhat.com/show_bug.cgi?id=2176870
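The patch keeps the original filename as a suffix of the temporary file; a standalone sketch of the same approach (the URL is hypothetical):
```python
import tempfile
from urllib.parse import urlsplit

url = "https://example.com/pkgs/foo-1.0.noarch.rpm"
filename = urlsplit(url).path.split("/")[-1]
# 243 = 255 (typical max filename length) minus room for the random tempfile prefix
suffix = "-" + filename if filename and len(filename) <= 243 else None
with tempfile.NamedTemporaryFile(dir=".", suffix=suffix, delete=False) as f:
    print(f.name)  # e.g. ./tmp3k19xq-foo-1.0.noarch.rpm — the .rpm extension survives
```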
| 2023-04-06T21:28:45 |
||
pulp/pulpcore | 3,721 | pulp__pulpcore-3721 | [
"3715"
] | 90375ebcb21b58c8dd75f243b183fbe7c562ddf6 | diff --git a/pulpcore/download/base.py b/pulpcore/download/base.py
--- a/pulpcore/download/base.py
+++ b/pulpcore/download/base.py
@@ -5,6 +5,7 @@
import logging
import os
import tempfile
+from urllib.parse import urlsplit
from pulpcore.app import pulp_hashlib
from pulpcore.app.loggers import deprecation_logger
@@ -129,7 +130,17 @@ def _ensure_writer_has_open_file(self):
allowing plugin writers to instantiate many downloaders in memory.
"""
if not self._writer:
- self._writer = tempfile.NamedTemporaryFile(dir=".", delete=False)
+ filename = urlsplit(self.url).path.split("/")[-1]
+ # linux allows any character except NUL or / in a filename and has a length limit of
+ # 255. Making it urlencoding-aware would be nice, but not critical, because urlencoded
+ # paths should be OK
+ is_legal_filename = filename and (len(filename) <= 243) # 255 - prefix length
+ # if the filename isn't legal then we just fall back to no suffix (random name)
+ suffix = "-" + filename if is_legal_filename else None
+ # write the file to the current working directory with a random prefix and the
+ # desired suffix. we always want the random prefix as it is possible to download
+ # the same filename from two different URLs, and the files may not be the same.
+ self._writer = tempfile.NamedTemporaryFile(dir=".", suffix=suffix, delete=False)
self.path = self._writer.name
self._digests = {n: pulp_hashlib.new(n) for n in Artifact.DIGEST_FIELDS}
self._size = 0
| Download machinery should attempt to keep the file extension of the downloaded URL intact, if one exists
**Version**
All
**Describe the bug**
Sometimes libraries will get upset when you pass into them a file without the file extension they expect for the type of file they are handling. It is best to keep the file extension intact to avoid those kinds of issues.
See https://bugzilla.redhat.com/show_bug.cgi?id=2176870
There is a deeper issue there with the `file` / `libmagic` library, but until that is fixed and shipped the best we can do is help it out by keeping the original filenames.
I'm sure it will also help with debugging on occasion.
**To Reproduce**
See https://bugzilla.redhat.com/show_bug.cgi?id=2176870
**Additional context**
BZ https://bugzilla.redhat.com/show_bug.cgi?id=2176870
| 2023-04-10T12:59:16 |
||
pulp/pulpcore | 3,722 | pulp__pulpcore-3722 | [
"3715"
] | 248abb49382ed1c64b856e9381034fe86f78da68 | diff --git a/pulpcore/download/base.py b/pulpcore/download/base.py
--- a/pulpcore/download/base.py
+++ b/pulpcore/download/base.py
@@ -5,6 +5,7 @@
import logging
import os
import tempfile
+from urllib.parse import urlsplit
from pulpcore.app import pulp_hashlib
from pulpcore.app.models import Artifact
@@ -116,7 +117,17 @@ def _ensure_writer_has_open_file(self):
allowing plugin writers to instantiate many downloaders in memory.
"""
if not self._writer:
- self._writer = tempfile.NamedTemporaryFile(dir=".", delete=False)
+ filename = urlsplit(self.url).path.split("/")[-1]
+ # linux allows any character except NUL or / in a filename and has a length limit of
+ # 255. Making it urlencoding-aware would be nice, but not critical, because urlencoded
+ # paths should be OK
+ is_legal_filename = filename and (len(filename) <= 243) # 255 - prefix length
+ # if the filename isn't legal then we just fall back to no suffix (random name)
+ suffix = "-" + filename if is_legal_filename else None
+ # write the file to the current working directory with a random prefix and the
+ # desired suffix. we always want the random prefix as it is possible to download
+ # the same filename from two different URLs, and the files may not be the same.
+ self._writer = tempfile.NamedTemporaryFile(dir=".", suffix=suffix, delete=False)
self.path = self._writer.name
self._digests = {n: pulp_hashlib.new(n) for n in Artifact.DIGEST_FIELDS}
self._size = 0
| Download machinery should attempt to keep the file extension of the downloaded URL intact, if one exists
**Version**
All
**Describe the bug**
Sometimes libraries will get upset when you pass into them a file without the file extension they expect for the type of file they are handling. It is best to keep the file extension intact to avoid those kinds of issues.
See https://bugzilla.redhat.com/show_bug.cgi?id=2176870
There is a deeper issue there with the `file` / `libmagic` library, but until that is fixed and shipped the best we can do is help it out by keeping the original filenames.
I'm sure it will also help with debugging on occasion.
**To Reproduce**
See https://bugzilla.redhat.com/show_bug.cgi?id=2176870
**Additional context**
BZ https://bugzilla.redhat.com/show_bug.cgi?id=2176870
| 2023-04-10T12:59:31 |
||
pulp/pulpcore | 3,723 | pulp__pulpcore-3723 | [
"3715"
] | 7c42705e0d6a67bfd4b17523c9cfb09239744621 | diff --git a/pulpcore/download/base.py b/pulpcore/download/base.py
--- a/pulpcore/download/base.py
+++ b/pulpcore/download/base.py
@@ -5,6 +5,7 @@
import logging
import os
import tempfile
+from urllib.parse import urlsplit
from pulpcore.app import pulp_hashlib
from pulpcore.app.models import Artifact
@@ -116,7 +117,17 @@ def _ensure_writer_has_open_file(self):
allowing plugin writers to instantiate many downloaders in memory.
"""
if not self._writer:
- self._writer = tempfile.NamedTemporaryFile(dir=".", delete=False)
+ filename = urlsplit(self.url).path.split("/")[-1]
+ # linux allows any character except NUL or / in a filename and has a length limit of
+ # 255. Making it urlencoding-aware would be nice, but not critical, because urlencoded
+ # paths should be OK
+ is_legal_filename = filename and (len(filename) <= 243) # 255 - prefix length
+ # if the filename isn't legal then we just fall back to no suffix (random name)
+ suffix = "-" + filename if is_legal_filename else None
+ # write the file to the current working directory with a random prefix and the
+ # desired suffix. we always want the random prefix as it is possible to download
+ # the same filename from two different URLs, and the files may not be the same.
+ self._writer = tempfile.NamedTemporaryFile(dir=".", suffix=suffix, delete=False)
self.path = self._writer.name
self._digests = {n: pulp_hashlib.new(n) for n in Artifact.DIGEST_FIELDS}
self._size = 0
| Download machinery should attempt to keep the file extension of the downloaded URL intact, if one exists
**Version**
All
**Describe the bug**
Sometimes libraries will get upset when you pass into them a file without the file extension they expect for the type of file they are handling. It is best to keep the file extension intact to avoid those kinds of issues.
See https://bugzilla.redhat.com/show_bug.cgi?id=2176870
There is a deeper issue there with the `file` / `libmagic` library, but until that is fixed and shipped the best we can do is help it out by keeping the original filenames.
I'm sure it will also help with debugging on occasion.
**To Reproduce**
See https://bugzilla.redhat.com/show_bug.cgi?id=2176870
**Additional context**
BZ https://bugzilla.redhat.com/show_bug.cgi?id=2176870
| 2023-04-10T12:59:46 |
||
pulp/pulpcore | 3,724 | pulp__pulpcore-3724 | [
"3715"
] | 6869189063b85ce85fa6128ce7501195cd26859b | diff --git a/pulpcore/download/base.py b/pulpcore/download/base.py
--- a/pulpcore/download/base.py
+++ b/pulpcore/download/base.py
@@ -5,6 +5,7 @@
import logging
import os
import tempfile
+from urllib.parse import urlsplit
from pulpcore.app import pulp_hashlib
from pulpcore.app.models import Artifact
@@ -116,7 +117,17 @@ def _ensure_writer_has_open_file(self):
allowing plugin writers to instantiate many downloaders in memory.
"""
if not self._writer:
- self._writer = tempfile.NamedTemporaryFile(dir=".", delete=False)
+ filename = urlsplit(self.url).path.split("/")[-1]
+ # linux allows any character except NUL or / in a filename and has a length limit of
+ # 255. Making it urlencoding-aware would be nice, but not critical, because urlencoded
+ # paths should be OK
+ is_legal_filename = filename and (len(filename) <= 243) # 255 - prefix length
+ # if the filename isn't legal then we just fall back to no suffix (random name)
+ suffix = "-" + filename if is_legal_filename else None
+ # write the file to the current working directory with a random prefix and the
+ # desired suffix. we always want the random prefix as it is possible to download
+ # the same filename from two different URLs, and the files may not be the same.
+ self._writer = tempfile.NamedTemporaryFile(dir=".", suffix=suffix, delete=False)
self.path = self._writer.name
self._digests = {n: pulp_hashlib.new(n) for n in Artifact.DIGEST_FIELDS}
self._size = 0
| Download machinery should attempt to keep the file extension of the downloaded URL intact, if one exists
**Version**
All
**Describe the bug**
Sometimes libraries will get upset when you pass into them a file without the file extension they expect for the type of file they are handling. It is best to keep the file extension intact to avoid those kinds of issues.
See https://bugzilla.redhat.com/show_bug.cgi?id=2176870
There is a deeper issue there with the `file` / `libmagic` library, but until that is fixed and shipped the best we can do is help it out by keeping the original filenames.
I'm sure it will also help with debugging on occasion.
**To Reproduce**
See https://bugzilla.redhat.com/show_bug.cgi?id=2176870
**Additional context**
BZ https://bugzilla.redhat.com/show_bug.cgi?id=2176870
| 2023-04-10T12:59:59 |
||
pulp/pulpcore | 3,726 | pulp__pulpcore-3726 | [
"3715"
] | 1588a61ee67caa80c693ed3ef39fa80393b8c403 | diff --git a/pulpcore/download/base.py b/pulpcore/download/base.py
--- a/pulpcore/download/base.py
+++ b/pulpcore/download/base.py
@@ -5,6 +5,7 @@
import logging
import os
import tempfile
+from urllib.parse import urlsplit
from pulpcore.app import pulp_hashlib
from pulpcore.app.models import Artifact
@@ -122,7 +123,17 @@ def _ensure_writer_has_open_file(self):
allowing plugin writers to instantiate many downloaders in memory.
"""
if not self._writer:
- self._writer = tempfile.NamedTemporaryFile(dir=".", delete=False)
+ filename = urlsplit(self.url).path.split("/")[-1]
+ # linux allows any character except NUL or / in a filename and has a length limit of
+ # 255. Making it urlencoding-aware would be nice, but not critical, because urlencoded
+ # paths should be OK
+ is_legal_filename = filename and (len(filename) <= 243) # 255 - prefix length
+ # if the filename isn't legal then we just fall back to no suffix (random name)
+ suffix = "-" + filename if is_legal_filename else None
+ # write the file to the current working directory with a random prefix and the
+ # desired suffix. we always want the random prefix as it is possible to download
+ # the same filename from two different URLs, and the files may not be the same.
+ self._writer = tempfile.NamedTemporaryFile(dir=".", suffix=suffix, delete=False)
self.path = self._writer.name
self._digests = {n: pulp_hashlib.new(n) for n in Artifact.DIGEST_FIELDS}
self._size = 0
| Download machinery should attempt to keep the file extension of the downloaded URL intact, if one exists
**Version**
All
**Describe the bug**
Sometimes libraries will get upset when you pass into them a file without the file extension they expect for the type of file they are handling. It is best to keep the file extension intact to avoid those kinds of issues.
See https://bugzilla.redhat.com/show_bug.cgi?id=2176870
There is a deeper issue there with the `file` / `libmagic` library, but until that is fixed and shipped the best we can do is help it out by keeping the original filenames.
I'm sure it will also help with debugging on occasion.
**To Reproduce**
See https://bugzilla.redhat.com/show_bug.cgi?id=2176870
**Additional context**
BZ https://bugzilla.redhat.com/show_bug.cgi?id=2176870
| 2023-04-10T14:44:21 |
||
pulp/pulpcore | 3,731 | pulp__pulpcore-3731 | [
"3730"
] | 50bdd1ba0d2ce7ae4d6eef15a2dd7c569d49c03a | diff --git a/pulpcore/app/settings.py b/pulpcore/app/settings.py
--- a/pulpcore/app/settings.py
+++ b/pulpcore/app/settings.py
@@ -148,8 +148,8 @@
"PAGE_SIZE": 100,
"DEFAULT_PERMISSION_CLASSES": ("pulpcore.plugin.access_policy.AccessPolicyFromDB",),
"DEFAULT_AUTHENTICATION_CLASSES": (
- "rest_framework.authentication.SessionAuthentication",
"rest_framework.authentication.BasicAuthentication",
+ "rest_framework.authentication.SessionAuthentication",
),
"UPLOADED_FILES_USE_URL": False,
"DEFAULT_VERSIONING_CLASS": "rest_framework.versioning.URLPathVersioning",
| diff --git a/pulpcore/tests/functional/api/test_auth.py b/pulpcore/tests/functional/api/test_auth.py
--- a/pulpcore/tests/functional/api/test_auth.py
+++ b/pulpcore/tests/functional/api/test_auth.py
@@ -32,7 +32,7 @@ def test_base_auth_failure(artifacts_api_client, invalid_user):
with pytest.raises(ApiException) as e:
artifacts_api_client.list()
- assert e.value.status == 403
+ assert e.value.status == 401
response = json.loads(e.value.body)
response_detail = response["detail"].lower()
for key in ("invalid", "username", "password"):
@@ -49,7 +49,7 @@ def test_base_auth_required(artifacts_api_client, anonymous_user):
with pytest.raises(ApiException) as e:
artifacts_api_client.list()
- assert e.value.status == 403
+ assert e.value.status == 401
response = json.loads(e.value.body)
response_detail = response["detail"].lower()
for key in ("authentication", "credentials", "provided"):
| The order of authentication classes causes Pulp to return 403 instead of 401 on failed authentication
https://github.com/pulp/pulpcore/blob/50bdd1ba0d2ce7ae4d6eef15a2dd7c569d49c03a/pulpcore/app/settings.py#L150
In pulp_container, we have a view that should handle insufficient, invalid, or valid credentials provided by users. If the users pass invalid credentials to that view, Pulp returns 403 instead of 401.
Changing the order of authentication classes could resolve the problem according to https://www.django-rest-framework.org/api-guide/authentication/#unauthorized-and-forbidden-responses.
> The first authentication class set on the view is used when determining the type of response.
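Conceptually, the fix in the patch above is just this reordering (comments are mine):
```python
REST_FRAMEWORK = {
    "DEFAULT_AUTHENTICATION_CLASSES": (
        # listed first, so its failure response wins: 401 with a
        # WWW-Authenticate challenge when credentials are missing or invalid
        "rest_framework.authentication.BasicAuthentication",
        # would have produced a plain 403 without a challenge
        "rest_framework.authentication.SessionAuthentication",
    ),
}
```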
| 2023-04-11T14:36:13 |
|
pulp/pulpcore | 3,751 | pulp__pulpcore-3751 | [
"3792"
] | 6385390bbeee5660e77055ef37e7018f24d4c668 | diff --git a/pulpcore/app/models/task.py b/pulpcore/app/models/task.py
--- a/pulpcore/app/models/task.py
+++ b/pulpcore/app/models/task.py
@@ -192,7 +192,7 @@ def __str__(self):
def __enter__(self):
self.lock = _uuid_to_advisory_lock(self.pk.int)
with connection.cursor() as cursor:
- cursor.execute("SELECT pg_try_advisory_lock(%s);", [self.lock])
+ cursor.execute("SELECT pg_try_advisory_lock(%s)", [self.lock])
acquired = cursor.fetchone()[0]
if not acquired:
raise AdvisoryLockError("Could not acquire lock.")
@@ -200,7 +200,7 @@ def __enter__(self):
def __exit__(self, exc_type, exc_value, traceback):
with connection.cursor() as cursor:
- cursor.execute("SELECT pg_advisory_unlock(%s);", [self.lock])
+ cursor.execute("SELECT pg_advisory_unlock(%s)", [self.lock])
released = cursor.fetchone()[0]
if not released:
raise RuntimeError("Lock not held.")
diff --git a/pulpcore/app/tasks/analytics.py b/pulpcore/app/tasks/analytics.py
--- a/pulpcore/app/tasks/analytics.py
+++ b/pulpcore/app/tasks/analytics.py
@@ -52,7 +52,7 @@ def get_analytics_posting_url():
def _get_postgresql_version_string():
- return connection.cursor().connection.server_version
+ return connection.cursor().connection.info.server_version
async def _postgresql_version(analytics):
diff --git a/pulpcore/tasking/pulpcore_worker.py b/pulpcore/tasking/pulpcore_worker.py
--- a/pulpcore/tasking/pulpcore_worker.py
+++ b/pulpcore/tasking/pulpcore_worker.py
@@ -73,7 +73,7 @@ def __init__(self, lock, lock_group=0):
def __enter__(self):
with connection.cursor() as cursor:
- cursor.execute("SELECT pg_try_advisory_lock(%s, %s);", [self.lock_group, self.lock])
+ cursor.execute("SELECT pg_try_advisory_lock(%s, %s)", [self.lock_group, self.lock])
acquired = cursor.fetchone()[0]
if not acquired:
raise AdvisoryLockError("Could not acquire lock.")
@@ -81,7 +81,7 @@ def __enter__(self):
def __exit__(self, exc_type, exc_value, traceback):
with connection.cursor() as cursor:
- cursor.execute("SELECT pg_advisory_unlock(%s, %s);", [self.lock_group, self.lock])
+ cursor.execute("SELECT pg_advisory_unlock(%s, %s)", [self.lock_group, self.lock])
released = cursor.fetchone()[0]
if not released:
raise RuntimeError("Lock not held.")
@@ -89,7 +89,12 @@ def __exit__(self, exc_type, exc_value, traceback):
class NewPulpWorker:
def __init__(self):
+ # Notification states from several signal handlers
self.shutdown_requested = False
+ self.wakeup = False
+ self.cancel_task = False
+
+ self.task = None
self.name = f"{os.getpid()}@{socket.getfqdn()}"
self.heartbeat_period = settings.WORKER_TTL / 3
self.versions = {app.label: app.version for app in pulp_plugin_configs()}
@@ -119,6 +124,13 @@ def _signal_handler(self, thesignal, frame):
self.task_grace_timeout = TASK_GRACE_INTERVAL
self.shutdown_requested = True
+ def _pg_notify_handler(self, notification):
+ if notification.channel == "pulp_worker_wakeup":
+ self.wakeup = True
+ elif self.task and notification.channel == "pulp_worker_cancel":
+ if notification.payload == str(self.task.pk):
+ self.cancel_task = True
+
def handle_worker_heartbeat(self):
"""
Create or update worker heartbeat records.
@@ -273,32 +285,24 @@ def sleep(self):
"""Wait for signals on the wakeup channel while heart beating."""
_logger.debug(_("Worker %s entering sleep state."), self.name)
- wakeup = False
- while not self.shutdown_requested:
- # Handle all notifications before sleeping in `select`
- while connection.connection.notifies:
- item = connection.connection.notifies.pop(0)
- if item.channel == "pulp_worker_wakeup":
- _logger.debug(_("Worker %s received wakeup call."), self.name)
- wakeup = True
- # ignore all other notifications
- if wakeup:
- break
-
+ while not self.shutdown_requested and not self.wakeup:
r, w, x = select.select(
[self.sentinel, connection.connection], [], [], self.heartbeat_period
)
self.beat()
if connection.connection in r:
- connection.connection.poll()
+ connection.connection.execute("SELECT 1")
if self.sentinel in r:
os.read(self.sentinel, 256)
+ self.wakeup = False
def supervise_task(self, task):
"""Call and supervise the task process while heart beating.
This function must only be called while holding the lock for that task."""
+ self.cancel_task = False
+ self.task = task
task.worker = self.worker
task.save(update_fields=["worker"])
cancel_state = None
@@ -307,13 +311,6 @@ def supervise_task(self, task):
task_process = Process(target=_perform_task, args=(task.pk, task_working_dir_rel_path))
task_process.start()
while True:
- # Handle all notifications before sleeping in `select`
- while connection.connection.notifies:
- item = connection.connection.notifies.pop(0)
- if item.channel == "pulp_worker_cancel" and item.payload == str(task.pk):
- _logger.info(_("Received signal to cancel current task %s."), task.pk)
- cancel_state = TASK_STATES.CANCELED
- # ignore all other notifications
if cancel_state:
if self.task_grace_timeout > 0:
_logger.info("Wait for canceled task to abort.")
@@ -330,7 +327,10 @@ def supervise_task(self, task):
)
self.beat()
if connection.connection in r:
- connection.connection.poll()
+ connection.connection.execute("SELECT 1")
+ if self.cancel_task:
+ _logger.info(_("Received signal to cancel current task %s."), task.pk)
+ cancel_state = TASK_STATES.CANCELED
if task_process.sentinel in r:
if not task_process.is_alive():
break
@@ -365,12 +365,14 @@ def supervise_task(self, task):
self.cancel_abandoned_task(task, cancel_state, cancel_reason)
if task.reserved_resources_record:
self.notify_workers()
+ self.task = None
def run_forever(self):
with WorkerDirectory(self.name):
signal.signal(signal.SIGINT, self._signal_handler)
signal.signal(signal.SIGTERM, self._signal_handler)
# Subscribe to pgsql channels
+ connection.connection.add_notify_handler(self._pg_notify_handler)
self.cursor.execute("LISTEN pulp_worker_wakeup")
self.cursor.execute("LISTEN pulp_worker_cancel")
while not self.shutdown_requested:
| Bump database backend to use psycopg3
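The notification API is the main user-visible difference the worker code has to absorb: psycopg 3 dispatches notifications to registered callbacks instead of queueing them on `connection.notifies`. A minimal sketch, with names taken from the patch above:
```python
from django.db import connection

def handler(notification):
    # psycopg 3 calls this with a Notify object; under psycopg 2 the worker
    # instead drained connection.notifies after calling poll()
    print(notification.channel, notification.payload)

connection.ensure_connection()
connection.connection.add_notify_handler(handler)
with connection.cursor() as cursor:
    cursor.execute("LISTEN pulp_worker_wakeup")
# any round-trip gives psycopg 3 a chance to deliver pending notifications
connection.connection.execute("SELECT 1")
# the server version accessor moved as well (see the analytics hunk above)
print(connection.connection.info.server_version)
```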
| 2023-04-18T10:09:28 |
||
pulp/pulpcore | 3,755 | pulp__pulpcore-3755 | [
"3754"
] | 388ed5f6adaa54d5dff9220599685f8479d7fa68 | diff --git a/pulpcore/app/serializers/content.py b/pulpcore/app/serializers/content.py
--- a/pulpcore/app/serializers/content.py
+++ b/pulpcore/app/serializers/content.py
@@ -1,6 +1,6 @@
from gettext import gettext as _
-from django.db import transaction
+from django.db import transaction, IntegrityError
from rest_framework import serializers
from rest_framework.validators import UniqueValidator
@@ -45,7 +45,6 @@ def __init__(self, *args, **kwargs):
if hasattr(self.Meta.model, "relative_path") and "relative_path" in self.fields:
self.fields["relative_path"].write_only = False
- @transaction.atomic
def create(self, validated_data):
"""
Create the content and associate it with its Artifact, or retrieve the existing content.
@@ -63,10 +62,16 @@ def create(self, validated_data):
relative_path = validated_data.pop("relative_path")
else:
relative_path = validated_data.get("relative_path")
- content = self.Meta.model.objects.create(**validated_data)
- models.ContentArtifact.objects.create(
- artifact=artifact, content=content, relative_path=relative_path
- )
+ try:
+ with transaction.atomic():
+ content = self.Meta.model.objects.create(**validated_data)
+ models.ContentArtifact.objects.create(
+ artifact=artifact, content=content, relative_path=relative_path
+ )
+ except IntegrityError:
+ content = self.retrieve(validated_data)
+ if content is None:
+ raise
return content
| There's a race when creating the same content in multiple processes
`Task 90df818f-3645-4b7c-8aaa-fae400a10ae0 failed (duplicate key value violates unique constraint "file_filecontent_relative_path_digest__pu_b4bae2c2_uniq"
DETAIL: Key (relative_path, digest, _pulp_domain_id)=(B, df7e70e5021544f4834bbee64a9e3789febc4be81470df629cad6ddb03320a5c, be9b7087-f318-48ef-81ce-f3141524c659) already exists.
)`
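The fix wraps the insert in a savepoint and falls back to fetching the row a concurrent process created; a generic sketch of the pattern (the actual patch delegates the lookup to the serializer's `retrieve()`):
```python
from django.db import IntegrityError, transaction

def create_or_retrieve(model, **kwargs):
    try:
        # the inner atomic() is a savepoint, so the IntegrityError does not
        # poison an enclosing transaction
        with transaction.atomic():
            return model.objects.create(**kwargs)
    except IntegrityError:
        # another process won the race; the row exists now
        return model.objects.get(**kwargs)
```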
| 2023-04-20T13:23:26 |
||
pulp/pulpcore | 3,756 | pulp__pulpcore-3756 | [
"3754"
] | 9b7b2fb63893f95e731cc98544ceb88f6a7855b0 | diff --git a/pulpcore/app/serializers/content.py b/pulpcore/app/serializers/content.py
--- a/pulpcore/app/serializers/content.py
+++ b/pulpcore/app/serializers/content.py
@@ -1,6 +1,6 @@
from gettext import gettext as _
-from django.db import transaction
+from django.db import transaction, IntegrityError
from rest_framework import serializers
from rest_framework.validators import UniqueValidator
@@ -45,7 +45,6 @@ def __init__(self, *args, **kwargs):
if hasattr(self.Meta.model, "relative_path") and "relative_path" in self.fields:
self.fields["relative_path"].write_only = False
- @transaction.atomic
def create(self, validated_data):
"""
Create the content and associate it with its Artifact, or retrieve the existing content.
@@ -63,10 +62,16 @@ def create(self, validated_data):
relative_path = validated_data.pop("relative_path")
else:
relative_path = validated_data.get("relative_path")
- content = self.Meta.model.objects.create(**validated_data)
- models.ContentArtifact.objects.create(
- artifact=artifact, content=content, relative_path=relative_path
- )
+ try:
+ with transaction.atomic():
+ content = self.Meta.model.objects.create(**validated_data)
+ models.ContentArtifact.objects.create(
+ artifact=artifact, content=content, relative_path=relative_path
+ )
+ except IntegrityError:
+ content = self.retrieve(validated_data)
+ if content is None:
+ raise
return content
| There's a race when creating the same content in multiple processes
`Task 90df818f-3645-4b7c-8aaa-fae400a10ae0 failed (duplicate key value violates unique constraint "file_filecontent_relative_path_digest__pu_b4bae2c2_uniq"
DETAIL: Key (relative_path, digest, _pulp_domain_id)=(B, df7e70e5021544f4834bbee64a9e3789febc4be81470df629cad6ddb03320a5c, be9b7087-f318-48ef-81ce-f3141524c659) already exists.
)`
| 2023-04-20T16:07:49 |
||
pulp/pulpcore | 3,765 | pulp__pulpcore-3765 | [
"2048"
] | b73f2e98f22261ad58fd42522c9a1b4a4b1cbf48 | diff --git a/pulpcore/app/management/commands/rotate-db-key.py b/pulpcore/app/management/commands/rotate-db-key.py
new file mode 100644
--- /dev/null
+++ b/pulpcore/app/management/commands/rotate-db-key.py
@@ -0,0 +1,72 @@
+from contextlib import suppress
+from gettext import gettext as _
+
+from django.apps import apps
+from django.core.management import BaseCommand
+from django.db import connection, transaction
+
+from pulpcore.app.models import MasterModel
+from pulpcore.app.models.fields import EncryptedTextField, EncryptedJSONField
+
+
+class DryRun(Exception):
+ pass
+
+
+class Command(BaseCommand):
+ """
+ Django management command for db key rotation.
+ """
+
+ help = _(
+ "Rotate the db encryption key. "
+ "This command will re-encrypt all values in instances of EncryptedTextField and "
+        "EncryptedJSONField with the first key in the file referenced by "
+ "settings.DB_ENCRYPTION_KEY. You need to make sure that all running instances of the "
+ "application already loaded this key for proper functioning. Refer to the docs for zero "
+        "downtime key rotation. "
+ "It is safe to abort and resume or rerun this operation."
+ )
+
+ def add_arguments(self, parser):
+ """Set up arguments."""
+ parser.add_argument(
+ "--dry-run",
+ action="store_true",
+ help=_("Don't modify anything."),
+ )
+
+ def handle(self, *args, **options):
+ dry_run = options["dry_run"]
+
+ for model in apps.get_models():
+ if issubclass(model, MasterModel) and model._meta.master_model is None:
+                # This is a master model, and we will handle all its descendants.
+ continue
+ field_names = [
+ field.name
+ for field in model._meta.get_fields()
+ if isinstance(field, (EncryptedTextField, EncryptedJSONField))
+ ]
+ if field_names:
+ print(
+ _("Updating {fields} on {model}.").format(
+ model=model.__name__, fields=",".join(field_names)
+ )
+ )
+ exclude_filters = {f"{field_name}": None for field_name in field_names}
+ qs = model.objects.exclude(**exclude_filters).only(*field_names)
+ with suppress(DryRun), transaction.atomic():
+ batch = []
+ for item in qs.iterator():
+ batch.append(item)
+ if len(batch) >= 1024:
+ model.objects.bulk_update(batch, field_names)
+ batch = []
+ if batch:
+ model.objects.bulk_update(batch, field_names)
+ batch = []
+ if dry_run:
+ with connection.cursor() as cursor:
+ cursor.execute("SET CONSTRAINTS ALL IMMEDIATE")
+ raise DryRun()
diff --git a/pulpcore/app/models/fields.py b/pulpcore/app/models/fields.py
--- a/pulpcore/app/models/fields.py
+++ b/pulpcore/app/models/fields.py
@@ -1,14 +1,14 @@
import logging
import os
from gettext import gettext as _
+from functools import lru_cache
-from cryptography.fernet import Fernet
+from cryptography.fernet import Fernet, MultiFernet
from django.conf import settings
from django.core.exceptions import ImproperlyConfigured
from django.db.models import Lookup, FileField, JSONField
from django.db.models.fields import Field, TextField
from django.utils.encoding import force_bytes, force_str
-from django.utils.functional import cached_property
from pulpcore.app.files import TemporaryDownloadedFile
@@ -16,6 +16,14 @@
_logger = logging.getLogger(__name__)
+@lru_cache()
+def _fernet():
+    # Cache the encryption keys once per application.
+ _logger.debug(f"Loading encryption key from {settings.DB_ENCRYPTION_KEY}")
+ with open(settings.DB_ENCRYPTION_KEY, "rb") as key_file:
+ return MultiFernet([Fernet(key) for key in key_file.readlines()])
+
+
class ArtifactFileField(FileField):
"""
A custom FileField that always saves files to location specified by 'upload_to'.
@@ -88,20 +96,14 @@ def __init__(self, *args, **kwargs):
raise ImproperlyConfigured("EncryptedTextField does not support db_index=True.")
super().__init__(*args, **kwargs)
- @cached_property
- def _fernet(self):
- _logger.debug(f"Loading encryption key from {settings.DB_ENCRYPTION_KEY}")
- with open(settings.DB_ENCRYPTION_KEY, "rb") as key_file:
- return Fernet(key_file.read())
-
def get_db_prep_save(self, value, connection):
value = super().get_db_prep_save(value, connection)
if value is not None:
- return force_str(self._fernet.encrypt(force_bytes(value)))
+ return force_str(_fernet().encrypt(force_bytes(value)))
def from_db_value(self, value, expression, connection):
if value is not None:
- return force_str(self._fernet.decrypt(force_bytes(value)))
+ return force_str(_fernet().decrypt(force_bytes(value)))
class EncryptedJSONField(JSONField):
@@ -116,19 +118,13 @@ def __init__(self, *args, **kwargs):
raise ImproperlyConfigured("EncryptedJSONField does not support db_index=True.")
super().__init__(*args, **kwargs)
- @cached_property
- def _fernet(self):
- _logger.debug(f"Loading encryption key from {settings.DB_ENCRYPTION_KEY}")
- with open(settings.DB_ENCRYPTION_KEY, "rb") as key_file:
- return Fernet(key_file.read())
-
def encrypt(self, value):
if isinstance(value, dict):
return {k: self.encrypt(v) for k, v in value.items()}
elif isinstance(value, (list, tuple, set)):
return [self.encrypt(v) for v in value]
- return force_str(self._fernet.encrypt(force_bytes(repr(value))))
+ return force_str(_fernet().encrypt(force_bytes(repr(value))))
def decrypt(self, value):
if isinstance(value, dict):
@@ -136,7 +132,7 @@ def decrypt(self, value):
elif isinstance(value, (list, tuple, set)):
return [self.decrypt(v) for v in value]
- return eval(force_str(self._fernet.decrypt(force_bytes(value))))
+ return eval(force_str(_fernet().decrypt(force_bytes(value))))
def get_db_prep_save(self, value, connection):
value = self.encrypt(value)
| diff --git a/pulpcore/tests/unit/models/test_remote.py b/pulpcore/tests/unit/models/test_remote.py
--- a/pulpcore/tests/unit/models/test_remote.py
+++ b/pulpcore/tests/unit/models/test_remote.py
@@ -5,7 +5,7 @@
from django.test import TestCase
from pulpcore.app.models import Remote
-from pulpcore.app.models.fields import EncryptedTextField
+from pulpcore.app.models.fields import _fernet, EncryptedTextField
class RemoteTestCase(TestCase):
@@ -15,6 +15,7 @@ def setUp(self):
def tearDown(self):
if self.remote:
self.remote.delete()
+ _fernet.cache_clear()
@patch(
"pulpcore.app.models.fields.open",
@@ -22,6 +23,7 @@ def tearDown(self):
read_data=b"hPCIFQV/upbvPRsEpgS7W32XdFA2EQgXnMtyNAekebQ=",
)
def test_encrypted_proxy_password(self, mock_file):
+ _fernet.cache_clear()
self.remote = Remote(name=uuid4(), proxy_password="test")
self.remote.save()
self.assertEqual(Remote.objects.get(pk=self.remote.pk).proxy_password, "test")
@@ -35,4 +37,4 @@ def test_encrypted_proxy_password(self, mock_file):
proxy_password = EncryptedTextField().from_db_value(db_proxy_password, None, connection)
self.assertNotEqual(db_proxy_password, "test")
self.assertEqual(proxy_password, "test")
- self.assertEqual(mock_file.call_count, 2)
+ self.assertEqual(mock_file.call_count, 1)
| As a user I want to be able to rotate my encryption keys and rekey my information
Author: spredzy (spredzy)
Redmine Issue: 9397, https://pulp.plan.io/issues/9397
---
Pulp 3.15 brings support for encrypting fields in the DB. This is a great step toward better security practices. Thanks, team, for that.
In order to go a step further with security best practices, I would like to be able to rotate my keys periodically and hence rekey my data.
As it stands today, I haven't seen a way to do this.
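A minimal sketch of the rotation primitive the eventual patch builds on: `cryptography`'s `MultiFernet`, where the first key encrypts and every listed key can decrypt:
```python
from cryptography.fernet import Fernet, MultiFernet

old_key = Fernet(Fernet.generate_key())
new_key = Fernet(Fernet.generate_key())

token = old_key.encrypt(b"proxy-password")

f = MultiFernet([new_key, old_key])  # new key first: it is used for encryption
rotated = f.rotate(token)            # decrypts with any key, re-encrypts with the first
assert f.decrypt(rotated) == b"proxy-password"
```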
| From: @mdellweg (mdellweg)
Date: 2021-09-15T08:35:20Z
---
We should provide a `pulpcore-manager` command to replace the key.
Also there should be a strategy for how to rekey in a clustered environment.
From: daviddavis (daviddavis)
Date: 2021-09-20T12:06:21Z
---
mdellweg wrote:
> We should provide a `pulpcore-manager` command to replace the key.
+1. How will the command work? What options will it take?
> Also there should be a strategy for how to rekey in a clustered environment.
Maybe we should file another issue for this? Or turn this into an epic with subtasks?
From: @bmbouter (bmbouter)
Date: 2021-09-20T14:11:10Z
---
+1 to an epic with subtasks.
Also just to state it: It's both the changing of the key and the decrypt-re-encrypt of data in the database. Also here are two things I'm thinking about:
1) In clustered installs, how do we ensure the keys are distributed to all the nodes (which will need the private key) yet ensure the decrypt re-encrypt will only happen exactly once?
2) What happens if an OOM or power loss occurs on whatever node is being run halfway through? Since the data is encrypted, we have to be extremely careful that this is bulletproof.
From: @fao89 (fao89)
Date: 2021-09-21T14:40:05Z
---
I believe this may involve pulpcore, pulp_installer and pulp-operator work
This issue has been marked 'stale' due to lack of recent activity. If there is no further activity, the issue will be closed in another 30 days. Thank you for your contribution!
This issue is no longer marked for closure. | 2023-04-25T11:03:31 |
pulp/pulpcore | 3,766 | pulp__pulpcore-3766 | [
"3023"
] | 4cbd097cf28dd1df7ec57594abed2b2a74f017e3 | diff --git a/pulpcore/app/__init__.py b/pulpcore/app/__init__.py
--- a/pulpcore/app/__init__.py
+++ b/pulpcore/app/__init__.py
@@ -1 +0,0 @@
-default_app_config = "pulpcore.app.apps.PulpAppConfig"
diff --git a/pulpcore/app/apps.py b/pulpcore/app/apps.py
--- a/pulpcore/app/apps.py
+++ b/pulpcore/app/apps.py
@@ -227,6 +227,8 @@ class PulpAppConfig(PulpPluginAppConfig):
# The app's importable name
name = "pulpcore.app"
+ default = True
+
# The app label to be used when creating tables, registering models, referencing this app
# with manage.py, etc. This cannot contain a dot and must not conflict with the name of a
# package containing a Django app.
diff --git a/pulpcore/app/settings.py b/pulpcore/app/settings.py
--- a/pulpcore/app/settings.py
+++ b/pulpcore/app/settings.py
@@ -365,7 +365,7 @@
settings = DjangoDynaconf(
__name__,
- GLOBAL_ENV_FOR_DYNACONF="PULP",
+ ENVVAR_PREFIX_FOR_DYNACONF="PULP",
ENV_SWITCHER_FOR_DYNACONF="PULP_ENV",
PRELOAD_FOR_DYNACONF=[
"{}.app.settings".format(plugin_name) for plugin_name in INSTALLED_PULP_PLUGINS
| diff --git a/pulpcore/tests/unit/download/test_factory.py b/pulpcore/tests/unit/download/test_factory.py
--- a/pulpcore/tests/unit/download/test_factory.py
+++ b/pulpcore/tests/unit/download/test_factory.py
@@ -1,20 +1,21 @@
from django.test import TestCase
+from asgiref.sync import sync_to_async
from pulpcore.download.factory import DownloaderFactory
from pulpcore.plugin.models import Remote
class DownloaderFactoryHeadersTestCase(TestCase):
- def test_user_agent_header(self):
- remote = Remote.objects.create(url="http://example.org/", name="foo")
+ async def test_user_agent_header(self):
+ remote = await sync_to_async(Remote.objects.create)(url="http://example.org/", name="foo")
factory = DownloaderFactory(remote)
downloader = factory.build(remote.url)
default_user_agent = DownloaderFactory.user_agent()
self.assertEqual(downloader.session.headers["User-Agent"], default_user_agent)
- remote.delete()
+ await sync_to_async(remote.delete)()
- def test_custom_user_agent_header(self):
- remote = Remote.objects.create(
+ async def test_custom_user_agent_header(self):
+ remote = await sync_to_async(Remote.objects.create)(
url="http://example.org/", headers=[{"User-Agent": "foo"}], name="foo"
)
factory = DownloaderFactory(remote)
@@ -22,12 +23,13 @@ def test_custom_user_agent_header(self):
default_user_agent = DownloaderFactory.user_agent()
expected_user_agent = f"{default_user_agent}, foo"
self.assertEqual(downloader.session.headers["User-Agent"], expected_user_agent)
- remote.delete()
+ await sync_to_async(remote.delete)()
- def test_custom_headers(self):
- remote = Remote.objects.create(
+ async def test_custom_headers(self):
+ remote = await sync_to_async(Remote.objects.create)(
url="http://example.org/", headers=[{"Connection": "keep-alive"}], name="foo"
)
factory = DownloaderFactory(remote)
downloader = factory.build(remote.url)
self.assertEqual(downloader.session.headers["Connection"], "keep-alive")
+ await sync_to_async(remote.delete)()
diff --git a/pulpcore/tests/unit/viewsets/test_base.py b/pulpcore/tests/unit/viewsets/test_base.py
--- a/pulpcore/tests/unit/viewsets/test_base.py
+++ b/pulpcore/tests/unit/viewsets/test_base.py
@@ -23,9 +23,7 @@ def test_adds_filters(self):
queryset = viewset.get_queryset()
expected = models.RepositoryVersion.objects.filter(repository__pk=repo.pk)
- # weird, stupid django quirk
- # https://docs.djangoproject.com/en/3.2/topics/testing/tools/#django.test.TransactionTestCase.assertQuerysetEqual
- self.assertQuerysetEqual(queryset, map(repr, expected))
+ self.assertQuerysetEqual(queryset, expected)
def test_does_not_add_filters(self):
"""
@@ -38,9 +36,7 @@ def test_does_not_add_filters(self):
queryset = viewset.get_queryset()
expected = models.Repository.objects.all()
- # weird, stupid django quirk
- # https://docs.djangoproject.com/en/3.2/topics/testing/tools/#django.test.TransactionTestCase.assertQuerysetEqual
- self.assertQuerysetEqual(queryset, map(repr, expected))
+ self.assertQuerysetEqual(queryset, expected)
class TestGetSerializerClass(TestCase):
@@ -66,13 +62,13 @@ class TestTaskViewSet(viewsets.NamedModelViewSet):
serializer_class = serializers.TaskSerializer
viewset = TestTaskViewSet()
- self.assertEquals(viewset.get_serializer_class(), serializers.TaskSerializer)
+ self.assertEqual(viewset.get_serializer_class(), serializers.TaskSerializer)
request = unittest.mock.MagicMock()
request.query_params = QueryDict("minimal=True")
viewset.request = request
- self.assertEquals(viewset.get_serializer_class(), serializers.TaskSerializer)
+ self.assertEqual(viewset.get_serializer_class(), serializers.TaskSerializer)
def test_minimal_query_param(self):
"""
@@ -89,15 +85,15 @@ class TestTaskViewSet(viewsets.NamedModelViewSet):
# Test that it uses the full serializer with no query params
request.query_params = QueryDict()
viewset.request = request
- self.assertEquals(viewset.get_serializer_class(), serializers.TaskSerializer)
+ self.assertEqual(viewset.get_serializer_class(), serializers.TaskSerializer)
# Test that it uses the full serializer with minimal=False
request.query_params = QueryDict("minimal=False")
viewset.request = request
- self.assertEquals(viewset.get_serializer_class(), serializers.TaskSerializer)
+ self.assertEqual(viewset.get_serializer_class(), serializers.TaskSerializer)
# Test that it uses the minimal serializer with minimal=True
request.query_params = QueryDict("minimal=True")
viewset.request = request
- self.assertEquals(viewset.get_serializer_class(), serializers.MinimalTaskSerializer)
+ self.assertEqual(viewset.get_serializer_class(), serializers.MinimalTaskSerializer)
class TestGetParentFieldAndObject(TestCase):
@@ -121,7 +117,7 @@ def test_get_parent_field_and_object(self):
viewset = viewsets.RepositoryVersionViewSet()
viewset.kwargs = {"repository_pk": repo.pk}
- self.assertEquals(("repository", repo), viewset.get_parent_field_and_object())
+ self.assertEqual(("repository", repo), viewset.get_parent_field_and_object())
def test_get_parent_object(self):
"""
@@ -131,7 +127,7 @@ def test_get_parent_object(self):
viewset = viewsets.RepositoryVersionViewSet()
viewset.kwargs = {"repository_pk": repo.pk}
- self.assertEquals(repo, viewset.get_parent_object())
+ self.assertEqual(repo, viewset.get_parent_object())
class TestGetNestDepth(TestCase):
@@ -139,5 +135,5 @@ def test_get_nest_depth(self):
"""
Test that _get_nest_depth() returns the correct nesting depths.
"""
- self.assertEquals(1, viewsets.RepositoryViewSet._get_nest_depth())
- self.assertEquals(2, viewsets.RepositoryVersionViewSet._get_nest_depth())
+ self.assertEqual(1, viewsets.RepositoryViewSet._get_nest_depth())
+ self.assertEqual(2, viewsets.RepositoryVersionViewSet._get_nest_depth())
| Deprecation warnings
```
/usr/local/lib/python3.8/site-packages/dynaconf/contrib/django_dynaconf_v2.py:96: RemovedInDjango40Warning: The PASSWORD_RESET_TIMEOUT_DAYS setting is deprecated. Use PASSWORD_RESET_TIMEOUT instead.
```
```
/usr/local/lib/python3.8/site-packages/dynaconf/utils/__init__.py:264: DeprecationWarning: You are using GLOBAL_ENV_FOR_DYNACONF which is a deprecated settings replace it with ENVVAR_PREFIX_FOR_DYNACONF
```
```
/usr/local/lib/python3.8/site-packages/django/apps/registry.py:91: RemovedInDjango41Warning: 'pulpcore.app' defines default_app_config = 'pulpcore.app.apps.PulpAppConfig'. However, Django's automatic detection did not find this configuration. You should move the default config class to the apps submodule of your application and, if this module defines several config classes, mark the default one with default = True.
```
```
/usr/local/lib/python3.8/site-packages/asynctest/mock.py:434: DeprecationWarning: "@coroutine" decorator is deprecated since Python 3.8, use "async def" instead
```
```
usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/viewsets/test_base.py::TestGetQuerySet::test_does_not_add_filters
/usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/viewsets/test_base.py:43: RemovedInDjango41Warning: In Django 4.1, repr() will not be called automatically on a queryset when compared to string values. Set an explicit 'transform' to silence this warning.
```
```
usr/local/lib/python3.8/site-packages/asynctest/helpers.py:13
/usr/local/lib/python3.8/site-packages/asynctest/helpers.py:13: DeprecationWarning: "@coroutine" decorator is deprecated since Python 3.8, use "async def" instead
def exhaust_callbacks(loop):
usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/download/test_factory.py::DownloaderFactoryHeadersTestCase::test_custom_headers
usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/download/test_factory.py::DownloaderFactoryHeadersTestCase::test_custom_user_agent_header
usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/download/test_factory.py::DownloaderFactoryHeadersTestCase::test_user_agent_header
/usr/local/lib64/python3.8/site-packages/aiohttp/connector.py:771: DeprecationWarning: The object should be created within an async function
super().__init__(
usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/download/test_factory.py::DownloaderFactoryHeadersTestCase::test_custom_headers
usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/download/test_factory.py::DownloaderFactoryHeadersTestCase::test_custom_user_agent_header
usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/download/test_factory.py::DownloaderFactoryHeadersTestCase::test_user_agent_header
/usr/local/lib64/python3.8/site-packages/aiohttp/connector.py:782: DeprecationWarning: The object should be created within an async function
resolver = DefaultResolver(loop=self._loop)
usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/download/test_factory.py::DownloaderFactoryHeadersTestCase::test_custom_headers
usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/download/test_factory.py::DownloaderFactoryHeadersTestCase::test_custom_user_agent_header
usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/download/test_factory.py::DownloaderFactoryHeadersTestCase::test_user_agent_header
/usr/local/lib/python3.8/site-packages/pulpcore/download/factory.py:145: DeprecationWarning: The object should be created within an async function
return aiohttp.ClientSession(
usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/download/test_factory.py::DownloaderFactoryHeadersTestCase::test_custom_headers
usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/download/test_factory.py::DownloaderFactoryHeadersTestCase::test_custom_user_agent_header
usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/download/test_factory.py::DownloaderFactoryHeadersTestCase::test_user_agent_header
/usr/local/lib64/python3.8/site-packages/aiohttp/cookiejar.py:67: DeprecationWarning: The object should be created within an async function
super().__init__(loop=loop)
```
```
usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/viewsets/test_base.py::TestGetSerializerClass::test_minimal_query_param
/usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/viewsets/test_base.py:92: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(viewset.get_serializer_class(), serializers.TaskSerializer)
usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/viewsets/test_base.py::TestGetSerializerClass::test_minimal_query_param
/usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/viewsets/test_base.py:96: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(viewset.get_serializer_class(), serializers.TaskSerializer)
usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/viewsets/test_base.py::TestGetSerializerClass::test_minimal_query_param
/usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/viewsets/test_base.py:100: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(viewset.get_serializer_class(), serializers.MinimalTaskSerializer)
usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/viewsets/test_base.py::TestGetSerializerClass::test_serializer_class
/usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/viewsets/test_base.py:69: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(viewset.get_serializer_class(), serializers.TaskSerializer)
usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/viewsets/test_base.py::TestGetSerializerClass::test_serializer_class
/usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/viewsets/test_base.py:75: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(viewset.get_serializer_class(), serializers.TaskSerializer)
usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/viewsets/test_base.py::TestGetParentFieldAndObject::test_get_parent_field_and_object
/usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/viewsets/test_base.py:124: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(("repository", repo), viewset.get_parent_field_and_object())
usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/viewsets/test_base.py::TestGetParentFieldAndObject::test_get_parent_object
/usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/viewsets/test_base.py:134: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(repo, viewset.get_parent_object())
usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/viewsets/test_base.py::TestGetNestDepth::test_get_nest_depth
/usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/viewsets/test_base.py:142: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(1, viewsets.RepositoryViewSet._get_nest_depth())
usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/viewsets/test_base.py::TestGetNestDepth::test_get_nest_depth
/usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/viewsets/test_base.py:143: DeprecationWarning: Please use assertEqual instead.
self.assertEquals(2, viewsets.RepositoryVersionViewSet._get_nest_depth())
```
| Most of these are test related, but the first 3 are not, and the others ought to be addressed when we rewrite the tests.
@ggainey I think those are blockers just for 3.25.0 if I recall correctly. | 2023-04-25T15:04:33 |
pulp/pulpcore | 3,770 | pulp__pulpcore-3770 | [
"2865"
] | 5902ea2df6fb9c1670cac277d5657f4ea03a92aa | diff --git a/pulpcore/app/viewsets/repository.py b/pulpcore/app/viewsets/repository.py
--- a/pulpcore/app/viewsets/repository.py
+++ b/pulpcore/app/viewsets/repository.py
@@ -16,6 +16,7 @@
Content,
Remote,
Repository,
+ RepositoryContent,
RepositoryVersion,
)
from pulpcore.app.response import OperationPostponedResponse
@@ -41,9 +42,51 @@
from pulpcore.filters import HyperlinkRelatedFilter
+class RepositoryContentFilter(Filter):
+ """
+ Filter used to filter repositories which have a piece of content
+ """
+
+ def __init__(self, *args, **kwargs):
+ kwargs.setdefault("help_text", _("Content Unit referenced by HREF"))
+ self.latest = kwargs.pop("latest", False)
+ super().__init__(*args, **kwargs)
+
+ def filter(self, qs, value):
+ """
+ Args:
+ qs (django.db.models.query.QuerySet): The Repository Queryset
+ value (string): of content href to filter
+
+ Returns:
+ Queryset of the Repository containing the specified content
+ """
+
+ if value is None:
+ # user didn't supply a value
+ return qs
+
+ if not value:
+ raise serializers.ValidationError(detail=_("No value supplied for content filter"))
+
+ # Get the content object from the content_href
+ content = NamedModelViewSet.get_resource(value, Content)
+
+ if self.latest:
+ return qs.filter(
+ pk__in=RepositoryContent.objects.filter(
+ version_removed=None, content__pk=content.pk
+ ).values_list("repository__pk", flat=True)
+ )
+ else:
+ return qs.filter(content__pk=content.pk)
+
+
class RepositoryFilter(BaseFilterSet):
pulp_label_select = LabelFilter()
remote = HyperlinkRelatedFilter(allow_null=True)
+ with_content = RepositoryContentFilter()
+ latest_with_content = RepositoryContentFilter(latest=True)
class Meta:
model = Repository
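The two filters differ only in scope: `with_content` matches a repository if any of its versions contains the unit, while `latest_with_content` restricts the subquery to `RepositoryContent` rows with `version_removed=None`, i.e. content still present in the latest version. A hedged sketch of exercising them over the REST API; the base URL, credentials, and content href are placeholders:

```python
import requests

BASE = "http://localhost:24817"  # assumption: default pulpcore API port
AUTH = ("admin", "password")     # assumption: dev credentials
content_href = "/pulp/api/v3/content/file/files/<uuid>/"  # placeholder

# Repositories where *any* version contains the unit.
any_version = requests.get(
    f"{BASE}/pulp/api/v3/repositories/",
    params={"with_content": content_href},
    auth=AUTH,
).json()

# Repositories whose *latest* version contains the unit.
latest_only = requests.get(
    f"{BASE}/pulp/api/v3/repositories/",
    params={"latest_with_content": content_href},
    auth=AUTH,
).json()
```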
| diff --git a/pulpcore/tests/functional/api/test_repos.py b/pulpcore/tests/functional/api/test_repos.py
new file mode 100644
--- /dev/null
+++ b/pulpcore/tests/functional/api/test_repos.py
@@ -0,0 +1,50 @@
+"""Tests that CRUD repositories."""
+from uuid import uuid4
+
+import pytest
+
+from pulpcore.client.pulp_file import RepositorySyncURL
+
+
[email protected]
+def test_repository_content_filters(
+ file_content_api_client,
+ file_repository_api_client,
+ file_repository_factory,
+ file_remote_factory,
+ gen_object_with_cleanup,
+ write_3_iso_file_fixture_data_factory,
+ monitor_task,
+):
+ """Test repository's content filters."""
+ # generate a repo with some content
+ repo = file_repository_factory()
+ repo_manifest_path = write_3_iso_file_fixture_data_factory(str(uuid4()))
+ remote = file_remote_factory(manifest_path=repo_manifest_path, policy="on_demand")
+ body = RepositorySyncURL(remote=remote.pulp_href)
+ task_response = file_repository_api_client.sync(repo.pulp_href, body).task
+ version_href = monitor_task(task_response).created_resources[0]
+ content = file_content_api_client.list(repository_version_added=version_href).results[0]
+ repo = file_repository_api_client.read(repo.pulp_href)
+
+ # filter repo by the content
+ results = file_repository_api_client.list(with_content=content.pulp_href).results
+ assert results == [repo]
+ results = file_repository_api_client.list(latest_with_content=content.pulp_href).results
+ assert results == [repo]
+
+ # remove the content
+ response = file_repository_api_client.modify(
+ repo.pulp_href,
+ {"remove_content_units": [content.pulp_href]},
+ )
+ monitor_task(response.task)
+ repo = file_repository_api_client.read(repo.pulp_href)
+
+ # the repo still has the content unit
+ results = file_repository_api_client.list(with_content=content.pulp_href).results
+ assert results == [repo]
+
+ # but not in its latest version anymore
+ results = file_repository_api_client.list(latest_with_content=content.pulp_href).results
+ assert results == []
| As a user, I want to know which repositories some content is in
**Is your feature request related to a problem? Please describe.**
Content in pulp is distributed through repositories and distributions. If a client knows the name or HREF for some content, there's currently no way of knowing which repositories it's available in without looking in each repository.
**Describe the solution you'd like**
Add a `?content_in_latest=` filter to the distributions and repositories APIs that allows users to specify a content HREF and get a list of repositories or distributions that contain the selected content.
**Describe alternatives you've considered**
Are there any existing ways to do this in pulp?
**Additional context**
This is needed for repository management in galaxy_ng
as of today it is possible to list all repo_versions (and publications) that contain a certain content unit or a list of content units
https://docs.pulpproject.org/pulpcore/restapi.html#tag/Repository_Versions
```
GET http :/pulp/api/v3/repository_versions/?content=<content_href>
[
{
"pulp_href": "/pulp/api/v3/repositories/rpm/rpm/07c41c5f-59e4-4371-942a-b6a006a6d2cf/versions/1/",
"pulp_created": "2021-03-18T19:23:31.661940Z",
"repository_href": "/pulp/api/v3/repositories/rpm/rpm/07c41c5f-59e4-4371-942a-b6a006a6d2cf/",
"version": 1,
}
...
]
GET http :/pulp/api/v3/publications/?content=<content_href>
[
{
"pulp_href": "/pulp/api/v3/publications/rpm/rpm/4d5bb614-4318-4408-8c18-ab8b6b4c016f/",
"pulp_created": "2021-03-17T12:24:31.661940Z",
"repository_version_href": "/pulp/api/v3/repositories/rpm/rpm/07c41c5f-59e4-4371-942a-b6a006a6d2cf/versions/1/",
}
...
]
```
oh neat, I didn't realize the repository version api existed. I was looking at /<repo_href>/repository-versions/
yeah our bad - we have no docs on this :/
With the existing implementation we could add a query param `?version=latest` which would filter to the latest repo version and show it instead of showing all of them.
Distribution search can be based off the repo_version search.
Adding a `?version=latest` flag to `repository_versions` could be sufficient. The potential issues I see with this are:
- The `repository_versions` API is not quite as data rich as the repositories API. You can't see names, remotes etc. This could be partially addressed with #2828
- This will make developing UIs more challenging and the end user experience worse overall. Let's say I have a "Repositories" view in my UI. If I want to filter that list of repositories by content, I have to make an API call to a different endpoint and map the data so that the UI for repositories can interpret it. Without #2828, I would also have to make a bunch of additional API calls to get information about the repository itself. I also won't be able to use the content filter with any other repository filters such as repo labels, names or created dates.
I think our users would more or less appreciate all of these options, including:
(a) Adding the `?version=latest` flag to the repository_versions API
(b) Adding similar filter capability to the repositories endpoint
(c) Adding similar filtering capability to the distributions endpoint
The distributions endpoint (c) could be a little trickier because some distributions have publications and some don't, so for this to be implemented in pulpcore it would have to support both types. Totally do-able, just a bit more to it.
I want (a) because it's really easy, but then it requires the PoC Related Fields, which is a non-trivial piece of work. I think we should avoid tying this need up in that, so I believe the ideal path is to implement (b) and (c) as separate tickets. This ticket could be for one of them, and we'd need another ticket for the other.
For (b) it would be nice to have the ability to search against latest or not. For (c) it shouldn't take a "latest" option since the distribution already encapsulates that feature. Distributions can point to a repo which implies "latest" or a specific repo_version or publication, which implies "just this one".
Given all ^, I'm happy to defer to @newswangerd for whichever one you want.
A short aside on (a): maybe positioning it as `latest_only=True/False` (with a default of False) would be good, because I don't think semantically having the option accept mixed types (e.g. `latest`, or in other cases "a url to a repo version") is a positive user experience.
At the pulpcore meeting, we want to pursue using this ticket for implementing (b) and @newswangerd will file another ticket for (c) and @newswangerd or galaxy_ng will implement both?
I'll file another ticket for distributions. We can implement both
The reason we have implemented (a) and not (b) is that the content's presence and distribution come from a repo version specifically.
Your story sounds like "As a user, I want to know which repos have content X in its latest version", which is not as generic as "As a user, I want to know which repo versions have content X". Not every distribution serves the latest version, as was pointed out earlier in the comment, but what will (b) show in case I don't want to search against latest? The API will give you the repo result like this but will not tell you which version specifically the content is in.
```
"results": [
{
"description": null,
"latest_version_href": "/pulp/api/v3/repositories/container/container/6d0d0d25-b3c0-49a2-a798-358fbe6f5031/versions/0/",
"name": "lala",
"pulp_created": "2022-06-29T11:38:58.610462Z",
"pulp_href": "/pulp/api/v3/repositories/container/container/6d0d0d25-b3c0-49a2-a798-358fbe6f5031/",
"pulp_labels": {},
"remote": null,
"retain_repo_versions": null,
"versions_href": "/pulp/api/v3/repositories/container/container/6d0d0d25-b3c0-49a2-a798-358fbe6f5031/versions/"
}
]
```
And even if you're interested only in the latest repo version, there is no guarantee that by the time the user decides to consume/use/copy/etc. content from that repo 1) the repo will not have other versions created, so the latest you searched against won't be the latest anymore, and 2) the new latest might not have that content anymore; that's why it is probably safer to search through repo versions: they are immutable, and the content will be there for as long as the repo version exists.
I agree on the other points, that the repo versions API is not as rich as the repo API and that different endpoints would need to be called, but I am hoping that https://github.com/pulp/pulpcore/pull/2828 will help.
@ipanova thanks for such an informative post. Given that is the recommendation to focus on developing https://github.com/pulp/pulpcore/pull/2828 ?
I still think from a usability perspective users want to search from repos and distributions directly, but given the concerns you raise a clear path to doing that is not immediately clear to me. I guess one of the concerns is that if repo endpoints start returning serialized objects that aren't repos (for example) the endpoint is now polymorphic from an openAPI perspective. That being said it kind of would be anyway if we implement https://github.com/pulp/pulpcore/pull/2828 Just some rambling thoughts on this. More conversation is welcome.
@ipanova Let me see if I can summarize your objections to this:
1. Distributions can point to any repository version, so searching latest doesn't work
2. The repository serializer only links to the latest repo version, so searching for content in any other version doesn't make sense
3. Content is ultimately distributed from versions, so searching the versions directly is more reliable.
4. The repo version pointers for distributions and repositories can be updated in-between searching for content and requesting content, so they don't accurately represent if the content is actually there.
1, 2 and 3 are great points. Since the content can be distributed from any version in the repository it doesn't make sense to limit your search to just the latest on the repositories endpoint.
I'm not 100% certain 3 matters as much. The best any REST API can do is communicate to the client what state the system is in right now. If I make a followup call to the system to request another piece of data there's no guarantee that the object still exists, that the object is the same as it was before, that the server is running, that the authentication credentials are still valid, etc. The current endpoint for repository versions has this problem too. The repo versions are immutable, but they're also deletable. There's no guarantee the version exists if I make a followup call to grab my content from it.
With all this in mind, lets revisit some use cases from for this:
- U1: As an admin, I want to know who has access to download some content
- U2: As a content consumer, I want to know where to go to download some content
As you pointed out, (b) doesn't make any sense for U1. The content can be distributed from any version of the repository, so knowing if it's in latest doesn't help you make that determination anymore, so it would be better to make a call to the repo versions api endpoint.
U2 is a little more complicated. In the ansible world, clients can only download content from a distribution. If we just implement (a), to determine where my content is available, I would need to request the list of repository versions, and then figure out which repository versions are part of a distribution I have access to. Since distributions are just pointers to repository versions, it seems like it would still make sense to provide a `?contains_content` filter to the distributions endpoint.
Providing a `?contains_content` on the repository APIs would also be very helpful from a usability perspective, even if it means that you have to make a followup request to get the list of versions that the content is actually in. Going back to U1, let's say an admin is trying to assess which clients may have downloaded some malicious content. If the content is in 3000 versions of repo `foo`, 200 versions of `bar` and 1 version of `foobar`, then the admin has to page through 3201 repository versions to find out that there are 3 repos that demand their attention. It would be easier to get a list of 3 repositories in one API call and then go through the list of repo versions for each repo to perform cleanup or whatever they need to do.
I guess this is a long winded way of saying I really want us to implement (c). Limiting your search to just the latest version on a repo doesn't make as much sense, so (a) and (b) might not be necessary, but I would still love to have some content filtering capabilities on the repository endpoint.
cc @bmbouter
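For the audit scenario above, the repository filter composes with the existing repository-version `?content=` filter: narrow to repositories first, then enumerate the affected versions per repository. A hedged sketch; the URL, credentials, and the nested endpoint's `?content=` support are assumptions based on the examples earlier in this thread:

```python
import requests

BASE = "http://localhost:24817"  # assumption
AUTH = ("admin", "password")     # assumption
content_href = "/pulp/api/v3/content/file/files/<uuid>/"  # placeholder

repos = requests.get(
    f"{BASE}/pulp/api/v3/repositories/",
    params={"with_content": content_href},
    auth=AUTH,
).json()["results"]

for repo in repos:
    # Drill into just this repository's versions that contain the unit.
    versions = requests.get(
        f"{BASE}{repo['versions_href']}",
        params={"content": content_href},
        auth=AUTH,
    ).json()["results"]
    print(repo["name"], [v["pulp_href"] for v in versions])
```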
Is the question about content in a Distribution controversial at this point?
I think the question of what is currently presented in a distribution is rather clear. Should we break this out as a separate issue?
Ah, maybe I misunderstood the comment. I just added https://github.com/pulp/pulpcore/issues/2952 to track the distribution filter.
I don't think I will be able to attend in time our today's meeting, so let me leave here some of my thoughts.
> @ipanova Let me see if I can summarize your objections to this:
>
> 1. Distributions can point to any repository version, so searching latest doesn't work
>
> 2. The repository serializer only links to the latest repo version, so searching for content in any other version doesn't make sense
>
> 3. Content is ultimately distributed from versions, so searching the versions directly is more reliable.
>
> 4. The repo version pointers for distributions and repositories can be updated in-between searching for content and requesting content, so they don't accurately represent if the content is actually there.
>
>
> 1, 2 and 3 are great points. Since the content can be distributed from any version in the repository it doesn't make sense to limit your search to just the latest on the repositories endpoint.
>
> I'm not 100% certain 3 matters as much. The best any REST API can do is communicate to the client what state the system is in right now. If I make a followup call to the system to request another piece of data there's no guarantee that the object still exists, that the object is the same as it was before, that the server is running, that the authentication credentials are still valid, etc. The current endpoint for repository versions has this problem too. The repo versions are immutable, but they're also deletable. There's no guarantee the version exists if I make a followup call to grab my content from it.
>
> With all this in mind, lets revisit some use cases from for this:
>
> * U1: As an admin, I want to know who has access to download some content
>
> * U2: As a content consumer, I want to know where to go to download some content
>
>
> As you pointed out, (b) doesn't make any sense for U1. The content can be distributed from any version of the repository, so knowing if it's in latest doesn't help you make that determination anymore, so it would be better to make a call to the repo versions api endpoint.
>
> U2 is a little more complicated. In the ansible world, clients can only download content from a distribution. If we just implement (a), to determine where my content is available, I would need to request the list of repository versions, and then figure out which repository versions are part of a distribution I have access to. Since distributions are just pointers to repository versions, it seems like it would still make sense to provide a `?contains_content` filter to the distributions endpoint.
In my previous 2 comments I was mostly expressing my concerns over (b) as the main API endpoint of reference for repo content search. (a) and (c) make sense to me.
>
> Providing a `?contains_content` on the repository APIs would also be very helpful from a usability perspective, even if it means that you have to make a followup request to get the list of versions that the content is actually in. Going back to U1, let's say an admin is trying to assess which clients may have downloaded some malicious content. If the content is in 3000 versions of repo `foo`, 200 versions of `bar` and 1 version of `foobar`, then the admin has to page through 3201 repository versions to find out that there are 3 repos that demand their attention. It would be easier to get a list of 3 repositories in one API call and then go through the list of repo versions for each repo to perform cleanup or whatever they need to do.
I am still not sure I am sold on this. If I have malicious content, I plan to remove it.
1) I will make a call to the repo_versions API endpoint with `?latest_only=True` to identify the hrefs for foo, bar and foobar, and then
2) make 3 repo API endpoint calls, one to each repo, to remove the malicious content
3) Since repo versions are immutable, I can just delete them, so won't I perform 3201 DELETE calls on repo_versions?
I am not very much against providing `?contains_content` on the repository APIs; I just don't see much added value in it, because the result of the API call will tell me that one of the 3000 repo_versions from repo foo contains content X. Well, thanks, and what's next? Next, I am going to perform (1) with `?latest_only=False` so I can find all affected repo_versions and then (3) to delete them.
EDIT: well, I do see value for the sake of convenience and as a generic informative call, to not scroll through the X number of pages of repo_version results.
>
> I guess this is a long winded way of saying I really want us to implement (c). Limiting your search to just the latest version on a repo doesn't make as much sense, so (a) and (b) might not be necessary, but I would still love to have some content filtering capabilities on the repository endpoint.
yep +1 on (c)
>
> cc @bmbouter
We would really like to have this feature. And we'd prefer to filter repositories (as opposed to repo versions) by content as we don't expose repo versions to our users. Our use of Pulp is that users simply create, publish, and distribute repos so they have no concept of repo versions. | 2023-04-26T13:27:55 |
pulp/pulpcore | 3,784 | pulp__pulpcore-3784 | [
"3783"
] | ad319b1963fd7462bbf9e5562ead263a42481db8 | diff --git a/pulpcore/app/serializers/base.py b/pulpcore/app/serializers/base.py
--- a/pulpcore/app/serializers/base.py
+++ b/pulpcore/app/serializers/base.py
@@ -155,7 +155,8 @@ class RelatedField(
"""
-PKOnlyObject = namedtuple("PKOnlyObject", ["pk"])
+PKObject = namedtuple("PKObject", ["pk"])
+PKDomainObject = namedtuple("PKDomainObject", ["pk", "pulp_domain"])
class RelatedResourceField(RelatedField):
@@ -171,7 +172,7 @@ class RelatedResourceField(RelatedField):
def repo_ver_url(self, repo_ver):
repo_model = get_model_for_pulp_type(repo_ver.repository.pulp_type, Repository)
view_name = get_view_name_for_model(repo_model, "detail")
- obj = PKOnlyObject(pk=repo_ver.repository.pk)
+ obj = PKDomainObject(pk=repo_ver.repository.pk, pulp_domain=self.context["pulp_domain"])
repo_url = self.get_url(obj, view_name, request=None, format=None)
return f"{repo_url}versions/{repo_ver.number}/"
@@ -192,7 +193,12 @@ def to_representation(self, data):
except LookupError:
pass
else:
- obj = PKOnlyObject(pk=data.object_id)
+ if hasattr(model, "pulp_domain"):
+ obj = PKDomainObject(
+ pk=data.object_id, pulp_domain=self.context["pulp_domain"]
+ )
+ else:
+ obj = PKObject(pk=data.object_id)
try:
return self.get_url(obj, view_name, request=None, format=None)
except NoReverseMatch:
diff --git a/pulpcore/app/viewsets/task.py b/pulpcore/app/viewsets/task.py
--- a/pulpcore/app/viewsets/task.py
+++ b/pulpcore/app/viewsets/task.py
@@ -29,6 +29,7 @@
WorkerSerializer,
)
from pulpcore.app.tasks import purge
+from pulpcore.app.util import get_domain
from pulpcore.app.viewsets import NamedModelViewSet, RolesMixin
from pulpcore.app.viewsets.base import DATETIME_FILTER_OPTIONS, NAME_FILTER_OPTIONS
from pulpcore.app.viewsets.custom_filters import (
@@ -39,7 +40,7 @@
)
from pulpcore.constants import TASK_INCOMPLETE_STATES, TASK_STATES
from pulpcore.tasking.tasks import dispatch
-from pulpcore.tasking.util import cancel as cancel_task
+from pulpcore.tasking.util import cancel_task
class TaskFilter(BaseFilterSet):
@@ -167,6 +168,8 @@ def get_serializer(self, *args, **kwargs):
.only("pk", "number", "repository__pulp_type")
)
serializer.context["repo_ver_mapping"] = {rv.pk: rv for rv in repo_vers}
+ # Assume, all tasks and related resources are of the same domain.
+ serializer.context["pulp_domain"] = get_domain()
return serializer
def get_queryset(self):
diff --git a/pulpcore/tasking/util.py b/pulpcore/tasking/util.py
--- a/pulpcore/tasking/util.py
+++ b/pulpcore/tasking/util.py
@@ -10,7 +10,7 @@
_logger = logging.getLogger(__name__)
-def cancel(task_id):
+def cancel_task(task_id):
"""
Cancel the task that is represented by the given task_id.
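The stand-in namedtuples let the serializer reverse detail URLs without instantiating full model objects; with domains enabled, the URL pattern also carries a `pulp_domain` kwarg, hence the second tuple. A schematic illustration of the idea; the kwarg extraction mirrors what a hyperlinked field needs, not the actual DRF internals:

```python
from collections import namedtuple

PKObject = namedtuple("PKObject", ["pk"])
PKDomainObject = namedtuple("PKDomainObject", ["pk", "pulp_domain"])

def url_kwargs(obj):
    # A hyperlinked field only reads the attributes named in the URL
    # pattern, so a lightweight tuple can stand in for the real model row.
    kwargs = {"pk": obj.pk}
    if hasattr(obj, "pulp_domain"):
        kwargs["pulp_domain"] = obj.pulp_domain
    return kwargs

print(url_kwargs(PKObject(pk="0188...")))                         # {'pk': '0188...'}
print(url_kwargs(PKDomainObject(pk="0188...", pulp_domain="x")))  # adds the domain kwarg
```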
| diff --git a/pulpcore/tests/functional/__init__.py b/pulpcore/tests/functional/__init__.py
--- a/pulpcore/tests/functional/__init__.py
+++ b/pulpcore/tests/functional/__init__.py
@@ -12,6 +12,7 @@
import pytest
from aiohttp import web
+from contextlib import suppress
from dataclasses import dataclass
from packaging.version import parse as parse_version
from time import sleep
@@ -929,7 +930,9 @@ def _add_to_cleanup(api_client, pulp_href):
pass
for deleted_task_href in delete_task_hrefs:
- monitor_task(deleted_task_href)
+ with suppress(ApiException):
+ # The task itself may be gone at this point (e.g. by being part of a deleted domain).
+ monitor_task(deleted_task_href)
@pytest.fixture(scope="class")
| Task list and show endpoint optimizations for created_resources regress with domains
created_resources always render with the "default" domain.
| 2023-05-03T09:49:27 |
|
pulp/pulpcore | 3,787 | pulp__pulpcore-3787 | [
"3783"
] | b12179ab5f05479681f699e79793042565a9f7be | diff --git a/pulpcore/app/serializers/base.py b/pulpcore/app/serializers/base.py
--- a/pulpcore/app/serializers/base.py
+++ b/pulpcore/app/serializers/base.py
@@ -155,7 +155,8 @@ class RelatedField(
"""
-PKOnlyObject = namedtuple("PKOnlyObject", ["pk"])
+PKObject = namedtuple("PKObject", ["pk"])
+PKDomainObject = namedtuple("PKDomainObject", ["pk", "pulp_domain"])
class RelatedResourceField(RelatedField):
@@ -171,7 +172,7 @@ class RelatedResourceField(RelatedField):
def repo_ver_url(self, repo_ver):
repo_model = get_model_for_pulp_type(repo_ver.repository.pulp_type, Repository)
view_name = get_view_name_for_model(repo_model, "detail")
- obj = PKOnlyObject(pk=repo_ver.repository.pk)
+ obj = PKDomainObject(pk=repo_ver.repository.pk, pulp_domain=self.context["pulp_domain"])
repo_url = self.get_url(obj, view_name, request=None, format=None)
return f"{repo_url}versions/{repo_ver.number}/"
@@ -192,7 +193,12 @@ def to_representation(self, data):
except LookupError:
pass
else:
- obj = PKOnlyObject(pk=data.object_id)
+ if hasattr(model, "pulp_domain"):
+ obj = PKDomainObject(
+ pk=data.object_id, pulp_domain=self.context["pulp_domain"]
+ )
+ else:
+ obj = PKObject(pk=data.object_id)
try:
return self.get_url(obj, view_name, request=None, format=None)
except NoReverseMatch:
diff --git a/pulpcore/app/viewsets/task.py b/pulpcore/app/viewsets/task.py
--- a/pulpcore/app/viewsets/task.py
+++ b/pulpcore/app/viewsets/task.py
@@ -29,6 +29,7 @@
WorkerSerializer,
)
from pulpcore.app.tasks import purge
+from pulpcore.app.util import get_domain
from pulpcore.app.viewsets import NamedModelViewSet, RolesMixin
from pulpcore.app.viewsets.base import DATETIME_FILTER_OPTIONS, NAME_FILTER_OPTIONS
from pulpcore.app.viewsets.custom_filters import (
@@ -39,7 +40,7 @@
)
from pulpcore.constants import TASK_INCOMPLETE_STATES, TASK_STATES
from pulpcore.tasking.tasks import dispatch
-from pulpcore.tasking.util import cancel as cancel_task
+from pulpcore.tasking.util import cancel_task
class TaskFilter(BaseFilterSet):
@@ -167,6 +168,8 @@ def get_serializer(self, *args, **kwargs):
.only("pk", "number", "repository__pulp_type")
)
serializer.context["repo_ver_mapping"] = {rv.pk: rv for rv in repo_vers}
+ # Assume, all tasks and related resources are of the same domain.
+ serializer.context["pulp_domain"] = get_domain()
return serializer
def get_queryset(self):
diff --git a/pulpcore/tasking/util.py b/pulpcore/tasking/util.py
--- a/pulpcore/tasking/util.py
+++ b/pulpcore/tasking/util.py
@@ -10,7 +10,7 @@
_logger = logging.getLogger(__name__)
-def cancel(task_id):
+def cancel_task(task_id):
"""
Cancel the task that is represented by the given task_id.
| diff --git a/pulpcore/tests/functional/__init__.py b/pulpcore/tests/functional/__init__.py
--- a/pulpcore/tests/functional/__init__.py
+++ b/pulpcore/tests/functional/__init__.py
@@ -12,6 +12,7 @@
import pytest
from aiohttp import web
+from contextlib import suppress
from dataclasses import dataclass
from packaging.version import parse as parse_version
from time import sleep
@@ -929,7 +930,9 @@ def _add_to_cleanup(api_client, pulp_href):
pass
for deleted_task_href in delete_task_hrefs:
- monitor_task(deleted_task_href)
+ with suppress(ApiException):
+ # The task itself may be gone at this point (e.g. by being part of a deleted domain).
+ monitor_task(deleted_task_href)
@pytest.fixture(scope="class")
| Task list and show endpoint optimizations for created_resources regress with domains
created_resources always render with the "default" domain.
| 2023-05-03T17:01:23 |
|
pulp/pulpcore | 3,790 | pulp__pulpcore-3790 | [
"3614"
] | ef6c45537b92ccc23d6aa9eb974dc3aa9ce7f7d3 | diff --git a/pulpcore/app/util.py b/pulpcore/app/util.py
--- a/pulpcore/app/util.py
+++ b/pulpcore/app/util.py
@@ -288,33 +288,6 @@ def get_request_without_query_params(context):
return request
-def verify_signature(filepath, public_key, detached_data=None):
- """
- Check whether the provided file can be verified with the particular public key.
-
- When dealing with a detached signature (referenced by the 'filepath' argument), one have to pass
- the reference to a data file that was signed by that signature.
- """
- deprecation_logger.warning(
- "verify_signature() is deprecated and will be removed in pulpcore==3.25; use gpg_verify()."
- )
-
- with tempfile.TemporaryDirectory(dir=settings.WORKING_DIRECTORY) as temp_directory_name:
- gpg = gnupg.GPG(gnupghome=temp_directory_name)
- gpg.import_keys(public_key)
- imported_keys = gpg.list_keys()
-
- if len(imported_keys) != 1:
- raise RuntimeError("Exactly one key must be imported.")
-
- with open(filepath, "rb") as signature:
- verified = gpg.verify_file(signature, detached_data)
- if not verified.valid:
- raise InvalidSignatureError(
- f"The file '{filepath}' does not contain a valid signature."
- )
-
-
def gpg_verify(public_keys, signature, detached_data=None):
"""
Check whether the provided gnupg signature is valid for one of the provided public keys.
diff --git a/pulpcore/plugin/util.py b/pulpcore/plugin/util.py
--- a/pulpcore/plugin/util.py
+++ b/pulpcore/plugin/util.py
@@ -18,7 +18,6 @@
get_url,
gpg_verify,
raise_for_unknown_content_units,
- verify_signature,
get_default_domain,
get_domain,
get_domain_pk,
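Call sites migrate to `gpg_verify()`, which additionally supports more than one acceptable key. A hedged before/after sketch; whether `signature` may be a path or must be an open file follows pulpcore's implementation, so treat the argument shapes here as assumptions:

```python
from pulpcore.plugin.util import gpg_verify

public_key = open("signing_key.pub").read()  # placeholder: exported public key material

# Before (removed in this change):
#   verify_signature("manifest.asc", public_key, detached_data="manifest")
# After:
with open("manifest.asc", "rb") as signature:  # placeholder: detached signature file
    # Expected to raise if the signature is not valid for the given key(s),
    # matching the removed helper's contract.
    gpg_verify(public_key, signature, detached_data="manifest")
```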
| Remove verify_signature() in favour of gpg_verify()
| This is a follow-up task to remove deprecated functions/methods that should no longer be available for users as of v3.25. | 2023-05-04T07:43:25 |
|
pulp/pulpcore | 3,791 | pulp__pulpcore-3791 | [
"3613"
] | ef6c45537b92ccc23d6aa9eb974dc3aa9ce7f7d3 | diff --git a/pulpcore/app/viewsets/base.py b/pulpcore/app/viewsets/base.py
--- a/pulpcore/app/viewsets/base.py
+++ b/pulpcore/app/viewsets/base.py
@@ -16,7 +16,6 @@
from rest_framework.serializers import ValidationError as DRFValidationError, ListField, CharField
from pulpcore.app import tasks
-from pulpcore.app.loggers import deprecation_logger
from pulpcore.app.models import MasterModel
from pulpcore.app.models.role import GroupRole, UserRole
from pulpcore.app.response import OperationPostponedResponse
@@ -199,43 +198,6 @@ def get_resource(uri, model=None):
detail=_("URI {u} is not a valid {m}.").format(u=uri, m=model._meta.model_name)
)
- @staticmethod
- def extract_pk(uri):
- """
- Resolve a resource URI to a simple PK value.
-
- Provides a means to resolve an href passed in a POST body to a primary key.
- Doesn't assume anything about whether the resource corresponding to the URI
- passed in actually exists.
-
- Note:
- Cannot be used with nested URIs where the PK of the final resource is not present.
- e.g. RepositoryVersion URI consists of repository PK and version number - no
- RepositoryVersion PK is present within the URI.
-
- Args:
- uri (str): A resource URI.
-
- Returns:
- primary_key (uuid.uuid4): The primary key of the resource extracted from the URI.
-
- Raises:
- rest_framework.exceptions.ValidationError: on invalid URI.
- """
- deprecation_logger.warning(
- "NamedModelViewSet.extract_pk() is deprecated and will be removed in pulpcore==3.25; "
- "use pulpcore.plugin.util.extract_pk() instead."
- )
- try:
- match = resolve(urlparse(uri).path)
- except Resolver404:
- raise DRFValidationError(detail=_("URI not valid: {u}").format(u=uri))
-
- try:
- return match.kwargs["pk"]
- except KeyError:
- raise DRFValidationError("URI does not contain an unqualified resource PK")
-
@classmethod
def is_master_viewset(cls):
# ViewSet isn't related to a model, so it can't represent a master model
diff --git a/pulpcore/plugin/actions.py b/pulpcore/plugin/actions.py
--- a/pulpcore/plugin/actions.py
+++ b/pulpcore/plugin/actions.py
@@ -1,11 +1,7 @@
-from gettext import gettext as _
-
from drf_spectacular.utils import extend_schema
from rest_framework.decorators import action
-from rest_framework.serializers import ValidationError
from pulpcore.app import tasks
-from pulpcore.app.loggers import deprecation_logger
from pulpcore.app.models import RepositoryVersion
from pulpcore.app.response import OperationPostponedResponse
from pulpcore.app.serializers import (
@@ -15,7 +11,7 @@
from pulpcore.tasking.tasks import dispatch
-__all__ = ["ModifyRepositoryActionMixin", "raise_for_unknown_content_units"]
+__all__ = ["ModifyRepositoryActionMixin"]
class ModifyRepositoryActionMixin:
@@ -49,29 +45,3 @@ def modify(self, request, pk):
},
)
return OperationPostponedResponse(task, request)
-
-
-def raise_for_unknown_content_units(existing_content_units, content_units_pks_hrefs):
- """Verify if all the specified content units were found in the database.
-
- Args:
- existing_content_units (pulpcore.plugin.models.Content): Content filtered by
- specified_content_units.
- content_units_pks_hrefs (dict): An original dictionary of pk-href pairs that
- are used for the verification.
- Raises:
- ValidationError: If some of the referenced content units are not present in the database
- """
- deprecation_logger.warning(
- "pulpcore.plugin.actions.raise_for_unknown_content_units() is deprecated and will be "
- "removed in pulpcore==3.25; use pulpcore.plugin.util.raise_for_unknown_content_units()."
- )
- existing_content_units_pks = existing_content_units.values_list("pk", flat=True)
- existing_content_units_pks = set(map(str, existing_content_units_pks))
-
- missing_pks = set(content_units_pks_hrefs.keys()) - existing_content_units_pks
- if missing_pks:
- missing_hrefs = [content_units_pks_hrefs[pk] for pk in missing_pks]
- raise ValidationError(
- _("Could not find the following content units: {}").format(missing_hrefs)
- )
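Both helpers remain available from `pulpcore.plugin.util` with the same semantics as the removed bodies above. A hedged usage sketch (the href is a placeholder):

```python
from pulpcore.plugin.util import extract_pk, raise_for_unknown_content_units

href = "/pulp/api/v3/content/file/files/0188014a-120f-7e05-9e65-3de72d45182e/"
pk = extract_pk(href)  # resolves the path and returns the "pk" kwarg;
                       # raises a ValidationError for an unresolvable URI

# Given a queryset of found Content and the original pk -> href mapping,
# this raises a ValidationError naming any hrefs that were not found:
# raise_for_unknown_content_units(existing_content_units, {pk: href})
```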
| Remove NamedModelViewSet.extract_pk and pulpcore.plugin.actions.raise_for_unknown_content_units
In https://github.com/pulp/pulpcore/pull/3552, we deprecated `NamedModelViewSet.extract_pk` and `pulpcore.plugin.actions.raise_for_unknown_content_units` in favour of `pulpcore.plugin.util.extract_pk` and `pulpcore.plugin.util.raise_for_unknown_content_units`, respectively. The change was required to prevent circular imports while enhancing the validation on the `modify` endpoint.
| This is a follow-up task to remove deprecated functions/methods that should no longer be available for users as of v3.25. | 2023-05-04T07:51:13 |
|
pulp/pulpcore | 3,794 | pulp__pulpcore-3794 | [
"2993"
] | 14cedb972bb682e22f6dcb62b41e09480c8b7a22 | diff --git a/pulpcore/app/models/publication.py b/pulpcore/app/models/publication.py
--- a/pulpcore/app/models/publication.py
+++ b/pulpcore/app/models/publication.py
@@ -469,6 +469,11 @@ class BaseDistribution(MasterModel):
of ``a/path/foo`` existed, you could not make a second Distribution with a ``base_path`` of
``a/path`` or ``a`` because both are subpaths of ``a/path/foo``.
+ Note:
+ This class is no longer supported and cannot be removed from Pulp 3 due to possible
+ problems with old migrations for plugins. Until the migrations are squashed, this class
+ should be preserved.
+
Fields:
name (models.TextField): The name of the distribution. Examples: "rawhide" and "stable".
base_path (models.TextField): The base (relative) path component of the published url.
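The reason the model must stay (see the discussion below) is that early plugin migrations were generated against it and are replayed verbatim on fresh installs. A hypothetical early plugin migration illustrating the hard reference; everything here is made up for illustration:

```python
# hypothetical plugin file: my_plugin/migrations/0001_initial.py
from django.db import migrations, models

class Migration(migrations.Migration):
    dependencies = [("core", "0001_initial")]  # placeholder core migration

    operations = [
        migrations.CreateModel(
            name="MyDistribution",
            fields=[
                ("basedistribution_ptr", models.OneToOneField(
                    on_delete=models.CASCADE, parent_link=True,
                    primary_key=True, serialize=False,
                    to="core.basedistribution",  # hard reference to the old model
                )),
            ],
            bases=("core.basedistribution",),
        ),
    ]
```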
| Remove `BaseDistribution` from the plugin API
We tried to do this once before but there were migration issues perhaps?
edit: See comment below
> Maybe we just update the docstring with info that it can't be removed until Pulp4, and use this issue for that .misc change.
| Posting some matrix convo that happened some time ago:
x9c4
Hey folks, do you think we are ready to remove the BaseDistribution model from the database? Or is there any reason not to do so?
^
dalley maybe you remember?
dralley
I don't remember the exact reason but I think it had to do with migrations. Basically the historical migration record has lots of references to this BaseDistribution model
hopefully we wrote down the reason somewhere
x9c4
not in the code at least.
dralley
x9c4: https://pulp.plan.io/issues/8386#note-5
dralley
I think the TL;DR is: the migrations happen all at once for a given app, be it pulpcore or a plugin, so if you do a speedrun through all the pulpcore migrations and remove the basedistribution model entirely, then the early migrations for some plugin will fail
in a new installation, where they haven't already been applied
x9c4
So we cannot really remove it, because we do not control all plugins' migrations. I'll stop thinking about it again for a year.
x9c4
I was once looking into squashing migrations, but maybe for the same reasons of plugin interdependence we cannot really do this in Pulp.
dralley
Pulp 4 (tm)
We could at the least remove it from the plugin API and it would just live in the DB and the model definition inside of `pulpcore.app.models`.
Actually it already is removed from the plugin API: https://github.com/pulp/pulpcore/blob/main/pulpcore/plugin/models/__init__.py. Maybe we just update the docstring with info that it can't be removed until Pulp4, and use this issue for that .misc change.
@bmbouter i agree
... until we squash enough migrations, we cannot remove it.
|
pulp/pulpcore | 3,801 | pulp__pulpcore-3801 | [
"3800"
] | 219cd401cebe7b53d29f1aa10b638f7dcd07c57b | diff --git a/pulpcore/app/models/domain.py b/pulpcore/app/models/domain.py
--- a/pulpcore/app/models/domain.py
+++ b/pulpcore/app/models/domain.py
@@ -49,6 +49,15 @@ def get_storage(self):
def prevent_default_deletion(self):
raise models.ProtectedError("Default domain can not be updated/deleted.", [self])
+ @hook(BEFORE_DELETE, when="name", is_not="default")
+ def _cleanup_orphans_pre_delete(self):
+ if self.content_set.exclude(version_memberships__isnull=True).exists():
+ raise models.ProtectedError("There is active content in the domain.")
+ self.content_set.filter(version_memberships__isnull=True).delete()
+ for artifact in self.artifact_set.all().iterator():
+ # Delete on by one to properly cleanup the storage.
+ artifact.delete()
+
class Meta:
permissions = [
("manage_roles_domain", "Can manage role assignments on domain"),
diff --git a/pulpcore/app/settings.py b/pulpcore/app/settings.py
--- a/pulpcore/app/settings.py
+++ b/pulpcore/app/settings.py
@@ -294,6 +294,12 @@
DOMAIN_ENABLED = False
+SHELL_PLUS_IMPORTS = [
+ "from pulpcore.app.util import get_domain, get_domain_pk, set_domain, get_url, extract_pk",
+ "from pulpcore.tasking.tasks import dispatch",
+ "from pulpcore.tasking.util import cancel_task",
+]
+
# HERE STARTS DYNACONF EXTENSION LOAD (Keep at the very bottom of settings.py)
# Read more at https://dynaconf.readthedocs.io/en/latest/guides/django.html
from dynaconf import DjangoDynaconf, Validator # noqa
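With the hook in place, deleting a non-default domain rejects domains that still hold active content (content attached to some repository version), and otherwise purges orphaned content in bulk and artifacts one at a time so each backing file is removed from storage. A hedged ORM-level sketch of the resulting behavior:

```python
from django.db.models import ProtectedError

try:
    domain.delete()  # assumes `domain` is a saved, non-default Domain instance
except ProtectedError:
    # Some content in this domain is still in a repository version;
    # remove it from its repositories first, then retry the delete.
    pass
```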
| diff --git a/pulpcore/tests/functional/api/test_crud_domains.py b/pulpcore/tests/functional/api/test_crud_domains.py
--- a/pulpcore/tests/functional/api/test_crud_domains.py
+++ b/pulpcore/tests/functional/api/test_crud_domains.py
@@ -125,6 +125,57 @@ def test_active_domain_deletion(domains_api_client, rbac_contentguard_api_client
assert e.value.status == 404
[email protected]
+def test_orphan_domain_deletion(
+ domains_api_client,
+ file_repository_api_client,
+ file_content_api_client,
+ gen_object_with_cleanup,
+ monitor_task,
+ tmp_path,
+):
+ """Test trying to delete a domain that is in use, has objects in it."""
+ if not settings.DOMAIN_ENABLED:
+ pytest.skip("Domains not enabled")
+ body = {
+ "name": str(uuid.uuid4()),
+ "storage_class": "pulpcore.app.models.storage.FileSystem",
+ "storage_settings": {"MEDIA_ROOT": "/var/lib/pulp/media/"},
+ }
+ domain = gen_object_with_cleanup(domains_api_client, body)
+
+ repository = gen_object_with_cleanup(
+ file_repository_api_client, {"name": str(uuid.uuid4())}, pulp_domain=domain.name
+ )
+ new_file = tmp_path / "new_file"
+ new_file.write_text("Test file")
+ monitor_task(
+ file_content_api_client.create(
+ relative_path=str(uuid.uuid4()),
+ file=new_file,
+ pulp_domain=domain.name,
+ repository=repository.pulp_href,
+ ).task
+ )
+
+ # Try to delete a domain with the repository in it
+ response = domains_api_client.delete(domain.pulp_href)
+ with pytest.raises(PulpTaskError) as e:
+ monitor_task(response.task)
+
+ assert e.value.task.state == "failed"
+
+ # Delete the repository
+ file_repository_api_client.delete(repository.pulp_href)
+
+ # Now succeed in deleting the domain
+ response = domains_api_client.delete(domain.pulp_href)
+ monitor_task(response.task)
+ with pytest.raises(ApiException) as e:
+ domains_api_client.read(domain.pulp_href)
+ assert e.value.status == 404
+
+
@pytest.mark.parallel
def test_special_domain_creation(domains_api_client, gen_object_with_cleanup):
"""Test many possible domain creation scenarios."""
| Orphaned content should not prevent deleting a domain.
Being unable to delete a domain when all that is left in it is orphaned content is unintuitive.
| 2023-05-08T10:31:14 |
|
pulp/pulpcore | 3,808 | pulp__pulpcore-3808 | [
"3807"
] | c6cd331e153b2442b90e2aa9ec5eac04e24fffa3 | diff --git a/pulpcore/app/models/content.py b/pulpcore/app/models/content.py
--- a/pulpcore/app/models/content.py
+++ b/pulpcore/app/models/content.py
@@ -11,7 +11,7 @@
import subprocess
from collections import defaultdict
-from functools import lru_cache
+from functools import lru_cache, partial
from itertools import chain
from django.conf import settings
@@ -162,7 +162,8 @@ def delete(self, *args, **kwargs):
kwargs (dict): dictionary of keyword arguments to pass to Model.delete()
"""
super().delete(*args, **kwargs)
- self.file.delete(save=False)
+ # In case of rollback, we want the artifact to stay connected with it's file.
+ transaction.on_commit(partial(self.file.delete, save=False))
class ArtifactManager(BulkCreateManager):
| Artifacts storage association does not honor transaction semantics.
**Version**
At least up to 3.24
**Describe the bug**
When deleting an Artifact in a transaction that may subsequently fail and not be committed, the file associated with the artifact is already removed from the storage. This should not happen.
**To Reproduce**
See listing below.
**Expected behavior**
If the transaction is not committed after the delete, the artifact should remain intact, including its backing file in storage.
**Additional context**
```
In [6]: Artifact.objects.all()
Out[6]: <BulkTouchQuerySet [<Artifact: pk=0188014a-120f-7e05-9e65-3de72d45182e>]>
In [7]: a = Artifact.objects.first()
In [8]: with transaction.atomic():
...: a.delete()
...: raise Exception()
...:
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
Cell In[8], line 3
1 with transaction.atomic():
2 a.delete()
----> 3 raise Exception()
Exception:
In [9]: a
Out[9]: <Artifact: pk=None>
In [10]: a = Artifact.objects.first()
In [11]: a
Out[11]: <Artifact: pk=0188014a-120f-7e05-9e65-3de72d45182e>
In [12]: a.file
Out[12]: <FieldFile: artifact/99/322da247546beac65c2f5b7d164e94dcaf050b85a6076a00cf4fec5e21a6ed>
In [13]: a.file.read()
---------------------------------------------------------------------------
FileNotFoundError
```
| 2023-05-09T16:50:56 |