Dataset schema (column, type, and observed minimum / maximum; for string columns the range is the text length):

Column           Type     Min                    Max
question_id      int64    59.5M                  79.6M
creation_date    date     2020-01-01 00:00:00    2025-05-14 00:00:00
link             string   60 chars               163 chars
question         string   53 chars               28.9k chars
accepted_answer  string   26 chars               29.3k chars
question_vote    int64    1                      410
answer_vote      int64    -9                     482
71,803,409
2022-4-8
https://stackoverflow.com/questions/71803409/vscode-how-to-interrupt-a-running-python-test
I'm using the VSCode Test Explorer to run my Python unit tests. There was a bug in my code and the method under test never finishes. How do I interrupt my test? I can't find a way to do it from the GUI; I had to close VSCode to interrupt it. I'm using the pytest framework.
Silly me: the Stop button is at the top right of the Testing tab.
33
78
71,786,694
2022-4-7
https://stackoverflow.com/questions/71786694/building-a-random-forest-classifier-with-equal-output-probabilities-to-a-decisio
I have been trying to build a RandomForestClassifier() (RF) model and a DecisionTreeClassifier() (DT) model in order to get the same output (only for learning purposes). I have found some questions with answers that list the parameters required to make both models equal, but I can't find code that actually does it, so I'm trying to build that code:

import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

random_seed = 42

X, y = make_classification(
    n_samples=100000,
    n_features=5,
    random_state=random_seed
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=random_seed)

DT = DecisionTreeClassifier(criterion='gini',              # default
                            splitter='best',               # default
                            max_depth=None,                # default
                            min_samples_split=3,           # default
                            min_samples_leaf=1,            # default
                            min_weight_fraction_leaf=0.0,  # default
                            max_features=None,             # default
                            random_state=random_seed,      # NON-default
                            max_leaf_nodes=None,           # default
                            min_impurity_decrease=0.0,     # default
                            class_weight=None,             # default
                            ccp_alpha=0.0                  # default
                            )
DT.fit(X_train, y_train)

RF = RandomForestClassifier(n_estimators=1,                # NON-default
                            criterion='gini',              # default
                            max_depth=None,                # default
                            min_samples_split=3,           # default
                            min_samples_leaf=1,            # default
                            min_weight_fraction_leaf=0.0,  # default
                            max_features=None,             # NON-default
                            max_leaf_nodes=None,           # default
                            min_impurity_decrease=0.0,     # default
                            bootstrap=False,               # NON-default
                            oob_score=False,               # default
                            n_jobs=None,                   # default
                            random_state=random_seed,      # NON-default
                            verbose=0,                     # default
                            warm_start=False,              # default
                            class_weight=None,             # default
                            ccp_alpha=0.0,                 # default
                            max_samples=None               # default
                            )
RF.fit(X_train, y_train)

RF_pred = RF.predict(X_test)
RF_proba = RF.predict_proba(X_test)
DT_pred = DT.predict(X_test)
DT_proba = DT.predict_proba(X_test)

# Here we validate that the outputs are actually equal, with their respective percentage of how many rows are NOT equal
print('If DT_pred = RF_pred:', np.array_equal(DT_pred, RF_pred),
      '; Percentage of not equal:', (DT_pred != RF_pred).sum()/len(DT_pred))
print('If DT_proba = RF_proba:', np.array_equal(DT_proba, RF_proba),
      '; Percentage of not equal:', (DT_proba != RF_proba).sum()/len(DT_proba))

# A plot that shows where those differences are concentrated
sns.set(style="darkgrid")
mask = (RF_proba[:,1] - DT_proba[:,1]) != 0
only_differences = (RF_proba[:,1] - DT_proba[:,1])[mask]
sns.kdeplot(only_differences, shade=True, color="r")
plt.title('Plot of only differences in probs scores')
plt.show()

Output:

I even found an answer that compares XGBoost with a DecisionTree and says they are almost identical, yet when I test their probability outputs they are fairly different. So, am I doing something wrong here? How can I get the same probabilities for those two models? Is there a possibility to get True for those two print() statements in the code above?
The differences appear to come from random states, despite your best efforts. For the random forest's randomization to be effective, it needs to give each component decision tree a different random state (via sklearn.ensemble._base._set_random_states; see the sklearn source). You can check in your code that while RF.random_state and DT.random_state are both 42, RF.estimators_[0].random_state is 1608637542. With bootstrap=False and max_features=None, I believe this only changes the outcome of tied-gain splits, so the results are very close on the training set. That can translate to slightly larger differences on a test set.
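To make this concrete, here is a minimal sketch reusing RF, DT, X_train and X_test from the question. The derived seed value and the final comparison result are expectations rather than guarantees, and the standalone-tree check is a hypothetical verification, not something from the answer itself:

# The top-level seeds match, but the forest reseeds its internal tree.
print(DT.random_state)                 # 42
print(RF.random_state)                 # 42
inner_seed = RF.estimators_[0].random_state
print(inner_seed)                      # a derived integer seed, not 42

# Hypothetical check: a standalone tree built with the derived seed and the
# same hyperparameters should reproduce the forest's single tree.
from sklearn.tree import DecisionTreeClassifier
import numpy as np

DT_inner = DecisionTreeClassifier(min_samples_split=3, random_state=inner_seed)
DT_inner.fit(X_train, y_train)
# Expected to print True if the per-tree seed really is the only difference.
print(np.array_equal(DT_inner.predict_proba(X_test), RF.predict_proba(X_test)))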
5
3
71,846,838
2022-4-12
https://stackoverflow.com/questions/71846838/gitlab-ci-cd-pipeline-runs-django-unit-tests-before-migrations
Problem I'm trying to set up a test stage in Gitlab's CI/CD. Locally, running the unit tests goes fine and as expected. In Gitlab's CI/CD, though, when running the script coverage run manage.py test -v 2 && coverage report the unit tests are executing before the migrations are completed in the test database, which is unexpected, and will always fail. Migrations on the test database need to run prior to unit tests executing. Any idea why this behavior might happen? When running python manage.py test these are the steps that happen by default: Creation of test databases. Database migration. Run system checks. Running tests. Report on the number of tests and success / failure. Removing test databases. In the local test run output below, you can see those exact steps, in that sequence happening. The problem is that in Gitlab's CI/CD pipeline, step 2 and 4 are switched for some mysterious reason to me. Things I've already tried Removing coverage, and only executing python manage.py test Result: same error Removing user test, where Gitlab says the test fails: Result: same error Removing all but one unit test to attempt to isolate: Result: same error Adding and activating a virtual environment inside Gitlab's container: Result: same error Trying a SQLite3: Result: migrations fail due to necessary array field necessary in the project not supported in SQLite3 Gitlab CI/CD Output My actual output is much bigger. But here's the relevant parts: $ echo $TEST_SECRETS | base64 -d > config/settings/local.py $ echo RUNNING UNIT TESTS RUNNING UNIT TESTS $ coverage run manage.py test -v 2 && coverage report Creating test database for alias 'default' ('test_mia')... test_dataconnector_category_creation_delete (apps.core.tests.test_models.DataConnectorCategoryTestCase) Verify data connector category creation ... ok test_compare_airbytesetting (apps.services.tests.test_model_airbytesetting.AirbyteSettingModel_TestClass) Compare object from self.airbytesetting to ensure data is in proper format. ... ok test_fieldconstraints_airbytesetting (apps.services.tests.test_model_airbytesetting.AirbyteSettingModel_TestClass) Compares Field Constraint Values for Accuracy ... ok test_update_airbytesetting (apps.services.tests.test_model_airbytesetting.AirbyteSettingModel_TestClass) Test updating values in self.airbytesetting and saving that to the test database. ... ok test_compare_airbytesource (apps.services.tests.test_model_airbytesource.AirbyteSourceModel_TestClass) Compare object from self.airbytesource to ensure data is in proper format. ... ok test_delete_airbytesource (apps.services.tests.test_model_airbytesource.AirbyteSourceModel_TestClass) Test Deleting self.airbytesource object from database ... ok test_fieldconstraints_airbytesource (apps.services.tests.test_model_airbytesource.AirbyteSourceModel_TestClass) Compares Field Constraint Values for Accuracy ... ok test_update_airbytesource (apps.services.tests.test_model_airbytesource.AirbyteSourceModel_TestClass) Test updating values in self.airbytesource and saving that to the test database. ... ok test_compare_airbytestream (apps.services.tests.test_model_airbytestream.AirbyteStreamModel_TestClass) Compare object from self.airbytestream to ensure data is in proper format. ... ok test_delete_airbytestream (apps.services.tests.test_model_airbytestream.AirbyteStreamModel_TestClass) Test Deleting self.airbytestream object from database ... 
ok test_fieldconstraints_airbytestream (apps.services.tests.test_model_airbytestream.AirbyteStreamModel_TestClass) Compares Field Constraint Values for Accuracy ... ok test_update_airbytestream (apps.services.tests.test_model_airbytestream.AirbyteStreamModel_TestClass) Test updating values in self.airbytestream and saving that to the test database. ... ok test_compare_airbytesynccatalog (apps.services.tests.test_model_airbytesynccatalog.AirbyteSyncCatalogModel_TestClass) Compare object from self.airbytesynccatalog to ensure data is in proper format. ... ok test_delete_airbytesynccatalog (apps.services.tests.test_model_airbytesynccatalog.AirbyteSyncCatalogModel_TestClass) Test Deleting self.airbytesynccatalog object from database ... ok test_fieldconstraints_airbytesynccatalog (apps.services.tests.test_model_airbytesynccatalog.AirbyteSyncCatalogModel_TestClass) Compares Field Constraint Values for Accuracy ... ok test_update_airbytesynccatalog (apps.services.tests.test_model_airbytesynccatalog.AirbyteSyncCatalogModel_TestClass) Test updating values in self.airbytesynccatalog and saving that to the test database. ... ok test_compare_tableausetting (apps.services.tests.test_model_tableausetting.TableauSettingModel_TestClass) Compare object from self.tableausetting to ensure data is in proper format. ... ok test_createrandom_tableausetting (apps.services.tests.test_model_tableausetting.TableauSettingModel_TestClass) Test Creating a New (Second) TableauSetting Model Object with Random Values. ... ok test_delete_tableausetting (apps.services.tests.test_model_tableausetting.TableauSettingModel_TestClass) Test Deleting self.tableausetting object from database ... ok test_fieldconstraints_tableausetting (apps.services.tests.test_model_tableausetting.TableauSettingModel_TestClass) Compares Field Constraint Values for Accuracy ... ok test_update_tableausetting (apps.services.tests.test_model_tableausetting.TableauSettingModel_TestClass) Test updating values in self.tableausetting and saving that to the test database. ... ok test_get_namespace_custom_format (apps.services.tests.test_scripts_airbyte.ServicesTestCase) TO-DO: this method should live in AirByte class ... ok test_retrieve_me_user (apps.user.tests.test_views.UserViewTest) ... ok test_retrieve_user (apps.user.tests.test_views.UserViewTest) ... ok apps.user.tests.test_models (unittest.loader._FailedTest) ... ERROR ====================================================================== ERROR: apps.user.tests.test_models (unittest.loader._FailedTest) ---------------------------------------------------------------------- ImportError: Failed to import test module: apps.user.tests.test_models Traceback (most recent call last): File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute return self.cursor.execute(sql, params) psycopg2.errors.UndefinedTable: relation "user_user" does not exist LINE 1: INSERT INTO "user_user" ("password", "last_login", "is_super... 
^ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/usr/local/lib/python3.8/unittest/loader.py", line 436, in _find_test_path module = self._get_module_from_name(name) File "/usr/local/lib/python3.8/unittest/loader.py", line 377, in _get_module_from_name __import__(name) File "/builds/americommerce/mia/mia-ms-main/apps/user/tests/test_models.py", line 3, in <module> from apps.user.tests.factories import UserFactory, CompanyFactory, SignUpTokenFactory File "/builds/americommerce/mia/mia-ms-main/apps/user/tests/factories.py", line 23, in <module> class SignUpTokenFactory(factory.django.DjangoModelFactory): File "/builds/americommerce/mia/mia-ms-main/apps/user/tests/factories.py", line 28, in SignUpTokenFactory user = UserFactory() File "/usr/local/lib/python3.8/site-packages/factory/base.py", line 40, in __call__ return cls.create(**kwargs) File "/usr/local/lib/python3.8/site-packages/factory/base.py", line 528, in create return cls._generate(enums.CREATE_STRATEGY, kwargs) File "/usr/local/lib/python3.8/site-packages/factory/django.py", line 117, in _generate return super()._generate(strategy, params) File "/usr/local/lib/python3.8/site-packages/factory/base.py", line 465, in _generate return step.build() File "/usr/local/lib/python3.8/site-packages/factory/builder.py", line 262, in build instance = self.factory_meta.instantiate( File "/usr/local/lib/python3.8/site-packages/factory/base.py", line 317, in instantiate return self.factory._create(model, *args, **kwargs) File "/usr/local/lib/python3.8/site-packages/factory/django.py", line 166, in _create return manager.create(*args, **kwargs) File "/usr/local/lib/python3.8/site-packages/django/db/models/manager.py", line 85, in manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) File "/usr/local/lib/python3.8/site-packages/django/db/models/query.py", line 447, in create obj.save(force_insert=True, using=self.db) File "/usr/local/lib/python3.8/site-packages/django/contrib/auth/base_user.py", line 67, in save super().save(*args, **kwargs) File "/usr/local/lib/python3.8/site-packages/django/db/models/base.py", line 753, in save self.save_base(using=using, force_insert=force_insert, File "/usr/local/lib/python3.8/site-packages/django/db/models/base.py", line 790, in save_base updated = self._save_table( File "/usr/local/lib/python3.8/site-packages/django/db/models/base.py", line 895, in _save_table results = self._do_insert(cls._base_manager, using, fields, returning_fields, raw) File "/usr/local/lib/python3.8/site-packages/django/db/models/base.py", line 933, in _do_insert return manager._insert( File "/usr/local/lib/python3.8/site-packages/django/db/models/manager.py", line 85, in manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) File "/usr/local/lib/python3.8/site-packages/django/db/models/query.py", line 1254, in _insert return query.get_compiler(using=using).execute_sql(returning_fields) File "/usr/local/lib/python3.8/site-packages/django/db/models/sql/compiler.py", line 1397, in execute_sql cursor.execute(sql, params) File "/usr/local/lib/python3.8/site-packages/sentry_sdk/integrations/django/__init__.py", line 508, in execute return real_execute(self, sql, params) File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 66, in execute return self._execute_with_wrappers(sql, params, many=False, executor=self._execute) File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 75, in 
_execute_with_wrappers return executor(sql, params, many, context) File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute return self.cursor.execute(sql, params) File "/usr/local/lib/python3.8/site-packages/django/db/utils.py", line 90, in __exit__ raise dj_exc_value.with_traceback(traceback) from exc_value File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute return self.cursor.execute(sql, params) django.db.utils.ProgrammingError: relation "user_user" does not exist LINE 1: INSERT INTO "user_user" ("password", "last_login", "is_super... ^ ---------------------------------------------------------------------- Ran 25 tests in 0.648s FAILED (errors=1) Destroying test database for alias 'default' ('test_mia')... Operations to perform: Synchronize unmigrated apps: corsheaders, debug_toolbar, delete_migrations, django_filters, generic_relations, messages, mptt, rest_framework, rest_framework_jwt, rest_framework_swagger, staticfiles Apply all migrations: admin, auth, contenttypes, core, dashboard, reports, services, sessions, store, user Synchronizing apps without migrations: Creating tables... Running deferred SQL... Running migrations: Applying contenttypes.0001_initial... OK Applying contenttypes.0002_remove_content_type_name... OK Applying auth.0001_initial... OK Applying auth.0002_alter_permission_name_max_length... OK Applying auth.0003_alter_user_email_max_length... OK Applying auth.0004_alter_user_username_opts... OK Applying auth.0005_alter_user_last_login_null... OK Applying auth.0006_require_contenttypes_0002... OK Applying auth.0007_alter_validators_add_error_messages... OK Applying auth.0008_alter_user_username_max_length... OK Applying auth.0009_alter_user_last_name_max_length... OK Applying auth.0010_alter_group_name_max_length... OK Applying auth.0011_update_proxy_permissions... OK Applying auth.0012_alter_user_first_name_max_length... OK Applying user.0001_initial... OK Applying admin.0001_initial... OK Applying admin.0002_logentry_remove_auto_add... OK Applying admin.0003_logentry_add_action_flag_choices... OK Applying store.0001_initial... OK Applying dashboard.0001_initial... OK Applying core.0001_initial... OK Applying reports.0001_initial... OK Applying reports.0002_emailreport_store... OK Applying reports.0003_initial... OK Applying reports.0004_remove_emailreport_segments... OK Applying core.0002_dataconnectorcategory... OK Applying core.0003_auto_20220331_1610... OK Applying core.0004_delete_segment... OK Applying dashboard.0002_auto_20220331_1804... OK Applying dashboard.0003_flag_tableauview_widget_widgetsection... OK Applying reports.0005_emailreport_segments... OK Applying services.0001_initial... OK Applying services.0002_auto_20220325_0426... OK Applying services.0003_auto_20220331_1610... OK Applying sessions.0001_initial... OK Applying store.0002_initial... OK Applying store.0003_storefrontdataconnector... OK Applying user.0002_auto_20220325_0426... OK Applying user.0003_auto_20220331_1610... OK Applying user.0004_auto_20220331_1804... OK Applying user.0005_user_flags... OK System check identified no issues (0 silenced). Cleaning up project directory and file based variables 00:01 ERROR: Job failed: exit code 1 Local Test Run Output Creating test database for alias 'default' ('test_mia')... 
Operations to perform: Synchronize unmigrated apps: corsheaders, debug_toolbar, delete_migrations, django_filters, generic_relations, messages, mptt, rest_framework, rest_framework_jwt, rest_framework_swagger, staticfiles Apply all migrations: admin, auth, contenttypes, core, dashboard, reports, services, sessions, store, user Synchronizing apps without migrations: Creating tables... Running deferred SQL... Running migrations: Applying contenttypes.0001_initial... OK Applying contenttypes.0002_remove_content_type_name... OK Applying auth.0001_initial... OK Applying auth.0002_alter_permission_name_max_length... OK Applying auth.0003_alter_user_email_max_length... OK Applying auth.0004_alter_user_username_opts... OK Applying auth.0005_alter_user_last_login_null... OK Applying auth.0006_require_contenttypes_0002... OK Applying auth.0007_alter_validators_add_error_messages... OK Applying auth.0008_alter_user_username_max_length... OK Applying auth.0009_alter_user_last_name_max_length... OK Applying auth.0010_alter_group_name_max_length... OK Applying auth.0011_update_proxy_permissions... OK Applying auth.0012_alter_user_first_name_max_length... OK Applying user.0001_initial... OK Applying admin.0001_initial... OK Applying admin.0002_logentry_remove_auto_add... OK Applying admin.0003_logentry_add_action_flag_choices... OK Applying store.0001_initial... OK Applying dashboard.0001_initial... OK Applying core.0001_initial... OK Applying reports.0001_initial... OK Applying reports.0002_emailreport_store... OK Applying reports.0003_initial... OK Applying reports.0004_remove_emailreport_segments... OK Applying core.0002_dataconnectorcategory... OK Applying core.0003_auto_20220331_1610... OK Applying core.0004_delete_segment... OK Applying dashboard.0002_auto_20220331_1804... OK Applying dashboard.0003_flag_tableauview_widget_widgetsection... OK Applying reports.0005_emailreport_segments... OK Applying services.0001_initial... OK Applying services.0002_auto_20220325_0426... OK Applying services.0003_auto_20220331_1610... OK Applying sessions.0001_initial... OK Applying store.0002_initial... OK Applying store.0003_storefrontdataconnector... OK Applying user.0002_auto_20220325_0426... OK Applying user.0003_auto_20220331_1610... OK Applying user.0004_auto_20220331_1804... OK Applying user.0005_user_flags... OK System check identified no issues (0 silenced). test_dataconnector_category_creation_delete (apps.core.tests.test_models.DataConnectorCategoryTestCase) Verify data connector category creation ... ok test_compare_airbytesetting (apps.services.tests.test_model_airbytesetting.AirbyteSettingModel_TestClass) Compare object from self.airbytesetting to ensure data is in proper format. ... ok test_fieldconstraints_airbytesetting (apps.services.tests.test_model_airbytesetting.AirbyteSettingModel_TestClass) Compares Field Constraint Values for Accuracy ... ok test_update_airbytesetting (apps.services.tests.test_model_airbytesetting.AirbyteSettingModel_TestClass) Test updating values in self.airbytesetting and saving that to the test database. ... ok test_compare_airbytesource (apps.services.tests.test_model_airbytesource.AirbyteSourceModel_TestClass) Compare object from self.airbytesource to ensure data is in proper format. ... ok test_delete_airbytesource (apps.services.tests.test_model_airbytesource.AirbyteSourceModel_TestClass) Test Deleting self.airbytesource object from database ... 
ok test_fieldconstraints_airbytesource (apps.services.tests.test_model_airbytesource.AirbyteSourceModel_TestClass) Compares Field Constraint Values for Accuracy ... ok test_update_airbytesource (apps.services.tests.test_model_airbytesource.AirbyteSourceModel_TestClass) Test updating values in self.airbytesource and saving that to the test database. ... ok test_compare_airbytestream (apps.services.tests.test_model_airbytestream.AirbyteStreamModel_TestClass) Compare object from self.airbytestream to ensure data is in proper format. ... ok test_delete_airbytestream (apps.services.tests.test_model_airbytestream.AirbyteStreamModel_TestClass) Test Deleting self.airbytestream object from database ... ok test_fieldconstraints_airbytestream (apps.services.tests.test_model_airbytestream.AirbyteStreamModel_TestClass) Compares Field Constraint Values for Accuracy ... ok test_update_airbytestream (apps.services.tests.test_model_airbytestream.AirbyteStreamModel_TestClass) Test updating values in self.airbytestream and saving that to the test database. ... ok test_compare_airbytesynccatalog (apps.services.tests.test_model_airbytesynccatalog.AirbyteSyncCatalogModel_TestClass) Compare object from self.airbytesynccatalog to ensure data is in proper format. ... ok test_delete_airbytesynccatalog (apps.services.tests.test_model_airbytesynccatalog.AirbyteSyncCatalogModel_TestClass) Test Deleting self.airbytesynccatalog object from database ... ok test_fieldconstraints_airbytesynccatalog (apps.services.tests.test_model_airbytesynccatalog.AirbyteSyncCatalogModel_TestClass) Compares Field Constraint Values for Accuracy ... ok test_update_airbytesynccatalog (apps.services.tests.test_model_airbytesynccatalog.AirbyteSyncCatalogModel_TestClass) Test updating values in self.airbytesynccatalog and saving that to the test database. ... ok test_compare_tableausetting (apps.services.tests.test_model_tableausetting.TableauSettingModel_TestClass) Compare object from self.tableausetting to ensure data is in proper format. ... ok test_createrandom_tableausetting (apps.services.tests.test_model_tableausetting.TableauSettingModel_TestClass) Test Creating a New (Second) TableauSetting Model Object with Random Values. ... ok test_delete_tableausetting (apps.services.tests.test_model_tableausetting.TableauSettingModel_TestClass) Test Deleting self.tableausetting object from database ... ok test_fieldconstraints_tableausetting (apps.services.tests.test_model_tableausetting.TableauSettingModel_TestClass) Compares Field Constraint Values for Accuracy ... ok test_update_tableausetting (apps.services.tests.test_model_tableausetting.TableauSettingModel_TestClass) Test updating values in self.tableausetting and saving that to the test database. ... ok test_get_namespace_custom_format (apps.services.tests.test_scripts_airbyte.ServicesTestCase) TO-DO: this method should live in AirByte class ... ok test_company_creation (apps.user.tests.test_models.UserTestCase) Verify Company creation ... ok test_signup_token_creation (apps.user.tests.test_models.UserTestCase) Verify token creation ... ok test_user_creation (apps.user.tests.test_models.UserTestCase) Verify User creation ... ok test_user_creation_with_company (apps.user.tests.test_models.UserTestCase) Verify User can be assigned company ... ok test_user_creation_with_signup_token (apps.user.tests.test_models.UserTestCase) Verify User can be assigned token ... ok test_retrieve_me_user (apps.user.tests.test_views.UserViewTest) ... 
ok test_retrieve_user (apps.user.tests.test_views.UserViewTest) ... ok ---------------------------------------------------------------------- Ran 29 tests in 0.521s OK Destroying test database for alias 'default' ('test_mia')... Name Stmts Miss Cover --------------------------------------------------------------------- apps/core/admin.py 5 0 100% apps/core/exceptions.py 22 3 86% apps/core/mixins.py 10 3 70% apps/core/models.py 8 1 88% apps/core/serializers.py 18 0 100% apps/core/urls.py 5 0 100% apps/core/utils.py 69 33 52% apps/core/views.py 42 26 38% apps/dashboard/admin.py 15 0 100% apps/dashboard/models.py 34 7 79% apps/dashboard/urls.py 6 0 100% apps/dashboard/views.py 27 16 41% apps/delete_migrations/admin.py 1 0 100% apps/delete_migrations/models.py 1 0 100% apps/reports/admin.py 14 0 100% apps/reports/mixins.py 16 9 44% apps/reports/models/alert.py 47 3 94% apps/reports/models/email_report.py 31 1 97% apps/reports/serializers/alert.py 49 14 71% apps/reports/serializers/email_report.py 49 22 55% apps/reports/tasks.py 71 43 39% apps/reports/urls.py 9 0 100% apps/reports/utils.py 12 9 25% apps/reports/views/alerts.py 40 23 42% apps/reports/views/reports.py 55 26 53% apps/services/admin.py 23 6 74% apps/services/models/airbyte.py 37 0 100% apps/services/models/tableau.py 9 0 100% apps/services/scripts/airbyte.py 157 126 20% apps/services/scripts/tableau.py 69 50 28% apps/services/urls.py 3 0 100% apps/services/views.py 15 8 47% apps/store/admin.py 41 0 100% apps/store/mixins.py 12 7 42% apps/store/models/amazon_ads.py 18 2 89% apps/store/models/amazon_seller_partner.py 21 2 90% apps/store/models/data_connector_base.py 28 5 82% apps/store/models/data_connector_settings.py 18 3 83% apps/store/models/facebook.py 17 2 88% apps/store/models/google_ads.py 20 2 90% apps/store/models/google_analytics.py 18 2 89% apps/store/models/google_search_console.py 18 2 89% apps/store/models/hubspot.py 14 2 86% apps/store/models/klaviyo.py 12 1 92% apps/store/models/mailchimp.py 14 2 86% apps/store/models/shopify.py 17 2 88% apps/store/models/store.py 13 3 77% apps/store/models/storefront.py 19 8 58% apps/store/serializers/amazon_ads.py 22 11 50% apps/store/serializers/amazon_seller_partner.py 26 14 46% apps/store/serializers/base.py 8 0 100% apps/store/serializers/credentials.py 3 0 100% apps/store/serializers/facebook.py 15 7 53% apps/store/serializers/google_ads.py 23 12 48% apps/store/serializers/google_analytics.py 22 11 50% apps/store/serializers/google_search_console.py 22 11 50% apps/store/serializers/hubspot.py 6 0 100% apps/store/serializers/klaviyo.py 6 0 100% apps/store/serializers/list_data_connectors.py 24 0 100% apps/store/serializers/mailchimp.py 6 0 100% apps/store/serializers/shopify.py 21 10 52% apps/store/serializers/store.py 30 3 90% apps/store/serializers/storefront.py 6 0 100% apps/store/tasks.py 104 80 23% apps/store/urls.py 26 0 100% apps/store/utils.py 89 66 26% apps/store/views/check_data_connector_store.py 21 13 38% apps/store/views/credentials.py 13 3 77% apps/store/views/delete_data_connector.py 19 10 47% apps/store/views/detail_data_connectors.py 42 0 100% apps/store/views/list_data_connectors.py 50 32 36% apps/store/views/reset_data_connector.py 18 9 50% apps/store/views/store.py 15 8 47% apps/store/views/sync_data_connector.py 18 9 50% apps/user/admin.py 29 7 76% apps/user/apps.py 5 0 100% apps/user/forms.py 14 1 93% apps/user/models.py 72 13 82% apps/user/serializers.py 88 31 65% apps/user/signals.py 11 4 64% apps/user/tasks.py 15 6 60% apps/user/urls.py 7 0 100% 
apps/user/utils.py 4 0 100% apps/user/validators.py 7 2 71% apps/user/views.py 61 26 57% manage.py 13 6 54% --------------------------------------------------------------------- TOTAL 2250 879 61% .gitlab-ci.yml file My actual file is much bigger. But here's the relevant parts: services: - docker:20.10.7-dind - postgres:latest stages: - test variables: POSTGRES_DB: test_mia POSTGRES_USER: mia_dev POSTGRES_PASSWORD: test123 build_test_db: image: postgres script: - export PGPASSWORD=$POSTGRES_PASSWORD - psql -h "postgres" -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "SELECT 'OK' AS status;" test: image: python:3.8-slim-buster stage: test before_script: - | apt-get update && apt-get install -y --no-install-recommends \ gcc \ musl-dev \ python3-dev \ libpq-dev \ libgnutls28-dev \ git \ && rm -rf /var/lib/apt/lists/* - python -m pip install --upgrade pip - pip install uwsgi - pip install -r requirements.txt - echo $TEST_SECRETS | base64 -d > config/settings/local.py script: - echo RUNNING UNIT TESTS - coverage run manage.py test -v 2 && coverage report needs: - ["build_test_db"] environment: test # only: # - develop
Add a python manage.py migrate line before the coverage run manage.py test ... line in the script section of your .gitlab-ci.yml.
5
3
71,821,025
2022-4-10
https://stackoverflow.com/questions/71821025/remove-the-unnecessary-padding-inside-the-html-table-cells-which-is-being-gener
I have a python script which dynamically generates a HTML Table according to some specifications provided to it. A sample generated output is given below, As you can see, it is possible that, It is possible for two consecutive cells to be merged (it is done from the Python Script, again. With the help of Jinja2) It is also possible for one cell to be divided horizontally into two or more sections ^These two possibilities can occur at any position in the HTML table. The main issue I am facing here is this that I am getting unnecessary spacing/padding in all my cells (which are not horizontally divided). I don't understand the cause behind it and I can't seem to resolve this issue either. Can anyone help me to identify what changes do I need to make to make it look more presentable (i.e., no unnecessary space above and below the text in all cells). For example, in the first cell, there's unnecessary padding added above the "Calculus and Analytical Geometry" and also below the "Dr. Mushtaq Ahmed" Text. However, in the same HTML table, if you look at the cell which is horizontally divided into two sections, this padding is not being added. Similarly, I want to somehow align the "Name" and the "Room" in one Row. For example, for the first Cell, "Dr. Mushtaq Ahmed" and "Room #1" should be aligned in same row (rather than in two different rows). I have tried to make different changes to achieve this but they don't seem to work properly (One change did work for me, but it disrupted the shape of the Horizontally divided cell and hence, I didn't adopt it) What I am actually looking to get is similar to this image, The code for the HTML Table I shared above is as follows, :root { --border-strong: 3px solid #777; --border-normal: 1px solid gray; } body { font-family: Georgia, 'Times New Roman', Times, serif; } table>caption { font-size: 6mm; font-weight: bolder; letter-spacing: 1mm; } /* 210 x 297 mm */ table { width: 297mm; height: 210mm; border-collapse: collapse; } td { border: var(--border-normal); position: relative; font-size: 2.6mm; font-weight: bold; } tbody tr:nth-child(odd) { background: #eee; } tbody tr:last-child { border-bottom: var(--border-strong); } tbody tr> :last-child { border-right: var(--border-strong); } /* top header */ .top_head>th { width: 54mm; height: 10mm; vertical-align: bottom; border-top: var(--border-strong); border-bottom: var(--border-strong); border-right: 1px solid gray; } .top_head :first-child { width: 27mm; border: var(--border-strong); } .top_head :last-child { border-right: var(--border-strong); } /* left header */ tbody th { border-left: var(--border-strong); border-right: var(--border-strong); border-bottom: 1px solid gray; } tbody>tr:last-child th { border-bottom: var(--border-strong); } /* row */ tbody td>div { height: 34mm; overflow: hidden; } .vertical_span_all { font-size: 5mm; font-weight: bolder; text-align: center; border-bottom: var(--border-strong); } .vertical_span_all div { height: 10mm; } /* td contents */ .note { font-size: 3mm; } .note :last-child { float: right; } @page { margin: 5mm; } .new-page { page-break-before: always; } .center { text-align: center; } .left { text-align: left; margin-left: 6px; /*margin-top: 10px;*/ } .right { text-align: right; margin-right: 4px; } .teacher { margin-left: 4px; } td{ height:175px; width:150px; } <!DOCTYPE html> <html> <body> <!-- Heading --> <h1 class="center">CS-1D</h1> <!-- Table --> <table border="1"> <!-- Day/Periods --> <tr> <td class="center" ><br> <b>Day/Period</b></br> </td> <td class="center" > 
<b>I</b> </td> <td class="center" > <b>II</b> </td> <td class="center"> <b>III</b> </td> <td class="center"> <b>1:15-1:45</b> </td> <td class="center" > <b>IV</b> </td> <td class="center" > <b>V</b> </td> </tr> <!-- Monday --> <tr> <td class="center"> <b>Monday</b></td> <td colspan=1> <p class="left">Calculus and Analytical Geometry</p> <p class = "right">Room #1</p> <p class = "teacher">Dr.Mushtaq Ahmad</p> <!-- <p class="left">Calculus_and_Analytical_Geometry@Room_#[email protected]_Ahmad</p> <p class="right">1</p> <p class="teacher"></p> --></td> <td colspan=1> </td> <td colspan=1> <p class="left">Programming Fundamentals</p> <p class = "right">Room #9</p> <p class = "teacher">Dr. Rabia Maqsood</p> <!-- <p class="left">Programming_Fundamentals@Room_#9@Dr._Rabia_Maqsood</p> <p class="right">1</p> <p class="teacher"></p> --></td> <td rowspan="6" class="center"> <h2>L<br>U<br>N<br>C<br>H</h2> </td> <td colspan=2> <p class="left">Programming Fundamentals - Lab</p> <p class = "right">Lab #1</p> <p class = "teacher">Muhammad Azeem Iftikhar</p> <!-- <p class="left">Programming_Fundamentals_-_Lab@Lab_#1@Muhammad_Azeem_Iftikhar</p> <p class="right">2</p> <p class="teacher"></p> --></td> </tr> <!-- Tuesday --> <tr> <td class="center"> <b>Tuesday</b> </td> <td colspan=1> </td> <td colspan=1> </td> <td colspan=1> <p class="left">English Composition and Comprehension - Lab</p> <p class = "right">Room #1</p> <p class = "teacher">Rida Akram<hr>English Composition and Comprehension</p> <!-- <p class="left">English_Composition_and_Comprehension_-_Lab@Room_#1@Rida_Akram<hr>English_Composition_and_Comprehension@Room_#7@Sadia_Ashfaq</p> <p class="right">1</p> <p class="teacher"></p> --><p class="left"></p> <p class = "right">Room #7</p> <p class = "teacher">Sadia Ashfaq</p> <!-- <p class="left">English_Composition_and_Comprehension_-_Lab@Room_#1@Rida_Akram<hr>English_Composition_and_Comprehension@Room_#7@Sadia_Ashfaq</p> <p class="right">1</p> <p class="teacher"></p> --></td> <td colspan=1> <p class="left">English Composition and Comprehension - Lab</p> <p class = "right">Room #1</p> <p class = "teacher">Rida Akram</p> <!-- <p class="left">English_Composition_and_Comprehension_-_Lab@Room_#1@Rida_Akram</p> <p class="right">1</p> <p class="teacher"></p> --></td> <td colspan=1> <p class="left">English Composition and Comprehension</p> <p class = "right">Room #3</p> <p class = "teacher">Farah Iqbal</p> <!-- <p class="left">English_Composition_and_Comprehension@Room_#3@Farah_Iqbal</p> <p class="right">1</p> <p class="teacher"></p> --></td> </tr> <!-- Wednesday --> <tr> <td class="center"> <b>Wednesday</b> </td> <td colspan=1> <p class="left">Islamic Studies/Ethics</p> <p class = "right">Room #7</p> <p class = "teacher">Zia Ahmad</p> <!-- <p class="left">Islamic_Studies/Ethics@Room_#7@Zia_Ahmad</p> <p class="right">1</p> <p class="teacher"></p> --></td> <td colspan=1> </td> <td colspan=1> <p class="left">English Composition and Comprehension</p> <p class = "right">Room #6</p> <p class = "teacher">Sadia Ashfaq</p> <!-- <p class="left">English_Composition_and_Comprehension@Room_#6@Sadia_Ashfaq</p> <p class="right">1</p> <p class="teacher"></p> --></td> <td colspan=1> </td> <td colspan=1> <p class="left">Applied Physics</p> <p class = "right">Room #1</p> <p class = "teacher">Waheed Ahmad</p> <!-- <p class="left">Applied_Physics@Room_#1@Waheed_Ahmad</p> <p class="right">1</p> <p class="teacher"></p> --></td> </tr> <!-- Thursday --> <tr> <td class="center"> <b>Thursday</b> </td> <td colspan=1> </td> <td colspan=1> <p 
class="left">Calculus and Analytical Geometry</p> <p class = "right">Room #5</p> <p class = "teacher">Dr.Mushtaq Ahmad</p> <!-- <p class="left">Calculus_and_Analytical_Geometry@Room_#[email protected]_Ahmad</p> <p class="right">1</p> <p class="teacher"></p> --></td> <td colspan=1> <p class="left">Programming Fundamentals</p> <p class = "right">Room #10</p> <p class = "teacher">Dr. Rabia Maqsood</p> <!-- <p class="left">Programming_Fundamentals@Room_#10@Dr._Rabia_Maqsood</p> <p class="right">1</p> <p class="teacher"></p> --></td> <td colspan=1> <p class="left">English Composition and Comprehension</p> <p class = "right">Room #4</p> <p class = "teacher">Farah Iqbal</p> <!-- <p class="left">English_Composition_and_Comprehension@Room_#4@Farah_Iqbal</p> <p class="right">1</p> <p class="teacher"></p> --></td> <td colspan=1> <p class="left">Applied Physics</p> <p class = "right">Room #1</p> <p class = "teacher">Waheed Ahmad</p> <!-- <p class="left">Applied_Physics@Room_#1@Waheed_Ahmad</p> <p class="right">1</p> <p class="teacher"></p> --></td> </tr> <!-- Friday --> <tr> <td class="center"> <b>Friday</b> </td> <td colspan=1> <p class="left">Islamic Studies/Ethics</p> <p class = "right">Room #6</p> <p class = "teacher">Zia Ahmad</p> <!-- <p class="left">Islamic_Studies/Ethics@Room_#6@Zia_Ahmad</p> <p class="right">1</p> <p class="teacher"></p> --></td> <td colspan=2> <p class="left">Introduction to Information and Communication Technology - Lab</p> <p class = "right">Lab #4</p> <p class = "teacher">Aqsa Younas</p> <!-- <p class="left">Introduction_to_Information_and_Communication_Technology_-_Lab@Lab_#4@Aqsa_Younas</p> <p class="right">2</p> <p class="teacher"></p> --></td> <td colspan=2> <p class="left">English Composition and Comprehension - Lab</p> <p class = "right">Room #3</p> <p class = "teacher">Amna Farooq</p> <!-- <p class="left">English_Composition_and_Comprehension_-_Lab@Room_#3@Amna_Farooq</p> <p class="right">2</p> <p class="teacher"></p> --></td> </tr> </table> </body> <p class = "new-page"></div> </html> Here is the template.html code which is being used to generate the HTML Files. Multiple HTML files are being generated (for different sections) and then they are combined to make a PDF file. Therefore, it is important that height of the table is maintained, so that every generated HTML file covers one complete page of PDF File. Although, I don't need anybody to understand this template.html file and any changes made in the above shared HTML code can easily be done in this file ultimately. https://pastebin.com/154ErqUU pasted the code on pastebin because character limit had exceeded.
Assign the following CSS rulesets: Figure I Selector Property Value td position relative td padding 0 td div position relative td p position absolute td p margin 0 .course top 0.5rem .course left 0.5rem .room bottom 0.5rem .room right 0.5rem .teacher bottom 0.5rem .teacher left 0.5rem Basically <td> has default padding and <p> has default margins so they should zeroed out. All containers are position: relative and all content is position: absolute and should have a minimum of 2 directions from left, top, bottom, and right. The double stacked cells that should be rowspan="1" is too impractical since the rest of the cells would have to be rowspan="2" on the Tuesday row and then the other rows would be twice the size of a rowspan="1" in order to be uniform. Instead there are two <div>s stacked within that particular cell (Row 1, Column 3 of tbody). The entire table (with the exception of the borders) are set in rems and are referenced to the font-size in html (2.85mm = 1rem`). (See comments in example for details). Figure II Media Long Side Length Width HTML width 278.73mm 214.89mm Paper length 279mm 216mm You can adjust the whole table by changing the font-size in html, it appears 2.85mm is optimal. Also, the lunch column is reduced signifigantly which gave the rest of the table badly needed space. There is JavaScript that adds the content, but I wrote it to avoid cluttering the HTML -- it's not a requirement for the solution. Details are commented in example below const data = [ { row: 0, column: 1, course: 'Calculus and Analytical Geometry', room: 'Room #1', teacher: 'Dr.Mushtaq Ahmad', split: null }, { row: 0, column: 3, course: 'Programming Fundamentals', room: 'Room #9', teacher: 'Dr. Rabia Maqsood', split: null }, { row: 0, column: 5, course: 'Programming Fundamentals - Lab', room: 'Lab #1', teacher: 'Muhammad Azeem Iftikhar', split: null }, { row: 1, column: 3, course: 'English Composition and Comprehension - Lab', room: 'Room #1', teacher: 'Rida Akram', split: 'A' }, { row: 1, column: 3, course: 'English Composition and Comprehension', room: 'Room #7', teacher: 'Sadia Ashfaq', split: 'B' }, { row: 1, column: 4, course: 'English Composition and Comprehension - Lab', room: 'Room #1', teacher: 'Rida Akram', split: null }, { row: 1, column: 5, course: 'English Composition and Comprehension', room: 'Room #3', teacher: 'Farah Iqbal', split: null }, { row: 2, column: 1, course: 'Islamic Studies/Ethics', room: 'Room #7', teacher: 'Zia Ahmad', split: null }, { row: 2, column: 3, course: 'English Composition and Comprehension', room: 'Room #6', teacher: 'Sadia Ashfaq', split: null }, { row: 2, column: 5, course: 'Applied Physics', room: 'Room #1', teacher: 'Waheed Ahmad', split: null }, { row: 3, column: 2, course: 'Calculus and Analytical Geometry', room: 'Room #5', teacher: 'Dr.Mushtaq Ahmad', split: null }, { row: 3, column: 3, course: 'Programming Fundamentals', room: 'Room #10', teacher: 'Dr. 
Rabia Maqsood', split: null }, { row: 3, column: 4, course: 'English Composition and Comprehension', room: 'Room #4', teacher: 'Farah Iqbal', split: null }, { row: 3, column: 5, course: 'Applied Physics', room: 'Room #1', teacher: 'Waheed Ahmad', split: null }, { row: 4, column: 1, course: 'Islamic Studies/Ethics', room: 'Room #6', teacher: 'Zia Ahmad', split: null }, { row: 4, column: 2, course: 'Introduction to Information and Communication Technology - Lab', room: 'Lab #4', teacher: 'Aqsa Younas', split: null }, { row: 4, column: 3, course: 'English Composition and Comprehension - Lab', room: 'Room #3', teacher: 'Amna Farooq', split: null }, ]; const t = document.querySelector('table').tBodies[0]; const tRows = [...t.rows]; const cellData = (obj) => { let r = obj.row; let c = obj.column; let cell = tRows[r].cells[c]; if ('split' in obj) { let A, B; switch (obj.split) { case 'A': A = cell.querySelector('.A'); A.children[0].textContent = obj.course; A.children[1].textContent = obj.room; A.children[2].textContent = obj.teacher; break; case 'B': B = cell.querySelector('.B'); B.children[0].textContent = obj.course; B.children[1].textContent = obj.room; B.children[2].textContent = obj.teacher; break; default: cell.children[0].textContent = obj.course; cell.children[1].textContent = obj.room; cell.children[2].textContent = obj.teacher; break; } } }; data.forEach((obj) => cellData(obj)); /* ✳️ Rulesets pertaining to question */ :root { --thin: 1px solid #777; --thick: 3px solid #777; --split: 0.5px solid #777; } /* Global Reference Length 2.85mm = 1rem Changing the font-size changes all lengths in rem units. */ html { font: 500 2.85mm/1.5 Georgia; } /* Letter Size Paper (8.5 x 11in) 216 x 279mm 75.78 x 97.89rem 816 x 1056px */ /* Actual Dimensions 1rem = 2.85mm ❉ width: 97.8rem / 278.73mm βœ₯ height: 75.4rem / 214.89mm */ table { width: 97.8rem; height: 75.4rem; border-collapse: collapse; table-layout: fixed; } caption { font-size: 2.4rem; /* βœ₯ 10.26mm / 3.6rem */ font-weight: bolder; letter-spacing: 0.4rem; } thead th { width: 16.8rem; /* ❉ x 5 */ height: 3.8rem; /* βœ₯ */ vertical-align: bottom; border-top: var(--thick); border-bottom: var(--thick); border-right: var(--thin); } thead th:first-child { width: 9.8rem; /* ❉ */ border: var(--thick); } thead th:last-child { border-right: var(--thick); } tbody th { position: relative; border-left: var(--thick); border-right: var(--thick); border-bottom: var(--thin); } tbody > tr:last-child th { border-bottom: var(--thick); } tbody tr:nth-child(odd) { background: #eee; } tbody tr:last-child { border-bottom: var(--thick); } tbody tr > th:last-child, tbody tr > td:last-child { border-right: var(--thick); } td { position: relative; /* ✳️ */ height: 13.6rem; /* βœ₯ x 5 */ padding: 0; /* ✳️ */ border: var(--thin); font-size: 1rem; font-weight: bold; } td div.A { position: relative; /* ✳️ */ height: 6.8rem; border-bottom: var(--split); } td div.B { position: relative; /* ✳️ */ height: 6.8rem; border-top: var(--split); } td p { position: absolute; /* ✳️ */ margin: 0; /* ✳️ */ } .course { top: 0.5rem; /* ✳️ */ left: 0.5rem; /* ✳️ */ } .room { bottom: 0.5rem; /* ✳️ */ right: 0.5rem; /* ✳️ */ } .teacher { bottom: 0.5rem; /* ✳️ */ left: 0.5rem; /* ✳️ */ } th.lunch { width: 4rem; /* ❉ */ } td.lunch { vertical-align: middle; text-align: center; } td.lunch b { font-size: 1.5rem; } <!DOCTYPE html> <html lang="en"> <head> <title></title> <meta charset="utf-8" /> <meta http-equiv="X-UA-Compatible" content="IE=edge" /> <meta name="viewport" 
content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <link href="https://website.com/path/to/style.css" rel="stylesheet" /> <style></style> </head> <body> <table> <caption> CS-1D </caption> <thead> <tr> <th>Day/Period</th> <th>I</th> <th>II</th> <th>III</th> <th class="lunch">1:15<br />-<br />1:45</th> <th>IV</th> <th>V</th> </tr> </thead> <tbody> <tr> <th>Monday</th> <td> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </td> <td> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </td> <td> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </td> <td class="lunch" rowspan="5" colspan="1"> <b>L<br />U<br />N<br />C<br />H<br /></b> </td> <td colspan="2"> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </td> </tr> <tr> <th>Tuesday</th> <td> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </td> <td> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </td> <td> <div class="A"> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </div> <div class="B"> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </div> </td> <td> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </td> <td> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </td> </tr> <tr> <th>Wednesday</th> <td> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </td> <td> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </td> <td> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </td> <td> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </td> <td> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </td> </tr> <tr> <th>Thursday</th> <td> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </td> <td> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </td> <td> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </td> <td> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </td> <td> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </td> </tr> <tr> <th>Friday</th> <td> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </td> <td colspan="2"> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </td> <td colspan="2"> <p class="course"></p> <p class="room"></p> <p class="teacher"></p> </td> </tr> </tbody> </table> <script src='https://website.com/path.to/script.js'></script> <script></script> </body> </html>
4
1
71,818,149
2022-4-10
https://stackoverflow.com/questions/71818149/post-request-gets-blocked-on-python-backend-get-request-works-fine
I am building a web app where the front-end is done with Flutter while the back-end is with Python. GET requests work fine while POST requests get blocked because of CORS, I get this error message: Access to XMLHttpRequest at 'http://127.0.0.1:8080/signal' from origin 'http://localhost:57765' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. Below is my flutter function I used to send GET and POST requests: Future<dynamic> sendResponse() async { final url = 'http://127.0.0.1:8080/signal'; var data = { "signal": '8', }; var header = { 'Access-Control-Allow-Origin': '*', "Accept": "application/x-www-form-urlencoded, '*'" }; http.Response response = await http.post(Uri.parse(url), body: data, headers: header);//http.post(Uri.parse(url), body: data, headers: header);//http.get(Uri.parse(url)); if (response.statusCode == 200) { print(json.decode(response.body)); return jsonDecode(response.body); //print(json.decode(credentials.body)); } else { print(response.statusCode); throw Exception('Failed to load Entry'); } // var ResponseFromPython = await response.body;//jsonDecode(credentials.body); // return ResponseFromPython; } Below is my Python backend code using Flask: from flask import Flask,jsonify, request, make_response import json from flask_cors import CORS, cross_origin #declared an empty variable for reassignment response = '' app = Flask(__name__) #CORS(app, resources={r"/signal": {"origins": "*, http://localhost:59001"}}) #http://localhost:52857 #CORS(app, origins=['*']) app.config['CORS_HEADERS'] = ['Content-Type','Authorization'] @app.route("/") def index(): return "Congratulations, it worked" @app.route("/signal", methods = ['POST', 'GET']) #, @cross_origin(origins='http://localhost:57765',headers=['Content-Type','Authorization', 'application/x-www-form-urlencoded','*'], upports_credentials=True)# allow all origins all methods. def multbytwo(): """multiple signal by 2 just to test.""" global response if (request.method=='POST'): # request.headers.add("Access-Control-Allow-Origin", "*") request_data = request.data #getting the response data request_data = json.loads(request_data.decode('utf-8')) #converting it from json to key value pair comingSignal = request_data['signal'] response = make_response(comingSignal, 201)#jsonify(comingSignal*2) response.headers.add('Access-Control-Allow-Origin', '*') response.headers.add('Access-Control-Allow-Methods", "DELETE, POST, GET, OPTIONS') response.headers.add('Access-Control-Allow-Headers", "Content-Type, Authorization, X- Requested-With') return response else: try: #scaler = request.args.get("signal") out = 9 * 2 response = jsonify(out) response.headers.add("Access-Control-Allow-Origin", "*") return response #sending data back to your frontend app except ValueError: return "invalid input xyz" if __name__ == "__main__": app.run(host="127.0.0.1", port=8080, debug=True) Below are the troubleshooting steps I made: -Added the flask_CORS package in python I tried here different combination from using general parameters like CORS(app, resources={r"/signal": {"origins": "*"}}) did not help. Also tried the decorator @cross-origin and did not help -Added some headers to the response itself to indicate that it accepts cross-origin You see in my python code I tried adding a lot of headers to the response, nothing seem to respond. 
-Tried installing a Chrome extension that bypasses the CORS check. I tried the Allow CORS and CORS Unblock extensions and followed the steps described in this answer: How chrome extensions be enabled when flutter web debugging?. Although these extensions are supposed to add the CORS allow header to the response, I still got the same error. I still do not fully understand the CORS concept, but I have tried a lot of workarounds and nothing works. Please help.
I finally figured out what was going on. First, I disabled the same-origin policy in Chrome by opening the Start menu on Windows and running this command directly:

chrome.exe --disable-site-isolation-trials --disable-web-security --user-data-dir="D:\anything"

This opened a separate Chrome window that does not block cross-origin requests; call it the CORS-free window. It finally let me communicate with my Python code and understand what was going on. With Chrome's default settings I was not shown anything useful about the response, just a 500 error code. I copied the localhost link and port and pasted them into the CORS-free window, and that window showed helpful information: it was a simple JSON decoding error. I went back to my Flutter code and changed the HTTP POST request, wrapping the body in jsonEncode:

http.Response response = await http.post(Uri.parse(url), body: jsonEncode(data), headers: header);

Now the POST request returns a correct response with the default Chrome settings. The CORS blocking had simply hidden the real error from me.
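For completeness, here is a minimal sketch of what the matching Flask side can look like once the client sends a jsonEncode()-ed body. This is an illustrative rewrite using flask-cors rather than the original backend; the doubling logic is kept only as the test behaviour from the question:

from flask import Flask, jsonify, request
from flask_cors import CORS

app = Flask(__name__)
CORS(app)  # flask-cors adds the Access-Control-Allow-Origin headers, including for preflight requests

@app.route("/signal", methods=["POST"])
def signal():
    # force=True parses the body as JSON even if the client did not send
    # a Content-Type: application/json header
    data = request.get_json(force=True)
    incoming = int(data["signal"])
    return jsonify(incoming * 2), 201

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8080, debug=True)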
7
1
71,842,013
2022-4-12
https://stackoverflow.com/questions/71842013/seaborn-pairplot-with-log-scale-only-for-specific-columns
I have a dataframe and I'm using seaborn pairplot to plot one target column vs the rest of the columns. The code is below:

import seaborn as sns
import matplotlib.pyplot as plt

tgt_var = 'AB'
var_lst = ['A', 'GH', 'DL', 'GT', 'MS']

pp = sns.pairplot(data=df, y_vars=[tgt_var], x_vars=var_lst)
pp.fig.set_figheight(6)
pp.fig.set_figwidth(20)

var_lst is not a static list; I just provided an example. What I need is to plot tgt_var on the Y axis and each var_lst item on the X axis. I'm able to do this with the above code, but I also want to use a log scale on the X axis only if the var_lst item is 'GH' or 'MS', and a normal scale for the rest. Is there any way to achieve this?
Iterate over pp.axes.flat and set xscale="log" if the xlabel matches "GH" or "MS":

log_columns = ["GH", "MS"]
for ax in pp.axes.flat:
    if ax.get_xlabel() in log_columns:
        ax.set(xscale="log")

Full example with the iris dataset, where the petal columns are put on a log x scale:

import seaborn as sns

df = sns.load_dataset("iris")
pp = sns.pairplot(df)

log_columns = ["petal_length", "petal_width"]
for ax in pp.axes.flat:
    if ax.get_xlabel() in log_columns:
        ax.set(xscale="log")
4
9
71,838,934
2022-4-12
https://stackoverflow.com/questions/71838934/arangodb-read-timed-out-read-timeout-60
I have a problem. I am using ArangoDB enterprise:3.8.6 via Docker, but unfortunately my query takes longer than 30 s. When it fails, the error is:

arangodb HTTPConnectionPool(host='127.0.0.1', port=8529): Read timed out. (read timeout=60)

My collection is around 4 GB, with roughly 0.9-1.2 million documents. How can I get the complete collection, with all documents, without this error?

Python code (runs locally on my machine):

from arango import ArangoClient

# Initialize the ArangoDB client.
client = ArangoClient()

# Connect to database as user.
db = client.db(<db>, username=<username>, password=<password>)

cursor = db.aql.execute(f'FOR doc IN students RETURN doc', batch_size=10000)
result = [doc for doc in cursor]
print(result[0])

[OUT] arangodb HTTPConnectionPool(host='127.0.0.1', port=8529): Read timed out. (read timeout=60)

docker-compose.yml for ArangoDB:

version: '3.7'
services:
  database:
    container_name: database__arangodb
    image: arangodb/enterprise:3.8.6
    environment:
      - ARANGO_LICENSE_KEY=<key>
      - ARANGO_ROOT_PASSWORD=root
      - ARANGO_CONNECT_TIMEOUT=300
      - ARANGO_READ_TIMEOUT=600
    ports:
      - 8529:8529
    volumes:
      - C:/Users/dataset:/var/lib/arangodb3

What I tried:

cursor = db.aql.execute('FOR doc IN <Collection> RETURN doc', stream=True)
while cursor.has_more():  # Fetch until nothing is left on the server.
    cursor.fetch()
while not cursor.empty():  # Pop until nothing is left on the cursor.
    cursor.pop()

[OUT] CursorNextError: [HTTP 404][ERR 1600] cursor not found

# AND

cursor = db.aql.execute('FOR doc IN <Collection> RETURN doc', stream=True, ttl=3600)
collection = [doc for doc in cursor]

[OUT] nothing  # Runs, runs and runs for more than 1 1/2 hours

What worked, but only for 100 documents:

cursor = db.aql.execute(f'FOR doc IN <Collection> LIMIT 100 RETURN doc', stream=True)
collection = [doc for doc in cursor]
You can increase the HTTP client's timeout by using a custom HTTP client for Arango. The default is set to 60 seconds in python-arango's DefaultHTTPClient (subclass that class rather than the abstract HTTPClient base, which does not implement the request methods). from arango.http import DefaultHTTPClient class MyCustomHTTPClient(DefaultHTTPClient): REQUEST_TIMEOUT = 1000 # Set the timeout you want in seconds here # Pass an instance of your custom HTTP client to Arango: client = ArangoClient( http_client=MyCustomHTTPClient() )
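A minimal sketch of how the larger timeout could be combined with the batched cursor from the question (the database name, credentials and the students collection are placeholders from the question, and ttl keeps the server-side cursor alive between batch fetches, which also helps against the "cursor not found" error):
from arango import ArangoClient
from arango.http import DefaultHTTPClient

class MyCustomHTTPClient(DefaultHTTPClient):
    REQUEST_TIMEOUT = 1000  # seconds

client = ArangoClient(hosts="http://127.0.0.1:8529", http_client=MyCustomHTTPClient())
db = client.db("<db>", username="<username>", password="<password>")

# Iterate lazily instead of materialising everything at once;
# follow-up batches are fetched from the server as needed.
cursor = db.aql.execute("FOR doc IN students RETURN doc", batch_size=10000, ttl=3600)
for doc in cursor:
    process(doc)  # placeholder for whatever you do with each document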
4
6
71,845,596
2022-4-12
https://stackoverflow.com/questions/71845596/python-typing-narrowing-type-from-function-that-returns-a-union
I have difficulties finding return types that satisfy mypy. I have two functions. The first one returns a Union type, since the type depends on the parameter given to the function. The second function calls the first one with a default parameter. Thus, the type is not a Union type -- it can be narrowed down to one of the types in the Union. Let me give you an example of a very simple version of my problem: from typing import Union def first(as_string: bool) -> Union[str, int]: if as_string: return "42" else: return 42 def second(as_string: bool) -> str: return first(True) This causes an error: Incompatible return value type (got "str", expected "Union[str, int]") How can I prevent mypy from throwing an error while still using type hints? If you want to suggest splitting the first function, please take in mind that this is just a simplification. My (first) function receives a function (sklearn.metrics function) and will return a scalar most of the time. Only when applying a confusion matrix, the type changes. The first function does some preprocessing and than applies the metric. I simply want to have a differently named function for the confusion matrix since I think it is a special case of a metric.
mypy can't infer the relationship between the types of the parameters and the types of the return values. You have two options: Use an assert to make sure that the type is correct: def second(as_string) -> str: ret = first(True) assert isinstance(ret, str) return ret You can use typing.cast to assert to the typechecker without altering runtime behavior (i.e. without introducing the potential for an AssertionError to be raised). from typing import cast def second(as_string) -> str: return cast(str, first(True))
19
17
71,828,861
2022-4-11
https://stackoverflow.com/questions/71828861/filtering-audio-signal-in-tensorflow
I am building an audio-based deep learning model. As part of the preprocessing I want to augment the audio in my datasets. One augmentation that I want to do is to apply a RIR (room impulse response) function. I am working with Python 3.9.5 and TensorFlow 2.8. In Python the standard way to do it, if the RIR is given as a finite impulse response (FIR) of n taps, is using SciPy's lfilter: import numpy as np from scipy import signal import soundfile as sf h = np.load("rir.npy") x, fs = sf.read("audio.wav") y = signal.lfilter(h, 1, x) Running this in a loop over all the files may take a long time. Doing it with the TensorFlow map utility on TensorFlow datasets: # define filter function def h_filt(audio, label): h = np.load("rir.npy") x = audio.numpy() y = signal.lfilter(h, 1, x) return tf.convert_to_tensor(y, dtype=tf.float32), label # apply it via TF map on dataset aug_ds = ds.map(h_filt) Using tf.numpy_function: tf_h_filt = tf.numpy_function(h_filt, [audio, label], [tf.float32, tf.string]) # apply it via TF map on dataset aug_ds = ds.map(tf_h_filt) I have two questions: Is this way correct and fast enough (less than a minute for 50,000 files)? Is there a faster way to do it? E.g. replace the SciPy function with a built-in TensorFlow function. I didn't find the equivalent of lfilter or SciPy's convolve.
Here is one way you could do it. Notice that the TensorFlow function is designed to receive batches of inputs with multiple channels, and the filter can have multiple input channels and multiple output channels. Let N be the size of the batch, I the number of input channels, F the filter width, L the input width and O the number of output channels. Using padding='SAME' it maps an input of shape (N, L, I) and a filter of shape (F, I, O) to an output of shape (N, L, O). import numpy as np from scipy import signal import tensorflow as tf # data to compare the two approaches x = np.random.randn(100) h = np.random.randn(11) # h y_lfilt = signal.lfilter(h, 1, x) # Since the denominator of your filter transfer function is 1 # the output of lfilter matches the convolution y_np = np.convolve(h, x) assert np.allclose(y_lfilt, y_np[:len(y_lfilt)]) # now let's do the convolution using tensorflow y_tf = tf.nn.conv1d( # x must be padded with half of the size of h # to use padding 'SAME' np.pad(x, len(h) // 2).reshape(1, -1, 1), # the time axis of h must be flipped h[::-1].reshape(-1, 1, 1), # a 1x1 matrix of filters stride=1, padding='SAME', data_format='NWC') assert np.allclose(y_lfilt, np.squeeze(y_tf)[:len(y_lfilt)])
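To plug this into the tf.data pipeline from the question, a rough sketch (the dataset ds and the FIR taps h are assumed from the question; the constants are built once, outside the mapped function):
import tensorflow as tf

pad = len(h) // 2
# flip the taps and shape them as an (F, 1, 1) conv1d filter, once
h_kernel = tf.constant(h[::-1].copy().reshape(-1, 1, 1), dtype=tf.float32)

def h_filt(audio, label):
    # audio is a 1-D waveform tensor
    x = tf.pad(tf.cast(audio, tf.float32), [[pad, pad]])
    x = tf.reshape(x, (1, -1, 1))                      # (N=1, L, I=1)
    y = tf.nn.conv1d(x, h_kernel, stride=1,
                     padding='SAME', data_format='NWC')
    return tf.squeeze(y), label

aug_ds = ds.map(h_filt, num_parallel_calls=tf.data.AUTOTUNE)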
7
4
71,837,032
2022-4-12
https://stackoverflow.com/questions/71837032/python3-file-open-write-exception-handling
Does the code below provide correct exception handling. My goal is, to not attempt the file.write() unless the file was successfully opened and ensure the file is closed. I am not concerned with the exact errors of why the file did not open or why the file did not write. Python version 3.6.9. OS Linux. data = "Some data" filename = "test.txt" try: file = open(filename, 'w+') except: print("Error opening file") else: try: file.write(data) except: print("Error writing to file") finally: file.close()
You should basically never use a blanket except; always spell out which exception(s) exactly you want to handle. Here is a refactoring using a context handler, so you can avoid the explicit finally: close data = "Some data" filename = "test.txt" try: with open(filename, 'w+') as file: try: file.write(data) except (IOError, OSError): print("Error writing to file") except (FileNotFoundError, PermissionError, OSError): print("Error opening file") There may be more exceptions you should enumerate; I rattled these off off the top of my head.
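Since IOError has been an alias of OSError since Python 3.3, and FileNotFoundError and PermissionError are both subclasses of OSError, the whole thing can also be collapsed to a single exception type per block if you prefer, for example:
data = "Some data"
filename = "test.txt"

try:
    with open(filename, 'w+') as file:
        try:
            file.write(data)
        except OSError:
            print("Error writing to file")
except OSError:
    print("Error opening file")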
6
6
71,812,767
2022-4-9
https://stackoverflow.com/questions/71812767/how-to-stop-python-from-truncating-print-statements
I have a print statement that prints a very long Pandas DataFrame/Series, but I need all the information. When printing, Python gives 0 [{This is a long stateme.....}] 1 [{This is a long stateme.....}] and truncates the output with dots. I want to see the entire output without Python cutting it short. Is there a setting I can change so that print statements show the full values?
Pandas allows options to see a set number of rows, as well as width pd.set_option("display.max_rows", n) pd.set_option("display.expand_frame_repr", True) pd.set_option('display.width', 1000) To increase the number of characters printed per column, use: pd.set_option("display.max_colwidth", max_characters) By default, a column only prints up to 50 characters, this option allows you to print an max_characters amount of characters. See the docs below: https://pandas.pydata.org/docs/user_guide/options.html
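If you only need the full output for a single print rather than globally, pandas also lets you set these options temporarily with a context manager, for example (df being the DataFrame or Series you are printing):
import pandas as pd

with pd.option_context('display.max_rows', None,
                       'display.max_columns', None,
                       'display.max_colwidth', None):
    print(df)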
7
1
71,828,733
2022-4-11
https://stackoverflow.com/questions/71828733/flake8-on-all-files-under-specific-subdirectories
I'm trying to use flake8 only on 3 specific subdirectories: features_v2, rules_v2 and indicators_v2. In order to test if my pattern is correct I tried applying it only to features_v2 at first, so I came up with this pattern: exclude = ^(?!.*features_v2).*$ but sadly it doesn't seem to work. Does flake8 not support lookarounds, or does it and I did something wrong? Is there a better way to use flake8 on 3 different subdirectories?
flake8's exclude is not a regex but a glob -- it cannot support what you're looking for may I suggest instead to run flake8 on the directories you want? flake8 features_v2 rules_v2 indicators_v2 disclaimer: I maintain flake8
8
13
71,820,357
2022-4-10
https://stackoverflow.com/questions/71820357/obtaining-tweets-image-urls-with-tweepy-v2-on-twitter-api-v2
What is an elegant way to access the URLs of tweet pictures with tweepy v2? Twitter released a v2 of their API and tweepy adjusted their Python module to it (tweepy v2). Let's say for example I have a dataframe of tweets created with tweepy, holding the tweet id and so on. There is a row with this example tweet: https://twitter.com/federalreserve/status/1501967052080394240 The picture is saved under a different URL and the documentation on tweepy v2 does not reveal if there is a way to access it. https://pbs.twimg.com/media/FNgO9vNXIAYy2LK?format=png&name=900x900 Reading through the requests JSON obtained through tweepy.Client(bearer_token, return_type = requests.Response), it did not hold the required links. I am already using the following parameters in the client: client.get_liked_tweets(user_id, tweet_fields = ['created_at', 'text', 'id', 'attachments', 'author_id', 'entities'], media_fields=['preview_image_url', 'url'], user_fields=['username'], expansions=['attachments.media_keys', 'author_id'] ) What would be a way to obtain or generate the link to the picture of the tweet? Preferably through tweepy v2 itself? Thank you in advance.
The arguments of get_liked_tweets seem to be correct. Have you looked in the includes dict at the root of the response? The media fields are not in data, so you should have something like that: { "data": { "attachments": { "media_keys": [ "16_1211797899316740096" ] }, "author_id": "2244994945", "id": "1212092628029698048", "text": "[...]" }, "includes": { "media": [ { "media_key": "16_1211797899316740096", "preview_image_url": "[...]", "url": "[...]" } ], "users": [ { "id": "2244994945", "username": "TwitterDev" } ] } }
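With the default tweepy.Response return type, one way to join the tweets in data with the media objects in includes might look like the sketch below (client and user_id are taken from the question; the attribute names follow tweepy's Media object, which exposes media_key, url and preview_image_url):
response = client.get_liked_tweets(
    user_id,
    tweet_fields=['created_at', 'text', 'attachments'],
    media_fields=['preview_image_url', 'url'],
    expansions=['attachments.media_keys'],
)

# index the expanded media objects by their media_key
media_by_key = {m.media_key: m for m in response.includes.get('media', [])}

for tweet in response.data:
    for key in (tweet.attachments or {}).get('media_keys', []):
        media = media_by_key.get(key)
        if media is not None:
            # photos carry url, videos/GIFs carry preview_image_url
            print(media.url or media.preview_image_url)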
5
5
71,827,522
2022-4-11
https://stackoverflow.com/questions/71827522/unable-to-install-pickle5
PS C:\Users\Lenovo> pip install pickle5 Collecting pickle5 Using cached pickle5-0.0.11.tar.gz (132 kB) Preparing metadata (setup.py) ... done Using legacy 'setup.py install' for pickle5, since package 'wheel' is not installed. Installing collected packages: pickle5 Running setup.py install for pickle5 ... error error: subprocess-exited-with-error Γ— Running setup.py install for pickle5 did not run successfully. β”‚ exit code: 1 ╰─> [36 lines of output] running install running build running build_py creating build creating build\lib.win-amd64-3.10 creating build\lib.win-amd64-3.10\pickle5 copying pickle5\pickle.py -> build\lib.win-amd64-3.10\pickle5 copying pickle5\pickletools.py -> build\lib.win-amd64-3.10\pickle5 copying pickle5\__init__.py -> build\lib.win-amd64-3.10\pickle5 creating build\lib.win-amd64-3.10\pickle5\test copying pickle5\test\pickletester.py -> build\lib.win-amd64-3.10\pickle5\test copying pickle5\test\test_pickle.py -> build\lib.win-amd64-3.10\pickle5\test copying pickle5\test\test_picklebuffer.py -> build\lib.win-amd64-3.10\pickle5\test copying pickle5\test\__init__.py -> build\lib.win-amd64-3.10\pickle5\test running build_ext building 'pickle5._pickle' extension creating build\temp.win-amd64-3.10 creating build\temp.win-amd64-3.10\Release creating build\temp.win-amd64-3.10\Release\pickle5 C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -IC:\Users\Lenonvo\AppData\Local\Programs\Python\Python310\include -IC:\Users\Lenonvo\AppData\Local\Programs\Python\Python310\Include -IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include -IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt -IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared -IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\um -IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\winrt -IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\cppwinrt /Tcpickle5/_pickle.c /Fobuild\temp.win-amd64-3.10\Release\pickle5/_pickle.obj _pickle.c C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -IC:\Users\Lenonvo\AppData\Local\Programs\Python\Python310\include -IC:\Users\Lenonvo\AppData\Local\Programs\Python\Python310\Include -IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\include -IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt -IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared -IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\um -IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\winrt -IC:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\cppwinrt /Tcpickle5/picklebufobject.c /Fobuild\temp.win-amd64-3.10\Release\pickle5/picklebufobject.obj picklebufobject.c pickle5/picklebufobject.c(20): warning C4273: 'PyPickleBuffer_FromObject': inconsistent dll linkage C:\Users\Lenonvo\AppData\Local\Programs\Python\Python310\include\cpython/picklebufobject.h(18): note: see previous definition of 'PyPickleBuffer_FromObject' pickle5/picklebufobject.c(39): warning C4273: 'PyPickleBuffer_GetBuffer': inconsistent dll linkage C:\Users\Lenonvo\AppData\Local\Programs\Python\Python310\include\cpython/picklebufobject.h(22): note: see previous definition of 'PyPickleBuffer_GetBuffer' pickle5/picklebufobject.c(58): warning C4273: 
'PyPickleBuffer_Release': inconsistent dll linkage C:\Users\Lenonvo\AppData\Local\Programs\Python\Python310\include\cpython/picklebufobject.h(24): note: see previous definition of 'PyPickleBuffer_Release' pickle5/picklebufobject.c(208): warning C4273: 'PyPickleBuffer_Type': inconsistent dll linkage C:\Users\Lenonvo\AppData\Local\Programs\Python\Python310\include\cpython/picklebufobject.h(13): note: see previous definition of 'PyPickleBuffer_Type' C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:C:\Users\Lenonvo\AppData\Local\Programs\Python\Python310\libs /LIBPATH:C:\Users\Lenonvo\AppData\Local\Programs\Python\Python310\PCbuild\amd64 /LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30133\lib\x64 /LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.22000.0\ucrt\x64 /LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.22000.0\um\x64 /EXPORT:PyInit__pickle build\temp.win-amd64-3.10\Release\pickle5/_pickle.obj build\temp.win-amd64-3.10\Release\pickle5/picklebufobject.obj /OUT:build\lib.win-amd64-3.10\pickle5\_pickle.cp310-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.10\Release\pickle5\_pickle.cp310-win_amd64.lib python310.lib(python310.dll) : error LNK2005: PyPickleBuffer_GetBuffer already defined in picklebufobject.obj Creating library build\temp.win-amd64-3.10\Release\pickle5\_pickle.cp310-win_amd64.lib and object build\temp.win-amd64-3.10\Release\pickle5\_pickle.cp310-win_amd64.exp build\lib.win-amd64-3.10\pickle5\_pickle.cp310-win_amd64.pyd : fatal error LNK1169: one or more multiply defined symbols found error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\link.exe' failed with exit code 1169 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: legacy-install-failure Γ— Encountered error while trying to install package. ╰─> pickle5 note: This is an issue with the package mentioned above, not pip. hint: See above for output from the failure.
You only need pickle5, a module backporting pickle protocol 5 features to older Pythons, when running on Python versions older than 3.8. As evident from Python310 and -3.10 in the output, you're on Python 3.10. You don't need pickle5. Thus, the answer to "what should you do", without knowing more details about your situation, is "don't try to install pickle5".
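If the reason pickle5 ended up in your requirements is code that has to run on both old and new interpreters, the usual pattern is a conditional import, roughly like this (obj and the file name are placeholders):
import sys

if sys.version_info >= (3, 8):
    import pickle                # protocol 5 is built in from 3.8 on
else:
    import pickle5 as pickle     # backport, only needed on 3.5-3.7

with open("data.pkl", "wb") as f:
    pickle.dump(obj, f, protocol=5)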
7
6
71,823,279
2022-4-11
https://stackoverflow.com/questions/71823279/python-read-huge-file-line-per-line-and-send-it-to-multiprocessing-or-thread
I have been trying to get my code to work for many days and I am desperate. I've scoured the internet, but I still can't find it. I have a text file encoded in "latin-1" of 9 GB -> 737 022 387 lines, each line containing a string. I would like to read each line and send it in an HTTP PUT request that waits for a response, and return TRUE or FALSE depending on whether the response is 200 or 400. The PUT request takes about 1 to 3 seconds, so to speed up the processing time I would like to use either threads or multiprocessing. To start, I simulate my PUT request with a sleep of 3 seconds, and even that I can't get to work. This code splits my string into characters, I don't know why... from multiprocessing import Pool from time import sleep def process_line(line): sleep(3) print(line) return True if __name__ == "__main__": pool = Pool(2) peon = open(r'D:\txtFile',encoding="latin-1") for line in peon: res = pool.map(process_line,line ) print(res) This gives the error: TypeError: process_line() takes 1 positional argument but 17 were given import multiprocessing from multiprocessing import Pool from time import sleep def process_line(line): sleep(3) print(line) return True if __name__ == "__main__": pool = Pool(2) with open(r"d:\txtFile",encoding="latin-1") as file: res = pool.apply(process_line,file.readline() ) print(res) And this one crashes the computer: from multiprocessing import Pool from time import sleep def process_line(line): sleep(3) print(line) return True if __name__ == "__main__": pool = Pool(2) peon = open(r'D:\txtFile',encoding="latin-1") for line in peon: res = pool.map(process_line,peon ) print(res)
The problem seems unrealistic, though: shooting 737,022,387 requests! Calculate how many months that would take from a single computer!! Still, a better way to do this task is to read the file line by line in a separate thread and insert the lines into a queue, and then process the queue with multiple processes. Solution 1: from multiprocessing import Queue, Process from threading import Thread from time import sleep urls_queue = Queue() max_process = 4 def read_urls(): with open('urls_file.txt', 'r') as f: for url in f: urls_queue.put(url.strip()) print('put url: {}'.format(url.strip())) # put DONE to tell send_request_processor to exit for i in range(max_process): urls_queue.put("DONE") def send_request(url): print('send request: {}'.format(url)) sleep(1) print('recv response: {}'.format(url)) def send_request_processor(): print('start send request processor') while True: url = urls_queue.get() if url == "DONE": break else: send_request(url) def main(): file_reader_thread = Thread(target=read_urls) file_reader_thread.start() procs = [] for i in range(max_process): p = Process(target=send_request_processor) procs.append(p) p.start() for p in procs: p.join() print('all done') # wait for all tasks in the queue file_reader_thread.join() if __name__ == '__main__': main() Demo: https://onlinegdb.com/Elfo5bGFz Solution 2: You can use the Tornado asynchronous networking library from tornado import gen from tornado.ioloop import IOLoop from tornado.queues import Queue q = Queue(maxsize=2) async def consumer(): async for item in q: try: print('Doing work on %s' % item) await gen.sleep(0.01) finally: q.task_done() async def producer(): with open('urls_file.txt', 'r') as f: for url in f: await q.put(url) print('Put %s' % url) async def main(): # Start consumer without waiting (since it never finishes). IOLoop.current().spawn_callback(consumer) await producer() # Wait for producer to put all tasks. await q.join() # Wait for consumer to finish all tasks. print('Done') # producer and consumer can run in parallel IOLoop.current().run_sync(main)
6
4
71,768,804
2022-4-6
https://stackoverflow.com/questions/71768804/two-ways-to-create-timezone-aware-datetime-objects-django-seven-minutes-diffe
Up to now I thought both ways to create a timezone aware datetime are equal. But they are not: import datetime from django.utils.timezone import make_aware, get_current_timezone make_aware(datetime.datetime(1999, 1, 1, 0, 0, 0), get_current_timezone()) datetime.datetime(1999, 1, 1, 0, 0, 0, tzinfo=get_current_timezone()) datetime.datetime(1999, 1, 1, 0, 0, tzinfo=<DstTzInfo 'Europe/Berlin' CET+1:00:00 STD>) datetime.datetime(1999, 1, 1, 0, 0, tzinfo=<DstTzInfo 'Europe/Berlin' LMT+0:53:00 STD>) In the Django Admin GUI second way creates this (German date format dd.mm.YYYY): 01.01.1999 00:07:00 Why are there 7 minutes difference if I use this: datetime.datetime(1999, 1, 1, 0, 0, 0, tzinfo=get_current_timezone())
This happens on Django 3.2 and lower, which rely on the pytz library. In Django 4 (unless you enable to setting to use the deprecated library), the output of the two examples you give is identical. In Django 3.2 and below, the variance arises because the localised time is built in two different ways. When using make_aware, it is done by calling the localize() method on the pytz timezone instance. In the second version, it's done by passing a tzinfo object directly to the datetime constructor. The difference between the two is well illustrated in this blog post: The biggest mistake people make with pytz is simply attaching its time zones to the constructor, since that is the standard way to add a time zone to a datetime in Python. If you try and do that, the best case scenario is that you'll get something obviously absurd: import pytz from datetime import datetime NYC = pytz.timezone('America/New_York') dt = datetime(2018, 2, 14, 12, tzinfo=NYC) print(dt) # 2018-02-14 12:00:00-04:56 Why is the time offset -04:56 and not -05:00? Because that was the local solar mean time in New York before standardized time zones were adopted, and is thus the first entry in the America/New_York time zone. Why did pytz return that? Because unlike the standard library's model of lazily-computed time zone information, pytz takes an eager calculation approach. Whenever you construct an aware datetime from a naive one, you need to call the localize function on it: dt = NYC.localize(datetime(2018, 2, 14, 12)) print(dt) # 2018-02-14 12:00:00-05:00 Exactly the same thing is happening with your Europe/Berlin example. pytz is eagerly fetching the first entry in its database, which is a pre-1983 solar time, which was 53 minutes and 28 seconds ahead of Greenwich Mean Time (GMT). This is obviously inappropriate given the date - but the tzinfo isn't aware of the date you are using unless you pass it to localize(). This is the difference between your two approaches. Using make_aware correctly calls localize() on the object. Assigning the tzinfo directly to the datetime object, however, doesn't, and results in pytz using the (wrong) time zone information because it was simply the first entry for that zone in its database. The pytz documentation obliquely refers to this as well: This library only supports two ways of building a localized time. The first is to use the localize() method provided by the pytz library. This is used to localize a naive datetime (datetime with no timezone information)... The second way of building a localized time is by converting an existing localized time using the standard astimezone() method... Unfortunately using the tzinfo argument of the standard datetime constructors β€˜β€™does not work’’ with pytz for many timezones. It is actually because of these and several other bugs in the pytz implementation that Django dropped it in favour of Python's built-in zoneinfo module. More from that blog post: At the time of its creation, pytz was cleverly designed to optimize for performance and correctness, but with the changes introduced by PEP 495 and the performance improvements to dateutil, the reasons to use it are dwindling. ... The biggest reason to use dateutil over pytz is the fact that dateutil uses the standard interface and pytz doesn't, and as a result it is very easy to use pytz incorrectly. Passing a pytz tzinfo object directly to a datetime constructor is incorrect. You must call localize() on the tzinfo class, passing it the date. 
The correct way to initialise the datetime in your second example is: > berlin = get_current_timezone() > berlin.localize(datetime.datetime(1999, 1, 1, 0, 0, 0)) datetime.datetime(1999, 1, 1, 0, 0, tzinfo=<DstTzInfo 'Europe/Berlin' CET+1:00:00 STD>) ... which matches what make_aware produces.
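For reference, the zoneinfo module that Django 4 switched to does not have this pitfall, so passing tzinfo straight to the constructor is fine there. A small sketch (zoneinfo is in the standard library from Python 3.9, with backports.zoneinfo available for older versions):
from datetime import datetime
from zoneinfo import ZoneInfo

berlin = ZoneInfo("Europe/Berlin")
dt = datetime(1999, 1, 1, 0, 0, 0, tzinfo=berlin)
print(dt)  # 1999-01-01 00:00:00+01:00, the correct CET offset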
10
11
71,823,151
2022-4-11
https://stackoverflow.com/questions/71823151/deploy-reacts-build-folder-via-fastapi
I want to serve my React frontend using FastAPI. The goal being 0 Javascript dependency for the user. The user can simply download the Python code, start server, and view the website on localhost. My folder structure is: - my-fullstack-app - frontend/ - build/ - public/ - ... - package.json - backend/ - main.py - static/ I ran npm run build to generate the frontend/build/ folder which contains: build/ β”œβ”€β”€ asset-manifest.json β”œβ”€β”€ favicon.ico β”œβ”€β”€ index.html β”œβ”€β”€ logo192.png β”œβ”€β”€ logo512.png β”œβ”€β”€ manifest.json β”œβ”€β”€ robots.txt └── static β”œβ”€β”€ css β”‚ β”œβ”€β”€ main.073c9b0a.css β”‚ └── main.073c9b0a.css.map β”œβ”€β”€ js β”‚ β”œβ”€β”€ 787.cda612ba.chunk.js β”‚ β”œβ”€β”€ 787.cda612ba.chunk.js.map β”‚ β”œβ”€β”€ main.af955102.js β”‚ β”œβ”€β”€ main.af955102.js.LICENSE.txt β”‚ └── main.af955102.js.map └── media └── logo.6ce24c58023cc2f8fd88fe9d219db6c6.svg I copied the contents of the frontend/build/ folder inside backend/static/. Now, I want to serve this backend/static/ folder via FastAPI as opposed to running another server. In my FastAPI's main.py I have: from fastapi import FastAPI from fastapi.staticfiles import StaticFiles app = FastAPI() app.mount("/", StaticFiles(directory="static/"), name="static") I then start the server using - uvicorn main:app --reload. But it doesn't work. When I open http://127.0.0.1:8000/ in the browser, the output is a JSON file which says {"detail":"Not Found"} and console has Content Security Policy: The page's settings blocked the loading of a resource at http://127.0.0.1:8000/favicon.ico ("default-src").. How do I get this to work? I've seen examples for similar functionality with React and Express.
The answer mentioned by @flakes is exactly what I was looking for. Check it out for more details. For completeness, Without html=True flag. app.mount("/", StaticFiles(directory="static/"), name="static") Navigate to http://127.0.0.1:8000/index.html to view the html file. With html=True flag. app.mount("/", StaticFiles(directory="static/", html=True), name="static") Navigate to http://127.0.0.1:8000/ to view the html file.
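One caveat worth noting when mounting at the root path: the mount matches every path, so any API routes should be declared before it, otherwise they are shadowed by the static app. A minimal sketch (the /api/health route is just an illustrative name):
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles

app = FastAPI()

@app.get("/api/health")   # declared before the catch-all mount
def health():
    return {"status": "ok"}

app.mount("/", StaticFiles(directory="static/", html=True), name="static")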
7
10
71,773,998
2022-4-6
https://stackoverflow.com/questions/71773998/how-to-draw-a-progress-bar-with-pillow
I am currently trying to draw a progress bar, based on a calculated percentage. However, I am not able to display it in the right format. I orientated myself on another answer from this site here (How do you make a progress bar and put it on an image? & Is it possible to add a blue bar using PIL or Pillow?) But either the case is that the progress bar is too long and the width restriction does not work or the bar shows no progress. Example 1: async def rank(self, ctx, member: discord.Member): member = ctx.author data = await database.find_user(collection_user, ctx.guild.id, ctx.author.id) already_earned = data["exp"] to_reach= ((50 * (data['lvl'] ** 2)) + (50 * (data['lvl'] - 1))) percentage = ((data["exp"] / next_level_xp ) * 100) # Get the percentage ## Rank card img = Image.open("leveling/rank.png") draw = ImageDraw.Draw(img) font = ImageFont.truetype("settings/myfont.otf", 35) font1 = ImageFont.truetype("settings/myfont.otf", 24) async with aiohttp.ClientSession() as session: async with session.get(str(ctx.author.avatar)) as response: image = await response.read() icon = Image.open(BytesIO(image)).convert("RGBA") img.paste(icon.resize((156, 156)), (50, 60)) # Some text drawn, this works ### From StackOverflow ### def drawProgressBar(d, x, y, w, h, progress, bg=(129, 66, 97), fg=(211,211,211)): # draw background draw.ellipse((x+w, y, x+h+w, y+h), fill=bg) draw.ellipse((x, y, x+h, y+h), fill=bg) draw.rectangle((x+(h/2), y, x+w+(h/2), y+h), fill=bg, width=10) # draw progress bar progress = ((already_earned / to_reach ) * 100) w *= progress draw.ellipse((x+w, y, x+h+w, y+h),fill=fg) draw.ellipse((x, y, x+h, y+h),fill=fg) draw.rectangle((x+(h/2), y, x+w+(h/2), y+h),fill=fg, width=10) return d drawProgressBar(img, 10, 10, 100, 25, 0.5) ### From StackOverflow ### img.save('leveling/infoimg2.png') # Save it and send it out This looks like this: Second example: async def rank(self, ctx, member: discord.Member): member = ctx.author data = await database.find_user(collection_user, ctx.guild.id, ctx.author.id) already_earned = data["exp"] to_reach = ((50 * (data['lvl'] ** 2)) + (50 * (data['lvl'] - 1))) percentage = ((already_earned / to_reach) * 100) # Get the percentage img = Image.open("leveling/rank.png") draw = ImageDraw.Draw(img) ### From StackOverflow ### color=(129, 66, 97) x, y, diam = percentage, 8, 34 draw.ellipse([x,y,x+diam,y+diam], fill=color) ImageDraw.floodfill(img, xy=(14,24), value=color, thresh=40) ### From StackOverflow ### font = ImageFont.truetype("settings/myfont.otf", 35) font1 = ImageFont.truetype("settings/myfont.otf", 24) async with aiohttp.ClientSession() as session: async with session.get(str(ctx.author.avatar)) as response: image = await response.read() icon = Image.open(BytesIO(image)).convert("RGBA") img.paste(icon.resize((156, 156)), (50, 60)) # Draw some other text here, this works though img.save('leveling/infoimg2.png') # Save the file and output it Which looks like this: Both results do not match the pictures shown in the answers and questions. Is somebody able to tell me where I went wrong? I also tried to increase x in the second example or set img = Image.open("pic.png").format('RGB') but nothing seems to work. The progress bar is either too long or too short. I tried to achieve that my progress bar is restricted to some size, always matches 100% and that my defined progress will adapt to it.
The issue with your code was that the background and progress bar sections were of the same color, and as such, you couldn't see it. This can be fixed by coloring them differently. The line progress = ((already_earned / to_reach ) * 100) also sets progress to a percentage [0, 100]. You then multiply width by this. For the input of 100, a fill amount of 50% for example makes the ellipse go 5000 pixels - way off the screen and overwriting everything that was already drawn there. @client.command(name='rank') async def rank(ctx, progress: float): # Open the image and do stuff # I tested this with a blank 800x400 RGBA def new_bar(x, y, width, height, progress, bg=(129, 66, 97), fg=(211,211,211), fg2=(15,15,15)): # Draw the background draw.rectangle((x+(height/2), y, x+width+(height/2), y+height), fill=fg2, width=10) draw.ellipse((x+width, y, x+height+width, y+height), fill=fg2) draw.ellipse((x, y, x+height, y+height), fill=fg2) width = int(width*progress) # Draw the part of the progress bar that is actually filled draw.rectangle((x+(height/2), y, x+width+(height/2), y+height), fill=fg, width=10) draw.ellipse((x+width, y, x+height+width, y+height), fill=fg) draw.ellipse((x, y, x+height, y+height), fill=fg) new_bar(10, 10, 100, 25, progress) # send Example: progress = 0.25 progress = 0.9 As to the second code snippet, it's using floodfill() incorrectly. thresh is the "maximum tolerable difference of a pixel value from the β€˜background’ in order for it to be replaced." This means that the floodfill does literally nothing, and you only drew the ellipse (and not the part of the progress bar that goes before it). # this draws a circle with bounding box `x,y` and `x+diam,y+diam` # note that `x` is dependent on the progress value: higher progress means larger x, which means the circle is drawn more to the right draw.ellipse([x,y, x+diam,y+diam], fill=color) # If you look at the post where the code was given, you can see the error. # In that post, the entirety of the progress bar already exists, and is a very different color (allowing the use of the threshold value). # In this code, no progress bar exists yet, meaning everything else is just one solid color, and then the floodfill cannot do anything. # Also, xy is specified as arbitrary coordinates, which you would need to change to fit your bar. # This does nothing. ImageDraw.floodfill(img, xy=(14,24), value=color, thresh=40) If you want to fix this, you need to fill in the color to the left of the progress bar. The first function above already does this. If you want, you can remove the first 3 lines to avoid drawing a background, producing the same results.
7
4
71,812,508
2022-4-9
https://stackoverflow.com/questions/71812508/sqlalchemy-select-from-join-of-two-subqueries
Need help translating this SQL query into SQLAlchemy: select COALESCE(DATE_1,DATE_2) as DATE_COMPLETE, QUESTIONS_CNT, ANSWERS_CNT from ( (select DATE as DATE_1, count(distinct QUESTIONS) as QUESTIONS_CNT from GUEST_USERS where LOCATION like '%TEXAS%' and DATE = '2021-08-08' group by DATE ) temp1 full join (select DATE as DATE_2, count(distinct ANSWERS) as ANSWERS_CNT from USERS where LOCATION like '%TEXAS%' and DATE = '2021-08-08' group by DATE ) temp2 on temp1.DATE_1=temp2.DATE_2 ) Mainly struggling with the join of the two subqueries. I've tried this (just for the join part of the SQL): query1 = db.session.query( GUEST_USERS.DATE_WEEK_START.label("DATE_1"), func.count(GUEST_USERS.QUESTIONS).label("QUESTIONS_CNT") ).filter( GUEST_USERS.LOCATION.like("%TEXAS%"), GUEST_USERS.DATE == "2021-08-08" ).group_by(GUEST_USERS.DATE) query2 = db_session_stg.query( USERS.DATE.label("DATE_2"), func.count(USERS.ANSWERS).label("ANSWERS_CNT") ).filter( USERS.LOCATION.like("%TEXAS%"), USERS.DATE == "2021-08-08" ).group_by(USERS.DATE) sq2 = query2.subquery() query1_results = query1.join( sq2, sq2.c.DATE_2 == GUEST_USERS.DATE) ).all() In this output I receive only the DATE_1 column and the QUESTIONS_CNT columns. Any idea why the selected output from the subquery is not being returned in the result?
Not sure if this is the best solution but this is how I got it to work. Using 3 subqueries essentially. query1 = db.session.query( GUEST_USERS.DATE_WEEK_START.label("DATE_1"), func.count(GUEST_USERS.QUESTIONS).label("QUESTIONS_CNT") ).filter( GUEST_USERS.LOCATION.like("%TEXAS%"), GUEST_USERS.DATE == "2021-08-08" ).group_by(GUEST_USERS.DATE) query2 = db_session_stg.query( USERS.DATE.label("DATE_2"), func.count(USERS.ANSWERS).label("ANSWERS_CNT") ).filter( USERS.LOCATION.like("%TEXAS%"), USERS.DATE == "2021-08-08" ).group_by(USERS.DATE) sq1 = query1.subquery() sq2 = query2.subquery() query3 = db.session.query(sq1, sq2).join( sq2, sq2.c.DATE_2 == sq1.c.DATE_1) sq3 = query3.subquery() query4 = db.session.query( func.coalesce( sq3.c.DATE_1, sq3.c.DATE_2), sq3.c.QUESTIONS_CNT, sq3.c.ANSWERS_CNT ) results = query4.all()
4
5
71,798,863
2022-4-8
https://stackoverflow.com/questions/71798863/how-to-change-the-default-backend-in-matplotlib-from-qtagg-to-qt5agg-in-pych
Qt5Agg is necessary to use the mayavi 3D visualization package. I have installed PyQt5 and mayavi using pip in a separate copied conda environment. The default backend then changes from TkAgg to QtAgg. This is a bit weird because in an earlier installation in a different PC the default changed directly to Qt5Agg. I always check the backend using the following commands from the python console : import matplotlib matplotlib.get_backend() Even with the backend being 'QtAgg', I am able to use mayavi from the terminal without any issue but not when I do so in Pycharm. Here I get a non-responsive empty window (image below) : Image of the non-responsive window I have been able to get rid of this issue by explicitly using Qt5Agg instead of QtAgg before the plt call : import matplotlib matplotlib.use('Qt5Agg') import matplotlib.pyplot as plt But I would prefer a better way than using the above in every script that I write. As I had mentioned earlier, I already have mayavi installed and have used it successfully it in Pycharm in a different PC and there the default backend is 'Qt5Agg' and hence there is no need to change the backend explicitly. Is there anything obvious that I'm overlooking ? Can you please let me know of a way to change the default backend for matplotlib from QtAgg to Qt5Agg after PyQt5 installation using pip ? Thanks in advance !!
Thanks to @PaulH's comment, I was able to solve the issue. Owing to @mx0's suggestion, I shall now explicitly mention the fix below so that others can also benefit from it. In a particular conda environment, if matplotlib package is installed, then there will be a 'matplotlibrc' file stored somewhere that defines what the default backend will be whenever matplotlib is imported from that conda environment. The location of this 'matplotlibrc' can be found using the following commands : import matplotlib matplotlib.matplotlib_fname() Please look into the following link if there's any deprecation issue with the above commands : https://matplotlib.org/stable/tutorials/introductory/customizing.html#customizing-with-matplotlibrc-files Once the location of the 'matplotlibrc' file is known, open it and simply uncomment one line inside this file. Just change the backend from : ##backend: Agg to : backend: Qt5Agg And that's it. All the plot window troubles in PyCharm will be solved as far as the mayavi 3D visualization package is concerned. For any other use, where a specific backend is necessary, you can also set the default to any other backend of choice.
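Another option, if you'd rather not edit matplotlibrc at all, is the MPLBACKEND environment variable, which takes precedence over the rc file; it can be set in PyCharm's run configuration, or in code before matplotlib is imported:
import os
os.environ["MPLBACKEND"] = "Qt5Agg"   # must happen before importing matplotlib

import matplotlib
print(matplotlib.get_backend())       # should now report Qt5Agg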
5
8
71,802,758
2022-4-8
https://stackoverflow.com/questions/71802758/when-using-slots-why-does-dirtype-have-no-dict-attribute
I'm trying to understand slots. Therefore, I have written a little script with two classes, one using slots and one not. class A: def __init__(self, name): self.name = name def getName(self): return self.name class C: __slots__ = "name" def __init__(self, name): self.name = name def getName(self): return self.name When I use the dir() on type A and on an object of type A, the attribute __dict__ appears in the result list, as expected. dir(A) ['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__get__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'getName'] dir(A("test")) ['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__get__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'getName', 'name'] If I use type C I get print(dir(C)) ['__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__slots__', '__str__', '__subclasshook__', 'classAttributeC', 'getName', 'name'] print(dir(C("test"))) ['__class__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__slots__', '__str__', '__subclasshook__', 'classAttributeC', 'getName', 'name'] No attribute __dict__ in the results list for dir(C("test")), as expected but also no attribute __dict__ for dir(C). Why isn't the attribute in the results list when I can call C.__dict__ and get the following output? {'__module__': '__main__', 'classAttributeC': 9999, '__slots__': 'name', '__init__': <function C.__init__ at 0x7ff26b9ab730>, 'getName': <function C.getName at 0x7ff26b9ab7b8>, 'name': <member 'name' of 'C' objects>, '__doc__': None}
Since you don't override __dir__ here, in each case here it will resolve in the MRO to type.__dir__(A) or type.__dir__(C). So we look at the default implementation of __dir__ for types, here in Objects/typeobject.c /* __dir__ for type objects: returns __dict__ and __bases__. We deliberately don't suck up its __class__, as methods belonging to the metaclass would probably be more confusing than helpful. */ static PyObject * type___dir___impl(PyTypeObject *self) { PyObject *result = NULL; PyObject *dict = PyDict_New(); if (dict != NULL && merge_class_dict(dict, (PyObject *)self) == 0) result = PyDict_Keys(dict); Py_XDECREF(dict); return result; } The bases are the same (object,), so there is your answer in the __dict__: >>> "__dict__" in A.__dict__ True >>> "__dict__" in C.__dict__ False So, types without slots implement a __dict__ descriptor, but types which implement slots don't - and you just get a __dict__ implementation from above: >>> inspect.getattr_static(A, "__dict__") <attribute '__dict__' of 'A' objects> >>> inspect.getattr_static(C, "__dict__") <attribute '__dict__' of 'type' objects>
8
7
71,805,426
2022-4-9
https://stackoverflow.com/questions/71805426/how-to-tell-a-python-type-checker-that-an-optional-definitely-exists
I'm used to typescript, in which one can use a ! to tell the type-checker to assume a value won't be null. Is there something analogous when using type annotations in python? A (contrived) example: When executing the expression m.maybe_num + 3 in the code below, the enclosing if guarantees that maybe_num won't be None. But the type-checker doesn't know that, and returns an error. (Verified in https://mypy-play.net/?mypy=latest&python=3.10.) How can I tell the type-checker that I know better? from typing import Optional class MyClass: def __init__(self, maybe_num: Optional[int]): self.maybe_num = maybe_num def has_a_num(self) -> bool: return self.maybe_num is not None def three_more(self) -> Optional[int]: if self.has_a_num: # mypy error: Unsupported operand types for + ("None" and "int") return self.maybe_num + 3 else: return None
Sadly there's no clean way to infer the type of something from a function call like this, but you can work some magic with TypeGuard annotations for the has_a_num() method, although the benefit from those annotations won't really be felt unless the difference is significantly more major than the type of a single int. If it's just a single value, you should just use a standard is not None check. if self.maybe_num is not None: ... You can define a subclass of your primary subclass, where the types of any parameters whose types are affected are explicitly redeclared. class MyIntClass(MyClass): maybe_num: int From there, your checker function should still return a boolean, but the annotated return type tells MyPy that it should use it for type narrowing to the listed type. Sadly it will only do this for proper function parameters, rather than the implicit self argument, but this can be fixed easily enough by providing self explicitly as follows: if MyClass.has_a_num(self): ... That syntax is yucky, but it works with MyPy. This makes the full solution be as follows # Parse type annotations as strings to avoid # circular class references from __future__ import annotations from typing import Optional, TypeGuard class MyClass: def __init__(self, maybe_num: Optional[int]): self.maybe_num = maybe_num def has_a_num(self) -> TypeGuard[_MyClass_Int]: # This annotation defines a type-narrowing operation, # such that if the return value is True, then self # is (from MyPy's perspective) _MyClass_Int, and # otherwise it isn't return self.maybe_num is not None def three_more(self) -> Optional[int]: if MyClass.has_a_num(self): # No more mypy error return self.maybe_num + 3 else: return None class _MyClass_Int(MyClass): maybe_num: int TypeGuard was added in Python 3.10, but can be used in earlier versions using the typing_extensions module from pip.
7
2
71,805,129
2022-4-9
https://stackoverflow.com/questions/71805129/pandas-dataframe-concatenation-with-axis-1-lost-column-names
I'm trying to concatenate two dataframes with these conditions : for an existing header, append to the column ; otherwise add a new column. The code is working but the columns names are lost in case 2. Why? It doesn't seem to be mentioned in Pandas doc. Or I missed something? How to keep the column names? The code : # Testing # Merge, join, concatenate # Pandas documentation : https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html df1 = pd.DataFrame( { "A": ["A0", "A1", "A2", "A3"], "B": ["B0", "B1", "B2", "B3"], "C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"], }, #index=[0, 1, 2, 3], ) df2 = pd.DataFrame( { "A": ["A4", "A5", "A6", "A7"], "B": ["B4", "B5", "B6", "B7"], "C": ["C4", "C5", "C6", "C7"], "D": ["D4", "D5", "D6", "D7"], }, #index=[4, 5, 6, 7], ) df3 = pd.DataFrame( { "E": ["E0", "E1", "E2", "E3", "E4", "E5"], }, #index=[0, 1, 2, 3, 4 , 5], ) frames = [df1, df2] result_1 = pd.concat(frames, ignore_index=True) print(result_1) frames = [result_1, df3] if "E" in df3.columns : result_2 = pd.concat(frames, axis=1, ignore_index=True) print(result_2)
You requested to drop the index with ignore_index=True. As you are concatenating on axis=1 the index is the columns! frames = [result_1, df3] if "E" in df3.columns : result_2 = pd.concat(frames, axis=1) print(result_2) Output: A B C D E 0 A0 B0 C0 D0 E0 1 A1 B1 C1 D1 E1 2 A2 B2 C2 D2 E2 3 A3 B3 C3 D3 E3 4 A4 B4 C4 D4 E4 5 A5 B5 C5 D5 E5 6 A6 B6 C6 D6 NaN 7 A7 B7 C7 D7 NaN
7
10
71,802,492
2022-4-8
https://stackoverflow.com/questions/71802492/how-to-pass-parameter-to-dictionary-input-for-agg-pyspark-function
From the pyspark docs, I Can do: gdf = df.groupBy(df.name) sorted(gdf.agg({"*": "first"}).collect()) In my actual use case I have maaaany variables, so I like that I can simply create a dictionary, which is why: gdf = df.groupBy(df.name) sorted(gdf.agg(F.first(col, ignorenulls=True)).collect()) @lemon's suggestion won't work for me. How can I pass a parameter for first (i.e. ignorenulls=True), see here.
You can use list comprehension. gdf.agg(*[F.first(x, ignorenulls=True).alias(x) for x in df.columns]).collect()
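Spelled out against the grouping from the question, and leaving the grouping column itself out of the aggregation (a minimal sketch):
from pyspark.sql import functions as F

gdf = df.groupBy(df.name)
result = sorted(
    gdf.agg(*[F.first(c, ignorenulls=True).alias(c)
              for c in df.columns if c != "name"]).collect()
)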
4
4
71,801,541
2022-4-8
https://stackoverflow.com/questions/71801541/how-can-i-dynamically-pad-a-number-in-python-f-strings
I have an f-string like this: f"foo-{i:03d}" This is producing a filename for a file that I read in. The problem is sometimes there are hundreds of files, so the files are named like foo-001, foo-002, etc., but other times there are thousands, so the files are foo-0001 and 04d would be needed. Is there a way I can do this dynamically? For example, if I have the total number of files as a variable n, then I could do: pad = len(str(n)) but then how do I use that pad variable in the f-string?
Nested f-string: >>> pad = 4 >>> i = 1 >>> f"foo-{i:>0{pad}}" 'foo-0001'
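Tying it back to the question, pad can be computed from the total file count n and reused for every name, e.g.:
n = 1500                  # total number of files
pad = len(str(n))         # 4
for i in (1, 42, 1500):
    print(f"foo-{i:0{pad}d}")
# foo-0001
# foo-0042
# foo-1500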
7
11
71,798,874
2022-4-8
https://stackoverflow.com/questions/71798874/django-how-to-add-or-condition-to-queryset-filter-in-custom-filter
I want to make a search filter which searches in multiple fields with multiple conditions, using only one search field. I have this filters.py file: import django_filters from .models import Product class ProductFilter(django_filters.FilterSet): q = django_filters.CharFilter(method='search_filter', label='Cerca') class Meta: model = Product fields = ['q'] def search_filter(self, queryset, name, value): return queryset.filter(name__icontains=value, sku__iexact=value) But return queryset.filter(name__icontains=value, sku__iexact=value) doesn't work, neither return queryset.filter(Product(name__icontains=value) | Product(sku__iexact=value)) How can I do this?
You can filter with Q objects [Django-doc]: import django_filters from django.db.models import Q from .models import Product class ProductFilter(django_filters.FilterSet): q = django_filters.CharFilter(method='search_filter', label='Cerca') class Meta: model = Product fields = ['q'] def search_filter(self, queryset, name, value): return queryset.filter(Q(name__icontains=value) | Q(sku__iexact=value))
4
6
71,787,974
2022-4-7
https://stackoverflow.com/questions/71787974/why-does-x13-formatx-asd-cause-a-typeerror
Consider this: >>> '{x[1]}'.format(x="asd") 's' >>> '{x[1:3]}'.format(x="asd") Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: string indices must be integers What could be the cause for this behavior?
An experiment based on your comment, checking what value the object's __getitem__ method actually receives: class C: def __getitem__(self, index): print(repr(index)) '{c[4]}'.format(c=C()) '{c[4:6]}'.format(c=C()) '{c[anything goes!@#$%^&]}'.format(c=C()) C()[4:6] Output (Try it online!): 4 '4:6' 'anything goes!@#$%^&' slice(4, 6, None) So while the 4 gets converted to an int, the 4:6 isn't converted to slice(4, 6, None) as in usual slicing. Instead, it remains simply the string '4:6'. And that's not a valid type for indexing/slicing a string, hence the TypeError: string indices must be integers you got. Update: Is that documented? Well... I don't see something really clear, but @GACy20 pointed out something subtle. The grammar has these rules field_name ::= arg_name ("." attribute_name | "[" element_index "]")* element_index ::= digit+ | index_string index_string ::= <any source character except "]"> + Our c[4:6] is the field_name, and we're interested in the element_index part 4:6. I think it would be clearer if digit+ had its own rule with meaningful name: field_name ::= arg_name ("." attribute_name | "[" element_index "]")* element_index ::= index_integer | index_string index_integer ::= digit+ index_string ::= <any source character except "]"> + I'd say having index_integer and index_string would more clearly indicate that digit+ is converted to an integer (instead of staying a digit string), while <any source character except "]"> + would stay a string. That said, looking at the rules as they are, perhaps we should think "what would be the point of separating the digits case out of the any-characters case which would match it as well?" and think that the point is to treat pure digits differently, presumably to convert them to an integer. Or maybe some other part of the documentation even states that digit or digits+ in general gets converted to an integer.
35
29
71,794,902
2022-4-8
https://stackoverflow.com/questions/71794902/issue-with-bad-request-syntax-with-flask
I am testing/attempting to learn flask and flask_restful. The issue I get is: code 400, message Bad request syntax ('name=testitem') main.py: from flask import Flask,request from flask_restful import Api, Resource, reqparse app = Flask(__name__) api = Api(app) product_put_args = reqparse.RequestParser() product_put_args.add_argument("name", type = str, help = "Name of the product") product_put_args.add_argument("quantity", type = int, help = "Quantity of the item") products = {} class Product(Resource): def get(self, barcode): return products[barcode] def put(self, barcode): args = product_put_args.parse_args() return {barcode: args} api.add_resource(Product, "/product/<int:barcode>") if(__name__) == "__main__": app.run(debug = True) and my test.py import requests base = "http://127.0.0.1:5000/" response = requests.put(base + "product/1", {"name": "testitem"}) print(response.json()) I have attempted to reformat and change around both files to figure out what is causing the issue. I feel like it is something simple, but if you can help me, I bet this will help me and many others that are trying to start creating a REST API.
You need to add the location information to the RequestParser by default it tries to parse values from flask.Request.values, and flask.Request.json, but in your case, the values need to be parsed from a flask.request.form. Below code fixes your error from flask import Flask,request from flask_restful import Api, Resource, reqparse app = Flask(__name__) api = Api(app) product_put_args = reqparse.RequestParser() product_put_args.add_argument("name", type = str, help = "Name of the product", location='form') product_put_args.add_argument("quantity", type = int, help = "Quantity of the item", location='form') products = {} class Product(Resource): def get(self, barcode): return products[barcode] def put(self, barcode): args = product_put_args.parse_args() products[barcode] = args['name'] return {barcode: args} api.add_resource(Product, "/product/<int:barcode>") if(__name__) == "__main__": app.run(debug = True)
4
12
71,765,091
2022-4-6
https://stackoverflow.com/questions/71765091/unit-testing-by-mocking-s3-bucket
I am new to unit testing and I require to perform some simple unit tests for an object storage class. I have a class named OSBucket as follows: def __initBucket(self): ecs_session = boto3.Session( aws_access_key_id="OSKEY", aws_secret_access_key="SECRETKEY" ) OS_resource = ecs_session.resource('s3', verify=cert, endpoint_url=endpoint) self.mybucket = OS_resource.Bucket(OS_BUCKET) def get_mybucket(self): return self.mybucket def download_file(self, fileName,filepath): self.mybucket.download_file(fileName, filepath) def upload_file(self, filepath,dest_file_name): self.mybucket.upload_file(filepath, '%s%s' % ("/",dest_file_name)) The method __initBucket is called in the constructor of the class. How could I start creating a unit test class to test, for example, the download_file method? UPDATE 1 moto_fake.start() conn = boto3.resource('s3', aws_access_key_id="fake_id", aws_secret_access_key="fake_secret") conn.create_bucket(Bucket="OS_BUCKET") os_bucket = OSBucket.OSBucket(thisRun) sourcefile = "testingMoto.txt" filePath = os.path.join("/", sourcefile) os_bucket.upload_file(filePath, sourcefile) which executes the moto_fake.start() before creating the os_bucket object does not work for me. UPDATE 2 Using patch.object to change the endpoint variable to None, makes the test pass
First Approach: using python mocks You can mock the s3 bucket using standard python mocks and then check that you are calling the methods with the arguments you expect. However, this approach won't actually guarantee that your implementation is correct since you won't be connecting to s3. For example, you can call non-existing boto functions if you misspelled their names - the mock won't throw any exception. Second Approach: using moto Moto is the best practice for testing boto. It simulates boto in your local machine, creating buckets locally so you can have complete end-to-end tests. I used moto to write a couple of tests for your class (and I might have found a bug in your implementation - check the last test line - those are the arguments needed to make it find the file and not throw an exception) import pathlib from tempfile import NamedTemporaryFile import boto3 import moto import pytest from botocore.exceptions import ClientError from os_bucket import OSBucket @pytest.fixture def empty_bucket(): moto_fake = moto.mock_s3() try: moto_fake.start() conn = boto3.resource('s3') conn.create_bucket(Bucket="OS_BUCKET") # or the name of the bucket you use yield conn finally: moto_fake.stop() def test_download_non_existing_path(empty_bucket): os_bucket = OSBucket() os_bucket.initBucket() with pytest.raises(ClientError) as e: os_bucket.download_file("bad_path", "bad_file") assert "Not Found" in str(e) def test_upload_and_download(empty_bucket): os_bucket = OSBucket() os_bucket.initBucket() with NamedTemporaryFile() as tmp: tmp.write(b'Hi') file_name = pathlib.Path(tmp.name).name os_bucket.upload_file(tmp.name, file_name) os_bucket.download_file("/" + file_name, file_name) # this might indicate a bug in the implementation Final notes: I actually gave a pycon lecture about this exact topic, watch it here The source code for the lecture can be found here
5
11
71,791,008
2022-4-8
https://stackoverflow.com/questions/71791008/np-cumsumdfcolumn-treatment-of-nans
np.cumsum([1, 2, 3, np.nan, 4, 5, 6]) will return nan for every value after the first np.nan. Moreover, it will do the same for any generator. However, np.cumsum(df['column']) will not. What does np.cumsum(...) do, such that dataframes are treated specially? In [2]: df = pd.DataFrame({'column': [1, 2, 3, np.nan, 4, 5, 6]}) In [3]: np.cumsum(df['column']) Out[3]: 0 1.0 1 3.0 2 6.0 3 NaN 4 10.0 5 15.0 6 21.0 Name: column, dtype: float64
When you call np.cumsum(object) with an object that is not a numpy array, it will try calling object.cumsum() See this thread for details . You can also see it in the Numpy source. The pandas method has a default of skipna=True. So np.cumsum(df) gets turned into the equivalent of df.cumsum(axis=None, skipna=True, *args, **kwargs), which, of course skips the NaN values. The Numpy method does not have a skipna option. You can also verify this yourself by overriding the pandas method with your own: class DF(pd.DataFrame): def cumsum(self, axis=None, skipna=True, *args, **kwargs): print('calling pandas cumsum') return super().cumsum(axis=None, skipna=True, *args, **kwargs) df = DF({'column': [1, 2, 3, np.nan, 4, 5, 6]}) # does calling the numpy function call your pandas method? np.cumsum(df) This will print calling pandas cumsum and return the expected result: column 0 1.0 1 3.0 2 6.0 3 NaN 4 10.0 5 15.0 6 21.0 You can then experiment with the result of changing skipna=True.
6
6
71,763,118
2022-4-6
https://stackoverflow.com/questions/71763118/what-is-causing-my-random-joblib-externals-loky-process-executor-terminatedwor
I'm making GIS-based data-analysis, where I calculate wide area nation wide prediction maps (e.g. weather maps etc.). Because my target area is very big (whole country) I am using supercomputers (Slurm) and parallelization to calculate the prediction maps. That is, I split the prediction map into multiple pieces with each piece being calculated in its own process (embarrassingly parallel processes), and within each process, multiple CPU cores are used to calculate that piece (the map piece is further split into smaller pieces for the CPU cores). I used Python's joblib-library for taking advantage of the multiple cores at my disposal and most of the time everything works smoothly. But sometimes, randomly with about 1.5% of the time, I get the following error: Traceback (most recent call last): File "main.py", line 557, in <module> sub_rasters = Parallel(n_jobs=-1, verbose=0, pre_dispatch='2*n_jobs')( File "/root_path/conda/envs/geoconda-2021/lib/python3.8/site-packages/joblib/parallel.py", line 1054, in __call__ self.retrieve() File "/root_path/conda/envs/geoconda-2021/lib/python3.8/site-packages/joblib/parallel.py", line 933, in retrieve self._output.extend(job.get(timeout=self.timeout)) File "/root_path/conda/envs/geoconda-2021/lib/python3.8/site-packages/joblib/_parallel_backends.py", line 542, in wrap_future_result return future.result(timeout=timeout) File "/root_path/conda/envs/geoconda-2021/lib/python3.8/concurrent/futures/_base.py", line 439, in result return self.__get_result() File "/root_path/conda/envs/geoconda-2021/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result raise self._exception joblib.externals.loky.process_executor.TerminatedWorkerError: A worker process managed by the executor was unexpectedly terminated. This could be caused by a segmentation fault while calling the function or by an excessive memory usage causing the Operating System to kill the worker. The exit codes of the workers are {SIGBUS(-7)} What causes this problem, any ideas? And how to make sure this does not happen? This is irritating because, for example, if I have 200 map pieces being calculated and 197 succeed and 3 have this error, then I need to calculate these 3 pieces again.
Q :" What causes this problem, any ideas? - I am using supercomputers " A :a) Python Interpreter process ( even if run on supercomputers ) is living in an actual localhost RAM-memory.b) given (a), the number of such localhost CPU-cores controls the joblib.Parallel() behaviour.c)given (b) and having set n_jobs = -1 and also pre_dispatch = '2*n_jobs' makes such Python Interpreter start requesting that many loky-backend specific separate processes instantiations, as an explicit multiple of such localhost number of CPU-cores ( could be anywhere from 4, 8, 16, ..., 80, ... 8192 - yes, depends on the actual "supercomputer" hardware / SDS composition )d)given (c), each such new Python Interpreter process ( being there anywhere between 8, 16, 32, ..., 160, ... 16384 such new Python Interpreter processes demanded to be launched ) requests a new, separate RAM-allocation from the localhost O/S memory managere)given (d) such accumulating RAM-allocations ( each Python Process may ask for anything between 30 MB - 3000 MB of RAM, depending on the actual joblib-backend used and the memory (richness of the internal state) of the __main__-( joblib.Parallel()-launching )-Python Interpreter ) may easily and soon grow over physical-RAM, where swap starts to emulate the missing capacities by exchanging blocks of RAM content between physical RAM and disk storage - that at costs about 10,000x - 100,000x higher latencies, than if it were not forced into such a swapping virtual-memory capacities emulation of the missing physical-RAM resourcesf)given (e) "supercomputing" administration often prohibits over-allocations by administrative tools and kills all processes, that tried to oversubscribe RAM-resources beyond some fair-use threshold or user-profiled quotae)given (e) and w.r.t. the documented trace: ... joblib.externals.loky.process_executor.TerminatedWorkerError: A worker process managed by the executor was unexpectedly terminated. This could be caused by a segmentation fault while calling the function or by an excessive memory usage causing the Operating System to kill the worker. the above inducted chain of evidence was confirmed to be (either) a SegFAULT (not being probable in Python Interpreter realms) or deliberate KILL, due to "supercomputer" Fair Usage Policy violation(s), here due to excessive memory usage. For SIGBUS(-7) you may defensively try avoid Lustre flushing and revise details about mmap-usage, potentially trying to read "beyond EoF", if applicable: By default, Slurm flushes Lustre file system and kernel caches upon completion of each job step. If multiple applications are run simultaneously on compute nodes (either multiple applications from a single Slurm job or multiple jobs) the result can be significant performance degradation and even bus errors. Failures occur more frequently when more applications are executed at the same time on individual compute nodes. Failures are also more common when Lustre file systems are used.Two approaches exist to address this issue. One is to disable the flushing of caches, which can be accomplished by adding "LaunchParameters=lustre_no_flush" to your Slurm configuration file "slurm.conf". Consult Fair Usage Policies applicable with your "supercomputer" Technical Support Dept. so as to get valid ceiling details. Next refactor your code not to pre_dispatch that many processes, if still would like to use the strategy of single-node process-replication, instead of other, less RAM-blocking, more efficient HPC computing strategy.
7
7
71,769,975
2022-4-6
https://stackoverflow.com/questions/71769975/in-the-python-requests-cache-package-how-do-i-detect-a-cache-hit-or-miss
The Python https://requests-cache.readthedocs.io/ library can be used to cache requests. If I'm using requests-cache, how do I detect whether a response came from the cache, or had to be re-fetched from the network?
Based on the docs, the following attributes are available on responses: from_cache: indicates if the response came from the cache cache_key: the unique identifier used to match the request to the response (see Request Matching for details) created_at: datetime of when the cached response was created or last updated expires: datetime after which the cached response will expire (see Expiration for details) is_expired: indicates if the cached response is expired (if, for example, an old response was returned due to a request error) From their example: from datetime import timedelta from requests_cache import CachedSession session = CachedSession(expire_after=timedelta(days=1)) response = session.get('http://httpbin.org/get') print(response.from_cache)
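A short usage sketch of the from_cache flag described above (httpbin is just an example endpoint): the first request should be a cache miss fetched from the network, the second a cache hit:
from datetime import timedelta
from requests_cache import CachedSession

session = CachedSession(expire_after=timedelta(days=1))

for attempt in (1, 2):
    response = session.get('http://httpbin.org/get')
    # response.from_cache is False on a miss (network fetch) and True on a hit
    print(f"attempt {attempt}: {'hit' if response.from_cache else 'miss'}")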
7
10
71,767,715
2022-4-6
https://stackoverflow.com/questions/71767715/importerror-cannot-import-name-container-from-collections
My code is running fine in local, but crashes on Heroku. What can I do to fix it? Traceback (most recent call last) File "/app/nao_entre_em_panico.py", line 2, in <module> from flask import Flask, jsonify, request File "/app/.heroku/python/lib/python3.10/site-packages/flask/__init__.py", line 17, in <module> from werkzeug.exceptions import abort File "/app/.heroku/python/lib/python3.10/site-packages/werkzeug/__init__.py", line 151, in <module> __import__('werkzeug.exceptions') File "/app/.heroku/python/lib/python3.10/site-packages/werkzeug/exceptions.py", line 71, in <module> from werkzeug.wrappers import Response File "/app/.heroku/python/lib/python3.10/site-packages/werkzeug/wrappers.py", line 27, in <module> from werkzeug.http import HTTP_STATUS_CODES, \ File "/app/.heroku/python/lib/python3.10/site-packages/werkzeug/http.py", line 1148, in <module> from werkzeug.datastructures import Accept, HeaderSet, ETags, Authorization, \ File "/app/.heroku/python/lib/python3.10/site-packages/werkzeug/datastructures.py", line 16, in <module> from collections import Container, Iterable, MutableSet ImportError: cannot import name 'Container' from 'collections' (/app/.heroku/python/lib/python3.10/collections/__init__.py)
Container, Iterable, MutableSet, and the other abstract base classes have lived in collections.abc since Python 3.3, and the deprecated aliases in plain collections were removed in Python 3.10 - which is why the old Werkzeug release in your traceback fails to import them. Either run the app on an older Python runtime or upgrade Flask/Werkzeug to versions that import from collections.abc.
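For illustration, this is the import change involved - a small sketch, not the actual Werkzeug patch; the names resolve from collections.abc on every supported Python version, including 3.10:
# On Python 3.10+ the plain `collections` aliases are gone, but collections.abc still has them.
from collections.abc import Container, Iterable, MutableSet

print(issubclass(list, Container), issubclass(list, Iterable))  # True True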
4
5
71,766,568
2022-4-6
https://stackoverflow.com/questions/71766568/convert-string-to-int-before-querying-django-orm
I have a model class Entry(models.Model): maca = models.CharField(max_length=100, blank=True, null=True) This field will accept only numbers (cannot set char field to integer field for business reasons) Now I have to get all entries that have maca greater than 90 What I'm trying to do is this: Entry.objects.filter(maca__gte=90) But get isn't working because the maca is a string. How can I convert maca to int before filtering? Or something like this? Thanks
Try this: from django.db.models.functions import Cast from django.db.models import IntegerField Entry.objects.annotate(maca_integer=Cast('maca', output_field=IntegerField())).filter(maca_integer__gte=90)
4
6
71,764,027
2022-4-6
https://stackoverflow.com/questions/71764027/numpy-installation-fails-when-installing-with-poetry-on-m1-and-macos
I have a Numpy as a dependency in Poetry pyproject.toml file and it fails to install. error: the clang compiler does not support 'faltivec', please use -maltivec and include altivec.h explicitly error: Command "clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX12.sdk -DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -DNO_ATLAS_INFO=3 -DHAVE_CBLAS -Ibuild/src.macosx-12-arm64-3.9/numpy/core/src/umath -Ibuild/src.macosx-12-arm64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-12-arm64-3.9/numpy/core/src/common -Inumpy/core/include -Ibuild/src.macosx-12-arm64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/Users/moo/Library/Caches/pypoetry/virtualenvs/dex-ohlcv-qY1n4duk-py3.9/include -I/opt/homebrew/Cellar/[email protected]/3.9.12/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-12-arm64-3.9/numpy/core/src/common -Ibuild/src.macosx-12-arm64-3.9/numpy/core/src/npymath -c numpy/core/src/multiarray/array_assign_scalar.c -o build/temp.macosx-12-arm64-3.9/numpy/core/src/multiarray/array_assign_scalar.o -MMD -MF build/temp.macosx-12-arm64-3.9/numpy/core/src/multiarray/array_assign_scalar.o.d -faltivec -I/System/Library/Frameworks/vecLib.framework/Headers" failed with exit status 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for numpy Failed to build numpy macOS Big Sur Python 3.9 installed through Homebrew How to solve it? If I install Numpy with pip it installs fine.
Make sure you have OpenBLAS installed from Homebrew: brew install openblas Then before running any installation script, make sure you tell your shell environment to use Homebrew OpenBLAS installation export OPENBLAS="$(brew --prefix openblas)" poetry install If you get an error File "/private/var/folders/tx/50wn88yd40v2_6_7fvfr98z00000gn/T/pip-build-env-uq7qd2ba/overlay/lib/python3.9/site-packages/wheel/bdist_wheel.py", line 252, in get_tag plat_name = get_platform(self.bdist_dir) File "/private/var/folders/tx/50wn88yd40v2_6_7fvfr98z00000gn/T/pip-build-env-uq7qd2ba/overlay/lib/python3.9/site-packages/wheel/bdist_wheel.py", line 48, in get_platform result = calculate_macosx_platform_tag(archive_root, result) File "/private/var/folders/tx/50wn88yd40v2_6_7fvfr98z00000gn/T/pip-build-env-uq7qd2ba/overlay/lib/python3.9/site-packages/wheel/macosx_libfile.py", line 356, in calculate_macosx_platform_tag assert len(base_version) == 2 AssertionError This should have been fixed in the recent enough Python packaging tools. Make sure Poetry is recent enough version Numpy is recent enough version Any dependency using Numpy, like Scipy or Pyarrrow are also the most recent version For example in your pyproject.toml [tool.poetry.dependencies] # For Scipy compatibility python = ">=3.9,<3.11" scipy = "^1.8.0" pyarrow = "^7.0.0" Even if this still fails you can try to preinstall scipy with pip before running poetry install in Poetry virtualenv (enter with poetry shell) This should pick up the precompiled scipy wheel. When the precompiled wheel is present, Poetry should not try to install it again and then fail it the build step. poetry shell pip install scipy Collecting scipy Downloading scipy-1.8.0-cp39-cp39-macosx_12_0_arm64.whl (28.7 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 28.7/28.7 MB 6.0 MB/s eta 0:00:00 Requirement already satisfied: numpy<1.25.0,>=1.17.3 in /Users/moo/Library/Caches/pypoetry/virtualenvs/dex-ohlcv-qY1n4duk-py3.9/lib/python3.9/site-packages (from scipy) (1.22.3) Installing collected packages: scipy Successfully installed scipy-1.8.0 After this run Poetry normally: poetry install
13
17
71,710,921
2022-4-1
https://stackoverflow.com/questions/71710921/python-run-multiple-async-functions-simultaneously
I'm essentially making a pinger that has a 2d list of key / webhook pairs; after pinging a key, it sends the response to the corresponding webhook. The 2d list goes as follows: some_list = [["key1", "webhook1"], ["key2", "webhook2"]] My program is essentially a loop, and I'm not too sure how I can rotate the some_list data in the function. Here's a little demo of what my script looks like: async def do_ping(some_pair): async with aiohttp.ClientSession() as s: tasks = await gen_tasks(s, some_pair) results = await asyncio.gather(*tasks) sleep(10) await do_ping(some_pair) I've tried: async def main(): for entry in some_list: asyncio.run(do_ping(entry)) but due to the do_ping function being a self-calling loop, it just calls the first one over and over again, and never gets to the ones after it. I'm hoping to find a solution to this, whether it's threading or the like, and if you have a better way of structuring the some_list values (which I assume would be a dictionary), feel free to drop that feedback as well.
You made your method recursive await do_ping(some_pair), it never ends for the loop in main to continue. I would restructure the application like this: async def do_ping(some_pair): async with aiohttp.ClientSession() as s: while True: tasks = await gen_tasks(s, some_pair) results = await asyncio.gather(*tasks) await asyncio.sleep(10) async def main(): tasks = [do_ping(entry) for entry in some_list] await asyncio.gather(*tasks) if __name__ == "__main__": asyncio.run(main()) Alternatively you could move the repeat and sleeping logic into the main: async def do_ping(some_pair): async with aiohttp.ClientSession() as s: tasks = await gen_tasks(s, some_pair) results = await asyncio.gather(*tasks) async def main(): while True: tasks = [do_ping(entry) for entry in some_list] await asyncio.gather(*tasks) await asyncio.sleep(10) if __name__ == "__main__": asyncio.run(main()) You could also start the tasks before doing a call to sleep, and gather them afterwards. That would make the pings more consistently start at 10 second intervals instead of being 10 seconds + the time it takes to gather the results: async def main(): while True: tasks = [ asyncio.create_task(do_ping(entry)) for entry in some_list ] await asyncio.sleep(10) await asyncio.wait(tasks) EDIT As pointed out by creolo you should only create a single ClientSession object. See https://docs.aiohttp.org/en/stable/client_reference.html Session encapsulates a connection pool (connector instance) and supports keepalives by default. Unless you are connecting to a large, unknown number of different servers over the lifetime of your application, it is suggested you use a single session for the lifetime of your application to benefit from connection pooling. async def do_ping(session, some_pair): tasks = await gen_tasks(session, some_pair) results = await asyncio.gather(*tasks) async def main(): async with aiohttp.ClientSession() as session: while True: tasks = [ asyncio.create_task(do_ping(session, entry)) for entry in some_list ] await asyncio.sleep(10) await asyncio.wait(tasks)
4
8
71,753,605
2022-4-5
https://stackoverflow.com/questions/71753605/polars-read-csv-with-german-number-formatting
Is there a way in Polars to read in a CSV with German number formatting, as is possible in pandas.read_csv() with the parameters "decimal" and "thousands"?
Currently, the Polars read_csv method does not expose those parameters. However, there is an easy workaround to convert them. For example, with this csv, allow Polars to read the German-formatted numbers as utf8. import polars as pl my_csv = b"""col1\tcol2\tcol3 1.234,5\tabc\t1.234.567 9.876\tdef\t3,21 """ df = pl.read_csv(my_csv, separator="\t") print(df) shape: (2, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ col1 ┆ col2 ┆ col3 β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ str ┆ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•β•β•ͺ══════β•ͺ═══════════║ β”‚ 1.234,5 ┆ abc ┆ 1.234.567 β”‚ β”‚ 9.876 ┆ def ┆ 3,21 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ From here, the conversion is just a few lines of code: df = df.with_columns( pl.col("col1", "col3") .str.replace_all(r"\.", "") .str.replace(",", ".") .cast(pl.Float64) # or whatever datatype needed ) print(df) shape: (2, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ col1 ┆ col2 ┆ col3 β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ f64 ┆ str ┆ f64 β”‚ β•žβ•β•β•β•β•β•β•β•β•ͺ══════β•ͺ════════════║ β”‚ 1234.5 ┆ abc ┆ 1.234567e6 β”‚ β”‚ 9876.0 ┆ def ┆ 3.21 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Just be careful to apply this logic only to numbers encoded in German locale. It will mangle numbers formatted in other locales.
4
5
71,759,316
2022-4-5
https://stackoverflow.com/questions/71759316/easily-convert-string-column-to-pl-datetime-in-polars
Consider a Polars data frame with a column of str type that indicates the date in the format '27 July 2020'. I would like to convert this column to the polars.datetime type, which is distinct from the Python standard datetime. import polars as pl from datetime import datetime df = pl.DataFrame({ "id": [1, 2], "event_date": ["27 July 2020", "31 December 2020"] }) df = df.with_columns( pl.col("event_date").map_elements(lambda x: x.replace(" ", "-")) .map_elements(lambda x: datetime.strptime(x, "%d-%B-%Y")) ) shape: (2, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ event_date β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ datetime[ΞΌs] β”‚ β•žβ•β•β•β•β•β•ͺ═════════════════════║ β”‚ 1 ┆ 2020-07-27 00:00:00 β”‚ β”‚ 2 ┆ 2020-12-31 00:00:00 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ Suppose we try to process df further to create a new column indicating the quarter of the year an event took place. df.with_columns( pl.col("event_date").map_elements(lambda x: x.month) .map_elements(lambda x: 1 if x in range(1,4) else 2 if x in range(4,7) else 3 if x in range(7,10) else 4) .alias("quarter") ) shape: (2, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ event_date ┆ quarter β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ datetime[ΞΌs] ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════════════════════β•ͺ═════════║ β”‚ 1 ┆ 2020-07-27 00:00:00 ┆ 3 β”‚ β”‚ 2 ┆ 2020-12-31 00:00:00 ┆ 4 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ How would I do this in Polars without applying custom lambdas through map_elements?
The easiest way to convert strings to Date/Datetime is to use Polars' own functions: .str.to_date() .str.to_datetime() df.with_columns( pl.col("event_date").str.to_datetime("%d %B %Y") ) shape: (2, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ event_date β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ datetime[ΞΌs] β”‚ β•žβ•β•β•β•β•β•ͺ═════════════════════║ β”‚ 1 ┆ 2020-07-27 00:00:00 β”‚ β”‚ 2 ┆ 2020-12-31 00:00:00 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ The Temporal section of the docs shows the supported functions in the .dt namespace. In the case of your second example, there is a dedicated quarter expression: .dt.quarter() df = df.with_columns( pl.col("event_date").str.to_datetime("%d %B %Y") ).with_columns( pl.col("event_date").dt.quarter().alias("quarter") ) shape: (2, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ id ┆ event_date ┆ quarter β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ datetime[ΞΌs] ┆ i8 β”‚ β•žβ•β•β•β•β•β•ͺ═════════════════════β•ͺ═════════║ β”‚ 1 ┆ 2020-07-27 00:00:00 ┆ 3 β”‚ β”‚ 2 ┆ 2020-12-31 00:00:00 ┆ 4 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
25
45
71,758,283
2022-4-5
https://stackoverflow.com/questions/71758283/how-is-python-polars-treating-the-index
I want to try out polars in Python so what I want to do is concatenate several dataframes that are read from jsons. When I change the index to date and have a look at lala1.head() I see that the column date is gone, so I basically lose the index. Is there a better solution or do I need to sort by date, which basically does the same as setting the index to date? import polars as pl quarterly_balance_df = pl.read_json('../AAPL/single_statements/1985-09-30-quarterly_balance.json') q1 = quarterly_balance_df.lazy().with_columns(pl.col("date").str.to_date()) quarterly_balance_df = q1.collect() q2 = quarterly_balance_df.lazy().with_columns(pl.col("fillingDate").str.to_date()) quarterly_balance_df = q2.collect() q3 = quarterly_balance_df.lazy().with_columns(pl.col("acceptedDate").str.to_date()) quarterly_balance_df = q3.collect() quarterly_balance_df2 = pl.read_json('../AAPL/single_statements/1986-09-30-quarterly_balance.json') q1 = quarterly_balance_df2.lazy().with_columns(pl.col("date").str.to_date()) quarterly_balance_df2 = q1.collect() q2 = quarterly_balance_df2.lazy().with_columns(pl.col("fillingDate").str.to_date()) quarterly_balance_df2 = q2.collect() q3 = quarterly_balance_df2.lazy().with_columns(pl.col("acceptedDate").str.to_date()) quarterly_balance_df2 = q3.collect() lala1 = pl.from_pandas(quarterly_balance_df.to_pandas().set_index('date')) lala2 = pl.from_pandas(quarterly_balance_df.to_pandas().set_index('date')) test = pl.concat([lala1,lala2])
Polars intentionally eliminates the concept of an index. From the "Coming from Pandas" section in the User Guide: Polars aims to have predictable results and readable queries, as such we think an index does not help us reach that objective. Indeed, the from_pandas method ignores any index. For example, if we start with this data: import polars as pl df = pl.DataFrame( { "key": [1, 2], "var1": ["a", "b"], "var2": ["r", "s"], } ) print(df) shape: (2, 3) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ key ┆ var1 ┆ var2 β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ str β”‚ β•žβ•β•β•β•β•β•ͺ══════β•ͺ══════║ β”‚ 1 ┆ a ┆ r β”‚ β”‚ 2 ┆ b ┆ s β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ Now, if we export this Polars dataset to Pandas, set key as the index in Pandas, and then re-import to Polars, you'll see the 'key' column disappear. pl.from_pandas(df.to_pandas().set_index("key")) shape: (2, 2) β”Œβ”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β” β”‚ var1 ┆ var2 β”‚ β”‚ --- ┆ --- β”‚ β”‚ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•ͺ══════║ β”‚ a ┆ r β”‚ β”‚ b ┆ s β”‚ β””β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”˜ This is why your Date column disappeared. In Polars, you can sort, summarize, or join by any set of columns in a DataFrame. No need to declare an index. I recommend looking through the Polars User Guide. It's a great place to start. And there's a section for those coming from Pandas.
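Applied to the original goal (concatenating the yearly statements and keeping them ordered by date without an index), a minimal all-Polars sketch; the file paths and column names are taken from the question and may need adjusting:
import polars as pl

frames = []
for path in [
    "../AAPL/single_statements/1985-09-30-quarterly_balance.json",
    "../AAPL/single_statements/1986-09-30-quarterly_balance.json",
]:
    frames.append(
        pl.read_json(path).with_columns(
            pl.col("date").str.to_date(),
            pl.col("fillingDate").str.to_date(),
            pl.col("acceptedDate").str.to_date(),
        )
    )

# Sorting by the date column replaces "setting the index to date".
combined = pl.concat(frames).sort("date")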
7
12
71,737,836
2022-4-4
https://stackoverflow.com/questions/71737836/how-to-use-polars-with-plotly-without-converting-to-pandas
I would like to replace Pandas with Polars but I was not able to find out how to use Polars with Plotly without converting to Pandas. I wonder if there is a way to completely cut Pandas out of the process. Consider the following test data: import polars as pl import numpy as np import plotly.express as px df = pl.DataFrame( { "nrs": [1, 2, 3, None, 5], "names": ["foo", "ham", "spam", "egg", None], "random": np.random.rand(5), "groups": ["A", "A", "B", "C", "B"], } ) fig = px.bar(df, x='names', y='random') fig.show() I would like this code to show the bar chart in a Jupyter notebook but instead it returns an error: /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/polars/internals/frame.py:1483: UserWarning: accessing series as Attribute of a DataFrame is deprecated warnings.warn("accessing series as Attribute of a DataFrame is deprecated") It is possible to transform the Polars data frame to a Pandas data frame with df = df.to_pandas(). Then, it works. However, is there another, simpler and more elegant solution?
Yes, no need for converting to a Pandas dataframe. Someone (sa-) has requested supporting a better option here and included a workaround for it. "The workaround that I use right now is px.line(x=df["a"], y=df["b"]), but it gets unwieldy if the name of the data frame is too big" For the OP's code example, the approach of specifying the dataframe columns explicitly works. I find in addition to specifying the dataframe columns with px.bar(x=df["names"], y=df["random"]) - or - px.bar(df, x=df["names"], y=df["random"]), casting to a list can also work: import polars as pl import numpy as np import plotly.express as px df = pl.DataFrame( { "nrs": [1, 2, 3, None, 5], "names": ["foo", "ham", "spam", "egg", None], "random": np.random.rand(5), "groups": ["A", "A", "B", "C", "B"], } ) px.bar(df, x=list(df["names"]), y=list(df["random"])) Knowing polars better, you may see some other options once you see the idea of the workaround. The example posted there is simpler, instead of px.line(df, x="a", y="b") like you could use for a Pandas dataframe, you use px.line(x=df["a"], y=df["b"]). With polars, that is: import polars as pl import plotly.express as px df = pl.DataFrame({"a":[1,2,3,4,5], "b":[1,4,9,16,25]}) px.line(x=df["a"], y=df["b"]) (Note that using plotly.express requires Pandas to be installed, see here and here. I used plotly.express in my answer because it was closer to the OP. The code could be adapted to using plotly.graph_objects if there was a desire to not have Pandas installed & involved at all.)
12
12
71,722,882
2022-4-3
https://stackoverflow.com/questions/71722882/how-do-you-print-the-django-sql-query-for-an-aggregation
If I have a Django queryset, print(queryset.query) shows me the SQL statement so I can validate it. But aggregations never return a queryset. How do you print those queries out? I guess I can turn on debug logging for the ORM and find them that way, but it seems like I should be able to get the SQL right before the execution engine sends it off to Postgres.
You can use connection.queries after your aggregation query. Example: ... from django.db import connection def your_view(request): # Your view logic and aggregate queryset print(connection.queries) return render(request, 'index.html', {}) The output is a list of dictionaries, one per executed query, each containing the raw SQL and the time it took. Note that connection.queries is only populated while DEBUG is True.
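A minimal sketch of how this looks for an aggregation specifically; Book and its price field are hypothetical stand-ins for your own model, and connection.queries is only filled while DEBUG = True:
from django.db import connection
from django.db.models import Avg

from myapp.models import Book  # hypothetical model

# Aggregations execute immediately and return a dict, not a queryset...
result = Book.objects.aggregate(avg_price=Avg("price"))

# ...so inspect the SQL that was just executed via connection.queries.
print(connection.queries[-1]["sql"])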
9
3
71,751,434
2022-4-5
https://stackoverflow.com/questions/71751434/pylance-type-of-pandas-method-is-partially-unknown
If I try to validate code that uses pandas methods with Pylance in strict mode, I get a validation error. import pandas as pd ser: pd.Series[float] = pd.Series([.1, .2, .5, .3]) print(ser.max()) Pylance in strict mode returns the error: Type of "max" is partially unknown Am I doing something wrong? Can I avoid this error without switching Pylance to basic mode?
According to pyright documentation, one can turn off this specific check by creating pyrightconfig.json file in the same project folder with the next content: { "reportUnknownMemberType":"none", } So you can be still in strict mode, but with one type of checks disabled.
6
2
71,758,114
2022-4-5
https://stackoverflow.com/questions/71758114/python-list-comprehension-with-complex-data-structures
I'm trying to flatten some mixed arrays in Python using a list comprehension (LC). I'm having some trouble figuring out how to structure it. Here are the arrays I'm trying to flatten: arr_1 = [1, [2, 3], 4, 5] arr_2 = [1,[2,3],[[4,5]]] I tried this method for arr_1 but get "TypeError: 'int' object is not iterable" print([item if type(items) is list else items for items in arr_1 for item in items]) So I decided to break it into parts to see where it's failing by using this def check(item): return item; print([check(item) if type(items) is list else check(items) for items in [1, [2, 3], 4, 5] for items in arr_2]) Through the debugger I found that it's failing at the 2d array in for items in [1, [2, 3], 4, 5] I don't need the LC to be in one line, but I just wanted to know how to do it in a single nested LC if it's even possible.
Using an internal stack and iter's second form to simulate a while loop: def flatten(obj): return [x for stack in [[obj]] for x, in iter(lambda: stack and [stack.pop()], []) if isinstance(x, int) or stack.extend(reversed(x))] print(flatten([1, [2, 3], 4, 5])) print(flatten([1, [2, 3], [[4, 5]]])) print(flatten([1, [2, [], 3], [[4, 5]]])) Output (Try it online!): [1, 2, 3, 4, 5] [1, 2, 3, 4, 5] [1, 2, 3, 4, 5] Slight variation, splitting the "long" line into two (Try it online!): def flatten(obj): return [x for stack in [[obj]] for _ in iter(lambda: stack, []) for x in [stack.pop()] if isinstance(x, int) or stack.extend(reversed(x))] To explain it a bit, here's roughly the same with ordinary code: def flatten(obj): result = [] stack = [obj] while stack: x = stack.pop() if isinstance(x, int): result.append(x) else: stack.extend(reversed(x)) return result If the order doesn't matter, we can use a queue instead (inspired by 0x263A's comment), although it's less memory-efficient (Try it online!): def flatten(obj): return [x for queue in [[obj]] for x in queue if isinstance(x, int) or queue.extend(x)] We can fix the order if instead of putting each list's contents at the end of the queue, we insert them right after the list (which is less time-efficient) in the "priority" queue (Try it online!): def flatten(obj): return [x for pqueue in [[obj]] for i, x in enumerate(pqueue, 1) if isinstance(x, int) or pqueue.__setitem__(slice(i, i), x)]
6
19
71,718,167
2022-4-2
https://stackoverflow.com/questions/71718167/importerror-cannot-import-name-escape-from-jinja2
I am getting the error ImportError: cannot import name 'escape' from 'jinja2' When trying to run code using the following requirements.txt: chart_studio==1.1.0 dash==2.1.0 dash_bootstrap_components==1.0.3 dash_core_components==2.0.0 dash_html_components==2.0.0 dash_renderer==1.9.1 dash_table==5.0.0 Flask==1.1.2 matplotlib==3.4.3 numpy==1.20.3 pandas==1.3.4 plotly==5.5.0 PyYAML==6.0 scikit_learn==1.0.2 scipy==1.7.1 seaborn==0.11.2 statsmodels==0.12.2 urllib3==1.26.7 Tried pip install jinja2 But the requirement is already satisfied. Running this code on a windows system.
Jinja is a dependency of Flask, and Flask 1.x imports the escape helper from Jinja; that import was removed in Jinja 3.1. To fix this issue, simply update to Flask 2.x in your requirements.txt, where Flask no longer imports escape from Jinja: Flask>=2.2.2 Also, do note that Flask 1.x is no longer supported by the team. If you want to continue to use this older version, this GitHub issue may help.
136
189
71,703,734
2022-4-1
https://stackoverflow.com/questions/71703734/how-to-upgrade-version-of-pyenv-on-ubuntu
I wanted to install Python 3.10, but that version is not available in the pyenv version list (checked with pyenv install --list). People suggested upgrading pyenv, but I do not see any help related to updating pyenv.
pyenv isn't really 'installed' in a traditional sense, it's just a git checkout. All you have to do to update is cd ~/.pyenv git pull That also updates the list of available python versions.
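For example, a typical session after updating might look like this (the exact 3.10.x patch versions you see will depend on when you pull):
cd ~/.pyenv && git pull                  # update pyenv and its python-build definitions
pyenv install --list | grep ' 3\.10'     # 3.10.x versions should now be listed
pyenv install 3.10.4                     # pick whichever 3.10.x appears
pyenv global 3.10.4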
22
40
71,674,202
2022-3-30
https://stackoverflow.com/questions/71674202/how-do-i-make-mypy-recognize-non-nullable-orm-attributes
Mypy infers ORM non-nullable instance attributes as optionals. Filename: test.py from sqlalchemy.orm import decl_api, registry from sqlalchemy import BigInteger, Column, String mapper_registry = registry() class Base(metaclass=decl_api.DeclarativeMeta): __abstract__ = True registry = mapper_registry metadata = mapper_registry.metadata __init__ = mapper_registry.constructor class Person(Base): __tablename__ = "persons" id = Column(BigInteger, primary_key=True, autoincrement=True) name = Column(String(40), nullable=False) def main(person: Person): person_id = person.id person_name = person.name reveal_locals() Running mypy test.py yields: test.py:27: note: Revealed local types are: test.py:27: note: person: test.Person test.py:27: note: person_id: Union[builtins.int, None] test.py:27: note: person_name: Union[builtins.str, None] As far as my understanding goes, person_id and person_name should have been int and str respectively since they are set as non-nullable. What am I missing here? Relevant libraries: SQLAlchemy 1.4.25 sqlalchemy2-stubs 0.0.2a15 mypy 0.910 mypy-extensions 0.4.3
I had the same question when evaluating a nullable=False column with mypy. One of my teammates found the answer in the SQLAlchemy docs: https://docs.sqlalchemy.org/en/14/orm/extensions/mypy.html#introspection-of-columns-based-on-typeengine The types are by default always considered to be Optional, even for primary key and non-nullable columns. The reason is that while the database columns "id" and "name" can't be NULL, the Python attributes id and name most certainly can be None without an explicit constructor. The article also lists mitigations: declare the mapped types of the columns explicitly instead of letting the plugin infer them.
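A sketch of that mitigation applied to the Person model from the question (assuming the sqlalchemy2-stubs mypy plugin is enabled): giving the columns explicit, non-Optional annotations overrides the inferred Optional types:
from sqlalchemy import BigInteger, Column, String

class Person(Base):  # Base as defined in the question
    __tablename__ = "persons"

    # Explicit annotations tell mypy these attributes are plain int / str.
    id: int = Column(BigInteger, primary_key=True, autoincrement=True)
    name: str = Column(String(40), nullable=False)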
6
1
71,679,094
2022-3-30
https://stackoverflow.com/questions/71679094/python-vs-javascript-execution-time
I tried solving Maximum Subarray using both Javascript(Node.js) and Python, with brute force algorithm. Here's my code: Using python: from datetime import datetime from random import randint arr = [randint(-1000,1000) for i in range(1000)] def bruteForce(a): l = len(a) max = 0 for i in range(l): sum = 0 for j in range(i, l): sum += a[j] if(sum > max): max = sum return max start = datetime.now() bruteForce(arr) end = datetime.now() print(format(end-start)) And Javascript: function randInt(start, end) { return Math.floor(Math.random() * (end - start + 1)) } var arr = Array(1000).fill(randInt(-1000, 1000)) function bruteForce(arr) { var max = 0 for (let i = 0; i < arr.length; i++) { var sum = 0 for (let j = i; j < arr.length; j++) { sum += arr[j] max = Math.max(max, sum) } } return max } var start = performance.now() bruteForce(arr) var end = performance.now() console.log(end - start) Javascript got a result of about 0.187 seconds, while python got 4.75s - about 25 times slower. Does my code not optimized or python is really that slower than javascript?
Python is not per se slower than Javascript, it depends on the implementation. Here the results comparing node and PyPy which also uses JIT: > /pypy39/python brute.py 109.8594 ms N= 10000 result= 73682 > node brute.js 167.4442000091076 ms N= 10000 result= 67495 So we could even say "python is somewhat faster" ... And if we use Cython, with a few type-hints, it will be again a lot faster - actual full C speed: > cythonize -a -i brutec.pyx > python -c "import brutec" 69.28919999999998 ms N= 10000 result= 52040 To make the comparison reasonable, I fixed a few issues in your scripts: Fix: the js script filled an array with all the same values from a single random Does the same basic kind of looping in Python - instead of using the range iterator (otherwise its a little slower) Use the same time format and increase the array length to 10000 - otherwise the times are too small regarding resolution and thread switching jitter Python code: from time import perf_counter as clock from random import randint N = 10000 arr = [randint(-1000,1000) for i in range(N)] def bruteForce(a): l = len(a) max = 0 i = 0 while i < l: sum = 0 j = i while j < l: sum += a[j] if sum > max: max = sum j += 1 i += 1 return max start = clock() r = bruteForce(arr) end = clock() print((end - start) * 1000, 'ms', 'N=', N, 'result=', r) ##print(arr[:10]) JS code: var start = -1000, end = 1000, N=10000 var arr = Array.from({length: N}, () => Math.floor(Math.random() * (end - start + 1) + start)) function bruteForce(arr) { var max = 0 for (let i = 0; i < arr.length; i++) { var sum = 0 for (let j = i; j < arr.length; j++) { sum += arr[j]; max = Math.max(max, sum) //~ if (sum > max) max = sum; } } return max } var start = performance.now() r = bruteForce(arr) var end = performance.now() console.log(end - start, 'ms', 'N=', N, 'result=', r) //~ console.log(arr.slice(0, 10)) Code for Cython (or Python), enriched with a few type-hints: import cython from time import perf_counter as clock from random import randint N = 10000 arr = [randint(-1000,1000) for i in range(N)] def bruteForce(arr): l: cython.int = len(arr) assert l <= 10000 a: cython.int[10000] = arr # copies mem from Python array max: cython.int = 0 i: cython.int = 0 while i < l: sum: cython.int = 0 j: cython.int = i while j < l: sum += a[j] if sum > max: max = sum j += 1 i += 1 return max start = clock() r = bruteForce(arr) end = clock() print((end - start) * 1000, 'ms', 'N=', N, 'result=', r) ##print(arr[:10]) (Done on a slow notebook)
12
19
71,708,147
2022-4-1
https://stackoverflow.com/questions/71708147/mlflow-tracking-ui-not-showing-experiments-on-local-machine-laptop
I am a beginner in mlflow and was trying to set it up locally using Anaconda 3. I have created a new environment in anaconda and install mlflow and sklearn in it. Now I am using jupyter notebook to run my sample code for mlflow. ''' import os import warnings import sys import pandas as pd import numpy as np from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score from sklearn.model_selection import train_test_split from sklearn.linear_model import ElasticNet from urllib.parse import urlparse import mlflow import mlflow.sklearn import logging logging.basicConfig(level=logging.WARN) logger = logging.getLogger(__name__) warnings.filterwarnings("ignore") np.random.seed(40) mlflow.set_tracking_uri("file:///Users/Swapnil/Documents/LocalPython/MLFLowDemo/mlrun") mlflow.get_tracking_uri() mlflow.get_experiment #experiment_id = mlflow.create_experiment("Mlflow_demo") experiment_id = mlflow.create_experiment("Demo3") experiment = mlflow.get_experiment(experiment_id) print("Name: {}".format(experiment.name)) print("Experiment_id: {}".format(experiment.experiment_id)) print("Artifact Location: {}".format(experiment.artifact_location)) print("Tags: {}".format(experiment.tags)) print("Lifecycle_stage: {}".format(experiment.lifecycle_stage)) mlflow.set_experiment("Demo3") def eval_metrics(actual, pred): rmse = np.sqrt(mean_squared_error(actual, pred)) mae = mean_absolute_error(actual, pred) r2 = r2_score(actual, pred) return rmse, mae, r2 # Read the wine-quality csv file from the URL csv_url =\ 'http://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv' try: data = pd.read_csv(csv_url, sep=';') except Exception as e: logger.exception( "Unable to download training & test CSV, check your internet connection. Error: %s", e) data.head(2) def train_model(data, alpha, l1_ratio): # Split the data into training and test sets. (0.75, 0.25) split. train, test = train_test_split(data) # The predicted column is "quality" which is a scalar from [3, 9] train_x = train.drop(["quality"], axis=1) test_x = test.drop(["quality"], axis=1) train_y = train[["quality"]] test_y = test[["quality"]] # Set default values if no alpha is provided alpha = alpha l1_ratio = l1_ratio # Execute ElasticNet lr = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, random_state=42) lr.fit(train_x, train_y) # Evaluate Metrics predicted_qualities = lr.predict(test_x) (rmse, mae, r2) = eval_metrics(test_y, predicted_qualities) # Print out metrics print("Elasticnet model (alpha=%f, l1_ratio=%f):" % (alpha, l1_ratio)) print(" RMSE: %s" % rmse) print(" MAE: %s" % mae) print(" R2: %s" % r2) # Log parameter, metrics, and model to MLflow with mlflow.start_run(experiment_id = experiment_id): mlflow.log_param("alpha", alpha) mlflow.log_param("l1_ratio", l1_ratio) mlflow.log_metric("rmse", rmse) mlflow.log_metric("r2", r2) mlflow.log_metric("mae", mae) mlflow.sklearn.log_model(lr, "model") train_model(data, 0.5, 0.5) train_model(data, 0.5, 0.3) train_model(data, 0.4, 0.3) ''' using above code, I am successfully able to create 3 different experiment as I can see the folders created in my local directory as shown below: enter image description here Now, I am trying to run the mlflow ui using the jupyter terminal in my chrome browser and I am able to open the mlflow ui but cannot see and experiments as shown below: enter image description here Could you help me in finding where I am going wrong?
Where do you run the mlflow ui command? If you pass the tracking URI path as an argument, it should work: mlflow ui --backend-store-uri file:///Users/Swapnil/Documents/LocalPython/MLFLowDemo/mlrun
9
5
71,716,936
2022-4-2
https://stackoverflow.com/questions/71716936/how-to-log-stacktrace-in-json-format-python
I'm using structlog for logging and want the exception/stacktrace to be printed in JSON format. Currently it's not formatted and arrives as one string, which is not very readable: { "message": "Error info with an exc", "timestamp": "2022-03-31T13:32:33.928188+00:00", "logger": "__main__", "level": "error", "exception": "Traceback (most recent call last):\n File \"../../main.py\", line 21, in <module>\n assert 'foo' == 'bar'\nAssertionError" } I want the exception in JSON format, like: { "message": "Error info with an exc", "timestamp": "2022-03-31T13:32:33.928188+00:00", "logger": "__main__", "level": "error", "exception": { "File": "../.../main.py", "line": 21, "function": "<module>", "errorStatement": "assert 'foo' == 'bar'", "errorType":"AssertionError", } } This is just a small example. I am also using the traceback library and passing the stack trace, which gets printed as one large string block. Is there any library available that does stacktrace JSON formatting, or do we have to write a custom function?
Here is a snippet which does that: https://gitlab.com/-/snippets/2284049 It will eventually land directly in structlog. edit: https://github.com/hynek/structlog/pull/407 has been merged and will be part of v22.1.
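That work shipped as the dict_tracebacks processor; a minimal configuration sketch, assuming structlog >= 22.1 is installed:
import structlog

structlog.configure(
    processors=[
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.add_log_level,
        structlog.processors.dict_tracebacks,   # renders exceptions as structured data
        structlog.processors.JSONRenderer(),
    ]
)

log = structlog.get_logger(__name__)
try:
    assert 'foo' == 'bar'
except AssertionError:
    log.exception("Error info with an exc")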
4
5
71,712,258
2022-4-1
https://stackoverflow.com/questions/71712258/error-could-not-build-wheels-for-backports-zoneinfo-which-is-required-to-insta
The Heroku Build is returning this error when I'm trying to deploy a Django application for the past few days. The Django Code and File Structure are the same as Django's Official Documentation and Procfile is added in the root folder. Log - -----> Building on the Heroku-20 stack -----> Determining which buildpack to use for this app -----> Python app detected -----> No Python version was specified. Using the buildpack default: python-3.10.4 To use a different version, see: https://devcenter.heroku.com/articles/python-runtimes Building wheels for collected packages: backports.zoneinfo Building wheel for backports.zoneinfo (pyproject.toml): started Building wheel for backports.zoneinfo (pyproject.toml): finished with status 'error' ERROR: Command errored out with exit status 1: command: /app/.heroku/python/bin/python /app/.heroku/python/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmpqqu_1qow cwd: /tmp/pip-install-txfn1ua9/backports-zoneinfo_a462ef61051d49e7bf54e715f78a34f1 Complete output (41 lines): running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-3.10 creating build/lib.linux-x86_64-3.10/backports copying src/backports/__init__.py -> build/lib.linux-x86_64-3.10/backports creating build/lib.linux-x86_64-3.10/backports/zoneinfo copying src/backports/zoneinfo/_zoneinfo.py -> build/lib.linux-x86_64-3.10/backports/zoneinfo copying src/backports/zoneinfo/_tzpath.py -> build/lib.linux-x86_64-3.10/backports/zoneinfo copying src/backports/zoneinfo/_common.py -> build/lib.linux-x86_64-3.10/backports/zoneinfo copying src/backports/zoneinfo/_version.py -> build/lib.linux-x86_64-3.10/backports/zoneinfo copying src/backports/zoneinfo/__init__.py -> build/lib.linux-x86_64-3.10/backports/zoneinfo running egg_info writing src/backports.zoneinfo.egg-info/PKG-INFO writing dependency_links to src/backports.zoneinfo.egg-info/dependency_links.txt writing requirements to src/backports.zoneinfo.egg-info/requires.txt writing top-level names to src/backports.zoneinfo.egg-info/top_level.txt reading manifest file 'src/backports.zoneinfo.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching '*.png' under directory 'docs' warning: no files found matching '*.svg' under directory 'docs' no previously-included directories found matching 'docs/_build' no previously-included directories found matching 'docs/_output' adding license file 'LICENSE' adding license file 'licenses/LICENSE_APACHE' writing manifest file 'src/backports.zoneinfo.egg-info/SOURCES.txt' copying src/backports/zoneinfo/__init__.pyi -> build/lib.linux-x86_64-3.10/backports/zoneinfo copying src/backports/zoneinfo/py.typed -> build/lib.linux-x86_64-3.10/backports/zoneinfo running build_ext building 'backports.zoneinfo._czoneinfo' extension creating build/temp.linux-x86_64-3.10 creating build/temp.linux-x86_64-3.10/lib gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/app/.heroku/python/include/python3.10 -c lib/zoneinfo_module.c -o build/temp.linux-x86_64-3.10/lib/zoneinfo_module.o -std=c99 lib/zoneinfo_module.c: In function β€˜zoneinfo_fromutc’: lib/zoneinfo_module.c:600:19: error: β€˜_PyLong_One’ undeclared (first use in this function); did you mean β€˜_PyLong_New’? 
600 | one = _PyLong_One; | ^~~~~~~~~~~ | _PyLong_New lib/zoneinfo_module.c:600:19: note: each undeclared identifier is reported only once for each function it appears in error: command '/usr/bin/gcc' failed with exit code 1 ---------------------------------------- ERROR: Failed building wheel for backports.zoneinfo Failed to build backports.zoneinfo ERROR: Could not build wheels for backports.zoneinfo, which is required to install pyproject.toml-based projects ! Push rejected, failed to compile Python app. ! Push failed Thanks.
Avoid installing backports.zoneinfo when using python >= 3.9 Edit your requirements.txt file FROM: backports.zoneinfo==0.2.1 TO: backports.zoneinfo;python_version<"3.9" OR: backports.zoneinfo==0.2.1;python_version<"3.9" You can read more about this here and here
72
155
71,688,065
2022-3-31
https://stackoverflow.com/questions/71688065/generic-requirements-txt-for-tensorflow-on-both-apple-m1-and-other-devices
I have a new MacBook with the Apple M1 chipset. To install tensorflow, I follow the instructions here, i.e., installing tensorflow-metal and tensorflow-macos instead of the normal tensorflow package. While this works fine, it means that I can't run the typical pip install -r requirements.txt as long as we have tensorflow in the requirements.txt. If we instead include tensorflow-macos, it'll lead to problems for non-M1 or even non-macOS users. Our library must work on all platforms. Is there a generic install command that installs the correct TensorFlow version depending on whether the computer is a M1 Mac or not? So that we can use a single requirements.txt for everyone? Or if that's not possible, can we pass some flag/option, e.g., pip install -r requirements.txt --m1 to install some variation? What's the simplest and most elegant solution here?
According to this post Is there a way to have a conditional requirements.txt file for my Python application based on platform? You can use conditionals on your requirements.txt, thus tensorflow==2.8.0; sys_platform != 'darwin' or platform_machine != 'arm64' tensorflow-macos==2.8.0; sys_platform == 'darwin' and platform_machine == 'arm64'
6
8
71,673,404
2022-3-30
https://stackoverflow.com/questions/71673404/importerror-cannot-import-name-unicodefun-from-click
When running our lint checks with the Python Black package, an error comes up: ImportError: cannot import name '_unicodefun' from 'click' (/Users/robot/.cache/pre-commit/repo3u71ccm2/py_env-python3.9/lib/python3.9/site-packages/click/init.py)` In researching this, I found the following related issues: ImportError: cannot import name '_unicodefun' from 'click' #2976 ImportError: cannot import name '_unicodefun' from 'click' #6013 How can I solve this problem? Is this a false positive from the linter? Do I need to modify my code?
This has been fixed by Black 22.3.0. Versions before that won't work with click 8.1.0. Incompatible with click 8.1.0 (ImportError: cannot import name '_unicodefun' from 'click') #2964 E.g.: black.yml python-version: 3.8 - name: install black run: | - pip install black==20.8b1 + pip install black==22.3.0 - name: run black run: | black . --check --line-length 100 https://github.com/Clinical-Genomics/cgbeacon2/pull/221/files As a workaround, pin click to the last version via pip install --upgrade click==8.0.2.
184
240
71,695,387
2022-3-31
https://stackoverflow.com/questions/71695387/connecting-to-a-different-google-drive-than-the-one-logged-into-google-colab
recently colab removed the ability to connect to google drive from different accounts other than the one you were logged into in google drive. There was a workaround someone posted with the following code which worked great, until now... !apt-get install -y -qq software-properties-common python-software-properties module-init-tools !add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null !apt-get update -qq 2>&1 > /dev/null !apt-get -y install -qq google-drive-ocamlfuse fuse from google.colab import auth auth.authenticate_user() from oauth2client.client import GoogleCredentials creds = GoogleCredentials.get_application_default() import getpass !google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL vcode = getpass.getpass() !echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} %cd /content !mkdir gdrive %cd gdrive !mkdir "My Drive" %cd .. %cd .. !google-drive-ocamlfuse "/content/gdrive/My Drive" the auth.authenticate_user() line now gives a popup that resembles the recently updated normal authentication process giving this popup I go through the process and log into my other account and I am met with this message. is there any workaround to this? the reason this matters is that I have unlimited storage on my edu account for free but I couldn't get my edu account to work the paid version of colab due to security restrictions on my universities system, hence I use a payed colab on my personal account and store my data on the edu account
edit: do it all in one cell without printing unneeded information edit2 april 28th 2022: changed !sudo apt install google-drive-ocamlfuse >/dev/null 2>&1 to a new line because sudo apt update was failing on colab and preventing it from running the install line. !sudo echo -ne '\n' | sudo add-apt-repository ppa:alessandro-strada/ppa >/dev/null 2>&1 # note: >/dev/null 2>&1 is used to supress printing !sudo apt update >/dev/null 2>&1 !sudo apt install google-drive-ocamlfuse >/dev/null 2>&1 !google-drive-ocamlfuse !sudo apt-get install w3m >/dev/null 2>&1 # to act as web browser !xdg-settings set default-web-browser w3m.desktop >/dev/null 2>&1 # to set default browser %cd /content !mkdir gdrive %cd gdrive !mkdir "My Drive" !google-drive-ocamlfuse "/content/gdrive/My Drive" then just click the link Failure("Error opening URL:https://accounts.google.com/o/oauth2/auth?client_id=56492... and authorize your account I found one solution I am not sure how fast it it in terms of connection to grive etc but it mounts at least. I figured this out thanks to link1, link2 first run this, you'll be promted to (click in the box) and then click enter !sudo add-apt-repository ppa:alessandro-strada/ppa !sudo apt update !sudo apt install google-drive-ocamlfuse !google-drive-ocamlfuse you'll see this output pictured below, just click the link and authorize your account. next for some reason you'll need to install a browser, even though you already authorized your account, so run this !sudo apt-get install w3m # to act as web browser !xdg-settings set default-web-browser w3m.desktop # to set default browser finally mount it %cd /content !mkdir gdrive %cd gdrive !mkdir "My Drive" !google-drive-ocamlfuse "/content/gdrive/My Drive" you should see
4
11
71,745,931
2022-4-5
https://stackoverflow.com/questions/71745931/restrictedpython-call-other-functions-within-user-specified-code
Using Yuri Nudelman's code with the custom _import definition to specify modules to restrict serves as a good base but when calling functions within said user_code naturally due to having to whitelist everything is there any way to permit other user defined functions to be called? Open to other sandboxing solutions although Jupyter didn't seem straight-forward to embed within a web interface. from RestrictedPython import safe_builtins, compile_restricted from RestrictedPython.Eval import default_guarded_getitem def _import(name, globals=None, locals=None, fromlist=(), level=0): safe_modules = ["math"] if name in safe_modules: globals[name] = __import__(name, globals, locals, fromlist, level) else: raise Exception("Don't you even think about it {0}".format(name)) safe_builtins['__import__'] = _import # Must be a part of builtins def execute_user_code(user_code, user_func, *args, **kwargs): """ Executed user code in restricted env Args: user_code(str) - String containing the unsafe code user_func(str) - Function inside user_code to execute and return value *args, **kwargs - arguments passed to the user function Return: Return value of the user_func """ def _apply(f, *a, **kw): return f(*a, **kw) try: # This is the variables we allow user code to see. @result will contain return value. restricted_locals = { "result": None, "args": args, "kwargs": kwargs, } # If you want the user to be able to use some of your functions inside his code, # you should add this function to this dictionary. # By default many standard actions are disabled. Here I add _apply_ to be able to access # args and kwargs and _getitem_ to be able to use arrays. Just think before you add # something else. I am not saying you shouldn't do it. You should understand what you # are doing thats all. restricted_globals = { "__builtins__": safe_builtins, "_getitem_": default_guarded_getitem, "_apply_": _apply, } # Add another line to user code that executes @user_func user_code += "\nresult = {0}(*args, **kwargs)".format(user_func) # Compile the user code byte_code = compile_restricted(user_code, filename="<user_code>", mode="exec") # Run it exec(byte_code, restricted_globals, restricted_locals) # User code has modified result inside restricted_locals. Return it. return restricted_locals["result"] except SyntaxError as e: # Do whaever you want if the user has code that does not compile raise except Exception as e: # The code did something that is not allowed. Add some nasty punishment to the user here. raise i_example = """ import math def foo(): return 7 def myceil(x): return math.ceil(x)+foo() """ print(execute_user_code(i_example, "myceil", 1.5)) Running this returns 'foo' is not defined
First of all, the replacement for the __import__ built-in is implemented incorrectly. That built-in is supposed to return the imported module, not mutate the globals to include it: Python 3.9.12 (main, Mar 24 2022, 13:02:21) [GCC 11.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> __import__('math') <module 'math' (built-in)> >>> math Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'math' is not defined A better way to reimplement __import__ would be this: _SAFE_MODULES = frozenset(("math",)) def _safe_import(name, *args, **kwargs): if name not in _SAFE_MODULES: raise Exception(f"Don't you even think about {name!r}") return __import__(name, *args, **kwargs) The fact that you mutated globals in your original implementation was partially masking the primary bug. Namely: name assignments within restricted code (function definitions, variable assignments and imports) mutate the locals dict, but name look-ups are by default done as global look-ups, bypassing the locals entirely. You can see this by disassembling the restricted bytecode using __import__('dis').dis(byte_code): 2 0 LOAD_CONST 0 (0) 2 LOAD_CONST 1 (None) 4 IMPORT_NAME 0 (math) 6 STORE_NAME 0 (math) 4 8 LOAD_CONST 2 (<code object foo at 0x7fbef4eef3a0, file "<user_code>", line 4>) 10 LOAD_CONST 3 ('foo') 12 MAKE_FUNCTION 0 14 STORE_NAME 1 (foo) 7 16 LOAD_CONST 4 (<code object myceil at 0x7fbef4eef660, file "<user_code>", line 7>) 18 LOAD_CONST 5 ('myceil') 20 MAKE_FUNCTION 0 22 STORE_NAME 2 (myceil) 24 LOAD_CONST 1 (None) 26 RETURN_VALUE Disassembly of <code object foo at 0x7fbef4eef3a0, file "<user_code>", line 4>: 5 0 LOAD_CONST 1 (7) 2 RETURN_VALUE Disassembly of <code object myceil at 0x7fbef4eef660, file "<user_code>", line 7>: 8 0 LOAD_GLOBAL 0 (_getattr_) 2 LOAD_GLOBAL 1 (math) 4 LOAD_CONST 1 ('ceil') 6 CALL_FUNCTION 2 8 LOAD_FAST 0 (x) 10 CALL_FUNCTION 1 12 LOAD_GLOBAL 2 (foo) 14 CALL_FUNCTION 0 16 BINARY_ADD 18 RETURN_VALUE The documentation for exec explains (emphasis mine): If only globals is provided, it must be a dictionary (and not a subclass of dictionary), which will be used for both the global and the local variables. If globals and locals are given, they are used for the global and local variables, respectively. If provided, locals can be any mapping object. Remember that at the module level, globals and locals are the same dictionary. If exec gets two separate objects as globals and locals, the code will be executed as if it were embedded in a class definition. This makes separate mappings for locals and globals completely spurious. We can therefore simply get rid of the locals dict, and put everything in globals. 
The entire code should look something like this: from RestrictedPython import safe_builtins, compile_restricted _SAFE_MODULES = frozenset(("math",)) def _safe_import(name, *args, **kwargs): if name not in _SAFE_MODULES: raise Exception(f"Don't you even think about {name!r}") return __import__(name, *args, **kwargs) def execute_user_code(user_code, user_func, *args, **kwargs): my_globals = { "__builtins__": { **safe_builtins, "__import__": _safe_import, }, } try: byte_code = compile_restricted( user_code, filename="<user_code>", mode="exec") except SyntaxError: # syntax error in the sandboxed code raise try: exec(byte_code, my_globals) return my_globals[user_func](*args, **kwargs) except BaseException: # runtime error (probably) in the sandboxed code raise Above I also managed to fix a couple of tangential issues: Instead of injecting the function call into the compiled snippet, I look up the function in the globals dict directly. This avoids a potential code injection vector if user_func happens to come from an untrusted source, and avoids having to inject args, kwargs and result into the sandbox, which would enable sandboxed code to clobber it. I avoid mutating the safe_builtins object provided by the RestrictedPython module. Otherwise, if any other code within your program happens to be using RestrictedPython, it might have been affected. I split the exception handling between the two steps: compilation and execution. This minimises the probability that bugs in the sandbox code will be misattributed to the sandboxed code. I changed the caught runtime exception type to BaseException, to also catch cases when sandboxed code attempts to raise KeyboardInterrupt or SystemExit (which do not derive from Exception, but only BaseException). I also removed references to _getitem_ and _apply_, which don’t seem to be used for anything. If they turn out to be necessary after all, you may restore them. (Note, however, that this still does not protect against DoS via infinite loops within the sandbox.)
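For completeness, a quick check that reuses the sandboxed snippet from the question (the variable name user_code is mine). With the fixes above, foo resolves from the shared globals dict, so the call should return math.ceil(1.5) + foo() = 2 + 7 = 9:

user_code = """
import math

def foo():
    return 7

def myceil(x):
    return math.ceil(x) + foo()
"""

print(execute_user_code(user_code, "myceil", 1.5))  # should print 9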
6
7
71,737,316
2022-4-4
https://stackoverflow.com/questions/71737316/problems-installing-lxml-on-m1-mac
So, I'm having the classic trouble install lxml. Initially I was just pip installing, but when I tried to free up memory using Element.clear() I was getting the following error: Python(58695,0x1001b4580) malloc: *** error for object 0x600000bc3f60: pointer being freed was not allocated I thought this must be because lxml is using the system's libxml2 which is probably out of date. So I used homebrew to install libxml2 and libxlt, and I force linked them both. I then tried to install using the following command: ❯ STATIC_DEPS=true pip install lxml --no-cache-dir 13:01:46 Collecting lxml Downloading lxml-4.8.0.tar.gz (3.2 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.2/3.2 MB 5.4 MB/s eta 0:00:00 Preparing metadata (setup.py) ... done Building wheels for collected packages: lxml Building wheel for lxml (setup.py) ... done Created wheel for lxml: filename=lxml-4.8.0-cp310-cp310-macosx_12_0_arm64.whl size=1683935 sha256=47912c1ba66d274c3ad7b2a2db00243f96d334a3fd5e439725f5005a7a72a602 Stored in directory: /private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-ephem-wheel-cache-4_v4ov7s/wheels/e4/52/34/64064e2e2f1ce84d212a6dde6676f3227846210a7996fc2530 Successfully built lxml Installing collected packages: lxml Successfully installed lxml-4.8.0 ..but then when I tried to import etree I would get this error: Traceback (most recent call last): File "/Users/human/Code/ia_book_images/viewer/book_image_downloader.py", line 4, in <module> from lxml import etree as ET ImportError: dlopen(/Users/human/.virtualenvs/ia_book_images/lib/python3.10/site-packages/lxml/etree.cpython-310-darwin.so, 0x0002): symbol not found in flat namespace '___htmlDefaultSAXHandler' So then I thought let's make 100% sure that it's using the right versions of libxml2 using CFLAGS and got the following result: ❯ CFLAGS="-I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include" STATIC_DEPS=true pip install lxml --no-cache-dir Collecting lxml Downloading lxml-4.8.0.tar.gz (3.2 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.2/3.2 MB 4.4 MB/s eta 0:00:00 Preparing metadata (setup.py) ... error error: subprocess-exited-with-error Γ— python setup.py egg_info did not run successfully. β”‚ exit code: 1 ╰─> [199 lines of output] Checking for gcc... Checking for shared library support... Building shared library libz.1.2.12.dylib with gcc. Checking for size_t... Yes. Checking for off64_t... No. Checking for fseeko... Yes. Checking for strerror... Yes. Checking for unistd.h... Yes. Checking for stdarg.h... Yes. Checking whether to use vs[n]printf() or s[n]printf()... using vs[n]printf(). Checking for vsnprintf() in stdio.h... Yes. Checking for return value of vsnprintf()... Yes. Checking for attribute(visibility) support... Yes. gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -I. 
-c -o example.o test/example.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o adler32.o adler32.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o crc32.o crc32.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o deflate.o deflate.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o infback.o infback.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o inffast.o inffast.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o inflate.o inflate.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o inftrees.o inftrees.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o trees.o trees.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o zutil.o zutil.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o compress.o compress.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o uncompr.o uncompr.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o gzclose.o gzclose.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o gzlib.o gzlib.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o gzread.o gzread.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -c -o gzwrite.o gzwrite.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -I. 
-c -o minigzip.o test/minigzip.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/adler32.o adler32.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/crc32.o crc32.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/deflate.o deflate.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/infback.o infback.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/inflate.o inflate.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/inffast.o inffast.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/inftrees.o inftrees.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/trees.o trees.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/zutil.o zutil.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/gzclose.o gzclose.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/uncompr.o uncompr.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/compress.o compress.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/gzlib.o gzlib.c libtool -o libz.a adler32.o crc32.o deflate.o infback.o inffast.o inflate.o inftrees.o trees.o zutil.o compress.o uncompr.o gzclose.o gzlib.o gzread.o gzwrite.o gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/gzread.o gzread.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -DPIC -c -o objs/gzwrite.o gzwrite.c gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -o example example.o -L. libz.a gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -o minigzip minigzip.o -L. 
libz.a gcc -dynamiclib -install_name /private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-install-kl4hmrrk/lxml_4ecb3c255ad049e39a89a66ee0a50e76/build/tmp/libxml2/lib/libz.1.dylib -compatibility_version 1 -current_version 1.2.12 -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -fPIC -DHAVE_HIDDEN -o libz.1.2.12.dylib adler32.lo crc32.lo deflate.lo infback.lo inffast.lo inflate.lo inftrees.lo trees.lo zutil.lo compress.lo uncompr.lo gzclose.lo gzlib.lo gzread.lo gzwrite.lo -lc -arch x86_64 ld: warning: ignoring file crc32.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64 ld: warning: ignoring file adler32.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64 ld: warning: ignoring file deflate.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64 ld: warning: ignoring file infback.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64 ld: warning: ignoring file inffast.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64 ld: warning: ignoring file inflate.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64 ld: warning: ignoring file inftrees.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64 ld: warning: ignoring file trees.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64 ld: warning: ignoring file compress.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64 ld: warning: ignoring file zutil.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64 ld: warning: ignoring file uncompr.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64 ld: warning: ignoring file gzread.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64 ld: warning: ignoring file gzlib.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64 ld: warning: ignoring file gzclose.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64 ld: warning: ignoring file gzwrite.lo, building for macOS-x86_64 but attempting to link with file built for unknown-arm64 rm -f libz.dylib libz.1.dylib ln -s libz.1.2.12.dylib libz.dylib ln -s libz.1.2.12.dylib libz.1.dylib gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -o examplesh example.o -L. libz.1.2.12.dylib gcc -I/opt/homebrew/opt/libxslt/include -I/opt/homebrew/opt/libxml2/include -DHAVE_HIDDEN -o minigzipsh minigzip.o -L. 
libz.1.2.12.dylib ld: warning: ignoring file libz.1.2.12.dylib, building for macOS-arm64 but attempting to link with file built for macOS-x86_64 ld: warning: ignoring file libz.1.2.12.dylib, building for macOS-arm64 but attempting to link with file built for macOS-x86_64 Undefined symbols for architecture arm64: "_gzclose", referenced from: _gz_compress in minigzip.o _gz_uncompress in minigzip.o "_gzdopen", referenced from: _main in minigzip.o "_gzerror", referenced from: _gz_compress in minigzip.o _gz_uncompress in minigzip.o "_gzopen", referenced from: _file_compress in minigzip.o _file_uncompress in minigzip.o _main in minigzip.o "_gzread", referenced from: _gz_uncompress in minigzip.o "_gzwrite", referenced from: _gz_compress in minigzip.o ld: symbol(s) not found for architecture arm64 Undefined symbols for architecture arm64: "_compress", referenced from: _test_compress in example.o (maybe you meant: _test_compress) "_deflate", referenced from: _test_deflate in example.o _test_large_deflate in example.o _test_flush in example.o _test_dict_deflate in example.o (maybe you meant: _test_large_deflate, _test_deflate , _test_dict_deflate ) "_deflateEnd", referenced from: _test_deflate in example.o _test_large_deflate in example.o _test_flush in example.o _test_dict_deflate in example.o "_deflateInit_", referenced from: _test_deflate in example.o _test_large_deflate in example.o _test_flush in example.o _test_dict_deflate in example.o "_deflateParams", referenced from: _test_large_deflate in example.o "_deflateSetDictionary", referenced from: _test_dict_deflate in example.o "_gzclose", referenced from: _test_gzio in example.o "_gzerror", referenced from: _test_gzio in example.o "_gzgetc", referenced from: _test_gzio in example.o "_gzgets", referenced from: _test_gzio in example.o "_gzopen", referenced from: _test_gzio in example.o "_gzprintf", referenced from: _test_gzio in example.o "_gzputc", referenced from: _test_gzio in example.o "_gzputs", referenced from: _test_gzio in example.o "_gzread", referenced from: _test_gzio in example.o "_gzseek", referenced from: _test_gzio in example.o "_gztell", referenced from: _test_gzio in example.o "_gzungetc", referenced from: _test_gzio in example.o "_inflate", referenced from: _test_inflate in example.o _test_large_inflate in example.o _test_sync in example.o _test_dict_inflate in example.o (maybe you meant: _test_large_inflate, _test_inflate , _test_dict_inflate ) "_inflateEnd", referenced from: _test_inflate in example.o _test_large_inflate in example.o _test_sync in example.o _test_dict_inflate in example.o "_inflateInit_", referenced from: _test_inflate in example.o _test_large_inflate in example.o _test_sync in example.o _test_dict_inflate in example.o "_inflateSetDictionary", referenced from: _test_dict_inflate in example.o "_inflateSync", referenced from: _test_sync in example.o "_uncompress", referenced from: _test_compress in example.o "_zlibCompileFlags", referenced from: _main in example.o "_zlibVersion", referenced from: _main in example.o clang: error: linker command failed with exit code 1 (use -v to see invocation) ld: symbol(s) not found for architecture arm64 clang: error: linker command failed with exit code 1 (use -v to see invocation) make: *** [minigzipsh] Error 1 make: *** Waiting for unfinished jobs.... 
make: *** [examplesh] Error 1 Traceback (most recent call last): File "<string>", line 2, in <module> File "<pip-setuptools-caller>", line 34, in <module> File "/private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-install-kl4hmrrk/lxml_4ecb3c255ad049e39a89a66ee0a50e76/setup.py", line 270, in <module> **setup_extra_options() File "/private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-install-kl4hmrrk/lxml_4ecb3c255ad049e39a89a66ee0a50e76/setup.py", line 162, in setup_extra_options ext_modules = setupinfo.ext_modules( File "/private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-install-kl4hmrrk/lxml_4ecb3c255ad049e39a89a66ee0a50e76/setupinfo.py", line 74, in ext_modules XML2_CONFIG, XSLT_CONFIG = build_libxml2xslt( File "/private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-install-kl4hmrrk/lxml_4ecb3c255ad049e39a89a66ee0a50e76/buildlibxml.py", line 428, in build_libxml2xslt cmmi(zlib_configure_cmd, zlib_dir, multicore, **call_setup) File "/private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-install-kl4hmrrk/lxml_4ecb3c255ad049e39a89a66ee0a50e76/buildlibxml.py", line 352, in cmmi call_subprocess( File "/private/var/folders/g9/lqph46sj36n9kkvjt1pzdxhm0000gn/T/pip-install-kl4hmrrk/lxml_4ecb3c255ad049e39a89a66ee0a50e76/buildlibxml.py", line 335, in call_subprocess raise Exception('Command "%s" returned code %s' % (cmd_desc, returncode)) Exception: Command "make -j6" returned code 2 Building lxml version 4.8.0. Latest version of zlib is 1.2.12 Downloading zlib into libs/zlib-1.2.12.tar.gz from https://zlib.net/zlib-1.2.12.tar.gz Unpacking zlib-1.2.12.tar.gz into build/tmp Latest version of libiconv is 1.16 Downloading libiconv into libs/libiconv-1.16.tar.gz from https://ftp.gnu.org/pub/gnu/libiconv/libiconv-1.16.tar.gz Unpacking libiconv-1.16.tar.gz into build/tmp Latest version of libxml2 is 2.9.12 Downloading libxml2 into libs/libxml2-2.9.12.tar.gz from http://xmlsoft.org/sources/libxml2-2.9.12.tar.gz Unpacking libxml2-2.9.12.tar.gz into build/tmp Latest version of libxslt is 1.1.34 Downloading libxslt into libs/libxslt-1.1.34.tar.gz from http://xmlsoft.org/sources/libxslt-1.1.34.tar.gz Unpacking libxslt-1.1.34.tar.gz into build/tmp Starting build in build/tmp/zlib-1.2.12 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed Γ— Encountered error while generating package metadata. ╰─> See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. Do I need to do something special to build lxml on an m1 mac?
It turned out that installing lxml with a simple pip install was working fine. The reason for my malloc error was the fact that I was trying to clear the element before the end tag had been seen. Turns out this isn't possible and you need to wait for the end tag even if you already know you aren't interested in the element.
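For reference, a minimal sketch of the pattern that avoids the malloc error when iterating with lxml — the file name and tag here are made up, not from my original code:

from lxml import etree

# Only clear an element on the 'end' event, i.e. after its closing tag has
# been parsed; calling clear() earlier is what triggered the malloc error.
for event, elem in etree.iterparse("data.xml", events=("end",), tag="record"):
    # ... inspect elem here ...
    elem.clear()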
8
-1
71,758,620
2022-4-5
https://stackoverflow.com/questions/71758620/error-installing-python-3-7-6-using-pyenv-on-new-macbook-pro-m1-in-os-12-3
I am struggling to install python version 3.7.6 using pyenv on my new macbook pro M1 running on mac os 12.3.1. My configuration $ clang -v Apple clang version 13.1.6 (clang-1316.0.21.2) Target: arm64-apple-darwin21.4.0 Thread model: posix InstalledDir: /Library/Developer/CommandLineTools/usr/bin $ pyenv install 3.7.6 python-build: use [email protected] from homebrew python-build: use readline from homebrew Downloading Python-3.7.6.tar.xz... -> https://www.python.org/ftp/python/3.7.6/Python-3.7.6.tar.xz Installing Python-3.7.6... python-build: use tcl-tk from homebrew python-build: use readline from homebrew python-build: use zlib from xcode sdk BUILD FAILED (OS X 12.3.1 using python-build 2.2.5-10-g58427b9a) Inspect or clean up the working tree at /var/folders/4t/1qfwng092qz2qxwxm6ss2f1c0000gp/T/python-build.20220405170233.32567 Results logged to /var/folders/4t/1qfwng092qz2qxwxm6ss2f1c0000gp/T/python-build.20220405170233.32567.log Last 10 log lines: checking for --with-cxx-main=<compiler>... no checking for clang++... no configure: By default, distutils will build C++ extension modules with "clang++". If this is not intended, then set CXX on the configure command line. checking for the platform triplet based on compiler characteristics... darwin configure: error: internal configure error for the platform triplet, please file a bug report
Finally this patch works in installing 3.7.6 on macbook m1 using pyenv. To install python 3.7.6 version in mac os 12+ , M1 chip, apple clang version 13+ using pyenv, create a file anywhere in your local and call it python-3.7.6-m1.patch and copy the contents(below) to that file and save it. diff --git a/configure b/configure index b769d59629..8b018b6fe8 100755 --- a/configure +++ b/configure @@ -3370,7 +3370,7 @@ $as_echo "#define _BSD_SOURCE 1" >>confdefs.h # has no effect, don't bother defining them Darwin/[6789].*) define_xopen_source=no;; - Darwin/1[0-9].*) + Darwin/[12][0-9].*) define_xopen_source=no;; # On AIX 4 and 5.1, mbstate_t is defined only when _XOPEN_SOURCE == 500 but # used in wcsnrtombs() and mbsnrtowcs() even if _XOPEN_SOURCE is not defined @@ -5179,8 +5179,6 @@ $as_echo "$as_me: fi -MULTIARCH=$($CC --print-multiarch 2>/dev/null) - { $as_echo "$as_me:${as_lineno-$LINENO}: checking for the platform triplet based on compiler characteristics" >&5 $as_echo_n "checking for the platform triplet based on compiler characteristics... " >&6; } @@ -5338,6 +5336,11 @@ $as_echo "none" >&6; } fi rm -f conftest.c conftest.out +if test x$PLATFORM_TRIPLET != xdarwin; then + MULTIARCH=$($CC --print-multiarch 2>/dev/null) +fi + + if test x$PLATFORM_TRIPLET != x && test x$MULTIARCH != x; then if test x$PLATFORM_TRIPLET != x$MULTIARCH; then as_fn_error $? "internal configure error for the platform triplet, please file a bug report" "$LINENO" 5 @@ -9247,6 +9250,9 @@ fi ppc) MACOSX_DEFAULT_ARCH="ppc64" ;; + arm64) + MACOSX_DEFAULT_ARCH="arm64" + ;; *) as_fn_error $? "Unexpected output of 'arch' on OSX" "$LINENO" 5 ;; diff --git a/configure.ac b/configure.ac index 49acff3136..2f66184b26 100644 --- a/configure.ac +++ b/configure.ac @@ -490,7 +490,7 @@ case $ac_sys_system/$ac_sys_release in # has no effect, don't bother defining them Darwin/@<:@6789@:>@.*) define_xopen_source=no;; - Darwin/1@<:@0-9@:>@.*) + Darwin/@<:@[12]@:>@@<:@0-9@:>@.*) define_xopen_source=no;; # On AIX 4 and 5.1, mbstate_t is defined only when _XOPEN_SOURCE == 500 but # used in wcsnrtombs() and mbsnrtowcs() even if _XOPEN_SOURCE is not defined @@ -724,8 +724,7 @@ then fi -MULTIARCH=$($CC --print-multiarch 2>/dev/null) -AC_SUBST(MULTIARCH) + AC_MSG_CHECKING([for the platform triplet based on compiler characteristics]) cat >> conftest.c <<EOF @@ -880,6 +879,11 @@ else fi rm -f conftest.c conftest.out +if test x$PLATFORM_TRIPLET != xdarwin; then + MULTIARCH=$($CC --print-multiarch 2>/dev/null) +fi +AC_SUBST(MULTIARCH) + if test x$PLATFORM_TRIPLET != x && test x$MULTIARCH != x; then if test x$PLATFORM_TRIPLET != x$MULTIARCH; then AC_MSG_ERROR([internal configure error for the platform triplet, please file a bug report]) NOW we can Install python 3.7.6 using pyenv as follows (need to be in the same directory as the patch file that we just created): pyenv install --patch 3.7.6 < python-3.7.6-m1.patch To install other python version on mac os 12+ , M1 chip, apple clang version 13+ using pyenv (not tested but should work) Shallow clone the branch of python version you are interested in installing. go to https://github.com/python/cpython and find the versions available for cloning under "tags" dropdown git clone https://github.com/python/cpython --branch v3.x.x --single-branch cd cpython Now make changes to the two files in it (configure.ac and configure). the git diff should look like the one shown above. 
The line numbers will differ depending on which Python version you are installing; this git diff is for 3.7.6 and can't be used directly for other versions. For other versions of Python, search for the exact lines of code being edited/deleted in the same files as shown in the git diff above and make the changes accordingly. Then save the git diff to a new file as follows:
git diff > python-3.x.x-m1.patch
Now we can install that version using:
pyenv install --patch 3.x.x < python-3.x.x-m1.patch
5
3
71,691,598
2022-3-31
https://stackoverflow.com/questions/71691598/how-to-run-python-as-x86-with-rosetta2-on-arm-macos-machine
I have a Python app with downstream dependencies on dynamic libraries that are available as X86 only. The app runs on an X86 macOS machine, but on an ARM macOS machine it fails with an ImportError. I've run lipo -archs on the libraries and they are x86_64 only. I have Python running in a virtualenv and it is a universal binary (x86_64 arm64). The intermediary object file built by the application when it installs is also a universal binary (x86_64 arm64). I suspect that Python is being run natively as an ARM app, but because of the dependencies I need it to run as an X86 app. Is there a macOS or Rosetta 2 option or environment setting that I can use to force the X86 Python binary to be executed instead of the ARM binary?
Looks like the only way to do this is to install an X86 version of Python. I found a how-to guide here - https://towardsdatascience.com/how-to-use-manage-multiple-python-versions-on-an-apple-silicon-m1-mac-d69ee6ed0250 but couldn't quite get the pyenv build part to work. So in the Rosetta i386 terminal I installed Python with brew86 (the X86 brew). This put an X86 version of Python into /usr/local/bin/python3, from which I was able to create an X86-only virtualenv. Broadly the steps are from the above link (minus the pyenv parts):
Install Rosetta
Create a Rosetta terminal
Install X86 homebrew in the Rosetta terminal
Create an alias for the X86 homebrew in /usr/local/bin/brew
Use the X86 brew to install X86 Python (ends up in /usr/local/bin/python3)
Create a virtualenv based on the X86 Python path
pip install
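As a quick sanity check (not part of the steps above), you can ask the interpreter itself which architecture it is running as:

import platform

# Prints 'x86_64' for an Intel build running under Rosetta 2,
# and 'arm64' for a native Apple Silicon build.
print(platform.machine())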
11
5
71,709,229
2022-4-1
https://stackoverflow.com/questions/71709229/vscode-debugger-can-not-import-queue-due-to-shadowing
When I try to run any python code in debug mode using VScode, I got an error message saying: 42737 -- /home/<username>/Desktop/development/bopi/experiment_handler.py .vscode-server/extensions/ms-python.python-2022.4.0/pythonFiles/lib/python/debugpy/launcher 4 Traceback (most recent call last): File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main "__main__", fname, loader, pkg_name) File "/usr/lib/python2.7/runpy.py", line 72, in _run_code exec code in run_globals File "/home/<username>/.vscode-server/extensions/ms-python.python-2022.4.0/pythonFiles/lib/python/debugpy/__main__.py", line 43, in <module> from debugpy.server import cli File "/home/<username>/.vscode-server/extensions/ms-python.python-2022.4.0/pythonFiles/lib/python/debugpy/../debugpy/server/__init__.py", line 9, in <module> import debugpy._vendored.force_pydevd # noqa File "/home/<username>/.vscode-server/extensions/ms-python.python-2022.4.0/pythonFiles/lib/python/debugpy/../debugpy/_vendored/force_pydevd.py", line 37, in <module> pydevd_constants = import_module('_pydevd_bundle.pydevd_constants') File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module __import__(name) File "/home/<username>/.vscode-server/extensions/ms-python.python-2022.4.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_constants.py", line 362, in <module> from _pydev_bundle._pydev_saved_modules import thread, threading File "/home/<username>/.vscode-server/extensions/ms-python.python-2022.4.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydev_bundle/_pydev_saved_modules.py", line 97, in <module> import queue as _queue; verify_shadowed.check(_queue, ['Queue', 'LifoQueue', 'Empty', 'Full', 'deque']) File "/home/<username>/.vscode-server/extensions/ms-python.python-2022.4.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydev_bundle/_pydev_saved_modules.py", line 75, in check raise DebuggerInitializationError(msg) _pydev_bundle._pydev_saved_modules.DebuggerInitializationError: It was not possible to initialize the debugger due to a module name conflict. i.e.: the module "queue" could not be imported because it is shadowed by: /home/<username>/.local/lib/python2.7/site-packages/queue/__init__.pyc Please rename this file/folder so that the original module from the standard library can be imported. Deleting the init.pyc and init.py resulting with an error message about missing queue import.
Downgrading my Python extension in Visual Studio Code to v2022.2.1924087327 worked for me. Elevating @Onur Berk's comment below as part of the answer: It is very easy to downgrade the Python extension: just open 'Extensions', find the Python extension and select it. Rather than clicking 'Uninstall', click the arrow next to it; this will give you the option to install another version.
12
25
71,753,428
2022-4-5
https://stackoverflow.com/questions/71753428/how-to-get-shap-values-for-each-class-on-a-multiclass-classification-problem-in
I have the following dataframe: import pandas as pd import random import xgboost import shap foo = pd.DataFrame({'id':[1,2,3,4,5,6,7,8,9,10], 'var1':random.sample(range(1, 100), 10), 'var2':random.sample(range(1, 100), 10), 'var3':random.sample(range(1, 100), 10), 'class': ['a','a','a','a','a','b','b','c','c','c']}) I want to run a classification algorithm to predict the 3 classes. So I split my dataset into a training and testing set and I ran an xgboost classification cl_cols = foo.filter(regex='var').columns X_train, X_test, y_train, y_test = train_test_split(foo[cl_cols], foo[['class']], test_size=0.33, random_state=42) model = xgboost.XGBClassifier(objective="binary:logistic") model.fit(X_train, y_train) Now I would like to get the mean SHAP values for each class, instead of the mean from the absolute SHAP values generated from this code: shap_values = shap.TreeExplainer(model).shap_values(X_test) shap.summary_plot(shap_values, X_test) Also, the plot labels the class as 0,1,2. How can I know to which class the 0,1 & 2 from the original correspond? Because this code: shap.summary_plot(shap_values, X_test, class_names= ['a', 'b', 'c']) gives and this code: shap.summary_plot(shap_values, X_test, class_names= ['b', 'c', 'a']) gives So I am not sure about the legend anymore. Any ideas?
By doing some research and with the help of this post and @Alessandro Nesti 's answer, here is my solution: foo = pd.DataFrame({'id':[1,2,3,4,5,6,7,8,9,10], 'var1':random.sample(range(1, 100), 10), 'var2':random.sample(range(1, 100), 10), 'var3':random.sample(range(1, 100), 10), 'class': ['a','a','a','a','a','b','b','c','c','c']}) cl_cols = foo.filter(regex='var').columns X_train, X_test, y_train, y_test = train_test_split(foo[cl_cols], foo[['class']], test_size=0.33, random_state=42) model = xgboost.XGBClassifier(objective="multi:softmax") model.fit(X_train, y_train) def get_ABS_SHAP(df_shap,df): #import matplotlib as plt # Make a copy of the input data shap_v = pd.DataFrame(df_shap) feature_list = df.columns shap_v.columns = feature_list df_v = df.copy().reset_index().drop('index',axis=1) # Determine the correlation in order to plot with different colors corr_list = list() for i in feature_list: b = np.corrcoef(shap_v[i],df_v[i])[1][0] corr_list.append(b) corr_df = pd.concat([pd.Series(feature_list),pd.Series(corr_list)],axis=1).fillna(0) # Make a data frame. Column 1 is the feature, and Column 2 is the correlation coefficient corr_df.columns = ['Variable','Corr'] corr_df['Sign'] = np.where(corr_df['Corr']>0,'red','blue') shap_abs = np.abs(shap_v) k=pd.DataFrame(shap_abs.mean()).reset_index() k.columns = ['Variable','SHAP_abs'] k2 = k.merge(corr_df,left_on = 'Variable',right_on='Variable',how='inner') k2 = k2.sort_values(by='SHAP_abs',ascending = True) k2_f = k2[['Variable', 'SHAP_abs', 'Corr']] k2_f['SHAP_abs'] = k2_f['SHAP_abs'] * np.sign(k2_f['Corr']) k2_f.drop(columns='Corr', inplace=True) k2_f.rename(columns={'SHAP_abs': 'SHAP'}, inplace=True) return k2_f foo_all = pd.DataFrame() for k,v in list(enumerate(model.classes_)): foo = get_ABS_SHAP(shap_values[k], X_test) foo['class'] = v foo_all = pd.concat([foo_all,foo]) import plotly_express as px px.bar(foo_all,x='SHAP', y='Variable', color='class') which results in
5
2
71,746,654
2022-4-5
https://stackoverflow.com/questions/71746654/how-do-i-add-selenium-chromedriver-to-an-aws-lambda-function
I am trying to host a webscraping function on aws lambda and am running into webdriver errors for selenium. Could someone show me how you go about adding the chromedriver.exe file and how do you get the pathing to work in AWS Lambda function. This is the portion of my function that has to do with selenium, from selenium import webdriver from selenium.webdriver.common.by import By from webdriver_manager.chrome import ChromeDriverManager from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.support.ui import Select from selenium.webdriver.chrome.service import Service import pandas as pd import mysql.connector from sqlalchemy import create_engine url = 'https://covid19criticalcare.com/pharmacies/' driver = webdriver.Chrome(service=Service(ChromeDriverManager().install())) driver.maximize_window() driver.get(url) wait = WebDriverWait(driver, 5) I tried creating a lambda layer with the chromedriver.exe file I followed this guide (https://dev.to/awscommunity-asean/creating-an-api-that-runs-selenium-via-aws-lambda-3ck3) but I couldn't add the headless chromium because of the file size pushing me over my function limit (my pandas and numpy dependence layers have taken up most of my space) I tried driver = webdriver.Chrome(with a path variable) and tried different pathing but wasn't sure what the beginning of the path would be since its on a lambda function.
I've been struggling with adding Selenium to AWS Lambda for the last couple of days. I have a web scraping function (using Selenium and the Google API) which extracts data from a website and writes the output to a Google spreadsheet. Let me explain what I did step by step and how I finally succeeded, so you don't have to deal with it as much as I did:
1- I tried to add Selenium as a layer as described here https://www.youtube.com/watch?v=jWqbYiHudt8. I managed to add Selenium, but the deployment package ended up over 250 MB (the Lambda quotas are described here: How to increase the maximum size of the AWS lambda deployment package (RequestEntityTooLargeException)?), so it did not work.
2- To overcome the deployment package size limit, a good option is to deploy as a container image (10 GB deployment package size limit). Here is a good explanation of deploying as container images https://cloudbytes.dev/snippets/run-selenium-in-aws-lambda-for-ui-testing#using-the-github-repository-directly . I tried it, but I was not able to deploy as described due to missing/wrong webdrivers (the shell script seems to be wrong).
3- Finally, I was able to publish my Selenium function as a Docker image as described here https://github.com/umihico/docker-selenium-lambda. There are lots of discussions about which versions work with what. The most important issue with Selenium is that you have to be careful about package and driver versions when deploying to AWS Lambda.
8
19
71,672,179
2022-3-30
https://stackoverflow.com/questions/71672179/the-file-is-not-a-zip-file-error-for-the-output-of-git-show-by-gitpython
The script to reproduce the issue Save this code as a shell script and run it. The code should report the File is not a zip file error. #!/bin/bash set -eu mkdir foo cd foo pip install --user GitPython echo foo > a zip a.zip a # -t option validates the zip file. # See https://unix.stackexchange.com/questions/197127/test-integrity-of-zip-file unzip -t a.zip git init git add a.zip git commit -m 'init commit' cat << EOF > test.py from git import Repo import zipfile from io import StringIO repo = Repo('.', search_parent_directories=True) raw = repo.git.show("HEAD:a.zip") z = zipfile.ZipFile(StringIO(raw), "r") EOF python3 test.py Original Question I'm writing a Krita plugin for viewing files in previous commits of a Git repository, and I want to get the thumbnail file of a Krita file. To do so, I'm trying to get a file with git show, unzip it because a Krita file is a Zip file, and get preview.png or mergedimage.png. %unzip image.kra Archive: image.kra extracting: mimetype inflating: maindoc.xml inflating: documentinfo.xml inflating: preview.png inflating: image/layers/layer2 inflating: image/layers/layer2.defaultpixel inflating: image/layers/layer2.icc inflating: image/annotations/icc inflating: mergedimage.png inflating: image/animation/index.xml We can get the .kra file from the Git repository as str with GitPython. However, I can't parse the file with zipfile.ZipFile as it says File is not a zip file. (This code is based on this SO answer) from git import Repo import zipfile from io import StringIO repo = Repo('.', search_parent_directories=True) raw = repo.git.show("HEAD~:image.kra") z = zipfile.ZipFile(StringIO(raw), "r") will emit Traceback (most recent call last): File "/home/hiroki/krita_question/test.py", line 11, in <module> z = zipfile.ZipFile(StringIO(raw), "r") File "/usr/lib/python3.9/zipfile.py", line 1257, in __init__ self._RealGetContents() File "/usr/lib/python3.9/zipfile.py", line 1324, in _RealGetContents raise BadZipFile("File is not a zip file") zipfile.BadZipFile: File is not a zip file I believe this is a valid Krita file because I can restore the file with git show on the command line. So, %git show HEAD~:image.kra > prev.kra %krita prev.kra works correctly. Unzipping the file works too. Why can't I parse the git show output as a Zip file? git log --stat|grep -v 'Author': commit b96d915862b39a204a9f4350e7e56634b6fcfe0b Date: Wed Mar 30 14:44:02 2022 +0900 chore: add ls | 231 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 231 insertions(+) commit 619984a842c6c2daf31559c1979f91227a323648 Date: Wed Mar 30 14:43:58 2022 +0900 chore: add image.kra | Bin 0 -> 777685 bytes 1 file changed, 0 insertions(+), 0 deletions(-) Versions Python: 3.9.9 GitPython: 3.1.27 Krita: 5.0.2 Linux 5.15.16-gentoo All files are created in Linux.
Update
GitPython version 3.1.28 (not yet released) should add the strip_newline_in_stdout option. If the option is set to False, the trailing \n of the stdout of any command run via repo.git.foobar will be preserved.
raw = repo.git.show("HEAD~:image.kra", strip_newline_in_stdout=False)
Original answer
It seems that this is caused by a bug in GitPython: it truncates the trailing \n of the output of git show, which makes the file invalid. I changed the code to use subprocess.Popen and ZipFile succeeded.
import zipfile
from io import BytesIO
import subprocess

p = subprocess.Popen(["git", "show", "HEAD:a.zip"], stdout=subprocess.PIPE)
out, _ = p.communicate()
z = zipfile.ZipFile(BytesIO(out), "r")
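Side note (not part of the original answer, and written from memory of the GitPython API, so treat it as a sketch): you can also read the raw blob bytes straight from the commit's tree, which bypasses git show's stdout handling entirely:

from git import Repo
import zipfile
from io import BytesIO

repo = Repo('.', search_parent_directories=True)
# Look the file up in the commit's tree and read the raw blob bytes.
blob = repo.commit("HEAD~").tree / "image.kra"
z = zipfile.ZipFile(BytesIO(blob.data_stream.read()), "r")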
4
5
71,731,988
2022-4-4
https://stackoverflow.com/questions/71731988/sum-of-the-maximums-of-all-subarrays-multiplied-by-their-lengths-in-linear-ti
Given an array, I should compute the following sum in linear time (the sum, over every subarray arr[L..R], of max(arr[L..R]) multiplied by the subarray's length R - L + 1):
My most naive implementation is O(n³):
sum_ = 0
for i in range(n):
    for j in range(n, i, -1):
        sum_ += max(arr[i:j]) * (j-i)
I have no idea what to do. I have tried many algorithms, but they were at best O(n*log(n)), whereas I should solve it in linear time. Also, I don't get the idea: is there a mathematical way of just looking at an array and telling the result of the above sum?
Keep a stack of (indices of) non-increasing values. So before appending the new value, pop smaller ones. Whenever you pop one, add its contribution to the total. def solution(arr): arr.append(float('inf')) I = [-1] total = 0 for i in range(len(arr)): while arr[i] > arr[I[-1]]: j = I.pop() a = j - I[-1] b = i - j total += (a+b)*a*b//2 * arr[j] I.append(i) arr.pop() return total The bars represent values, larger values are larger bars. The value at i is about to be added. The light grey ones come later. The green ones are on the stack. The brown ones already don't play a role anymore. First the one at i-1 gets popped, but that's less informative. Then the one at j gets popped. It dominates the range between I[-1] and i: it's the maximum in all subarrays in that range that contain it. These subarrays contain j as well as 0 to a-1 more elements to the left and 0 to b-1 more elements to the right. That's a*b subarrays and their average length is (a+b)/2. I temporarily append infinity to the values so it works as a sentinel on the left (avoids an extra check in the while condition) and as a cleaner at the end (it causes all remaining values to get popped from the stack). Non-Python-coders: Python supports negative indexes, -1 means "last element" (1st from the end). Correctness test with random lists of 500 values (Try it online!): import random def reference(arr): n = len(arr) return sum(max(arr[L : R+1]) * (R - (L-1)) for L in range(n) for R in range(L, n)) for _ in range(5): arr = random.choices(range(10000), k=500) expect = reference(arr) result = solution(arr) print(result == expect, result) Sample output (results for five lists, True means it's correct): True 207276773131 True 208127393653 True 208653950227 True 208073567605 True 206924015682
4
6
71,706,506
2022-4-1
https://stackoverflow.com/questions/71706506/why-does-open3d-visualization-disappear-when-i-left-click-the-window
I try to write a simple application in python which views a 3d mesh on the right and have some user input on the left in a single window. I use a SceneWidget to visualize a mesh and add it to a horizontal gui element. I also add a filepicker to that gui element and then add the gui element to the window. So far so good it seems that it works as intended but as soon as I make a left click inside the window the visualization disappears with no error message. Does anyone have an idea why and can help me? Here is the code: import os.path import sys import open3d as o3d import open3d.visualization.gui as gui import open3d.visualization.rendering as rendering print("Project") print("python version", sys.version) print("open3d version", o3d.__version__) class WindowApp: def __init__(self): self.window = gui.Application.instance.create_window("Project", 1400, 900) w = self.window # member variables self.model_dir = "" self.model_name = "" em = w.theme.font_size layout = gui.Horiz(0, gui.Margins(0.5 * em, 0.5 * em, 0.5 * em, 0.5 * em)) # 3D Widget _widget3d = gui.SceneWidget() _widget3d.scene = rendering.Open3DScene(w.renderer) _widget3d.set_view_controls(gui.SceneWidget.Controls.ROTATE_CAMERA) mesh = o3d.geometry.TriangleMesh.create_sphere() mesh.compute_vertex_normals() material = rendering.MaterialRecord() material.shader = "defaultLit" _widget3d.scene.add_geometry('mesh', mesh, material) _widget3d.scene.set_background([200, 0, 0, 200]) # not working?! _widget3d.scene.camera.look_at([0, 0, 0], [1, 1, 1], [0, 0, 1]) _widget3d.set_on_mouse(self._on_mouse_widget3d) # gui layout gui_layout = gui.Vert(0, gui.Margins(0.5 * em, 0.5 * em, 0.5 * em, 0.5 * em)) # File-chooser widget self._fileedit = gui.TextEdit() filedlgbutton = gui.Button("...") filedlgbutton.horizontal_padding_em = 0.5 filedlgbutton.vertical_padding_em = 0 filedlgbutton.set_on_clicked(self._on_filedlg_button) fileedit_layout = gui.Horiz() fileedit_layout.add_child(gui.Label("Model file")) fileedit_layout.add_child(self._fileedit) fileedit_layout.add_fixed(0.25 * em) fileedit_layout.add_child(filedlgbutton) # add to the top-level (vertical) layout gui_layout.add_child(fileedit_layout) layout.add_child(gui_layout) layout.add_child(_widget3d) w.add_child(layout) def _on_mouse_widget3d(self, event): print(event.type) return gui.Widget.EventCallbackResult.IGNORED def _on_filedlg_button(self): filedlg = gui.FileDialog(gui.FileDialog.OPEN, "Select file", self.window.theme) filedlg.add_filter(".obj .ply .stl", "Triangle mesh (.obj, .ply, .stl)") filedlg.add_filter("", "All files") filedlg.set_on_cancel(self._on_filedlg_cancel) filedlg.set_on_done(self._on_filedlg_done) self.window.show_dialog(filedlg) def _on_filedlg_cancel(self): self.window.close_dialog() def _on_filedlg_done(self, path): self._fileedit.text_value = path self.model_dir = os.path.normpath(path) # load model self.window.close_dialog() def main(): gui.Application.instance.initialize() w = WindowApp() gui.Application.instance.run() if __name__ == "__main__": main() I use the open3d library version 0.15.1 and python3.9. Note - If I add the SceneWidget directly to the window it works, but then I can't have the gui on the left. Does anyone have a solution to this?
Finally I found a solution: Adding a Scenewidget to a gui container doesn't seem to work. But encapsulating it inside a frame and move it to the right side and add it directly to the window works. _widget3d.frame = gui.Rect(500, w.content_rect.y, 900, w.content_rect.height) Creating a frame for the gui in a similar way is also possible. Here is the working code for anyone interested: class WindowApp: def __init__(self): self.window = gui.Application.instance.create_window("Spinnables", 1400, 900) w = self.window # member variables self.model_dir = "" self.model_name = "" em = w.theme.font_size # 3D Widget _widget3d = gui.SceneWidget() _widget3d.scene = rendering.Open3DScene(w.renderer) _widget3d.set_view_controls(gui.SceneWidget.Controls.ROTATE_CAMERA) # create a frame that encapsulates the Scenewidget _widget3d.frame = gui.Rect(500, w.content_rect.y, 900, w.content_rect.height) mesh = o3d.geometry.TriangleMesh.create_sphere() mesh.compute_vertex_normals() material = rendering.MaterialRecord() material.shader = "defaultLit" _widget3d.scene.add_geometry('mesh', mesh, material) _widget3d.scene.set_background([200, 0, 0, 200]) # not working?! _widget3d.scene.camera.look_at([0, 0, 0], [1, 1, 1], [0, 0, 1]) _widget3d.set_on_mouse(self._on_mouse_widget3d) # gui layout gui_layout = gui.Vert(0, gui.Margins(0.5 * em, 0.5 * em, 0.5 * em, 0.5 * em)) # create frame that encapsulates the gui gui_layout.frame = gui.Rect(w.content_rect.x, w.content_rect.y, 500, w.content_rect.height) # File-chooser widget self._fileedit = gui.TextEdit() filedlgbutton = gui.Button("...") filedlgbutton.horizontal_padding_em = 0.5 filedlgbutton.vertical_padding_em = 0 filedlgbutton.set_on_clicked(self._on_filedlg_button) fileedit_layout = gui.Horiz() fileedit_layout.add_child(gui.Label("Model file")) fileedit_layout.add_child(self._fileedit) fileedit_layout.add_fixed(0.25 * em) fileedit_layout.add_child(filedlgbutton) # add to the top-level (vertical) layout gui_layout.add_child(fileedit_layout) w.add_child(gui_layout) w.add_child(_widget3d)
5
6
71,757,871
2022-4-5
https://stackoverflow.com/questions/71757871/why-is-sys-getsizeof-reporting-bigger-values-for-smaller-lists
I don't understand how getsizeof behaves differently when both lists are created as literals. I expect the output of the second getsizeof (for the shorter list) to be equal to or less than that of the first, not greater!
>>> sys.getsizeof([0,1,2,3,4,5,6])
120
>>> sys.getsizeof([0,1,2,3,4,5])
152
Short story: It's about overallocation and avoiding useless overallocation. Your cases have 6 and 7 elements. In both cases, Python first calculates 12 as the number of spots to allocate. The purpose of overallocation is to allow fast future extensions with more elements, so Python tries to guess what will happen in the future and acts accordingly. For the 6 elements case it thinks "Hmm, in case we shall add another 6 elements, then it would indeed be good to already have 12 spots, so let's do that now." For the 7 elements case it thinks "Hmm, in case we shall add another 7 elements, then 12 spots wouldn't be enough anyway (for then 14 elements), so we'd have to re-overallocate anyway, so let's just not overallocate now. Maybe there won't even be another extension." So for 6 elements it allocates 12 spots and for 7 it allocates 8 spots (minimal overallocation to a multiple of 4). That's 4 spots difference. A spot holds a pointer to an object, which with 64-bit Python takes 8 bytes. So 7 elements need 4*8 = 32 fewer bytes than 6 elements, which is what you observed (120 bytes vs 152 bytes). Long story: I can reproduce it in CPython 3.10.0. Here's what happens: >>> import dis >>> dis.dis('[0,1,2,3,4,5,6]') 1 0 BUILD_LIST 0 2 LOAD_CONST 0 ((0, 1, 2, 3, 4, 5, 6)) 4 LIST_EXTEND 1 6 RETURN_VALUE An empty list is built and then extended by that tuple. It first resizes to make room for the elements. Which does this to compute how many spots to allocate: new_allocated = ((size_t)newsize + (newsize >> 3) + 6) & ~(size_t)3; /* Do not overallocate if the new size is closer to overallocated size * than to the old size. */ if (newsize - Py_SIZE(self) > (Py_ssize_t)(new_allocated - newsize)) new_allocated = ((size_t)newsize + 3) & ~(size_t)3; Let's test that in Python: >>> for newsize in range(10): ... new_allocated = (newsize + (newsize >> 3) + 6) & ~3 ... if newsize - oldsize > new_allocated - newsize: ... new_allocated = (newsize + 3) & ~3 ... s = f'[{",".join(map(str, range(newsize)))}]' ... calculated_size = sys.getsizeof([]) + 8 * new_allocated ... actual_size = sys.getsizeof(eval(s)) ... print(f'{s:20} {calculated_size=} {actual_size=}') ... [] calculated_size=88 actual_size=56 [0] calculated_size=88 actual_size=64 [0,1] calculated_size=120 actual_size=72 [0,1,2] calculated_size=120 actual_size=120 [0,1,2,3] calculated_size=120 actual_size=120 [0,1,2,3,4] calculated_size=120 actual_size=120 [0,1,2,3,4,5] calculated_size=152 actual_size=152 [0,1,2,3,4,5,6] calculated_size=120 actual_size=120 [0,1,2,3,4,5,6,7] calculated_size=120 actual_size=120 [0,1,2,3,4,5,6,7,8] calculated_size=152 actual_size=152 Our calculated size matches the actual size. Except for fewer than three elements, but that's because they're not created by extending like that (I'll show that at the end), so it's not surprising our formula doesn't apply there. Let's look at the code again: new_allocated = (newsize + (newsize >> 3) + 6) & ~3 if newsize - oldsize > new_allocated - newsize: new_allocated = (newsize + 3) & ~3 And the values for your cases: [0,1,2,3,4,5] [0,1,2,3,4,5,6] oldsize 0 0 newsize 6 7 new_allocated 12 12 corrected 12 8 The reasoning from the code again: /* Do not overallocate if the new size is closer to overallocated size * than to the old size. The newsize 7 is closer to 12 than to 0, so it decides not to overallocate (well, it does overallocate to the closest multiple of 4, for memory alignment and because that appears to work well). 
The reasoning behind that, as stated by Serhiy Storchaka in the proposal: It is common enough case if the list is created from a sequence of a known size and do not adds items anymore. Or if it is created by concatenating of few sequences. In such cases the list can overallocate a space which will never be used. [...] My idea, that if we adds several items at a time and need to reallocate an array, we check if the overallocated size is enough for adding the same amount of items next time. If it is not enough, we do not overallocate. [...] It will save a space if extend a list by many items few times. So the idea is to consider future growths of the same size, and if the next growth would require a new overallocation anyway, the current overallocation wouldn't help, so let's not do it. About the sizes up to two elements: That doesn't use a tuple and LIST_EXTEND but instead puts the individual values onto the stack and builds the list directly in BUILD_LIST (note its argument 0, 1 or 2): dis.dis('[]') 1 0 BUILD_LIST 0 2 RETURN_VALUE dis.dis('[1969]') 1 0 LOAD_CONST 0 (1969) 2 BUILD_LIST 1 4 RETURN_VALUE dis.dis('[1969, 1956]') 1 0 LOAD_CONST 0 (1969) 2 LOAD_CONST 1 (1956) 4 BUILD_LIST 2 6 RETURN_VALUE The code to execute BUILD_LIST builds a new list object with the exact number of spots needed (the number oparg: 0, 1 or 2), no overallocation. And then right there it just uses a quick little loop to pop the values off the stack and put them into the list: case TARGET(BUILD_LIST): { PyObject *list = PyList_New(oparg); if (list == NULL) goto error; while (--oparg >= 0) { PyObject *item = POP(); PyList_SET_ITEM(list, oparg, item); } PUSH(list); DISPATCH(); }
8
11
71,699,098
2022-3-31
https://stackoverflow.com/questions/71699098/optuna-lightgbm-lightgbmpruningcallback
I am getting an error on my modeling of lightgbm searching for optimal auc. Any help would be appreciated. import optuna from sklearn.model_selection import StratifiedKFold from optuna.integration import LightGBMPruningCallback def objective(trial, X, y): param = { "objective": "binary", "metric": "auc", "verbosity": -1, "boosting_type": "gbdt", "lambda_l1": trial.suggest_loguniform("lambda_l1", 1e-8, 10.0), "lambda_l2": trial.suggest_loguniform("lambda_l2", 1e-8, 10.0), "num_leaves": trial.suggest_int("num_leaves", 2, 256), "feature_fraction": trial.suggest_uniform("feature_fraction", 0.4, 1.0), "bagging_fraction": trial.suggest_uniform("bagging_fraction", 0.4, 1.0), "bagging_freq": trial.suggest_int("bagging_freq", 1, 7), "min_child_samples": trial.suggest_int("min_child_samples", 5, 100), } cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1121218) cv_scores = np.empty(5) for idx, (train_idx, test_idx) in enumerate(cv.split(X, y)): X_train, X_test = X.iloc[train_idx], X.iloc[test_idx] y_train, y_test = y[train_idx], y[test_idx] pruning_callback = optuna.integration.LightGBMPruningCallback(trial, "auc") model = lgb.LGBMClassifier(**param) model.fit( X_train, y_train, eval_set=[(X_test, y_test)], early_stopping_rounds=100, callbacks=[pruning_callback]) preds = model.predict_proba(X_test) cv_scores[idx] = log_loss(y_test, preds) auc_scores[idx] = roc_auc_score(y_test, preds) return np.mean(cv_scores), np.mean(auc_scores) study = optuna.create_study(direction="minimize", study_name="LGBM Classifier") func = lambda trial: objective(trial, sample_df[cols_to_keep], sample_df[target]) study.optimize(func, n_trials=1) Trial 0 failed because of the following error: ValueError('The intermediate values are inconsistent with the objective values in terms of study directions. Please specify a metric to be minimized for LightGBMPruningCallback.',)*
Your objective function returns two values, but you specify only one direction when creating the study. Try this:
study = optuna.create_study(directions=["minimize", "maximize"], study_name="LGBM Classifier")
Note that the order of the directions must match the order of the values returned by the objective (here: mean log loss to minimize, mean AUC to maximize).
5
3
71,757,452
2022-4-5
https://stackoverflow.com/questions/71757452/is-it-possible-to-properly-type-hint-the-filterm-function-in-python
I'm currently self-studying functional programming by writing a monad library in python. And I'm having trouble with type hinting. So for example, there is a function filterM in Haskell with signature filterM :: (a -> m Bool) -> [a] -> m [a] Ideally, if python can pattern match "subtypes" of a TypeVar by putting a bracket after it, then I should be able to do it with something like this: T = TypeVar('T') M = TypeVar('M', bound=Monad) def filterM(filter_func: Callable[[T], M[bool]], iterable: list[T]) -> M[list[T]] But it seems that the above syntax wouldn't work. In fact, it seems like there is no way to "extract" the type of monad I pass in at all. Say I pass in a Callable[[int], Maybe[bool]], the best I achieved was to take the whole Maybe[bool] as a single TypeVar. Then there is no way to convert it to the correct output type Maybe[list[int]].
Currently, what you want cannot be done: Python's type system has no support for higher-kinded type variables, so a TypeVar such as M cannot itself be parameterized as M[bool]. You'll have to make a plan that doesn't require it.
5
4
71,756,172
2022-4-5
https://stackoverflow.com/questions/71756172/how-to-unpin-pinned-package-in-conda-mamba
I have a conda environment that has a package pinned as follows: Pinned packages: - python 3.8.* - bcbio-gff 0.6.7.* - snakemake 6.7.0.* How do I remove the pin for one of the pinned packages, just using command line conda / mamba? I've tried conda update snakemake but that doesn't remove the pin. I can change the pin easily, e.g. by conda install snakemake=7, but then I have snakemake still pinned. I want to unpin snakemake entirely. I had a look at potentially similar questions, but none seemed to answer my question.
This is only a suboptimal answer, but it's the best I could find so far: You need to manually remove the pinned package from a config file called pinned which you can find in CONDA_PATH/base/envs/ENV_NAME/conda-meta/pinned In my case I had to do: vim /usr/local/Caskroom/mambaforge/base/envs/nextstrain/conda-meta/pinned And remove the line: snakemake=6.7.0 It would be much nicer if there was a conda CLI command - but it doesn't seem to exist.
6
5
71,754,506
2022-4-5
https://stackoverflow.com/questions/71754506/viewing-pytorch-weights-from-a-pth-file
I have a .pth file created with Pytorch with weights. How would I be able to view the weights from this file? I tried this code to load and view but it was not working (as a newbie, I might be entirely wrong)- import torch import torchvision.models as models torch.save('weights\kharif_crops_final.pth') models.load_state_dict(torch.load('weights\kharif_crops_final.pth')) models.eval() print(models)
import torch model = torch.load('path') print(model) (Verify and confirm)
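A slightly fuller sketch, assuming the .pth file holds a state_dict (a mapping from parameter names to weight tensors) rather than a full pickled model; the path is the one from the question:

import torch

# map_location='cpu' lets the checkpoint load even on a machine without a GPU.
state = torch.load('weights/kharif_crops_final.pth', map_location='cpu')

# A state_dict maps each parameter name to its weight tensor.
for name, tensor in state.items():
    print(name, tuple(tensor.shape))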
5
5
71,753,572
2022-4-5
https://stackoverflow.com/questions/71753572/importerror-cannot-import-name-callable-from-traitlets
I can run jupyter notebook, but when I try to open a jupyter file I get the following error on my browser 500 : Internal Server Error in the console, I get this error message To access the notebook, open this file in a browser: file:///C:/Users/Bruno/AppData/Roaming/jupyter/runtime/nbserver-23164-open.html Or copy and paste one of these URLs: http://localhost:8888/?token=1e5a289e6fd9b36cab176131f1e3d0b673921c1a76258552 or http://127.0.0.1:8888/?token=1e5a289e6fd9b36cab176131f1e3d0b673921c1a76258552 [W 16:19:04.850 NotebookApp] 404 GET /ipyparallel/clusters?_=1649168344196 (::1) 16.95ms referer=http://localhost:8888/tree [E 16:19:24.938 NotebookApp] Uncaught exception GET /notebooks/examples/jupyter_notebooks/greenhouse.ipynb (::1) HTTPServerRequest(protocol='http', host='localhost:8888', method='GET', uri='/notebooks/examples/jupyter_notebooks/greenhouse.ipynb', version='HTTP/1.1', remote_ip='::1') Traceback (most recent call last): File "c:\users\...\tornado\web.py", line 1697, in _execute result = method(*self.path_args, **self.path_kwargs) File "c:\users\..\tornado\web.py", line 3174, in wrapper return method(self, *args, **kwargs) File "c:\users\..\notebook\notebook\handlers.py", line 96, in get get_frontend_exporters=get_frontend_exporters File "c:\users\..\notebook\base\handlers.py", line 507, in render_template return template.render(**ns) File "c:\users\...\jinja2\asyncsupport.py", line 76, in render return original_render(self, *args, **kwargs) File "c:\users\...\jinja2\environment.py", line 1008, in render return self.environment.handle_exception(exc_info, True) File "c:\users\...\jinja2\environment.py", line 780, in handle_exception reraise(exc_type, exc_value, tb) File "c:\users\...\jinja2\_compat.py", line 37, in reraise raise value.with_traceback(tb) File "c:\users\...\notebook\templates\notebook.html", line 1, in top-level template code {% extends "page.html" %} File "c:\users\...\notebook\templates\page.html", line 154, in top-level template code {% block header %} File "c:\users\...\notebook\templates\notebook.html", line 114, in block "header" {% for exporter in get_frontend_exporters() %} File "c:\users\...\notebook\notebook\handlers.py", line 19, in get_frontend_exporters from nbconvert.exporters.base import get_export_names, get_exporter File "c:\users\...\nbconvert\__init__.py", line 4, in <module> from .exporters import * File "c:\users\...\nbconvert\exporters\__init__.py", line 4, in <module> from .slides import SlidesExporter File "c:\users\...\nbconvert\exporters\slides.py", line 12, in <module> from ..preprocessors.base import Preprocessor File "c:\users\...\nbconvert\preprocessors\__init__.py", line 10, in <module> from .execute import ExecutePreprocessor File "c:\users\...\nbconvert\preprocessors\execute.py", line 8, in <module> from nbclient import NotebookClient, execute as _execute File "c:\users\...\nbclient\__init__.py", line 6, in <module> from .client import NotebookClient, execute # noqa: F401 File "c:\users\...\nbclient\client.py", line 18, in <module> from traitlets import ( ImportError: cannot import name 'Callable' from 'traitlets' (c:\users\...\traitlets\__init__.py) [E 16:19:24.954 NotebookApp] { "Host": "localhost:8888", "Connection": "keep-alive", "Sec-Ch-Ua": "\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"99\", \"Google Chrome\";v=\"99\"", "Sec-Ch-Ua-Mobile": "?0", "Sec-Ch-Ua-Platform": "\"Windows\"", "Upgrade-Insecure-Requests": "1", "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) 
Chrome/99.0.4844.84 Safari/537.36", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9", "Sec-Fetch-Site": "same-origin", "Sec-Fetch-Mode": "navigate", "Sec-Fetch-User": "?1", "Sec-Fetch-Dest": "document", "Referer": "http://localhost:8888/tree/examples/jupyter_notebooks", "Accept-Encoding": "gzip, deflate, br", "Accept-Language": "es-MX,es;q=0.9,en-US;q=0.8,en;q=0.7,es-419;q=0.6,de;q=0.5,it;q=0.4", "Cookie": "_xsrf=2|d1d886ef|7780482558d83a8cceac4628e4649561|1649085007; username-localhost-8888=\"2|1:0|10:1649168343|23:username-localhost-8888|44:MTBiMWRjMTMzODlhNDUyM2IxYzMxNTgwZThlYTUyODk=|6f47252b3d75cbe3567f 0fb5380e2c6522193331 I am using Python 3.7 on windows and this is the list of packages I have installed absl-py==0.8.0 alabaster==0.7.12 appdirs==1.4.3 astor==0.8.0 atomicwrites==1.3.0 attrs==19.1.0 Babel==2.9.0 backcall==0.1.0 bleach==3.1.0 bokeh==2.3.1 build==0.3.1.post1 cachetools==4.2.1 certifi==2020.12.5 chardet==4.0.0 charset-normalizer==2.0.12 colorama==0.4.4 coverage==4.5.4 cycler==0.10.0 Cython==0.29.23 decorator==4.4.0 defusedxml==0.6.0 depinfo==1.5.1 docutils==0.16 entrypoints==0.3 fastjsonschema==2.15.3 future==0.17.1 gast==0.3.1 google-pasta==0.2.0 gpflow==1.5.0 gpytorch==1.4.2 grpcio==1.37.0 h5py==2.10.0 idna==2.10 imagesize==1.2.0 importlib-metadata==4.11.3 ipykernel==5.1.3 ipyparallel==8.2.1 ipython==7.10.2 ipython-genutils==0.2.0 ipywidgets==7.5.1 jedi==0.15.1 Jinja2==2.10.1 joblib==0.14.0 jsonschema==3.2.0 jupyter==1.0.0 jupyter-client==6.2.0 jupyter-console==6.4.0 jupyter-core==4.7.1 jupyterlab-pygments==0.1.2 Keras-Applications==1.0.8 Keras-Preprocessing==1.1.0 kiwisolver==1.1.0 latexcodec==2.0.1 Markdown==3.1.1 MarkupSafe==1.1.1 matplotlib==3.1.1 matplotlib-inline==0.1.3 mistune==0.8.4 more-itertools==7.2.0 mpmath==1.1.0 multipledispatch==0.6.0 nbclient==0.5.11 nbconvert==6.0.7 nbformat==5.3.0 nest-asyncio==1.5.1 nose==1.3.7 notebook==6.0.2 numpy==1.17.1 oauthlib==3.1.0 optlang==1.4.4 packaging==19.1 pandas==1.1.0 pandocfilters==1.4.2 parso==0.5.2 pep517==0.10.0 pexpect==4.7.0 phantomjs-binary==2.1.3 pickleshare==0.7.5 Pillow==8.2.0 Pint==0.10.1 pipdeptree==0.13.2 pluggy==0.13.0 ply==3.11 prettytable==0.7.2 prometheus-client==0.7.1 prompt-toolkit==2.0.10 protobuf==3.9.1 psutil==5.9.0 PTable==0.9.2 ptyprocess==0.6.0 py==1.8.0 pyasn1==0.4.8 pyasn1-modules==0.2.8 pybtex==0.24.0 pybtex-docutils==1.0.0 Pygments==2.5.2 pyparsing==2.4.2 PyQt5==5.13.2 PyQt5-sip==12.7.0 pyrsistent==0.15.6 pytest==5.1.2 python-dateutil==2.8.0 python-libsbml-experimental==5.18.0 pytz==2019.2 PyUtilib==5.7.3 pywin32==303 pywinpty==2.0.5 PyYAML==5.1.2 pyzmq==18.1.1 qtconsole==4.6.0 requests==2.27.1 rsa==4.7.2 ruamel.yaml==0.16.5 ruamel.yaml.clib==0.1.2 scikit-learn==0.21.3 scipy==1.3.1 seaborn==0.9.0 selenium==3.141.0 Send2Trash==1.5.0 six==1.12.0 snowballstemmer==2.1.0 Sphinx==4.5.0 sphinxcontrib-applehelp==1.0.2 sphinxcontrib-devhelp==1.0.2 sphinxcontrib-htmlhelp==2.0.0 sphinxcontrib-jsmath==1.0.1 sphinxcontrib-qthelp==1.0.3 sphinxcontrib-serializinghtml==1.1.5 swiglpk==4.65.0 sympy==1.4 tabulate==0.8.6 tensorboard==1.14.0 tensorboard-plugin-wit==1.8.0 tensorflow==1.14.0 tensorflow-estimator==1.14.0 termcolor==1.1.0 terminado==0.8.3 terminaltables==3.1.0 testpath==0.4.4 toml==0.10.2 torch==1.8.1 torchaudio==0.8.1 tornado==6.0.3 tqdm==4.64.0 traitlets==4.3.3 typing-extensions==3.7.4.3 urllib3==1.25.7 wcwidth==0.1.7 webencodings==0.5.1 Werkzeug==0.15.6 widgetsnbextension==3.5.1 wrapt==1.11.2 
xlrd==1.2.0 zipp==0.6.0
The problem was solved by installing traitlets v5.1.1 and traitlets-widget v5.5.0
5
6
71,743,450
2022-4-4
https://stackoverflow.com/questions/71743450/how-to-cache-python-dependecies-in-gitlab-ci-cd-without-using-venv
I am trying to use a cache in my .gitlab-ci.yml file, but the pipeline time only increases (I am testing by adding blank lines). I want to cache the Python packages I install with pip. Here is the stage where I install and use these packages (the other stages use Docker): image: python:3.8-slim-buster variables: PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache/pip" cache: paths: - .cache/pip stages: - lint - test - build - deploy test-job: stage: test before_script: - apt-get update - apt-get install -y --no-install-recommends gcc - apt install -y default-libmysqlclient-dev - pip3 install -r requirements.txt script: - pytest tests/test.py After running this, the pipeline time just increases with every run. I was following these steps from the GitLab documentation - https://docs.gitlab.com/ee/ci/caching/#cache-python-dependencies although I am not using venv, since it works without it. I am still not sure why the PIP_CACHE_DIR variable is needed if it is not used, but I followed the documentation. What is the correct way to cache Python dependencies? I would prefer not to use venv.
PIP_CACHE_DIR is a pip feature that can be used to set the cache dir. The second answer to this question explains it. There may be some disagreement on this, but I think that for something like pip packages or node modules, it is quicker to download them fresh for each pipeline. When the packages are cached by GitLab using cache: paths: - .cache/pip the cache that GitLab creates gets zipped and stored somewhere (where it gets stored depends on the runner config). This requires zipping and uploading the cache. Then, when another pipeline gets created, the cache needs to be downloaded and unpacked. If using a cache is slowing down job execution, then it might make sense to just remove the cache.
12
6
71,689,095
2022-3-31
https://stackoverflow.com/questions/71689095/how-to-solve-the-pytorch-runtimeerror-numpy-is-not-available-without-upgrading
I am running a simple CNN using PyTorch for some audio classification on my Raspberry Pi 4 on Python 3.9.2 (64-bit). For the audio manipulation needed I am using librosa. librosa depends on the numba package, which is only compatible with numpy version <= 1.20. When running my code, the line spect_tensor = torch.from_numpy(spect).double() throws the RuntimeError: RuntimeError: Numpy is not available Searching the internet for solutions, I found that upgrading numpy to the latest version resolves that specific error but then throws another one, because numba only works with numpy <= 1.20. Is there a solution to this problem that does not involve searching for an alternative to librosa?
Just wanted to give an update on my situation. I downgraded torch to version 0.9.1 which solved the original issue. Now OpenBLAS is throwing a warning because of an open MPLoop. But for now my code is up and running.
29
6
71,737,743
2022-4-4
https://stackoverflow.com/questions/71737743/how-can-i-change-playback-speed-of-an-audio-file-in-python-whilst-it-is-playing
I've done a lot of searching to try to find a way to achieve this, but the solutions I've found either don't do what I need or I don't understand them. I'm looking for a way of playing a sound in Python (non-blocking) that allows me to change the playback speed in real time, as it's playing, with no gaps or cutouts. Changing the pitch is fine, and audio quality isn't even that important. Most of the solutions I've found only allow setting the playback speed once, before the file is played.
I've found a solution, using python-mpv, a wrapper for mpv.io from pynput.keyboard import Key, Listener import mpv speed=1 #quick function to change speed via keyboard. def on_press(key): global speed if key.char == 'f' : speed=speed-0.1 player.speed=speed if key.char == 'g' : speed=speed+0.1 player.speed=speed player = mpv.MPV(ytdl=True) player.play('/Users/regvardy/mediapipe_faceswap-main/test.wav') with Listener( on_press=on_press) as listener: listener.join() while True: player.speed=speed I haven't tested it for stability yet. It feels like a workaround rather than me actually finding out how to do it so I may try and find a different solution.
6
0
71,747,998
2022-4-5
https://stackoverflow.com/questions/71747998/pytorch-assign-values-from-one-mask-to-another-masked-by-itself
I have a mask active that tracks batches that still have not terminated in a recurrent process. Its dimension is [batch_full,], and its true entries show which elements still need to be used in the current step. The recurrent process generates another mask, terminated, which has as many elements as there are true values in the active mask. Now, I want to take values from ~terminated and put them back into active, but at the right indices. Basically I want to do: import torch active = torch.ones([4,], dtype=torch.bool) active[:2] = torch.tensor(False) terminated = torch.tensor([True, False]) active[active] = ~terminated print(active) # expected [F, F, F, T] However, I get the error: RuntimeError: unsupported operation: some elements of the input tensor and the written-to tensor refer to a single memory location. Please clone() the tensor before performing the operation. How can I perform the operation described above in an efficient way?
There are a few solutions; I will also give their speed as measured by timeit (10k repetitions, on a 2021 MacBook Pro). The simplest solution, taking 0.260s: active[active.clone()] = ~terminated We can use the masked_scatter_ in-place operation for about a 2x speedup (0.136s): active.masked_scatter_( active, ~terminated, ) An out-of-place operation, taking 0.161s, would be: active = torch.masked_scatter( active, active, ~terminated, )
4
5
71,747,824
2022-4-5
https://stackoverflow.com/questions/71747824/slicing-2d-python-list
Let's say I have a list: list = [[1, 2, 3, 4], ['a', 'b', 'c', 'd'], [9, 8, 7, 6]] and I would like to get something like: newList = [[2, 3, 4], ['b', 'c', 'd'], [8, 7, 6]] hence I tried going with this solution print(list[0:][1:]) But I get this output [['a', 'b', 'c', 'd'], [9, 8, 7, 6]] Therefore I tried print(list[1:][0:]) but I get precisely the same result. I tried to do some research and experiments on this specific subject, but without any result.
You want the elements from index 1 to the end of every row in your matrix; a list comprehension does that (the list is renamed mylist here so it does not shadow the built-in list): mylist = [[1, 2, 3, 4], ['a', 'b', 'c', 'd'], [9, 8, 7, 6]] new_list = [row[1:] for row in mylist]
4
5
71,735,261
2022-4-4
https://stackoverflow.com/questions/71735261/how-can-i-show-transformation-of-coordinate-grid-lines-in-python
Suppose I have the regular cartesian coordinate system $(x,y)$ and I consider a rectangular mesh region $D$ (split into little squares). I want to see how the domain D would be mapped under a coordinate transform T:(x,y) -> (u(x,y) ,v(x,y) ) in Python? I'm looking for something like this: See here. Could I be advised on how this could be done? I am a total beginner at python and programming.
If I understand you correctly, you want to be able to see a sort of a grid plot of a transformed cartesian space? In that case, maybe something like this, using Numpy and Matplotlib. (You could do the same with just Pillow to draw some lines, but this is more convenient...) EDIT: Following the discussion in the comments, I changed this a bit compared to the simpler original (see edit history), to make it easier to plot multiple transformations, as well as to color the lines to make it easier to follow how they're transformed. It's still pretty simple, though. (For fun, I also threw in a couple of extra transformations.) import math import numpy as np import matplotlib.pyplot as plt from matplotlib import cm colormap = cm.get_cmap("rainbow") def plot_grid( xmin: float, xmax: float, ymin: float, ymax: float, n_lines: int, line_points: int, map_func, ): """ Plot a transformation of a regular grid. :param xmin: Minimum x value :param xmax: Maximum x value :param ymin: Minimum y value :param ymax: Maximum y value :param n_lines: Number of lines per axis :param line_points: Number of points per line :param map_func: Function to map the grid points to new coordinates """ # List for gathering the lines into. lines = [] # Iterate over horizontal lines. for y in np.linspace(ymin, ymax, n_lines): lines.append([map_func(x, y) for x in np.linspace(xmin, xmax, line_points)]) # Iterate over vertical lines. for x in np.linspace(xmin, xmax, n_lines): lines.append([map_func(x, y) for y in np.linspace(ymin, ymax, line_points)]) # Plot all the lines. for i, line in enumerate(lines): p = i / (len(lines) - 1) # Normalize to 0-1. # Transpose the list of points for passing to plot. xs, ys = zip(*line) # Get the line color from the colormap. plt.plot(xs, ys, color=colormap(p)) # Define some mapping functions. def identity(x, y): return x, y def example(x, y): c = complex(x, y) ** 2 return (c.real, c.imag) def wobbly(x: float, y: float): return x + math.sin(y * 2) * 0.2, y + math.cos(x * 2) * 0.3 def vortex(x: float, y: float): dst = (x - 2) ** 2 + (y - 2) ** 2 ang = math.atan2(y - 2, x - 2) return math.cos(ang - dst * 0.1) * dst, math.sin(ang - dst * 0.1) * dst # Set up the plot surface... plt.figure(figsize=(8, 8)) plt.tight_layout() plt.subplot(2, 2, 1) plt.title("Identity") plot_grid(0, 4, 0, 4, 10, 10, identity) plt.subplot(2, 2, 2) plt.title("Example") plot_grid(0, 4, 0, 4, 10, 10, example) plt.subplot(2, 2, 3) plt.title("Wobbly") plot_grid(0, 4, 0, 4, 10, 40, wobbly) plt.subplot(2, 2, 4) plt.title("Vortex") plot_grid(0, 4, 0, 4, 10, 40, vortex) plt.savefig("so71735261-2.png") plt.show() The result image is:
4
7
71,744,729
2022-4-4
https://stackoverflow.com/questions/71744729/how-to-grab-last-row-of-datetime-in-pandas-dataframe
I currently have a very large .csv with 2 million rows. I've read in the csv and only have 2 columns, number and timestamp (in unix). My goal is to grab the last and largest number for each day (e.g. 1/1/2021, 1/2/2021, etc.) I have converted unix to datetime and used df.groupby('timestamp').tail(1) but am still not able to return the last row per day. Am I using the groupby wrong? import pandas as pd def main(): df = pd.read_csv('blocks.csv', usecols=['number', 'timestamp']) print(df.head()) df['timestamp'] = pd.to_datetime(df['timestamp'],unit='s') x = df.groupby('timestamp').tail(1) print(x) if __name__ == '__main__': main() Desired Output: number timestamp 11,509,218 2021-01-01 11,629,315 2021-01-02 11,782,116 2021-01-03 12,321,123 2021-01-04 ...
The "problem" lies in the grouper, use .dt.date for correct grouping (assuming your data is already sorted): x = df.groupby(df['timestamp'].dt.date).tail(1) print(x)
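A possible caveat, as the answer notes, is that this assumes the data is already sorted by timestamp. If that is not guaranteed, a rough sketch of an order-independent alternative (reusing the column names from the question) is to pick, for each calendar day, the row holding the largest number:

import pandas as pd

df = pd.read_csv('blocks.csv', usecols=['number', 'timestamp'])
df['timestamp'] = pd.to_datetime(df['timestamp'], unit='s')

# Index label of the row with the largest 'number' within each calendar day,
# then select those rows; this does not depend on the input ordering.
idx = df.groupby(df['timestamp'].dt.date)['number'].idxmax()
x = df.loc[idx]
print(x)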
4
4
71,739,517
2022-4-4
https://stackoverflow.com/questions/71739517/detect-squares-paintings-in-images-and-draw-contour-around-them-using-python
I'm trying to detect and draw a rectangular contour on every painting on for example this image: I followed some guides and did the following: Grayscale conversion Applied median blur Sharpen image Applied adaptive Threshold Applied Morphological Gradient Find contours Draw contours And got the following result: I know it's messy but is there a way to somehow detect and draw a contour around the paintings better? Here is the code I used: path = '<PATH TO THE PICTURE>' #reading in and showing original image image = cv2.imread(path) image = cv2.resize(image,(880,600)) # resize was nessecary because of the large images cv2.imshow("original", image) cv2.waitKey(0) cv2.destroyAllWindows() # grayscale conversion gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) cv2.imshow("painting_gray", gray) cv2.waitKey(0) cv2.destroyAllWindows() # we need to find a way to detect the edges better so we implement a couple of things # A little help was found on stackoverflow: https://stackoverflow.com/questions/55169645/square-detection-in-image median = cv2.medianBlur(gray,5) cv2.imshow("painting_median_blur", median) #we use median blur to smooth the image cv2.waitKey(0) cv2.destroyAllWindows() # now we sharpen the image with help of following URL: https://www.analyticsvidhya.com/blog/2021/08/sharpening-an-image-using-opencv-library-in-python/ kernel = np.array([[0, -1, 0], [-1, 5,-1], [0, -1, 0]]) image_sharp = cv2.filter2D(src=median, ddepth=-1, kernel=kernel) cv2.imshow('painting_sharpend', image_sharp) cv2.waitKey(0) cv2.destroyAllWindows() # now we apply adapptive thresholding # thresholding: https://opencv24-python-tutorials.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_thresholding/py_thresholding.html#adaptive-thresholding thresh = cv2.adaptiveThreshold(src=image_sharp,maxValue=255,adaptiveMethod=cv2.ADAPTIVE_THRESH_GAUSSIAN_C, thresholdType=cv2.THRESH_BINARY,blockSize=61,C=20) cv2.imshow('thresholded image', thresh) cv2.waitKey(0) cv2.destroyAllWindows() # lets apply a morphological transformation kernel = np.ones((7,7),np.uint8) gradient = cv2.morphologyEx(thresh, cv2.MORPH_GRADIENT, kernel) cv2.imshow('dilated image', gradient) cv2.waitKey(0) cv2.destroyAllWindows() # # lets now find the contours of the image # # find contours: https://docs.opencv.org/4.x/dd/d49/tutorial_py_contour_features.html contours, hierarchy = cv2.findContours(gradient, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) print("contours: ", len(contours)) print("hierachy: ", len(hierarchy)) print(hierarchy) cv2.drawContours(image, contours, -1, (0,255,0), 3) cv2.imshow("contour image", image) cv2.waitKey(0) cv2.destroyAllWindows() Tips, help or code is appreciated!
Here's a simple approach: Obtain binary image. We load the image, grayscale, Gaussian blur, then Otsu's threshold to obtain a binary image. Two pass dilation to merge contours. At this point, we have a binary image but individual separated contours. Since we can assume that a painting is a single large square contour, we can merge small individual adjacent contours together to form a single contour. To do this, we create a vertical and horizontal kernel using cv2.getStructuringElement then dilate to merge them together. Depending on the image, you may need to adjust the kernel sizes or number of dilation iterations. Detect paintings. Now we find contours and filter using contour area using a minimum threshold area to filter out small contours. Finally we obtain the bounding rectangle coordinates and draw the rectangle with cv2.rectangle. Code import cv2 # Load image, grayscale, Gaussian blur, Otsu's threshold image = cv2.imread('1.jpeg') gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) blur = cv2.GaussianBlur(gray, (13,13), 0) thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1] # Two pass dilate with horizontal and vertical kernel horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9,5)) dilate = cv2.dilate(thresh, horizontal_kernel, iterations=2) vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,9)) dilate = cv2.dilate(dilate, vertical_kernel, iterations=2) # Find contours, filter using contour threshold area, and draw rectangle cnts = cv2.findContours(dilate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: area = cv2.contourArea(c) if area > 20000: x,y,w,h = cv2.boundingRect(c) cv2.rectangle(image, (x, y), (x + w, y + h), (36, 255, 12), 3) cv2.imshow('thresh', thresh) cv2.imshow('dilate', dilate) cv2.imshow('image', image) cv2.waitKey()
4
4
71,740,863
2022-4-4
https://stackoverflow.com/questions/71740863/django-celery-error-unrecoverable-error-attributeerrorentrypoint-object-ha
I am perplexed,from a weird error which i have no idea as i am new to celery, this error occurs on just the setup phase, every thing is simply configured as written in the celery doc https://docs.celeryq.dev/en/stable/django/first-steps-with-django.html the tracback is: (env) muhammad@huzaifa:~/Desktop/practice/app$ celery -A app worker -l INFO [2022-04-04 16:21:40,988: WARNING/MainProcess] No hostname was supplied. Reverting to default 'localhost' [2022-04-04 16:21:40,993: CRITICAL/MainProcess] Unrecoverable error: AttributeError("'EntryPoint' object has no attribute 'module_name'") Traceback (most recent call last): File "/home/muhammad/Desktop/practice/env/lib/python3.8/site-packages/celery/app/base.py", line 1250, in backend return self._local.backend AttributeError: '_thread._local' object has no attribute 'backend' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/muhammad/Desktop/practice/env/lib/python3.8/site-packages/celery/worker/worker.py", line 203, in start self.blueprint.start(self) File "/home/muhammad/Desktop/practice/env/lib/python3.8/site-packages/celery/bootsteps.py", line 112, in start self.on_start() File "/home/muhammad/Desktop/practice/env/lib/python3.8/site-packages/celery/apps/worker.py", line 136, in on_start self.emit_banner() File "/home/muhammad/Desktop/practice/env/lib/python3.8/site-packages/celery/apps/worker.py", line 170, in emit_banner ' \n', self.startup_info(artlines=not use_image))), File "/home/muhammad/Desktop/practice/env/lib/python3.8/site-packages/celery/apps/worker.py", line 232, in startup_info results=self.app.backend.as_uri(), File "/home/muhammad/Desktop/practice/env/lib/python3.8/site-packages/celery/app/base.py", line 1252, in backend self._local.backend = new_backend = self._get_backend() File "/home/muhammad/Desktop/practice/env/lib/python3.8/site-packages/celery/app/base.py", line 955, in _get_backend backend, url = backends.by_url( File "/home/muhammad/Desktop/practice/env/lib/python3.8/site-packages/celery/app/backends.py", line 69, in by_url return by_name(backend, loader), url File "/home/muhammad/Desktop/practice/env/lib/python3.8/site-packages/celery/app/backends.py", line 47, in by_name aliases.update(load_extension_class_names(extension_namespace)) File "/home/muhammad/Desktop/practice/env/lib/python3.8/site-packages/celery/utils/imports.py", line 146, in load_extension_class_names yield ep.name, ':'.join([ep.module_name, ep.attrs[0]]) AttributeError: 'EntryPoint' object has no attribute 'module_name' the init file is: from __future__ import absolute_import, unicode_literals from .celery import app as celery_app __all__ = ('celery_app',) celery.py: import os from celery import Celery # Set the default Django settings module for the 'celery' program. os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'app.settings') app = Celery('app', broker='localhost') # Using a string here means the worker doesn't have to serialize # the configuration object to child processes. # - namespace='CELERY' means all celery-related configuration keys # should have a `CELERY_` prefix. app.config_from_object('django.conf:settings', namespace='CELERY') # Load task modules from all registered Django apps. 
app.autodiscover_tasks() @app.task(bind=True) def debug_task(self): print(f'Request: {self.request!r}') For your information, rabbitmq-server is running on localhost, which is why I have set BROKER_URL to 'localhost': rabbitmq-server.service - RabbitMQ Messaging Server Loaded: loaded (/lib/systemd/system/rabbitmq-server.service; enabled; vendor preset: enabled) Active: active (running) since Sun 2022-04-03 16:14:04 PKT; 24h ago Main PID: 1005 (beam.smp) Status: "Initialized" Tasks: 91 (limit: 9090) I don't know why this is happening; I have been looking around for hours and can't find a solution, or even this error mentioned on Google or anywhere else. Any help would be greatly appreciated. Thanks!
You are encountering a new bug with celery, reported here: https://github.com/celery/celery/issues/7409 The workaround you can try is to pin the versions of your dependency for celery to an older version (i.e. before release of the celery bug). For instance, my requirements.txt includes: celery==5.2.3 or on the command line you might just be able to run pip install celery==5.2.3 (For reference, the versions where I was seeing the error reported here were celery 5.2.5 & click 8.1.2) Hopefully this gets fixed upstream and we can remove our version pins.
7
14
71,739,870
2022-4-4
https://stackoverflow.com/questions/71739870/how-to-install-python-2-on-macos-12-3
macOS 12.3 update drops Python 2 and replaces it with version 3: https://developer.apple.com/documentation/macos-release-notes/macos-12_3-release-notes Python Deprecations Python 2.7 was removed from macOS in this update. Developers should use Python 3 or an alternative language instead. (39795874) I understand we need to migrate to version 3, but in the meantime we still need version 2. Homebrew does not seem to have it anymore: brew install [email protected] Warning: No available formula with the name "[email protected]". Did you mean [email protected], [email protected], [email protected], [email protected] or python-yq? brew install python2 Warning: No available formula with the name "python2". Did you mean ipython, bpython, jython or cython? What gives?
You can get any Python release, including the last Python 2, from the official download site: https://www.python.org/downloads/release/python-2718/ β†’ macOS 64-bit installer
48
101
71,736,573
2022-4-4
https://stackoverflow.com/questions/71736573/how-to-get-the-deepest-list-in-list-with-abstract-element
Here I have some lists with nested lists as elements (a, b, and c) a = [2, 4, [9, 10]] b = [1, 3, 5, 9, [11, 13, 14, 15, [16, 17, 19, 24]]] c = [2, 4, [5, 11, 13, [14, 17, 29, [31, 19]]]] npA = np.array(a, dtype = object) npB = np.array(b, dtype = object) npC = np.array(c, dtype = object) I am trying to get the deepest list in each list a, b, c print(npA[-1]) # it shows [9,10] print(npB[-1][-1]) # it shows [16, 17, 19, 24] print(npC[-1][-1][-1]) # it shows [31, 19] How do I generalize this to any nesting depth? Or is there any built-in function of NumPy that can directly handle this?
You can solve this recursively, without numpy: from typing import List def deepest_list(l: List) -> List: last_item = l[-1] if isinstance(last_item, list): return deepest_list(last_item) return l output: deepest_list(c) [31, 19]
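The helper above follows the question's examples, where the nested list is always the last element. As a hedged sketch only, a more general variant that scans every element and tracks depth could look like this:

from typing import List, Tuple

def deepest_list_any(l: List, depth: int = 0) -> Tuple[List, int]:
    """Return (deepest nested list, its depth), checking all elements."""
    best, best_depth = l, depth
    for item in l:
        if isinstance(item, list):
            cand, cand_depth = deepest_list_any(item, depth + 1)
            if cand_depth > best_depth:
                best, best_depth = cand, cand_depth
    return best, best_depth

c = [2, 4, [5, 11, 13, [14, 17, 29, [31, 19]]]]
print(deepest_list_any(c)[0])  # [31, 19]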
4
5
71,735,869
2022-4-4
https://stackoverflow.com/questions/71735869/how-to-reduce-number-if-statements-using-dict
I have the following code with multiple cases: def _extract_property_value(selector: Selector) -> str: raw_value = selector.xpath("span[2]") default_value = raw_value.xpath("./text()").get().strip() value_with_a = ', '.join([value.strip() for value in raw_value.xpath("./a /text()").getall()]) value_with_div_and_a = ', '.join([value.strip() for value in raw_value.xpath("./div /a /text()").getall()]) if value_with_a: return value_with_a elif value_with_div_and_a: return value_with_div_and_a elif default_value: return default_value I want to get rid of the if-statements and simplify this code as much as possible. I am not a good Python dev. I know there is the "strategy" pattern, and below I was trying to implement it: def _extract_property_value(selector: Selector) -> str: raw_value = selector.xpath("span[2]") values_dict = { 'default value': raw_value.xpath("./text()").get().strip(), 'value with a': ', '.join([value.strip() for value in raw_value.xpath("./a /text()").getall()]), 'value with div and a': ', '.join([value.strip() for value in raw_value.xpath("./div /a /text()").getall()]) } return [item for item in values_dict.values() if item != ''][0] but... now when I think of it, using strategy there was a bad idea. I am not sure. Can someone help me simplify that code? Or are those if-statements just necessary?
We can reduce the number of if statements but without the aid of a dictionary. The code in question unconditionally assigns values to 3 variables. Having done so, those variables are examined to determine which, if any, is to be returned to the caller. However, there are no dependencies between those variables. Therefore they could be processed in order of priority and, if an appropriate value is acquired, then it could be returned immediately thereby making the process considerably more efficient. def _extract_property_value(selector): def f1(rv): return rv.xpath("./text()").get().strip() def f2(rv): return ', '.join([value.strip() for value in rv.xpath("./a /text()").getall()]) def f3(rv): return ', '.join([value.strip() for value in rv.xpath("./div /a /text()").getall()]) raw_value = selector.xpath("span[2]") for func in [f2, f3, f1]: # note the priority order - default last if (v := func(raw_value)): return v The revised function will implicitly return None if suitable values are not found. In this respect it is no different to the OP's original code
6
5
71,733,837
2022-4-4
https://stackoverflow.com/questions/71733837/how-to-figure-out-correct-headers-of-an-excel-file-programmatically-while-readin
I have a list of Excel files (.xlsx, .xls), and I'm trying to get the headers of each of these files after loading them. Here I have taken one Excel file and loaded it into pandas with pd.read_excel("sample.xlsx"); the output is shown in the attached image. We would like to get the header information as per our requirement; in the attached image the required headers are located at index 8, as highlighted in red. pd.read_excel('sample.xlsx',skiprows=9) Since we now know the correct header is at index 8, I can go back and pass skiprows to read_excel so that it reads from that index and the headers appear correctly. How can I handle this type of case programmatically across a list of Excel files where we don't know where the header is located? In this case we know that the header is at 8, but what if we don't know this for other files? A sample file can be downloaded for reference: https://github.com/myamullaciencia/pg_diploma_ai_ml_uohyd/blob/main/sample_file.xlsx
Use: df = pd.read_excel('sample_file.xlsx') #test all rows if previous row is only NaNs m1 = df.shift(fill_value=0).isna().all(axis=1) #test all rows if no NaNs m2 = df.notna().all(axis=1) #chain together and filter all next rows after first match df = df[(m1 & m2).cummax()] #set first row to columns names df = df.set_axis(df.iloc[0].rename(None), axis=1).iloc[1:].reset_index(drop=True) print (df) LN FN SSN DOB DOH Gender Comp_2011 Comp_2010 \ 0 Ax Bx 000-00-0000 8/3/1800 1/1/1800 Male 384025.56 396317 1 Er Ds 000-00-0000 5/7/1800 7/1/1800 Male 382263.86 392474 2 Po Ch 000-00-0000 9/9/1800 1/1/1800 Male 406799.34 395677 3 Rt Da 000-00-0000 6/24/1800 7/1/1800 Male 395767.12 424093 4 Yh St 000-00-0000 3/15/1800 7/1/1800 Male 376936.58 373754 5 Ws Ra 000-00-0000 6/12/1800 7/10/1800 Male 425720.06 420927 Comp_2009 Allocation Group NRD 0 360000 0.05 2022-09-01 00:00:00 1 360000 0.05 2015-06-01 00:00:00 2 360000 0.05 2013-01-01 00:00:00 3 360000 0.05 2020-07-01 00:00:00 4 360000 0 2013-01-01 00:00:00 5 306960 0 2034-07-01 00:00:00
4
4
71,729,997
2022-4-3
https://stackoverflow.com/questions/71729997/numpyic-way-to-take-the-first-n-rows-and-columns-out-of-every-m-rows-and-columns
I have a 20 x 20 square matrix. I want to take the first 2 rows and columns out of every 5 rows and columns, which means the output should be a 8 x 8 square matrix. This can be done in 2 consecutive steps as follows: import numpy as np m = 5 n = 2 A = np.arange(400).reshape(20,-1) B = np.asarray([row for i, row in enumerate(A) if i % m < n]) C = np.asarray([col for j, col in enumerate(B.T) if j % m < n]).T However, I am looking for efficiency. Is there a more Numpyic way to do this? I would prefer to do this in one step.
You can use np.ix_ to retain the elements whose row / column indices are less than 2 modulo 5: import numpy as np m = 5 n = 2 A = np.arange(400).reshape(20,-1) mask = np.arange(20) % 5 < 2 result = A[np.ix_(mask, mask)] print(result) This outputs: [[ 0 1 5 6 10 11 15 16] [ 20 21 25 26 30 31 35 36] [100 101 105 106 110 111 115 116] [120 121 125 126 130 131 135 136] [200 201 205 206 210 211 215 216] [220 221 225 226 230 231 235 236] [300 301 305 306 310 311 315 316] [320 321 325 326 330 331 335 336]]
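If you prefer the one-step flavour the question asks about, a reshape-based sketch also works, assuming the matrix size is an exact multiple of m (as it is here, 20 = 4 * 5):

import numpy as np

m, n = 5, 2
A = np.arange(400).reshape(20, -1)

# View A as (row-blocks, rows-per-block, col-blocks, cols-per-block),
# keep the first n rows/cols of every block, then flatten back to 2-D.
blocks = A.reshape(20 // m, m, 20 // m, m)
result = blocks[:, :n, :, :n].reshape(n * (20 // m), -1)
print(result)  # same 8 x 8 array as the np.ix_ version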
4
3
71,722,066
2022-4-3
https://stackoverflow.com/questions/71722066/ipykernel-launcher-processes-are-consuming-memory-not-able-to-kill
What are these zombie ipykernel_launcher processes on my machine, which are hogging too much memory? This is the output of the htop command, but when I ps for those processes (to kill them), I do not see them: ps -ef|grep ipykernel Not sure how to get rid of these memory hogs!
The reason why you're seeing all these processes in htop and not with ps is that htop is showing threads (see https://serverfault.com/questions/24198/why-does-htop-show-lots-of-apache2-processes-by-ps-aux-doesnt). Type "-H" inside htop to toggle showing threads. Automatically stop idle kernels Concerning Jupyter notebook processes in general: kernels are small computational engines and consume a lot of resources (mainly memory) even when they're not active. This is why one should encourage users to stop running kernels when they're not in use. The problem is that even if one closes a tab or the whole browser, the kernel keeps running, so one forgets about the kernels! Since it's unlikely that users will shutdown their kernels, consider stopping idle kernels by configuring the parameter NotebookApp.shutdown_no_activity_timeoutInt in your Jupyter configuration file jupyter_notebook_config.py. NotebookApp.shutdown_no_activity_timeoutInt. Default: 0 Shut down the server after N seconds with no kernels or terminals running and no activity. This can be used together with culling idle kernels (MappingKernelManager.cull_idle_timeout) to shutdown the notebook server when it’s not in use. This is not precisely timed: it may shut down up to a minute later. 0 (the default) disables this automatic shutdown. See also these properties: # shutdown the server after no activity for an hour c.ServerApp.shutdown_no_activity_timeout = 60 * 60 # shutdown kernels after no activity for 20 minutes c.MappingKernelManager.cull_idle_timeout = 20 * 60 # check for idle kernels every two minutes c.MappingKernelManager.cull_interval = 2 * 60 If that doesn't work, you may need to run a cron job to kill the ipykernel processes with kill after a certain amount of elapsed time (see for instance https://unix.stackexchange.com/questions/531040/list-processes-that-have-been-running-more-than-2-hours). One-time solution A quick solution to solve the problem is to restart the Jupyter notebook/Jupyter Hub. This will stop all kernels.
4
5
71,724,842
2022-4-3
https://stackoverflow.com/questions/71724842/gitlab-ci-python-black-formatter-says-would-reformat-whereas-running-black-doe
When I run GitLab CI on this commit with this gitlab-ci.yml: stages: - format - test black_formatting: image: python:3.6 stage: format before_script: # Perform an update to make sure the system is up to date. - sudo apt-get update --fix-missing # Download miniconda. - wget -q https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh; bash miniconda.sh -b -f -p $HOME/miniconda; # Ensure the (mini) conda environment can be activated. - export PATH="$HOME/miniconda/bin:$PATH" # (Re)create the environment.yml file for the repository. - conda env create -q -f environment.yml -n checkstyle-for-bash --force # Activate the environment of the repository. - source activate checkstyle-for-bash script: # Verify the Python code is black formatting compliant. - black . --check --exclude '\.venv/|\.local/|\.cache/|\.git/' # Verify the Python code is flake8 formatting compliant. - flake8 . allow_failure: false test:pytest:36: stage: test image: python:3.6 script: # Ensure the (mini) conda environment can be activated. - export PATH="$HOME/miniconda/bin:$PATH" # Activate the environment of the repository. - source activate checkstyle-for-bash # Run the python tests. - python -m pytest it outputs: Running with gitlab-runner 14.8.0 (565b6c0b) on trucolrunner DS42qHSq Preparing the "shell" executor 00:00 Using Shell executor... Preparing environment 00:00 Running on pcname... Getting source from Git repository 00:02 Fetching changes with git depth set to 20... Reinitialized existing Git repository in /home/gitlab-runner/builds/DS42qHSq/0/root/checkstyle-for-bash/.git/ Checking out 001577c3 as main... Removing miniconda.sh Skipping Git submodules setup Executing "step_script" stage of the job script 02:55 $ sudo apt-get update --fix-missing Hit:1 http://nl.archive.ubuntu.com/ubuntu impish InRelease Get:2 http://security.ubuntu.com/ubuntu impish-security InRelease [110 kB] Get:3 http://nl.archive.ubuntu.com/ubuntu impish-updates InRelease [115 kB] Hit:4 https://repo.nordvpn.com/deb/nordvpn/debian stable InRelease Get:5 http://nl.archive.ubuntu.com/ubuntu impish-backports InRelease [101 kB] Hit:6 https://brave-browser-apt-release.s3.brave.com stable InRelease Get:7 http://security.ubuntu.com/ubuntu impish-security/main amd64 DEP-11 Metadata [20,3 kB] Get:8 http://security.ubuntu.com/ubuntu impish-security/universe amd64 DEP-11 Metadata [3.624 B] Get:9 http://nl.archive.ubuntu.com/ubuntu impish-updates/main amd64 DEP-11 Metadata [25,8 kB] Get:10 http://nl.archive.ubuntu.com/ubuntu impish-updates/universe amd64 DEP-11 Metadata [35,4 kB] Get:11 http://nl.archive.ubuntu.com/ubuntu impish-updates/multiverse amd64 DEP-11 Metadata [940 B] Get:12 http://nl.archive.ubuntu.com/ubuntu impish-backports/universe amd64 DEP-11 Metadata [16,4 kB] Fetched 428 kB in 2s (235 kB/s) Reading package lists... $ wget -q https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O miniconda.sh; bash miniconda.sh -b -f -p $HOME/miniconda; PREFIX=/home/gitlab-runner/miniconda Unpacking payload ... Collecting package metadata (current_repodata.json): ...working... done Solving environment: ...working... done # All requested packages already installed. installation finished. $ export PATH="$HOME/miniconda/bin:$PATH" $ conda env create -q -f environment.yml -n checkstyle-for-bash --force Collecting package metadata (repodata.json): ...working... done Solving environment: ...working... done Preparing transaction: ...working... done Verifying transaction: ...working... 
done Executing transaction: ...working... done Installing pip dependencies: ...working... done $ source activate checkstyle-for-bash $ black . --check --exclude '\.venv/|\.local/|\.cache/|\.git/' would reformat src/arg_parser.py would reformat src/helper_text_parsing.py Oh no! πŸ’₯ πŸ’” πŸ’₯ 2 files would be reformatted, 10 files would be left unchanged. ERROR: Job failed: exit status 1 However, if I run black src/** on the GitHub repository, black returns: ~/git/checkstyle-for-bash$ git pull Already up to date. (base) some@name:~/git/checkstyle-for-bash$ black src/** All done! ✨ 🍰 ✨ 8 files left unchanged. Just in case I did not clone the right repository, I also manually copy-pasted the content of the src/arg_parser.py file from GitLab into ~/git/checkstyle-for-bash/src/arg_parser.py and ran black again. However, the output is the same, it does not change anything. For completeness, this is the content of the src/arg_parser.py file: # This is the main code of this project nr, and it manages running the code and # outputting the results to LaTex. import argparse def parse_cli_args(): # Instantiate the parser parser = argparse.ArgumentParser(description="Optional app description") # Include argument parsing for default code. # Allow user to load a graph from file. parser.add_argument( "--ggl", dest="google_style_guide", action="store_true", help=( "boolean flag, determines whether the Google Style Guide for " "Bash rules are followed." ), ) # Allow user to specify an infile. parser.add_argument("infile", nargs="?", type=argparse.FileType("r")) # Specify default argument values for the parser. parser.set_defaults(google_style_guide=True,) # Load the arguments that are given. args = parser.parse_args() return args Question What could be causing the GitLab CI to say the files would be reformatted by black (even though the files are not reformatted when running black on them (on the same device (in a different conda environment)))? Setup I run my own GitLab CI on the same device on which I test the conda black commands. The GitLab CI copies the GitHub commits of a repository, one at a time, runs its CI on it, and then reports the results back to GitHub. I am currently not able to expose my GitLab server on clearnet as I am behind a gateway that I currently do not control. Doubts I am fairly certain it is a "silly" mistake from my side, however I have not yet been able to figure out what it is. Especially since I manually copy pasted the GitLab file content of the src/arg_parser.py file twice, and ran black twice to verify black indeed does not change "that" file. Also, to be sure it was not a trailing newline or anything, I used the copy button with the mouse, instead of manual selection: Additionally, the file is flake8 compliant. My current guess is that somehow the CI is not ran on the most recent commit of that repository. However, to verify that I clicked on the GitLab of the failed commit, which indeed redirects to the "05e85fd54f93ccfc427023b21f9cdb0c0cd6db2e" commit (copy) in GitLab: It is from this commit that I copy-pasted the src/arg_parser.py file twice. Another guess would be that the gitlab-ci.yml loads a miniconda environment whereas my local version of black uses a full conda environment. Perhaps they have a different newline character that would lead to a formatting difference. (Even though I doubt that would be the case). Issue I ran the CI again whilst include the black . 
--diff command in the gitlab-ci.yml script, and the difference is between a: ''' Return and: '''Return as is displayed in the accompanying output: $ black . --diff --exclude '\.venv/|\.local/|\.cache/|\.git/' --- src/arg_parser.py 2022-04-03 10:13:10.751289 +0000 +++ src/arg_parser.py 2022-04-03 11:11:26.297995 +0000 @@ -24,10 +24,12 @@ # Allow user to specify an infile. parser.add_argument("infile", nargs="?", type=argparse.FileType("r")) # Specify default argument values for the parser. - parser.set_defaults(google_style_guide=True,) + parser.set_defaults( + google_style_guide=True, + ) # Load the arguments that are given. args = parser.parse_args() return args would reformat src/arg_parser.py --- src/helper_text_parsing.py 2022-04-02 19:35:45.142619 +0000 +++ src/helper_text_parsing.py 2022-04-03 11:11:26.342908 +0000 @@ -5,11 +5,11 @@ def add_two(x): return x + 2 def get_function_line_nrs(filecontent, rules): - """ Returns two lists containing the starting and ending line numbers of + """Returns two lists containing the starting and ending line numbers of the functions respectively. :param filecontent: The content of the bash file that is being analysed. :param rules: The Bash formatting rules that are chosen by the user. """ would reformat src/helper_text_parsing.py All done! ✨ 🍰 ✨ 2 files would be reformatted, 10 files would be left unchanged. I am not quite sure exactly why this happens, as I thought python black would always converge to the exact same formatting for a given valid python file. I assume it is because two different black versions are used between the miniconda environment and the anaconda environment on the same device. To test this hypothesis I am including a black --version command in the gitlab-ci.yml.
The miniconda environment in the GitLab CI used python black version: black, 22.3.0 (compiled: yes) Whereas the local environment used python black version: black, version 19.10b0 Updating the local black version, pushing the formatted code according to the latest python black version, and running the GitLab CI on that GitHub commit resulted in a successful GitLab CI run.
12
12
71,724,403
2022-4-3
https://stackoverflow.com/questions/71724403/crop-an-image-in-pil-using-the-4-points-of-a-rotated-rectangle
I have a list of four points of a rotated rectangle in the form of: points = [[x1, y1], [x2, y2], [x3, y3], [x4, y4]] I can crop in PIL using: img.crop((x1, y1, x2, y2)) But this doesn't work with a rotated rectangle. Just to clarify, I want the resulting cropped image to be rotated so the cropped area becomes a non-rotated rectangle. I am willing to use OpenCV, although I would like to avoid it, as the conversion of an image from PIL to OpenCV takes time and I will be iterating through this process about 100 times.
If you start with this image: You can do it like this using a QuadTransform: #!/usr/bin/env python3 from PIL import Image, ImageTransform # Open starting image and ensure RGB im = Image.open('a.png').convert('RGB') # Define 8-tuple with x,y coordinates of top-left, bottom-left, bottom-right and top-right corners and apply transform=[31,146,88,226,252,112,195,31] result = im.transform((200,100), ImageTransform.QuadTransform(transform)) # Save the result result.save('result.png')
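Since the question starts from a points list rather than a hand-written 8-tuple, here is a rough sketch of how the transform data and output size could be derived from the four points; it assumes the points are ordered top-left, bottom-left, bottom-right, top-right, which is the corner order QuadTransform expects:

import math
from PIL import Image, ImageTransform

def crop_rotated_rect(img, points):
    # points assumed ordered: top-left, bottom-left, bottom-right, top-right
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = points
    width = int(math.hypot(x4 - x1, y4 - y1))   # length of the top edge
    height = int(math.hypot(x2 - x1, y2 - y1))  # length of the left edge
    quad = (x1, y1, x2, y2, x3, y3, x4, y4)
    return img.transform((width, height), ImageTransform.QuadTransform(quad))

im = Image.open('a.png').convert('RGB')
crop_rotated_rect(im, [[31, 146], [88, 226], [252, 112], [195, 31]]).save('result.png')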
4
4
71,717,020
2022-4-2
https://stackoverflow.com/questions/71717020/how-do-i-fill-nan-values-with-different-random-numbers-on-python
I want to replace the missing values from a column with people's ages (which also contains numerical values, not only NaN values) but everything I've tried so far either doesn't work how I want it to or it doesn't work at all. I wish to apply a random variable generator which follows a normal distribution using the mean and standard deviation obtained with that column. I have tried the following: Replacing with numpy, replaces NaN values but with the same number for all of them df_travel['Age'] = df_travel['Age'].replace(np.nan, round(rd.normalvariate(age_mean, age_std),0)) Fillna with pandas, also replaces NaN values but with the same number for all of them df_travel['Age'] = df_travel['Age'].fillna(round(rd.normalvariate(age_mean, age_std),0)) Applying a function on the dataframe with pandas, replaces NaN values but also changes all existing numerical values (I only wish to fill the NaN values) df_travel['Age'] = df_travel['Age'].where(df_travel['Age'].isnull() == True).apply(lambda v: round(rd.normalvariate(age_mean, age_std),0)) Any ideas would be appreciated. Thanks in advance.
Series.fillna can accept a Series, so generate a random array of size len(df_travel): rng = np.random.default_rng(0) mu = df_travel['Age'].mean() sd = df_travel['Age'].std() filler = pd.Series(rng.normal(loc=mu, scale=sd, size=len(df_travel))) df_travel['Age'] = df_travel['Age'].fillna(filler)
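One caveat worth hedging on: fillna aligns on the index, so if df_travel does not use a default RangeIndex, the filler Series above will not line up with the rows. A small sketch of a safer variant (still referring to the df_travel frame from the question) builds the filler with the same index:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
mu = df_travel['Age'].mean()
sd = df_travel['Age'].std()

# Give the filler the same index so fillna matches values row by row.
filler = pd.Series(rng.normal(loc=mu, scale=sd, size=len(df_travel)),
                   index=df_travel.index)
df_travel['Age'] = df_travel['Age'].fillna(filler.round(0))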
4
5
71,713,961
2022-4-2
https://stackoverflow.com/questions/71713961/how-to-deconstruct-a-python-like-function-call
Suppose I had a function call as a string, like "log(2, floor(9.4))". I want to deconstruct the call in a way that allows me to access the function name and arguments for the firstmost call and accurately deducts whether a function call as an argument is an argument or not. For example, the arguments when deconstructing the string above would come to [2, floor(9.4)] I've already tried to use some string parsing techniques (e.g. splitting on commas), but it doesn't appear to be working.
You can use the ast module: import ast data = "log(2, floor(9.4))" parse_tree = ast.parse(data) # ast.unparse() is for 3.9+ only. # If using an earlier version, use the astunparse package instead. result = [ast.unparse(node) for node in parse_tree.body[0].value.args] print(result) This outputs: ['2', 'floor(9.4)'] I pulled the value to iterate over from manually inspecting the output of ast.dump(parse_tree). Note that I've written something a bit quick and dirty, since there's only one string to parse. If you're looking to parse a lot of these strings (or a larger program), you should create a subclass of ast.NodeVisitor. If you want to also make modifications to the source code, you should create a subclass of ast.NodeTransformer instead.
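Following up on the NodeVisitor suggestion, a minimal sketch (again relying on ast.unparse, so Python 3.9+, or the astunparse package on older versions) that collects every call's name and argument source could look like this:

import ast

class CallCollector(ast.NodeVisitor):
    """Collect (function name, argument sources) for every call in the tree."""
    def __init__(self):
        self.calls = []

    def visit_Call(self, node):
        self.calls.append((ast.unparse(node.func),
                           [ast.unparse(arg) for arg in node.args]))
        self.generic_visit(node)  # keep walking to catch nested calls

tree = ast.parse("log(2, floor(9.4))")
collector = CallCollector()
collector.visit(tree)
print(collector.calls)  # [('log', ['2', 'floor(9.4)']), ('floor', ['9.4'])]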
4
9
71,701,041
2022-4-1
https://stackoverflow.com/questions/71701041/jaxxla-vs-numballvm-reduction
Is it possible to make CPU-only reductions with JAX comparable to Numba in terms of computation time? The compilers come straight from conda: $ conda install -c conda-forge numba jax Here is a 1-d NumPy array example: import numpy as np import numba as nb import jax as jx @nb.njit def reduce_1d_njit_serial(x): s = 0 for xi in x: s += xi return s @jx.jit def reduce_1d_jax_serial(x): s = 0 for xi in x: s += xi return s N = 2**10 a = np.random.randn(N) Using timeit on the following: np.add.reduce(a) gives 1.99 µs ... reduce_1d_njit_serial(a) gives 1.43 µs ... reduce_1d_jax_serial(a).item() gives 23.5 µs ... Note that jx.numpy.sum(a) and using jx.lax.fori_loop give comparable (marginally slower) computation times to reduce_1d_jax_serial. It seems there is a better way to craft the reduction for XLA. EDIT: compile times were not included, as a print statement was run beforehand to check the results.
When performing these kinds of microbenchmarks with JAX, you have to be careful to ensure you're measuring what you think you're measuring. There are some tips in the JAX Benchmarking FAQ. Implementing some of these best practices, I find the following for your benchmarks: import jax.numpy as jnp # Native jit-compiled XLA sum jit_sum = jx.jit(jnp.sum) # Avoid including device transfer cost in the benchmarks a_jax = jnp.array(a) # Prevent measuring compilation time _ = reduce_1d_njit_serial(a) _ = reduce_1d_jax_serial(a_jax) _ = jit_sum(a_jax) %timeit np.add.reduce(a) # 100000 loops, best of 5: 2.33 µs per loop %timeit reduce_1d_njit_serial(a) # 1000000 loops, best of 5: 1.43 µs per loop %timeit reduce_1d_jax_serial(a_jax).block_until_ready() # 100000 loops, best of 5: 6.24 µs per loop %timeit jit_sum(a_jax).block_until_ready() # 100000 loops, best of 5: 4.37 µs per loop You'll see that for these microbenchmarks, JAX is a few microseconds slower than both numpy and numba. So does this mean JAX is slow? Yes and no; you'll find a more complete answer to that question in the JAX FAQ: is JAX faster than numpy?. The short summary is that this computation is so small that the differences are dominated by Python dispatch time rather than time spent operating on the array. The JAX project has not put much effort into optimizing Python dispatch for microbenchmarks: it's not all that important in practice because the cost is incurred once per program in JAX, as opposed to once per operation in numpy.
4
7
71,686,960
2022-3-31
https://stackoverflow.com/questions/71686960/typeerror-credentials-need-to-be-from-either-oauth2client-or-from-google-auth
I'm new to Python and am currently working on a project that requires me to export a pandas data frame from Google Colab to a Google spreadsheet with multiple tabs. Previously, when I ran this specific code, there were no errors, but now it shows an error like this: TypeError Traceback (most recent call last) <ipython-input-74-c8b829c43616> in <module>() 5 gauth.credentials = GoogleCredentials.get_application_default() 6 drive = GoogleDrive(gauth) ----> 7 gc = gspread.authorize(GoogleCredentials.get_application_default()) 2 frames /usr/local/lib/python3.7/dist-packages/gspread/utils.py in convert_credentials(credentials) 59 60 raise TypeError( ---> 61 'Credentials need to be from either oauth2client or from google-auth.' 62 ) 63 TypeError: Credentials need to be from either oauth2client or from google-auth. Here is the code that I use to create the authentication. #Import PyDrive and associated libraries. #This only needs to be done once per notebook. from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive from google.colab import auth from oauth2client.client import GoogleCredentials import gspread #Authenticate and create the PyDrive client. #This only needs to be done once per notebook. auth.authenticate_user() gauth = GoogleAuth() gauth.credentials = GoogleCredentials.get_application_default() drive = GoogleDrive(gauth) gc = gspread.authorize(GoogleCredentials.get_application_default()) Any help would be much appreciated.
I had the same problem today and found this answer: https://github.com/burnash/gspread/issues/1014#issuecomment-1082536016 I finally solved it by replacing the old code with this one: from google.colab import auth auth.authenticate_user() import gspread from google.auth import default creds, _ = default() gc = gspread.authorize(creds)
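Once gc is authorized, pushing a DataFrame into a tab is roughly as sketched below; note this is only an illustration with made-up spreadsheet and tab names, and the update() signature has shifted between gspread versions, so check the docs for the version you have installed:

# Assumes `gc` from the snippet above and an existing pandas DataFrame `df`.
sh = gc.open('my-spreadsheet')                        # or gc.create('my-spreadsheet')
ws = sh.add_worksheet(title='tab1', rows=100, cols=20)

# Header row followed by the data rows, as plain Python lists.
values = [df.columns.tolist()] + df.values.tolist()
ws.update('A1', values)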
17
47
71,707,006
2022-4-1
https://stackoverflow.com/questions/71707006/how-to-use-pt-file
I'm trying to make a currency recognition model. I did so using a dataset on Kaggle and Colab with YOLOv5, and I carried out exactly the steps explained on the YOLOv5 GitHub. At the end, I downloaded a .pt file which has the weights of the model, and now I want to use it in a Python file to detect and recognize currency. How do I do this? I am a beginner in computer vision and I am totally confused about what to do. I have been searching over and over but I don't get anywhere. import torch # Model model=torch.load('E:\_best.pt') # Images imgs=['E:\Study\currency.jpg'] # Inference results = model(imgs) # Results results.print() results.save() # or .show() results.show() results.xyxy[0] # img1 predictions (tensor) results.pandas().xyxy[0]
If you want to read trained parameters from a .pt file and load them into your model, you could do the following. file = "model.pt" model = your_model() model.load_state_dict(torch.load(file)) # this will automatically load the file and load the parameters into the model. Before calling load_state_dict(), be sure that the .pt file contains only model parameters; otherwise an error occurs. This can be checked with print(torch.load(file)).
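Because the weights in the question come from YOLOv5 training, it may be simpler to let YOLOv5 rebuild the model around the checkpoint via torch.hub; this is only a sketch based on the ultralytics custom-model docs, with the file paths adapted from the question:

import torch

# Load a custom-trained YOLOv5 model from the downloaded weights file.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='E:/_best.pt')

results = model(['E:/Study/currency.jpg'])  # inference on one image
results.print()
results.show()
print(results.pandas().xyxy[0])  # detections as a pandas DataFrame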
6
5
71,705,240
2022-4-1
https://stackoverflow.com/questions/71705240/how-to-convert-boolean-column-to-0-and-1-by-using-pd-get-dummies
I want to convert all boolean columns in my pandas dataframe into 0 and 1 by using pd.get_dummies. However, the boolean values stay the same after the get_dummies function. For example: tmp = pd.DataFrame([ ['green' , True], ['red' , False], ['blue' , True]]) tmp.columns = ['color', 'class'] pd.get_dummies(tmp) # I have also tried pd.get_dummies(tmp, dtype=int), but got the same output I got: class color_blue color_green color_red 0 True 0 1 0 1 False 0 0 1 2 True 1 0 0 but I need: color_blue color_green color_red class_True class_False 0 0 1 0 1 0 1 0 0 1 0 1 2 1 0 0 1 0 update: My dataframe includes numeric data so convert all columns in the dataframe into string may not be the best soluion.
For processing boolean columns, convert them to strings (here all columns are converted): print (pd.get_dummies(tmp.astype(str))) color_blue color_green color_red class_False class_True 0 0 1 0 0 1 1 0 0 1 1 0 2 1 0 0 0 1 Or convert only the boolean column: print (pd.get_dummies(tmp.astype({'class':'str'}))) color_blue color_green color_red class_False class_True 0 0 1 0 0 1 1 0 0 1 1 0 2 1 0 0 0 1 You can also create the dictionary only for the boolean columns: d = dict.fromkeys(tmp.select_dtypes('bool').columns, 'str') print (pd.get_dummies(tmp.astype(d)))
8
1
71,690,620
2022-3-31
https://stackoverflow.com/questions/71690620/debugging-python-program-doesnt-work-after-updating-to-vs-code-1-66-0
Today (2022/3/31) I let the auto-update function update my VS Code to the latest version, 1.66.0, on Windows. After that, my normal debugging process doesn't work any more: when I press F5, the debugging control panel flashes and disappears immediately, and nothing else happens. I couldn't find any useful error message in the output and terminal windows. My launch.json file looks like this: { "name": "DEBUG", "type": "python", "request": "launch", "program": "${workspaceFolder}\\starting.py", "console": "integratedTerminal", "justMyCode": false, } I tried to change the console above to externalTerminal but it didn't help. Could someone tell me how to find out what's going on here? Cheers,
You can try to reinstall the Python extension or install an older version of the Python extension. Also delete the deprecated python.pythonPath configuration from your settings.json.
8
3
71,701,629
2022-4-1
https://stackoverflow.com/questions/71701629/importerror-no-module-named-thread
Compiling python2 in vscode gives an error. But when I compile python3 it succeeds. print('test') returns: ImportError: No module named _thread PS C:\source> c:; cd 'c:\source'; & 'C:\Python27\python.exe' 'c:\Users\keinblue\.vscode\extensions\ms-python.python-2022.4.0\pythonFiles\lib\python\debugpy\launcher' '52037' '--' 'c:\source\test.py' Traceback (most recent call last): File "C:\Python27\lib\runpy.py", line 174, in _run_module_as_main "__main__", fname, loader, pkg_name) File "C:\Python27\lib\runpy.py", line 72, in _run_code exec code in run_globals File "c:\Users\keinblue\.vscode\extensions\ms-python.python-2022.4.0\pythonFiles\lib\python\debugpy\__main__.py", line 43, in <module> from debugpy.server import cli File "c:\Users\keinblue\.vscode\extensions\ms-python.python-2022.4.0\pythonFiles\lib\python\debugpy/../debugpy\server\__init__.py", line 9, in <module> import debugpy._vendored.force_pydevd # noqa File "c:\Users\keinblue\.vscode\extensions\ms-python.python-2022.4.0\pythonFiles\lib\python\debugpy/../debugpy\_vendored\force_pydevd.py", line 37, in <module> pydevd_constants = import_module('_pydevd_bundle.pydevd_constants') File "C:\Python27\lib\importlib\__init__.py", line 37, in import_module __import__(name) File "c:\Users\keinblue\.vscode\extensions\ms-python.python-2022.4.0\pythonFiles\lib\python\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_constants.py", line 362, in <module> from _pydev_bundle._pydev_saved_modules import thread, threading File "c:\Users\keinblue\.vscode\extensions\ms-python.python-2022.4.0\pythonFiles\lib\python\debugpy\_vendored\pydevd\_pydev_bundle\_pydev_saved_modules.py", line 94, in <module> import _thread as thread; verify_shadowed.check(thread, ['start_new_thread', 'start_new', 'allocate_lock']) ImportError: No module named _thread
There is an issue with the VS Code Python extension version 2022.4.0. Just downgrade to version 2022.2.1924087327 and it will work, as it does for me now. Just follow these steps:

Go to Extensions.
Click on the gear icon for the installed extension.
Click on Install Another Version and select the version you wish to install.
14
36
71,697,019
2022-3-31
https://stackoverflow.com/questions/71697019/generating-list-of-every-combination-without-duplicates
I would like to generate a list of combinations. I will try to simplify my problem to make it understandable. We have 3 variables:

x: number of letters
k: number of groups
n: number of letters per group

I would like to generate, using Python, a list of every possible combination, without any duplicates, knowing that I don't care about the order of the groups or the order of the letters within a group. As an example, with x = 4, k = 2, n = 2:

# we start with 4 letters, we want to make 2 groups of 2 letters
letters = ['A','B','C','D']

# here would be code that generates the list

# Here is the result, which is very simple; only 3 combinations exist.
combos = [
    ['AB', 'CD'],
    ['AC', 'BD'],
    ['AD', 'BC']
]

Since I don't care about the order of the groups or of the letters within a group, ['AB', 'CD'] and ['DC', 'BA'] are duplicates. This is a simplification of my real problem, which has these values: x = 12, k = 4, n = 3. I tried to use some functions from itertools, but with that many letters my computer freezes because there are too many combinations. Another way of seeing the problem: you have 12 players and you want to make 4 teams of 3 players. What are all the possibilities? Could anyone help me to find an optimized solution to generate this list?
There will certainly be more sophisticated/efficient ways of doing this, but here's an approach that works in a reasonable amount of time for your example and should be easy enough to adapt for other cases. It generates unique teams and unique combinations thereof, as per your specifications. from itertools import combinations # this assumes that team_size * team_num == len(players) is a given team_size = 3 team_num = 4 players = list('ABCDEFGHIJKL') unique_teams = [set(c) for c in combinations(players, team_size)] def duplicate_player(combo): """Returns True if a player occurs in more than one team""" return len(set.union(*combo)) < len(players) result = (combo for combo in combinations(unique_teams, team_num) if not duplicate_player(combo)) result is a generator that can be iterated or turned into a list with list(result). On kaggle.com, it takes a minute or so to generate the whole list of all possible combinations (a total of 15400, in line with the computations by @beaker and @John Coleman in the comments). The teams are tuples of sets that look like this: [({'A', 'B', 'C'}, {'D', 'E', 'F'}, {'G', 'H', 'I'}, {'J', 'K', 'L'}), ({'A', 'B', 'C'}, {'D', 'E', 'F'}, {'G', 'H', 'J'}, {'I', 'K', 'L'}), ({'A', 'B', 'C'}, {'D', 'E', 'F'}, {'G', 'H', 'K'}, {'I', 'J', 'L'}), ... ] If you want, you can cast them into strings by calling ''.join() on each of them.
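Continuing from the snippet above, a small usage sketch (the expected count assumes the 12-player example from the question):

# result is the generator defined above
all_partitions = list(result)
print(len(all_partitions))          # should be 15400 for 12 players in 4 teams of 3

# optionally render each team as a sorted string such as 'ABC'
as_strings = [tuple(''.join(sorted(team)) for team in combo) for combo in all_partitions]
print(as_strings[0])                # e.g. ('ABC', 'DEF', 'GHI', 'JKL')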
8
2
71,690,992
2022-3-31
https://stackoverflow.com/questions/71690992/cannot-install-latest-version-of-numpy-1-22-3
I am trying to install the latest version of numpy, the 1.22.3, but it looks like pip is not able to find this last release. I know I can install it locally from the source code, but I want to understand why I cannot install it using pip. PS: I have the latest version of pip, the 22.0.4 ERROR: Could not find a version that satisfies the requirement numpy==1.22.3 (from versions: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 1.10.4, 1.11.0, 1.11.1, 1.11.2, 1.11.3, 1.12.0, 1.12.1, 1.13.0rc1, 1.13.0rc2, 1.13.0, 1.13.1, 1.13.3, 1.14.0rc1, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5, 1.14.6, 1.15.0rc1, 1.15.0rc2, 1.15.0, 1.15.1, 1.15.2, 1.15.3, 1.15.4, 1.16.0rc1, 1.16.0rc2, 1.16.0, 1.16.1, 1.16.2, 1.16.3, 1.16.4, 1.16.5, 1.16.6, 1.17.0rc1, 1.17.0rc2, 1.17.0, 1.17.1, 1.17.2, 1.17.3, 1.17.4, 1.17.5, 1.18.0rc1, 1.18.0, 1.18.1, 1.18.2, 1.18.3, 1.18.4, 1.18.5, 1.19.0rc1, 1.19.0rc2, 1.19.0, 1.19.1, 1.19.2, 1.19.3, 1.19.4, 1.19.5, 1.20.0rc1, 1.20.0rc2, 1.20.0, 1.20.1, 1.20.2, 1.20.3, 1.21.0rc1, 1.21.0rc2, 1.21.0, 1.21.1, 1.21.2, 1.21.3, 1.21.4, 1.21.5) ERROR: No matching distribution found for numpy==1.22.3
Please check your Python version. Support for Python 3.7 has been dropped since the NumPy 1.22.0 release. [source]
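A quick way to check the interpreter version you are installing into (the 3.9 below is only an example of a supported version):

import sys
print(sys.version_info)   # numpy 1.22.x needs Python >= 3.8

# if this shows 3.7.x, install into an environment with a newer interpreter, e.g.:
#   python3.9 -m venv .venv
#   .venv/bin/pip install numpy==1.22.3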
8
14
71,683,346
2022-3-30
https://stackoverflow.com/questions/71683346/find-element-by-the-value-attribute-using-selenium-python
I'm trying to find a specific element on the web page. It has to be found by a specific number inside the text of its value attribute... In this case, I need to find the element by the number 565657 (see the outer HTML below). I've tried via XPath: ("//*[contains(text(),'565657')]") and also ("//input[@value='565657']") but it did not work. Can someone please help? Thanks in advance! The original XPath you get when you copy it from the element on the web page is just '//*[@id="checkbox15"]', and I need to find a way to look for the number I mentioned below.
To locate the element through the value attribute, i.e. 565657, you can use either of the following locator strategies:

Using css_selector:
element = driver.find_element(By.CSS_SELECTOR, "input[value*='565657']")

Using xpath:
element = driver.find_element(By.XPATH, "//input[contains(@value, '565657')]")

Ideally, to locate the element you need to induce WebDriverWait for the visibility_of_element_located(), and you can use either of the following locator strategies:

Using CSS_SELECTOR:
element = WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.CSS_SELECTOR, "input[value*='565657']")))

Using XPATH:
element = WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//input[contains(@value, '565657')]")))

Note: You have to add the following imports:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
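Putting the pieces together, a minimal end-to-end sketch; the URL is a placeholder and Chrome is only an example driver:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()                      # any driver you normally use
driver.get("https://example.com/your-page")      # placeholder URL

# wait up to 20 seconds for the input whose value attribute contains 565657
element = WebDriverWait(driver, 20).until(
    EC.visibility_of_element_located((By.CSS_SELECTOR, "input[value*='565657']"))
)
element.click()

driver.quit()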
5
5
71,677,490
2022-3-30
https://stackoverflow.com/questions/71677490/how-to-properly-include-data-folder-to-python-package
I'm building a small Python package that I deploy to our internal PyPI server so it can be easily installed with pip. I'm using setup.py to build the tar.gz archive to upload there. And I need to include some additional data - to be more specific, I use nltk in my project and I want to ship the package with specific nltk data already downloaded, since it doesn't make sense to me to make the person using my package responsible for downloading it themselves. So I have the following structure:

├── setup.py
├── src
│   ├── __init__.py
│   ├── my_pkg
│   │   ├── __init__.py
│   │   ├── my_module.py
│   │   └── resources
│   │       └── nltk_data
│   │           └─ ... too many subfolders and files

And I would like to include the whole nltk_data subfolder so it ends up in the same place once the package is installed. I managed to get package_data={'my_pkg' :['./resources/file.dat']} working for one file, but I don't know how to do the same with a complex directory structure with many subfolders, sub-subfolders, files of different extensions, etc. Is there any way to do this? My setup.py is quite simple (I omitted things such as the description or URL for simplicity):

from setuptools import setup, find_packages

setup(
    name='some-cool-name',
    version="1.0.0",
    classifiers=[],
    packages=find_packages(where='src'),
    package_dir={'': 'src'},
    package_data={'my_pkg' :[]},
    include_package_data=True,
    py_modules=[],
    python_requires='>=3.8',
    install_requires=['nltk==3.6.5']
)
You can simply specify the relative path to the data you want to include (package_data patterns are interpreted relative to the package directory). You need to put an __init__.py file in both subfolders, though; then it should work.

package_data={'my_pkg': ['resources/nltk_data/*']}

To use the data in your script, use importlib.resources (for example importlib.resources.read_text) to open your desired file.
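Because a single * pattern does not descend into nested subfolders, one possible approach - shown here only as a sketch, assuming the src layout from the question and that package_data patterns are relative to the my_pkg directory - is to build the file list programmatically:

import os
from setuptools import setup, find_packages

def package_files(directory):
    """Collect every file under src/my_pkg/<directory>, as paths relative to the package."""
    pkg_root = os.path.join('src', 'my_pkg')
    paths = []
    for root, _, filenames in os.walk(os.path.join(pkg_root, directory)):
        for filename in filenames:
            paths.append(os.path.relpath(os.path.join(root, filename), pkg_root))
    return paths

setup(
    name='some-cool-name',
    packages=find_packages(where='src'),
    package_dir={'': 'src'},
    package_data={'my_pkg': package_files('resources')},
    include_package_data=True,
)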
5
6
71,671,866
2022-3-30
https://stackoverflow.com/questions/71671866/python-what-is-the-difference-between-lambda-and-lambda
I know the function of lambda: and lambda var:, but what does lambda_: actually mean?
lambda_ is just a variable name, like any other. Like foo or x. If you saw:

lambda_: Something

then that is actually a variable annotation, for type hints, so the same as:

num: int
num = 0
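A tiny sketch contrasting the three forms (names and values are arbitrary):

double = lambda x: x * 2      # anonymous function with one parameter
greet = lambda: "hello"       # anonymous function with no parameters
lambda_: int = 42             # 'lambda_' is just a variable name with a type annotation

print(double(3), greet(), lambda_)   # 6 hello 42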
6
5
71,670,608
2022-3-30
https://stackoverflow.com/questions/71670608/how-to-import-subrequest-pytest
In the following code, request has the type <class '_pytest.fixtures.SubRequest'>. I want to add a type hint to the parameter request.

@pytest.fixture
def dlv_service(request: SubRequest):  # How to import SubRequest?
    print(type(request), request)
    filepath = pathlib.Path(request.node.fspath.strpath)
    f = filepath.with_name("file.json")

The following import doesn't work:

from pytest.fixtures import SubRequest
I've found one on the internet; hope this will help:

from _pytest.fixtures import SubRequest

I think it's worth trying, but I'm not sure whether it will work, sorry.
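Applied to the fixture from the question, a minimal sketch (file.json is the name used there):

import pathlib
import pytest
from _pytest.fixtures import SubRequest

@pytest.fixture
def dlv_service(request: SubRequest):
    # request.node.fspath points at the test file that requested this fixture
    filepath = pathlib.Path(request.node.fspath.strpath)
    return filepath.with_name("file.json")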
11
10