Dataset columns (name : dtype : min to max value or string length):
question_id : int64 : 59.5M to 79.4M
creation_date : string : lengths 8 to 10
link : string : lengths 60 to 163
question : string : lengths 53 to 28.9k
accepted_answer : string : lengths 26 to 29.3k
question_vote : int64 : 1 to 410
answer_vote : int64 : -9 to 482
63,860,576
2020-9-12
https://stackoverflow.com/questions/63860576/asyncio-event-loop-is-closed-when-using-asyncio-run
I'm getting started with AsyncIO and AioHTTP, and I'm writing some basic code to get familiar with the syntax. I tried the following code, which should perform 3 requests concurrently: import time import logging import asyncio import aiohttp import json from aiohttp import ClientSession, ClientResponseError from aiocfscrape import CloudflareScraper async def nested(url): async with CloudflareScraper() as session: async with session.get(url) as resp: return await resp.text() async def main(): URL = "https://www.binance.com/api/v3/exchangeInfo" await asyncio.gather(nested(URL), nested(URL), nested(URL)) asyncio.run(main()) Here is the output: raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed I don't understand why I get that error; can anyone help me with this?
Update Originally I was recommending Greg's answer below: import asyncio import sys if sys.platform == 'win32': asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy()) It turned out that using WindowsSelectorEventLoop has functional limitations, such as: can't support more than 512 sockets, can't use pipes, can't use subprocesses - because Windows uses I/O Completion Ports unlike *nix, SelectorEventLoop is neither designed for Windows nor fully implemented there. If those limitations matter to you, you might be better off using the lengthy workaround in this answer. Check out more about the differences in the documentation. Alternatively, consider using Trio over asyncio, which is much more stable and consistent. import trio async def task(): await trio.sleep(5) trio.run(task) Original post I've finally figured out how to keep ProactorEventLoop running, preventing unsuccessful I/O closure. I'm really not sure why Windows' event loop is so faulty, as this also happens for asyncio.open_connection and asyncio.start_server. To work around this, you need to run the event loop forever and close it manually. The following code covers both Windows and other environments. import asyncio from aiocfscrape import CloudflareScraper async def nested(url): async with CloudflareScraper() as session: async with session.get(url) as resp: return await resp.text() async def main(): await nested("https://www.binance.com/api/v3/exchangeInfo") try: assert isinstance(loop := asyncio.new_event_loop(), asyncio.ProactorEventLoop) # No ProactorEventLoop is in asyncio on other OS, will raise AttributeError in that case. except (AssertionError, AttributeError): asyncio.run(main()) else: async def proactor_wrap(loop_: asyncio.ProactorEventLoop, fut: asyncio.coroutines): await fut loop_.stop() loop.create_task(proactor_wrap(loop, main())) loop.run_forever() This code checks whether the new event loop is a ProactorEventLoop. If so, it keeps the loop running forever until proactor_wrap awaits main and schedules the loop stop. Otherwise - presumably every OS other than Windows - these additional steps aren't needed, so it simply calls asyncio.run() instead. IDEs like PyCharm will complain about passing an AbstractEventLoop to a ProactorEventLoop parameter; it's safe to ignore.
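For reference, a minimal sketch of the simpler policy-based fix above, using plain aiohttp instead of aiocfscrape (the URL and session choice are only illustrative, and the policy branch applies on Windows only):
import sys
import asyncio
import aiohttp

if sys.platform == "win32":
    # Swap the Proactor loop for the selector loop so the session teardown works
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())

async def fetch(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            return await resp.text()

print(asyncio.run(fetch("https://www.binance.com/api/v3/exchangeInfo"))[:200])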
11
14
63,837,315
2020-9-10
https://stackoverflow.com/questions/63837315/change-environment-variables-saved-in-env-file-with-python-and-dotenv
I am trying to update .env environment variables with Python. With os.environ I am able to view and change local environment variables, but I want to change the .env file. Using python-dotenv I can load .env entries into local environment variables. .env file: key=value test.py: import os from dotenv import load_dotenv, find_dotenv load_dotenv(find_dotenv()) print(os.environ['key']) # outputs 'value' os.environ['key'] = "newvalue" print(os.environ['key']) # outputs 'newvalue' .env file afterwards: key=value The .env file is not changed! Only the local env variable is changed. I could not find any documentation on how to update the .env file. Does anyone know a solution?
Use dotenv.set_key. import os import dotenv dotenv_file = dotenv.find_dotenv() dotenv.load_dotenv(dotenv_file) print(os.environ["key"]) # outputs "value" os.environ["key"] = "newvalue" print(os.environ["key"]) # outputs "newvalue" # Write changes to .env file. dotenv.set_key(dotenv_file, "key", os.environ["key"])
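If you want to confirm what actually landed on disk, python-dotenv also exposes dotenv.get_key, which reads straight from the .env file rather than the process environment (an illustrative check, not part of the original answer):
print(dotenv.get_key(dotenv_file, "key"))  # reads "newvalue" back from the .env file itself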
19
53
63,906,100
2020-9-15
https://stackoverflow.com/questions/63906100/python-module-vs-sub-module-vs-package-vs-sub-package
In Python, what are the differences between a module, sub-module, package and sub-package?
package |-- __init__.py |-- module.py |-- sub_package |-- __init__.py |-- sub_module.py Consider packages and sub-packages as folders and sub-folders containing an __init__.py file along with other Python files. Modules are the Python files inside the package. Sub-modules are the Python files inside the sub-package.
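To make the mapping concrete, here is how that layout translates into import statements (a small illustrative sketch):
import package                              # the package
import package.module                       # a module inside the package
import package.sub_package                  # the sub-package
from package.sub_package import sub_module  # a sub-module inside the sub-package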
15
24
63,886,762
2020-9-14
https://stackoverflow.com/questions/63886762/tensorflow-none-of-the-mlir-optimization-passes-are-enabled-registered-1
I am using a very small model for testing purposes using tensorflow 2.3 and keras. Looking at my terminal, I get the following warning: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:118] None of the MLIR optimization passes are enabled (registered 1) However, the code works as expected. But what does this message mean? Thanks.
MLIR is being used as another solution to implementing and optimizing Tensorflow logic. This informative message is benign and is saying MLIR was not being used. This is expected as in TF 2.3, the MLIR based implementation is still being developed and proven, so end users are generally not expected to use the MLIR implementation and are instead expected to use the non-MLIR feature complete implementation. Update: still experimental on version 2.9.1. On the docs it is written: DO NOT USE, DEV AND TESTING ONLY AT THE MOMENT.
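If the informational line itself is unwanted noise in the terminal, TensorFlow's native log level can be raised before the import; this is a general log-suppression approach (a sketch), not something specific to the MLIR passes:
import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "1"  # 1 hides INFO messages, 2 also hides WARNING; must be set before the import
import tensorflow as tf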
77
82
63,812,311
2020-9-9
https://stackoverflow.com/questions/63812311/how-to-create-children-with-uuid-with-pydantic
I am trying to create children of Foo; each should have its own uuid. In the real code no instance of Foo will be created, only its children. The children will be saved in a database later; the uuid is used to retrieve the right objects from the database. In the first code snippet I tried to use the __init__ method, which results in an AttributeError. I also tried to use a classmethod, which results in losing all fields in my child objects. If I use the second snippet, every child gets the same uuid, which makes sense to me, as it's passed as a default value. I could put the uuid creation into the children, though this feels wrong when using inheritance. Is there a better way to create a uuid for each child? # foo_init_.py class Foo(BaseModel): def __init__(self): self.id_ = uuid4() # >>> AttributeError: __fields_set__ # foo_classmethod.py class Foo(BaseModel): @classmethod def __init__(cls): cls.id_ = uuid4() # >>> Bar loses id_ fields from uuid import uuid4, UUID from pydantic import BaseModel class Foo(BaseModel): id_: UUID = uuid4() class Bar(Foo): pass class Spam(Foo): pass if __name__ == '__main__': b1 = Bar() print(b1.id_) # >>> 73860f46-5606-4912-95d3-4abaa6e1fd2c b2 = Bar() print(b2.id_) # >>> 73860f46-5606-4912-95d3-4abaa6e1fd2c s1 = Spam() print(s1.id_) # >>> 73860f46-5606-4912-95d3-4abaa6e1fd2c
You could use the default_factory parameter: from uuid import uuid4, UUID from pydantic import BaseModel, Field class Foo(BaseModel): id_: UUID = Field(default_factory=uuid4)
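With that change, a quick check using the Bar subclass from the question shows every instance now gets a distinct id (illustrative):
b1, b2 = Bar(), Bar()
print(b1.id_ != b2.id_)  # True - uuid4 runs once per instance instead of once at class definition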
15
30
63,829,680
2020-9-10
https://stackoverflow.com/questions/63829680/type-assertion-in-mypy
Some functions like numpy.intersect1d return different types (in this case an ndarray or a tuple of three ndarrays), but the type checker can only infer one of them, so if I write: intersection: np.ndarray = np.intersect1d([1, 2, 3], [5, 6, 2]) it throws a type warning: Expected type 'ndarray', got 'Tuple[ndarray, ndarray, ndarray]' instead I could avoid this kind of problem in other languages like TypeScript, where I could use the as keyword to assert the type (with no impact at runtime). I've read the documentation and saw the cast function, but I'd like to know if there is any inline solution or something I'm missing.
According to the MyPy documentation, there are two ways to do type assertions: As an inline expression, you can use the typing.cast(..., ...) function. The docs say this is "usually" done to cast from a supertype to a subtype, but they don't say you can't use it in other cases. As a statement, you can use assert isinstance(..., ...), but this will only work with concrete types like int or list which are represented at runtime, not more complex types like List[int] which can't be checked by isinstance. Since the documentation doesn't mention any other ways to do type assertions, it seems like these are the only ways.
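Applied to the numpy example from the question, the inline form would look like this (a sketch; cast has no runtime effect and only informs the type checker):
from typing import cast
import numpy as np

intersection = cast(np.ndarray, np.intersect1d([1, 2, 3], [5, 6, 2]))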
11
19
63,872,924
2020-9-13
https://stackoverflow.com/questions/63872924/how-can-i-send-an-http-request-from-my-fastapi-app-to-another-site-api
I am trying to send 100 requests at a time to a server http://httpbin.org/uuid using the following code snippet from fastapi import FastAPI from time import sleep from time import time import requests import asyncio app = FastAPI() URL= "http://httpbin.org/uuid" # @app.get("/") async def main(): r = requests.get(URL) # print(r.text) return r.text async def task(): tasks = [main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main(),main()] # print(tasks) # input("stop") result = await asyncio.gather(*tasks) print (result) @app.get('/') def f(): start = time() asyncio.run(task()) print("time: ",time()-start) I am using FastAPI with Asyncio to achieve the lowest time possible around 3 seconds or less but using the above method I am getting an overall time of 66 seconds that is more than a minute. I also want to keep the main function for additional operations on r.text. I understand that to achieve such low time, concurrency is required but I am not sure what mistake I'm doing here.
requests is a synchronous library. You need to use an asyncio-based library to make requests asynchronously. httpx httpx is typically used in FastAPI applications to request external services. It provides synchronous and asynchronous clients which can be used in def and async def path operations appropriately. It is also recommended for asynchronous tests of the application. I would advise using it by default. from fastapi import FastAPI from time import time import httpx import asyncio app = FastAPI() URL = "http://httpbin.org/uuid" async def request(client): response = await client.get(URL) return response.text async def task(): async with httpx.AsyncClient() as client: tasks = [request(client) for i in range(100)] result = await asyncio.gather(*tasks) print(result) @app.get('/') async def f(): start = time() await task() print("time: ", time() - start) Output ['{\n "uuid": "65c454bf-9b12-4ba8-98e1-de636bffeed3"\n}\n', '{\n "uuid": "03a48e56-2a44-48e3-bd43-a0b605bef359"\n}\n',... time: 0.5911855697631836 aiohttp aiohttp can also be used in FastAPI applications, if you prefer it. from fastapi import FastAPI from time import time import aiohttp import asyncio app = FastAPI() URL = "http://httpbin.org/uuid" async def request(session): async with session.get(URL) as response: return await response.text() async def task(): async with aiohttp.ClientSession() as session: tasks = [request(session) for i in range(100)] result = await asyncio.gather(*tasks) print(result) @app.get('/') async def f(): start = time() await task() print("time: ", time() - start) If you want to limit the number of requests executing in parallel, you can use asyncio.Semaphore like so: MAX_IN_PARALLEL = 10 limit_sem = asyncio.Semaphore(MAX_IN_PARALLEL) async def request(client): async with limit_sem: response = await client.get(URL) return response.text
63
97
63,876,013
2020-9-13
https://stackoverflow.com/questions/63876013/using-next-on-an-async-generator
A generator can be iterated step by step by using the next() built-in function. For example: def sync_gen(n): """Simple generator""" for i in range(n): yield i**2 sg = sync_gen(4) print(next(sg)) # -> 0 print(next(sg)) # -> 1 print(next(sg)) # -> 4 Using next() on an asynchronous generator does not work: import asyncio async def async_gen(n): for i in range(n): yield i**2 async def main(): print("Async for:") async for v in async_gen(4): # works as expected print(v) print("Async next:") ag = async_gen(4) v = await next(ag) # raises: TypeError: 'async_generator' object is not an iterator print(v) asyncio.run(main()) Does something like v = await async_next(ag) exist to obtain same behavior as with normal generators?
Since Python 3.10 there are aiter(async_iterable) and awaitable anext(async_iterator) builtin functions, analogous to iter and next, so you don't have to rely on the async_iterator.__anext__() magic method anymore. This piece of code works in python 3.10: import asyncio async def async_gen(n): for i in range(n): yield i**2 async def main(): print("Async next:") ag = async_gen(4) print(await anext(ag)) asyncio.run(main())
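On Python versions before 3.10, the same effect can be had by calling the iterator protocol method directly (a sketch; this is exactly what anext wraps):
import asyncio

async def async_gen(n):
    for i in range(n):
        yield i**2

async def main():
    ag = async_gen(4)
    print(await ag.__anext__())  # 0 - equivalent to await anext(ag) on 3.10+
    print(await ag.__anext__())  # 1

asyncio.run(main())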
18
21
63,859,803
2020-9-12
https://stackoverflow.com/questions/63859803/cant-install-xmlsec-using-pip-command
pip install xmlsec commands throws the below error. ERROR: Command errored out with exit status 1: command: /home/xxx/PycharmProjects/saml_impl/saml_impl/venv/bin/python /home/sathia/PycharmProjects/saml_impl/saml_impl/venv/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /tmp/tmpu_b5m5vz cwd: /tmp/pip-install-gblz98sr/xmlsec Complete output (14 lines): running bdist_wheel running build running build_py package init file 'src/xmlsec/__init__.py' not found (or not a regular file) creating build creating build/lib.linux-x86_64-3.8 creating build/lib.linux-x86_64-3.8/xmlsec copying src/xmlsec/py.typed -> build/lib.linux-x86_64-3.8/xmlsec copying src/xmlsec/template.pyi -> build/lib.linux-x86_64-3.8/xmlsec copying src/xmlsec/constants.pyi -> build/lib.linux-x86_64-3.8/xmlsec copying src/xmlsec/__init__.pyi -> build/lib.linux-x86_64-3.8/xmlsec copying src/xmlsec/tree.pyi -> build/lib.linux-x86_64-3.8/xmlsec running build_ext error: Unable to invoke pkg-config. ---------------------------------------- ERROR: Failed building wheel for xmlsec Failed to build xmlsec ERROR: Could not build wheels for xmlsec which use PEP 517 and cannot be installed directly' I don't know how to resolve this issue. I tried to install other xmlsec package too, nothing worked.
xmlsec is listed here: https://pypi.org/project/xmlsec/. The command below installs the native libraries required to build it (pkg-config is the missing piece in the error above). sudo apt-get install pkg-config libxml2-dev libxmlsec1-dev libxmlsec1-openssl
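Once those packages are present, retrying the install should let the wheel build succeed (ordinary pip usage, shown as a follow-up suggestion rather than part of the original answer):
pip install --no-cache-dir xmlsec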
16
24
63,829,128
2020-9-10
https://stackoverflow.com/questions/63829128/how-can-i-make-bandit-skip-b101-within-tests
I'm using bandit to check my code for potential security issues: bandit -r git-repository/ However, the most common item found by bandit is B101. It is triggered by assert statements within tests. I use pytest, so this is not a concern, but a good practice. I've now created a .bandit file with [bandit] skips: B101 But that also skips a lot of other code. Is there a solution to this issue?
A possible solution is to tell bandit to skip tests altogether. Assuming your code lives in a src subfolder, run bandit --configfile bandit.yaml --recursive src with the following bandit.yaml in the project's root directory: # Do not check paths including `/tests/`: # they use `assert`, leading to B101 false positives. exclude_dirs: - '/tests/' There are a bunch of related issues and pull requests. Update: I like Diego's solution better.
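An alternative that keeps bandit scanning the test files but silences only the assert check is per-plugin configuration of B101 (assert_used); a sketch of what that section of bandit.yaml could look like, assuming the tests follow the usual pytest naming pattern:
assert_used:
  skips: ['*_test.py', '*test_*.py']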
30
9
63,827,339
2020-9-10
https://stackoverflow.com/questions/63827339/how-to-build-a-custom-data-generator-for-keras-tf-keras-where-x-images-are-being
I am working on Image Binarization using UNet and have a dataset of 150 images and their binarized versions too. My idea is to augment the images randomly to make them look like they are different, so I have made a function which applies any of 4-5 types of noise, skewness, shearing and so on to an image. I could have easily used ImageDataGenerator(preprocessing_function=my_aug_function) to augment the images, but the problem is that my y target is also an image. Also, I could have used something like: train_dataset = ( train_dataset.map( encode_single_sample, num_parallel_calls=tf.data.experimental.AUTOTUNE ) .batch(batch_size) .prefetch(buffer_size=tf.data.experimental.AUTOTUNE) ) But it has 2 problems: With a larger dataset, it'll blow up the memory as the data needs to already be in memory. The crucial part is that I need to augment the images on the fly to make it look like I have a huge dataset. Another solution could be saving augmented images to a directory, making 30-40K of them, and then loading them, but that would be a silly thing to do. Now the idea is that I can use Sequence as the parent class, but how can I keep augmenting and generating new images on the fly along with the respective binarized y images? I have an idea in the code below. Can somebody help me with the augmentation and generation of the y images? I have my X_DIR, Y_DIR where the image names for the binarized and original images are the same but stored in different directories. class DataGenerator(tensorflow.keras.utils.Sequence): def __init__(self, files_path, labels_path, batch_size=32, shuffle=True, random_state=42): 'Initialization' self.files = files_path self.labels = labels_path self.batch_size = batch_size self.shuffle = shuffle self.random_state = random_state self.on_epoch_end() def on_epoch_end(self): 'Updates indexes after each epoch' # Shuffle the data here def __len__(self): return int(np.floor(len(self.files) / self.batch_size)) def __getitem__(self, index): # What do I do here? def __data_generation(self, files): # I think this is responsible for augmentation, but I have no idea how to implement it or how it works.
Custom Image Data Generator Load directory data into a dataframe for the CustomDataGenerator import os import pandas as pd # name_to_idx is assumed to be a dict mapping class-folder names to integer labels, defined elsewhere def data_to_df(data_dir, subset=None, validation_split=None): df = pd.DataFrame() filenames = [] labels = [] for dataset in os.listdir(data_dir): img_list = os.listdir(os.path.join(data_dir, dataset)) label = name_to_idx[dataset] for image in img_list: filenames.append(os.path.join(data_dir, dataset, image)) labels.append(label) df["filenames"] = filenames df["labels"] = labels if subset == "train": split_indexes = int(len(df) * validation_split) train_df = df[split_indexes:] val_df = df[:split_indexes] return train_df, val_df return df train_df, val_df = data_to_df(train_dir, subset="train", validation_split=0.2) Custom Data Generator import math import tensorflow as tf from PIL import Image import numpy as np from sklearn.utils import shuffle # preprocess_input is assumed to come from the tf.keras.applications model you are using class CustomDataGenerator(tf.keras.utils.Sequence): ''' Custom DataGenerator to load img Arguments: data_frame = pandas data frame in filenames and labels format batch_size = divide data in batches shuffle = shuffle data before loading img_shape = image shape in (h, w, d) format augmentation = data augmentation to make model robust to overfitting Output: Img: numpy array of image label : output label for image ''' def __init__(self, data_frame, batch_size=10, img_shape=None, augmentation=True, num_classes=None): self.data_frame = data_frame self.train_len = len(data_frame) self.batch_size = batch_size self.img_shape = img_shape self.num_classes = num_classes print(f"Found {self.data_frame.shape[0]} images belonging to {self.num_classes} classes") def __len__(self): ''' return total number of batches ''' self.data_frame = shuffle(self.data_frame) return math.ceil(self.train_len/self.batch_size) def on_epoch_end(self): ''' shuffle data after every epoch ''' # FIXME: on_epoch_end is not working as expected, so shuffling is done in __len__ as an alternative pass def __data_augmentation(self, img): ''' function to apply some data augmentation ''' img = tf.keras.preprocessing.image.random_shift(img, 0.2, 0.3) img = tf.image.random_flip_left_right(img) img = tf.image.random_flip_up_down(img) return img def __get_image(self, file_id): """ open image with file_id path and apply data augmentation """ img = np.asarray(Image.open(file_id)) img = np.resize(img, self.img_shape) img = self.__data_augmentation(img) img = preprocess_input(img) return img def __get_label(self, label_id): """ uncomment the below line to convert label into categorical format """ #label_id = tf.keras.utils.to_categorical(label_id, num_classes) return label_id def __getitem__(self, idx): batch_x = self.data_frame["filenames"][idx * self.batch_size:(idx + 1) * self.batch_size] batch_y = self.data_frame["labels"][idx * self.batch_size:(idx + 1) * self.batch_size] # read your data here using the batch lists, batch_x and batch_y x = [self.__get_image(file_id) for file_id in batch_x] y = [self.__get_label(label_id) for label_id in batch_y] return tf.convert_to_tensor(x), tf.convert_to_tensor(y)
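A hypothetical way to plug the generator into training; the image shape, class count and model object are assumptions, not part of the original answer:
train_gen = CustomDataGenerator(train_df, batch_size=32, img_shape=(256, 256, 3), num_classes=2)
val_gen = CustomDataGenerator(val_df, batch_size=32, img_shape=(256, 256, 3), num_classes=2)
model.fit(train_gen, validation_data=val_gen, epochs=10)  # model is assumed to be a compiled tf.keras model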
7
5
63,849,023
2020-9-11
https://stackoverflow.com/questions/63849023/find-replace-in-vs-code-jupyter-notebooks
Is there a way to find and replace text in Jupyter Notebooks in Visual Studio Code? I can do it for a specific cell by clicking into that cell and pressing Ctrl+H, but I cannot find a way to do it for all the cells in the entire notebook. This is what it looks like when I press Ctrl+H in a specific cell:
This issue no longer exists in Visual Studio Code as of Version 1.59.1. You can use Ctrl + H to find/replace in the whole Jupyter Notebook.
18
4
63,889,494
2020-9-14
https://stackoverflow.com/questions/63889494/testing-for-mongodb-functionality-using-motor-asyncio-and-pytest
So I am trying to write several tests to test my functions that use an async MongoDB connection. To connect to MongoDB I use Motor with asyncio. I need help with mocking the Motor connection. My Code: commons.py mongo = None blacklist.py import commons class Blacklist(object): async def check_if_blacklisted(self, word: str): blacklisted = False if await commons.mongo.dbtest.blacklist.find_one({'word': word}): blacklisted = True return blacklisted main.py import asyncio from blacklist import Blacklist from motor.motor_asyncio import AsyncIOMotorClient import commons async def run(): commons.mongo = AsyncIOMotorClient("mongodb://localhost", io_loop=asyncio.get_event_loop()) blacklist_checker = Blacklist() result = await blacklist_checker.check_if_blacklisted(word="should_be_false") print(result) # > False result = await blacklist_checker.check_if_blacklisted(word="should_be_true") print(result) # > True loop = asyncio.get_event_loop() loop.run_until_complete(run()) loop.close() I now want to test blacklist.py by mocking the Motor Connection but I cannot seem to get the test running properly. Here are the codes that I've tried: test_blacklist.py import pytest from blacklist import Blacklist class TestBlacklist(object): @pytest.fixture async def motor(self, event_loop): # I know I'm not mocking the Motor Connection here, # but just wanted to show you the output using this fixture. commons.mongo = motor.motor_asyncio.AsyncIOMotorClient(io_loop=event_loop) yield commons.mongo commons.mongo.close() @pytest.mark.asyncio async def test_check_if_blacklisted(self): blacklist_checker = Blacklist() blacklisted = await blacklist_checker.check_if_blacklisted(word="should_be_false") assert blacklisted == False # > AttributeError: 'NoneType' object has no attribute 'blacklist' pytest-mongodb: import pytest from unittest.mock import patch from blacklist import Blacklist class TestBlacklist(object): @pytest.mark.asyncio async def test_check_if_blacklisted(self, mongodb): with patch("blacklist.commons.mongo") as db: db = mongodb blacklist_checker = Blacklist() blacklisted = await blacklist_checker.check_if_blacklisted(word="should_be_false") assert blacklisted == False # > TypeError: object MagicMock can't be used in 'await' expression I tried searching online but I could not find a proper thread which would help me to perform the test while mocking the Motor connection which is async. Moreover, if you think that the direction I'm heading into for testing isn't right, kindly let me know since I am new to writing tests, especially with async db connections. Note: blacklist.py has various functions that require MongoDB functionality so it would be great if in my test_blacklist.py I could just initialize commons.mongo once and all the subsequent tests use that.
You can mock the async MongoDB database with pytest-async-mongodb but have in mind that it's outdated and has dependency errors so you have to fix the dependencies versions as followings: mongomock==3.12.0 pyyaml==3.13 pytest-asyncio==0.10.0 pytest==3.6.4 With pytest-async-mongodb you can get the mocked DB in the test by adding an argument called async_mongodb. I'm going to let you the code and the structure. project -app __init__.py blacklist.py commons.py -test -fixtures blacklist.json __init__.py test_blacklist.py main.py pytest.ini main.py import asyncio from app.blacklist import Blacklist from app.commons import get_database, set_client from motor.motor_asyncio import AsyncIOMotorClient async def run(): set_client( AsyncIOMotorClient("mongodb://localhost", io_loop=asyncio.get_event_loop()) ) db = await get_database() blacklist_checker = Blacklist() result = await blacklist_checker.check_if_blacklisted(db, word="should_be_false") print(result) # > False result = await blacklist_checker.check_if_blacklisted(db, word="should_be_true") print(result) # > True loop = asyncio.get_event_loop() loop.run_until_complete(run()) loop.close() blacklist.py from motor.motor_asyncio import AsyncIOMotorDatabase class Blacklist(object): async def check_if_blacklisted(self, db: AsyncIOMotorDatabase, word: str): blacklisted = False if await db.blacklist.find_one({"word": word}): blacklisted = True return blacklisted commons.py from motor.motor_asyncio import AsyncIOMotorClient, AsyncIOMotorDatabase class DataBase: client: AsyncIOMotorClient = None db = DataBase() async def get_database() -> AsyncIOMotorDatabase: return db.client["dbtest"] def set_client(client): db.client = client test_blacklist.py import pytest from app import blacklist @pytest.mark.asyncio async def test_should_be_false(async_mongodb): blacklist_checker = blacklist.Blacklist() blacklisted = await blacklist_checker.check_if_blacklisted( async_mongodb, word="should_be_false" ) assert blacklisted == False @pytest.mark.asyncio async def test_should_be_true(async_mongodb): blacklist_checker = blacklist.Blacklist() blacklisted = await blacklist_checker.check_if_blacklisted( async_mongodb, word="should_be_true" ) assert blacklisted == True pytest.ini [pytest] async_mongodb_fixture_dir = test/fixtures async_mongodb_fixtures = blacklist blacklist.json [ { "_id": {"$oid": "60511d158f80a8d34986e2b0"}, "word" : "should_be_true" } ] The fixture can be a .yaml too and can define the amount that you want. Read the package documentation for more information. Since it is outdated, I created a fork to update it and improve it with new features. You are invited to take a look at it and use it if you wish.
15
6
63,865,209
2020-9-12
https://stackoverflow.com/questions/63865209/plotly-how-to-show-both-a-normal-distribution-and-a-kernel-density-estimation-i
For a plotly figure factory distribution plot, the default distribution is kde (kernel density estimation): You can override the default by setting curve = 'normal' to get: But how can you show both kde and the normal curve in the same plot? Assigning a list like curve_type = ['kde', 'normal'] will not work. Complete code: import plotly.figure_factory as ff import plotly.graph_objects as go import plotly.express as px import numpy as np np.random.seed(2) x = np.random.randn(1000) hist_data = [x] group_labels = ['distplot'] # name of the dataset mean = np.mean(x) stdev_pluss = np.std(x) stdev_minus = np.std(x)*-1 fig = ff.create_distplot(hist_data, group_labels, curve_type='kde') fig.update_layout(template = 'plotly_dark') fig.show()
The easiest thing to do is build another figure fig2 with curve_type = 'normal' and pick up the values from there using: fig2 = ff.create_distplot(hist_data, group_labels, curve_type = 'normal') normal_x = fig2.data[1]['x'] normal_y = fig2.data[1]['y'] And then include those values in the first fig using fig.add_traces(go.Scatter()) like this: fig2 = ff.create_distplot(hist_data, group_labels, curve_type = 'normal') normal_x = fig2.data[1]['x'] normal_y = fig2.data[1]['y'] fig.add_traces(go.Scatter(x=normal_x, y=normal_y, mode = 'lines', line = dict(color='rgba(0,255,0, 0.6)', #dash = 'dash' width = 1), name = 'normal' )) fig.show() Plot with two density curves:
7
8
63,909,243
2020-9-15
https://stackoverflow.com/questions/63909243/what-is-the-correct-boilerplate-for-explicit-relative-imports
In PEP 366 - Main module explicit relative imports which introduced the module-scope variable __package__ to allow explicit relative imports in submodules, there is the following excerpt: When the main module is specified by its filename, then the __package__ attribute will be set to None. To allow relative imports when the module is executed directly, boilerplate similar to the following would be needed before the first relative import statement: if __name__ == "__main__" and __package__ is None: __package__ = "expected.package.name" Note that this boilerplate is sufficient only if the top level package is already accessible via sys.path. Additional code that manipulates sys.path would be needed in order for direct execution to work without the top level package already being importable. This approach also has the same disadvantage as the use of absolute imports of sibling modules - if the script is moved to a different package or subpackage, the boilerplate will need to be updated manually. It has the advantage that this change need only be made once per file, regardless of the number of relative imports. I have tried to use this boilerplate in the following setting: Directory layout: foo ├── bar.py └── baz.py Contents of the bar.py submodule: if __name__ == "__main__" and __package__ is None: __package__ = "foo" from . import baz The boilerplate works when executing the submodule bar.py from the file system (the PYTHONPATH modification makes the package foo/ accessible on sys.path): PYTHONPATH=$(pwd) python3 foo/bar.py The boilerplate also works when executing the submodule bar.py from the module namespace: python3 -m foo.bar However the following alternative boilerplate works just as well in both cases as the contents of the bar.py submodule: if __package__: from . import baz else: import baz Furthermore this alternative boilerplate is simpler and does not require any update of the submodule bar.py when it is moved with the submodule baz.py to a different package (since it does not hard code the package name "foo"). So here are my questions about the boilerplate of PEP 366: Is the first subexpression __name__ == "__main__" necessary or is it already implied by the second subexpression __package__ is None? Shouldn’t the second subexpression __package__ is None be not __package__ instead, in order to handle the case where __package__ is the empty string (like in a __main__.py submodule executed from the file system by supplying the containing directory: PYTHONPATH=$(pwd) python3 foo/)?
The correct boilerplate is none, just write the explicit relative import and let the exception escape if someone tries to run the module as a script or has sys.path misconfigured: from . import baz The boilerplate given in PEP 366 is just there to show that the proposed change is sufficient to allow users to make direct execution* work if they really want to, it isn’t intended to suggest that making direct execution work is a good idea (it isn’t, it is a bad idea that will almost inevitably cause other problems, even with the boilerplate from the PEP). Your proposed alternative boilerplate recreates the problem caused by implicit relative imports in Python 2: the "baz" module gets imported as baz from __main__, but will be imported as "foo.baz" everywhere else, so you end up with two copies in sys.modules under different names. Amongst other problems, this means that if some other module throws foo.baz.SomeException and your __main__ module tries to catch baz.SomeException, it won’t work, as those will be two different exception objects coming from two different modules. By contrast, if you use the PEP boilerplate, then __main__ will correctly import baz as "foo.baz", and the only thing you have to worry about is other modules potentially importing foo.bar. If you want simpler boilerplate that explicitly guards against the "inadvertently making two copies of the same module under a different name" bug without hardcoding the package name, then you can use this: if not __package__: raise RuntimeError(f"{__file__} must be imported as a package submodule") However, if you are going to do that, you can just as well do from . import baz unconditionally as suggested above, and let the underlying exception escape if someone tries to run the script directly instead of via the -m switch. * Direct execution means executing code from: A file path argument except directory and zip file paths (python <file path>). A -c argument (python -c <code>). The interactive interpreter (python). Standard input (python < <file path>). Indirect execution means executing code from: A directory or zip file path argument (python <directory or zip file path>). A -m argument (python -m <module name>). An import statement (import <module name>) Now to answer your questions specifically: Is the first subexpression __name__ == "__main__" necessary or is it already implied by the second subexpression __package__ is None? It is hard to get __package__ is None anywhere other than the __main__ module with the modern import system. But it used to be a lot more common, as rather than being set by the import system on module load, __package__ would instead be set lazily by the first explicit relative import executed in the module. In other words, the boilerplate is only trying to let direct execution work (cases 1 to 4 above) but __package__ is None used to imply direct execution or an import statement (case 7 above), so to filter out case 7 the subexpression __name__ == "__main__" (cases 1 to 6 above) was necessary. Shouldn’t the second subexpression __package__ is None be not __package__ instead, in order to handle the case where __package__ is the empty string (like in a __main__.py submodule executed from the file system by supplying the containing directory: PYTHONPATH=$(pwd) python3 foo/)? No because the boilerplate is only trying to let direct execution work (cases 1 to 4 above), it isn’t trying to let other flavours of sys.path misconfiguration pass silently.
7
5
63,816,790
2020-9-9
https://stackoverflow.com/questions/63816790/youtube-dl-error-youtube-said-unable-to-extract-video-data
I'm making a little graphical interface with Python 3 which should download a YouTube video from its URL. I used the youtube_dl module for that. This is my code: import youtube_dl # youtube_dl is used to download the video ydl_opt = {"outtmpl" : "/videos/%(title)s.%(ext)s", "format": "bestaudio/best"} # Here we give some advanced settings. outtmpl is used to define the path of the video that we are going to download def operation(link): """ Start the download operation """ try: with youtube_dl.YoutubeDL(ydl_opt) as yd: # The YoutubeDL() method takes one argument, a dictionary used to change the default settings video = yd.download([link]) # Start the download result.set("Your video has been downloaded !") except Exception: result.set("Sorry, we got an error.") operation("https://youtube.com/watch?v=...") When I execute my code, I get this error: ERROR: YouTube said: Unable to extract video data I saw here that it is because it doesn't find any video info; how can I resolve this problem?
Updating youtube-dl helped me. Depending on the way you installed it, here are the commands: youtube-dl --update (self-update) pip install -U youtube-dl (via python) brew upgrade youtube-dl (macOS + homebrew) choco upgrade youtube-dl (Windows + Chocolatey)
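To double-check which version your script actually picks up after updating, you can print it from Python (illustrative; the version module ships with the youtube_dl package):
import youtube_dl
print(youtube_dl.version.__version__)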
134
208
63,867,581
2020-9-13
https://stackoverflow.com/questions/63867581/install-python-3-7-via-google-colab-as-default-python
I need to use Python 3.7 as the default Python version in Google Colab (via this notebook) for testing the faceswap GitHub project, with this code: %cd "/content/faceit" !rm -rf faceswap !git clone https://github.com/deepfakes/faceswap.git %cd faceswap !python setup.py The reason is that, when I try to install faceswap in Google Colab, I get this error: /content/faceit Cloning into 'faceswap'... remote: Enumerating objects: 7725, done. remote: Total 7725 (delta 0), reused 0 (delta 0), pack-reused 7725 Receiving objects: 100% (7725/7725), 194.20 MiB | 31.66 MiB/s, done. Resolving deltas: 100% (5338/5338), done. /content/faceit/faceswap INFO Running as Root/Admin INFO The tool provides tips for installation and installs required python packages INFO Setup in Linux 4.19.112+ INFO Installed Python: 3.6.9 64bit ERROR Please run this script with Python version 3.7 or 3.8 64bit and try again. So, based on the different Python modules that need to be installed by the different files, I need to install Python 3.7 and set it as the default python command. I would appreciate any help to solve this. Thanks.
According to this post, there are different ways to run a specific version of Python on Colab: Installing Anaconda Adding a (fake) google.colab library Starting JupyterLab Accessing it with ngrok The code sample is below: # install Anaconda3 !wget -qO ac.sh https://repo.anaconda.com/archive/Anaconda3-2020.07-Linux-x86_64.sh !bash ./ac.sh -b # a fake google.colab library !ln -s /usr/local/lib/python3.6/dist-packages/google \ /root/anaconda3/lib/python3.8/site-packages/google # start jupyterlab, which now has Python3 = 3.8 !nohup /root/anaconda3/bin/jupyter-lab --ip=0.0.0.0& # access through ngrok, click the link !pip install pyngrok -q from pyngrok import ngrok print(ngrok.connect(8888)) Additionally, I recommend specifying the Python version explicitly when running a script on Colab: # Install the Python version !apt-get install python3.7 # Select the version !python3.7 setup.py You can see this example I have tried. If you use multiple library versions, you can also use virtualenv on Colab by specifying the Python version with the --python option. For example: virtualenv env --python=python3.7
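A further option sometimes used to make 3.7 answer to the plain python3 command is update-alternatives; this goes beyond the answer above and can confuse Colab's own tooling, so treat it as a sketch rather than a recommendation:
!apt-get install -y python3.7 python3.7-distutils
!update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.7 1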
7
9
63,863,449
2020-9-12
https://stackoverflow.com/questions/63863449/oserror-cannot-load-library-c-program-files-r-r-4-0-2-bin-x64-r-dll-error-0
I am trying to import the rpy2 library into a Jupyter Notebook but I cannot get past this error. The PATH 'C:\Program Files\R\R-4.0.2\bin\x64' has been added. This is the only version of R installed on my computer. I have completely uninstalled and reinstalled R/Rstudio/Anaconda with no luck. Here is the full error: --------------------------------------------------------------------------- OSError Traceback (most recent call last) <ipython-input-7-098f0d39b3a3> in <module> ----> 1 from rpy2.robjects import pandas2ri C:\Anaconda\lib\site-packages\rpy2\robjects\__init__.py in <module> 14 from functools import partial 15 import types ---> 16 import rpy2.rinterface as rinterface 17 import rpy2.rlike.container as rlc 18 C:\Anaconda\lib\site-packages\rpy2\rinterface.py in <module> 4 import typing 5 from typing import Union ----> 6 from rpy2.rinterface_lib import openrlib 7 import rpy2.rinterface_lib._rinterface_capi as _rinterface 8 import rpy2.rinterface_lib.embedded as embedded C:\Anaconda\lib\site-packages\rpy2\rinterface_lib\openrlib.py in <module> 42 rlib = _rinterface_cffi.lib 43 else: ---> 44 rlib = _dlopen_rlib(R_HOME) 45 46 C:\Anaconda\lib\site-packages\rpy2\rinterface_lib\openrlib.py in _dlopen_rlib(r_home) 35 raise ValueError('The library path cannot be None.') 36 else: ---> 37 rlib = ffi.dlopen(lib_path) 38 return rlib 39 OSError: cannot load library 'C:\Program Files\R\R-4.0.2\bin\x64\R.dll': error 0x7e edit: Here is the code I run to import rpy2 library: from rpy2.robjects import r, pandas2ri
1 - Windows + IDE For those not using Anaconda, add the following to Windows' PATH environment variable: C:\Program Files\R\R-4.0.3\bin\x64 Your R version may differ from "R-4.0.3". 2 - Anaconda Otherwise, check Grayson Felt's reply: I found a solution here. Adding the PATH C:\Users\username\Anaconda2;C:\Users\username\Anaconda2\Scripts;C:\Users\username\Anaconda2\Library\bin;C:\Users\username\Anaconda2\Library\mingw-w64\lib;C:\Users\username\Anaconda2\Library\mingw-w64\bin and subsequently restarting Anaconda fixed the issue. 3 - Code header Windows basic Alternatively, following Bruno's suggestion (and being more sophisticated): try: import rpy2.robjects as robjects except OSError as e: try: import os import platform if platform.system() in ('Windows', 'Microsoft'): os.environ["R_HOME"] = 'C:/Program Files/R/R-4.0.3/bin/x64' # Your R version here 'R-4.0.3' os.environ["PATH"] = "C:/Program Files/R/R-4.0.3/bin/x64" + ";" + os.environ["PATH"] import rpy2.robjects as robjects except OSError: raise e This code won't be effective on non-Windows platforms. Also, adjustments may be necessary for different R versions. If it gets more complicated than this, you should probably just go for solution 1 or 2. NOTE: You may also face this issue if your Python and R versions are built for different architectures (x86 vs x64)
7
3
63,811,550
2020-9-9
https://stackoverflow.com/questions/63811550/plotly-how-to-display-graph-after-clicking-a-button
I want to use Plotly to display a graph only after a button is clicked, but am not sure how to make this work. My figure is stored in the following code bit: fig1 = go.Figure(data=plot_data, layout=plot_layout) I then define my app layout with the following code bit: app.layout = html.Div([ #button html.Div(className='submit', children=[ html.Button('Forecast', id='submit', n_clicks=0) ]), #loading dcc.Loading( id="loading-1", type="default", children=html.Div(id="loading-output-1") ), #graph dcc.Graph(id= 'mpg-scatter',figure=fig), #hoverdata html.Div([ dcc.Markdown(id='hoverdata-text') ],style={'width':'50%','display':'inline-block'}) ]) @app.callback(Output('hoverdata-text','children'), [Input('mpg-scatter','hoverData')]) def callback_stats(hoverData): return str(hoverData) if __name__ == '__main__': app.run_server() But the problem is I only want the button displayed at first. Then, when someone clicks the Forecast button, the loading feature appears and a second later the graph displays. I defined a dcc.Loading component but am not sure how to define the callback for this feature.
SUGGESTION 3 - dcc.Store() and dcc.Loading This suggestion uses a dcc.Store() component, a html.Button() and a dcc.Loading component to produce what I now understand to be the desired setup: Launch an app that only shows a button. Click a button to show a loading icon, and then display a figure. Click again to show the next figure in a sequence of three figures. Start again when the figure sequence is exhausted. Upon launch, the app will look like this: Now you can click Figures once to get Figure 1 below, but only after enjoying one of the following loading icons: ['graph', 'cube', 'circle', 'dot', or 'default'] of which 'dot' will trigger ptsd, and 'cube' happens to be my favorite: Loading... Figure 1 Now you cann keep on clicking for Figure 2 and Figure 3. I've set the loading time for Figure 1 no less than 5 seconds, and then 2 seconds for Figure 2 and Figure 3. But you can easily change that. When you've clicked more than three times, we start from the beginning again: I hope I've finally figured out a solution for what you were actually looking for. The setup in the code snippet below builds on the setup described here, but has been adjusted to hopefully suit your needs. Let me know how this works out for you! import pandas as pd import dash import dash_core_components as dcc import dash_html_components as html from dash.dependencies import Input, Output, State import plotly.graph_objects as go from jupyter_dash import JupyterDash import dash_table from dash.exceptions import PreventUpdate import dash_bootstrap_components as dbc import time time.sleep(5) # Delay for 5 seconds. global_df = pd.DataFrame({'value1':[1,2,3,4], 'value2':[10,11,12,14]}) # app = JupyterDash(__name__) app = JupyterDash(external_stylesheets=[dbc.themes.SLATE]) df = pd.DataFrame({'Value 1': [1,2,3], 'Value 2':[10,11,12], 'Value 3':[14,12,9]}) df.set_index('Value 1', inplace = True) app.layout = html.Div([ # The memory store reverts to the default on every page refresh dcc.Store(id='memory'), # The local store will take the initial data # only the first time the page is loaded # and keep it until it is cleared. # Same as the local store but will lose the data # when the browser/tab closes. html.Table([ html.Thead([ html.Tr(html.Th('Click to launch figure:')), html.Tr([ html.Th(html.Button('Figures', id='memory-button')), ]), ]), ]), dcc.Loading(id = "loading-icon", #'graph', 'cube', 'circle', 'dot', or 'default' type = 'cube', children=[html.Div(dcc.Graph(id='click_graph'))]) ]) # Create two callbacks for every store. # add a click to the appropriate store. @app.callback(Output('memory', 'data'), [Input('memory-button', 'n_clicks')], [State('memory', 'data')]) def on_click(n_clicks, data): if n_clicks is None: # prevent the None callbacks is important with the store component. # you don't want to update the store for nothing. raise PreventUpdate # Give a default data dict with 0 clicks if there's no data. data = data or {'clicks': 0} data['clicks'] = data['clicks'] + 1 if data['clicks'] > 3: data['clicks'] = 0 return data # output the stored clicks in the table cell. @app.callback(Output('click_graph', 'figure'), # Since we use the data prop in an output, # we cannot get the initial data on load with the data prop. # To counter this, you can use the modified_timestamp # as Input and the data as State. 
# This limitation is due to the initial None callbacks # https://github.com/plotly/dash-renderer/pull/81 [Input('memory', 'modified_timestamp')], [State('memory', 'data')]) def on_data(ts, data): if ts is None: #raise PreventUpdate fig = go.Figure() fig.update_layout(plot_bgcolor='rgba(0,0,0,0)', paper_bgcolor='rgba(0,0,0,0)', yaxis = dict(showgrid=False, zeroline=False, tickfont = dict(color = 'rgba(0,0,0,0)')), xaxis = dict(showgrid=False, zeroline=False, tickfont = dict(color = 'rgba(0,0,0,0)'))) return(fig) data = data or {} 0 # plotly y = 'Value 2' y2 = 'Value 3' fig = go.Figure() fig.update_layout(plot_bgcolor='rgba(0,0,0,0)', paper_bgcolor='rgba(0,0,0,0)', yaxis = dict(showgrid=False, zeroline=False, tickfont = dict(color = 'rgba(0,0,0,0)')), xaxis = dict(showgrid=False, zeroline=False, tickfont = dict(color = 'rgba(0,0,0,0)'))) if data.get('clicks', 0) == 1: fig = go.Figure(go.Scatter(name=y, x=df.index, y=df[y], mode = 'lines')) fig.add_traces(go.Scatter(name=y, x=df.index, y=df[y2], mode = 'lines')) fig.update_layout(template='plotly_dark', title = 'Plot number ' + str(data.get('clicks', 0))) # delay only after first click time.sleep(2) if data.get('clicks', 0) == 2: fig = go.Figure((go.Scatter(name=y, x=df.index, y=df[y], mode = 'lines'))) fig.add_traces(go.Scatter(name=y, x=df.index, y=df[y2], mode = 'lines')) fig.update_layout(template='seaborn', title = 'Plot number ' + str(data.get('clicks', 0))) if data.get('clicks', 0) == 3: fig = go.Figure((go.Scatter(name=y, x=df.index, y=df[y], mode = 'lines'))) fig.add_traces(go.Scatter(name=y, x=df.index, y=df[y2], mode = 'lines')) fig.update_layout(template='plotly_white', title = 'Plot number ' + str(data.get('clicks', 0))) # Aesthetics fig.update_layout(margin= {'t':30, 'b':0, 'r': 50, 'l': 50, 'pad': 0}, hovermode = 'x', legend=dict(x=1,y=0.85), uirevision='constant') # delay for every figure time.sleep(2) return fig app.run_server(mode='external', port = 8070, dev_tools_ui=True, dev_tools_hot_reload =True, threaded=True) SUGGESTION 2 After a little communation we now know that you'd like to: only display a button first (question) when the button is clicked once fig 1 is displayed at the bottom , on 2nd click fig 2 is displayed, and on 3rd click fig 3 is displayed (comment) I've made a new setup that should meet all criteria above. At first, only the control options are being showed. And then you can select which figure to display: Fig1, Fig2 or Fig3. To me it would seem like a non-optimal user iterface if you have to cycle through your figures in order to select which one you would like to display. So I'v opted for radio buttons such as this: Now you can freely select your figure to display, or go back to showing nothing again, like this: Display on startup, or when None is selected: Figure 1 is selected You still haven't provided a data sample, so I'm still using my synthetic data from Suggestion 1, and rather letting the different layouts indicate which figure is shown. I hope that suits your needs since it seemed that you would like to have different layouts for the different figures. 
Complete code 2 from jupyter_dash import JupyterDash import dash_core_components as dcc import dash_html_components as html from dash.dependencies import Input, Output, State, ClientsideFunction import dash_bootstrap_components as dbc import dash_bootstrap_components as dbc import dash_core_components as dcc import dash_html_components as html import pandas as pd import plotly.graph_objs as go from dash.dependencies import Input, Output import numpy as np from plotly.subplots import make_subplots import plotly.express as px pd.options.plotting.backend = "plotly" from datetime import datetime palette = px.colors.qualitative.Plotly # sample data df = pd.DataFrame({'Prices': [1,10,7,5, np.nan, np.nan, np.nan], 'Predicted_prices':[np.nan, np.nan, np.nan, 5, 8,6,9]}) # app setup app = JupyterDash(external_stylesheets=[dbc.themes.SLATE]) # controls controls = dbc.Card( [dbc.FormGroup( [ dbc.Label("Options"), dcc.RadioItems(id="display_figure", options=[ {'label': 'None', 'value': 'Nope'}, {'label': 'Figure 1', 'value': 'Figure1'}, {'label': 'Figure 2', 'value': 'Figure2'}, {'label': 'Figure 3', 'value': 'Figure3'} ], value='Nope', labelStyle={'display': 'inline-block', 'width': '10em', 'line-height':'0.5em'} ) ], ), dbc.FormGroup( [dbc.Label(""),] ), ], body=True, style = {'font-size': 'large'}) app.layout = dbc.Container( [ html.H1("Button for predictions"), html.Hr(), dbc.Row([ dbc.Col([controls],xs = 4), dbc.Col([ dbc.Row([ dbc.Col(dcc.Graph(id="predictions")), ]) ]), ]), html.Br(), dbc.Row([ ]), ], fluid=True, ) @app.callback( Output("predictions", "figure"), [Input("display_figure", "value"), ], ) def make_graph(display_figure): # main trace y = 'Prices' y2 = 'Predicted_prices' # print(display_figure) if 'Nope' in display_figure: fig = go.Figure() fig.update_layout(plot_bgcolor='rgba(0,0,0,0)', paper_bgcolor='rgba(0,0,0,0)', yaxis = dict(showgrid=False, zeroline=False, tickfont = dict(color = 'rgba(0,0,0,0)')), xaxis = dict(showgrid=False, zeroline=False, tickfont = dict(color = 'rgba(0,0,0,0)'))) return fig if 'Figure1' in display_figure: fig = go.Figure(go.Scatter(name=y, x=df.index, y=df[y], mode = 'lines')) fig.add_traces(go.Scatter(name=y, x=df.index, y=df[y2], mode = 'lines')) fig.update_layout(template='plotly_dark') # prediction trace if 'Figure2' in display_figure: fig = go.Figure((go.Scatter(name=y, x=df.index, y=df[y], mode = 'lines'))) fig.add_traces(go.Scatter(name=y, x=df.index, y=df[y2], mode = 'lines')) fig.update_layout(template='seaborn') if 'Figure3' in display_figure: fig = go.Figure((go.Scatter(name=y, x=df.index, y=df[y], mode = 'lines'))) fig.add_traces(go.Scatter(name=y, x=df.index, y=df[y2], mode = 'lines')) fig.update_layout(template='plotly_white') # Aesthetics fig.update_layout(margin= {'t':30, 'b':0, 'r': 0, 'l': 0, 'pad': 0}) fig.update_layout(hovermode = 'x') fig.update_layout(showlegend=True, legend=dict(x=1,y=0.85)) fig.update_layout(uirevision='constant') fig.update_layout(title = "Prices and predictions") return(fig) app.run_server(mode='external', port = 8005) SUGGESTION 1 This suggestion will focus directly on: I want to use plotly to display a graph only after a button is clicked Which means that I don't assume that dcc.Loading() has to be a part of the answer. I find that dcc.Checklist() is an extremely versatile and user-friendly component. And when set up correctly, it will appear as a button that has to be clicked (or an option that has to be marked) in order to trigger certain functionalities or visualizations. 
Here's a basic setup: dcc.Checklist( id="display_columns", options=[{"label": col + ' ', "value": col} for col in df.columns], value=[df.columns[0]], labelStyle={'display': 'inline-block', 'width': '12em', 'line-height':'0.5em'} And here's how it will look like: Along with, among other things, the following few lines, the dcc.Checklist() component will let you turn the Prediction trace on and off as you please. # main trace y = 'Prices' fig = make_subplots(specs=[[{"secondary_y": True}]]) if 'Prices' in display_columns: fig.add_trace(go.Scatter(name=y, x=df.index, y=df[y], mode = 'lines'), secondary_y=False) # prediction trace if 'Predicted_prices' in display_columns: fig.add_trace(go.Scatter(name = 'predictions', x=df.index, y=df['Predicted_prices'], mode = 'lines'), secondary_y=False Adding to that, this setup will easily let you handle multiple predictions for multiple traces if you would like to extend this example further. Give it a try, and let me know how it works out for you. And if something is not clear, then we can dive into the details when you find the time. Here's how the app will look like with and without Predictions activated: OFF ON Complete code: from jupyter_dash import JupyterDash import dash_core_components as dcc import dash_html_components as html from dash.dependencies import Input, Output, State, ClientsideFunction import dash_bootstrap_components as dbc import dash_bootstrap_components as dbc import dash_core_components as dcc import dash_html_components as html import pandas as pd import plotly.graph_objs as go from dash.dependencies import Input, Output import numpy as np from plotly.subplots import make_subplots import plotly.express as px pd.options.plotting.backend = "plotly" from datetime import datetime palette = px.colors.qualitative.Plotly # sample data df = pd.DataFrame({'Prices': [1,10,7,5, np.nan, np.nan, np.nan], 'Predicted_prices':[np.nan, np.nan, np.nan, 5, 8,6,9]}) # app setup app = JupyterDash(external_stylesheets=[dbc.themes.SLATE]) # input controls controls = dbc.Card( [dbc.FormGroup( [ dbc.Label("Options"), dcc.Checklist( id="display_columns", options=[{"label": col + ' ', "value": col} for col in df.columns], value=[df.columns[0]], labelStyle={'display': 'inline-block', 'width': '12em', 'line-height':'0.5em'} #clearable=False, #multi = True ), ], ), dbc.FormGroup( [dbc.Label(""),] ), ], body=True, style = {'font-size': 'large'}) app.layout = dbc.Container( [ html.H1("Button for predictions"), html.Hr(), dbc.Row([ dbc.Col([controls],xs = 4), dbc.Col([ dbc.Row([ dbc.Col(dcc.Graph(id="predictions")), ]) ]), ]), html.Br(), dbc.Row([ ]), ], fluid=True, ) @app.callback( Output("predictions", "figure"), [Input("display_columns", "value"), ], ) def make_graph(display_columns): # main trace y = 'Prices' fig = make_subplots(specs=[[{"secondary_y": True}]]) if 'Prices' in display_columns: fig.add_trace(go.Scatter(name=y, x=df.index, y=df[y], mode = 'lines'), secondary_y=False) # prediction trace if 'Predicted_prices' in display_columns: fig.add_trace(go.Scatter(name = 'predictions', x=df.index, y=df['Predicted_prices'], mode = 'lines'), secondary_y=False) # Aesthetics fig.update_layout(margin= {'t':30, 'b':0, 'r': 0, 'l': 0, 'pad': 0}) fig.update_layout(hovermode = 'x') fig.update_layout(showlegend=True, legend=dict(x=1,y=0.85)) fig.update_layout(uirevision='constant') fig.update_layout(template='plotly_dark', plot_bgcolor='#272B30', paper_bgcolor='#272B30') fig.update_layout(title = "Prices and predictions") return(fig) app.run_server(mode='external', 
port = 8005)
8
5
63,823,964
2020-9-10
https://stackoverflow.com/questions/63823964/importerror-cannot-import-name-sysconfig-from-distutils-usr-lib-python3-8
I installed pip3 using sudo apt-get install python3-pip. After that, when I run the following command to install Django: sudo pip3 install django I get this error: Traceback (most recent call last): File "/usr/bin/pip3", line 9, in <module> from pip import main File "/usr/lib/python3/dist-packages/pip/__init__.py", line 14, in <module> from pip.utils import get_installed_distributions, get_prog File "/usr/lib/python3/dist-packages/pip/utils/__init__.py", line 23, in <module> from pip.locations import ( File "/usr/lib/python3/dist-packages/pip/locations.py", line 9, in <module> from distutils import sysconfig ImportError: cannot import name 'sysconfig' from 'distutils' (/usr/lib/python3.8/distutils/__init__.py) How do I fix this?
I recently installed Python 3.9 manually on my Ubuntu machine (upgrading from the 3.6 version) using apt install python3.9. After that, pip3 was broken. The issue was that distutils was not built for the 3.9 version, so in my case I ran apt install python3.9-distutils to resolve it. In your case, make sure to adjust the 3.x version in the distutils package name accordingly (e.g. python3.8-distutils to match the Python 3.8 shown in your traceback).
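As a quick sanity check after installing the matching distutils package, you can re-run the exact import that fails in the traceback (a minimal sketch; run it with the same interpreter that pip3 uses):

# verification sketch: this is the import that was failing in the traceback above
from distutils import sysconfig
print(sysconfig.get_python_lib())  # prints the site-packages path instead of raising ImportError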
33
75
63,871,252
2020-9-13
https://stackoverflow.com/questions/63871252/source-file-found-twice-error-with-mypy-0-780-in-python-for-vscode
In my python project, after upgrading mypy from 0.770 to 0.782, an error is reported in files that previously had no type errors: my_pkg_name\__init__.py: error: Source file found twice under different module names: 'top_pkg.my_pkg_name' and 'my_pkg_name' Found 1 error in 1 file (checked 1 source file) I'm pretty sure this is related to Issue #8944 on mypy and the way in which vscode-python executes mypy on the open files. I've tried adding various mypy flags (e.g. --namespace-packages, --no-namespace-packages) but this did not resolve the issue. my_pkg_name does contain an __init__.py and so does top_pkg. With mypy==0.770 this was not a problem. Looking at the extension's output, this is how mypy is invoked: > ~\.virtualenvs\OfflineSystem.38\Scripts\python.exe ` c:\Users\***\.vscode\extensions\ms-python.python-2020.8.108011\pythonFiles\pyvsc-run-isolated.py mypy ` --ignore-missing-imports --follow-imports=silent --show-column-numbers ` d:\***\top_pkg\my_pkg_name\sub_pkg\my_file.py Should I change something in the mypy-related vscode settings for this to work?
I had a similar issue, but not via VSCode. The fix in my case was to remove an __init__.py file from a directory that was only being included by adding it to the MYPYPATH; that directory wasn't actually being treated as a module, so it shouldn't really have had the __init__.py file in the first place. You said you tried adding the --namespace-packages flag, but I think you would need --no-namespace-packages to disable the new checker that might be causing your problem.
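If it turns out you do need the flag, one place to try it is the mypy arguments passed by the VS Code Python extension; the sketch below simply appends it to the default arguments visible in your extension output (this is an illustration of where the flag would go, not a confirmed fix):

// settings.json (VS Code) - sketch only
"python.linting.mypyArgs": [
    "--ignore-missing-imports",
    "--follow-imports=silent",
    "--show-column-numbers",
    "--no-namespace-packages"
]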
40
32
63,833,593
2020-9-10
https://stackoverflow.com/questions/63833593/how-to-run-fastapi-uvicorn-in-google-colab
I am trying to run a "local" web app on Google Colab using FastAPI / Uvicorn like some of the Flask app sample code I've seen but cannot get it to work. Has anyone been able to do this? Appreciate it. Installed FastAPI & Uvicorn successfully !pip install FastAPI -q !pip install uvicorn -q Sample app from fastapi import FastAPI app = FastAPI() @app.get("/") async def root(): return {"message": "Hello World"} Run attempts #attempt 1 if __name__ == "__main__": uvicorn.run("/content/fastapi_002:app", host="127.0.0.1", port=5000, log_level="info") #attempt 2 #uvicorn main:app --reload !uvicorn "/content/fastapi_001.ipynb:app" --reload
You can use ngrok to expose a local port through an external URL. Basically, ngrok takes something available/hosted on your localhost and exposes it to the internet with a temporary public URL. First install the dependencies !pip install fastapi nest-asyncio pyngrok uvicorn Create your app from fastapi import FastAPI from fastapi.middleware.cors import CORSMiddleware app = FastAPI() app.add_middleware( CORSMiddleware, allow_origins=['*'], allow_credentials=True, allow_methods=['*'], allow_headers=['*'], ) @app.get('/') async def root(): return {'hello': 'world'} Then run it: import nest_asyncio from pyngrok import ngrok import uvicorn ngrok_tunnel = ngrok.connect(8000) print('Public URL:', ngrok_tunnel.public_url) nest_asyncio.apply() uvicorn.run(app, port=8000)
15
36
63,885,007
2020-9-14
https://stackoverflow.com/questions/63885007/implementation-of-kleptography-in-python-setup-attack
My task is to reproduce the plot below: It comes from this journal (pg 137-145) In this article, the authors describe a kleptographic attack called SETUP against Diffie-Hellman keys exchange. In particular, they write this algorithm: Now, in 2 the authors thought "Maybe we can implement honest DHKE and malicious DHKE, and then we compare the running time of the two algorithms". Then, the plot above was created. For this purpose, they say "We have implemented contaminated and uncontaminated versions of Diffie-Hellman protocols in ANSI C and linked with RSAREF 2.0 library using GNU C v 2.7 compiler. All tests were run on Linux system using a computer with a Pentium II processor (350 MHz) and 64 Mb memory. Computation time for a single protocol was measured in 10- 2s." I want to do the same, i.e. implement good and evil DH and compare the running time. This is the code I produced: import timeit #used to measure the running time of functions import matplotlib.pyplot as plt #plot the results import random import numpy as np import pyDH #library for Diffie-Hellman key exchange X= pyDH.DiffieHellman() #Eve's private key Y= X.gen_public_key() #Eve's public key #The three integers a,b,W embedded by Eve W=3 a=2 b=2 #Honest DH def public_key(): d1 = pyDH.DiffieHellman() return d1.gen_public_key() #Malicoius Diffie_Hellman (SETUP) #line 1-7 in the algorithm def mal_public_key(): d1 = pyDH.DiffieHellman().get_private_key() t=random.choice([0,1]) z1=pow(pyDH.DiffieHellman().g,d1-W*t,pyDH.DiffieHellman().p) z2=pow(Y,-a*d1-b,pyDH.DiffieHellman().p) z= z1*z2 % pyDH.DiffieHellman().p d2=hash(z) return pow(pyDH.DiffieHellman().g,d2,pyDH.DiffieHellman().p) #function that plot the results def plot(ntest=100000): times = [] times2=[] for i in range(ntest): #Running time HONEST Diffie-Hellman (worked two times = two key generations) elapse_time = timeit.timeit(public_key, number=2) #here I collect the times times += [int(round(elapse_time* pow(10, 2) ) )] # Running time MALICOIUS Diffie-Hellman elapse_time2 = timeit.timeit(mal_public_key, number= 1) times2 += [int(round(elapse_time2* pow(10, 2)) )] x_axis=[i for i in range(0,20)] #collect how many tests last i seconds y_axis = [times.count(i) for i in x_axis] y_axis2 = [times2.count(i) for i in x_axis] plt.plot(x_axis, y_axis, x_axis, y_axis2) plt.show() plot() where I used pyDH for honest Diffie-Hellman. This code gave me this figure: I think the blue line (honest DH) is ok but I'm a little bit suspicious about the orange line (SETUP DH) which is linked to this function: def mal_public_key(): #line 1-7 in the algorithm d1 = pyDH.DiffieHellman().get_private_key() t=random.choice([0,1]) z1=pow(pyDH.DiffieHellman().g,d1-W*t,pyDH.DiffieHellman().p) z2=pow(Y,-a*d1-b,pyDH.DiffieHellman().p) z= z1*z2 % pyDH.DiffieHellman().p d2 = hash(z) return pow(pyDH.DiffieHellman().g,d2,pyDH.DiffieHellman().p) Can the above function be considered as an "implementation" of SETUP attack against DH? Otherwise, what would you write? (any comments to the whole code will be really appreciated) In the article, one can read: "It is interesting that the curve representing the contaminated implementation has a small peak at the same value of computation time where the correct implementation has its only peak. [...] There are two different parts which occur every second call to device. The first one is identical to original [...] protocol and exactly this part is presented on the small peak. 
The disproportion between two peaks of curve representing contaminated implementation is clearly visible. The reason is that for practical usage after the first part of the protocol, (i.e. lines 1-3) device repeats the second part (i.e. lines 4-7) not once but many times." Can you explain this statement to me? In particular, why is there no small orange peak in my plot? Maybe the mal_public_key() function is bad. I'm working with Windows 10 64-bit, 8 GB RAM, an AMD A10-8700P Radeon R6, 10 compute cores 4C+6G, 1.80 GHz, where I use Python 3.8. I know my computer should be better than the authors' one (I think); maybe this can affect the results. However, here a similar experiment on an elliptic curve is shown and the plot is close to the original one (but, it's an elliptic curve). (P.S. I assumed that a=b=2 and W=3 because Young and Young don't say what these integers should be).
The problem is most easily understood using a concrete example: Alice has a device that generates Diffie-Hellman keys for her. On this device the malicious Diffie Hellman variant is implemented. Implementation of the malicious DH variant / SETUP The malicious DH variant is defined as follows, s. here, sec. 3.1: MDH1: For the first generated key pair the following applies: The private key c1 is a random value smaller than p-1. c1 is stored for later use. The public key is calculated according to m1 = gc1 mod p. The device provides Alice with the private key (c1) and the public key (m1). MDH2: For the second generated key pair the following applies: A random t is chosen (0 or 1). z2 is calculated according to z2 = g(c1 - Wt) * Y(-ac1 - b) mod p. The private key c2 is calculated according to H(z2). Here H is a cryptographic hash function. c2 is stored for later use. The public key is calculated according to m2 = gc2 mod p. The device provides Alice with the private key (c2) and the public key (m2). MDHi: What happens for the third and subsequent key pairs? The same algorithm is used as for the second generated key pair, i.e. for example for the third key exchange, c2 is now used instead of c1 and m2 is now used instead of m1, or in general if the i-th key pair ci, mi is generated: A random t is chosen (0 or 1). zi is calculated according to zi = g(ci-1 - Wt) * Y(-aci-1 - b) mod p. The private key ci is calculated according to H(zi). Here H is (the same) cryptographic hash function. ci is stored for later use. The public key is calculated according to mi = gci mod p. The device provides Alice with the private key (ci) and the public key (mi). Note that there are two categories of key exchange processes, MDH1 and MDHi, which will later play an important role in the discussion of timing behavior. Evaluation of the posted implementation of the malicious DH variant:SETUP (Secretly Embedded Trapdoor with Universal Protection) is not implemented by the implementation of the malicious DH variant posted in the question. SETUP establishes a relationship between two consecutive generated key pairs. This makes it possible to derive the private key of the last key generation from two such correlated public keys, which can be intercepted e.g. during the key exchange process. But for this, the private key must be passed between successive key generations to use it in the last key generation for establishing this relationship. This does not happen in the implementation, so that the required relationship cannot be achieved. From a more technical point of view, the implementation fails mainly because the cases MDH1 and MDHi are not implemented separately but together as a closed process. Closed in the sense that the private key is not stored between successive calls, so it cannot be passed on. Subsequent calls of the implementation therefore generate random key pairs that are not in the required relationship to each other. Also note that from the similar time behaviour (only similar, because e.g. the secondary peak is missing, which will be discussed below) of the posted implementation and the implementation used in the papers, of course no working implementation can be concluded. 
A working Python implementation of SETUP or the malicious Diffie-Hellman variant could look like this: import timeit import matplotlib.pyplot as plt import Crypto.Random.random import hashlib import pyDH DH = pyDH.DiffieHellman() xBytes = bytes.fromhex("000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f") X = int.from_bytes(xBytes, 'big') #Attacker's private key Y = pow(DH.g, X, DH.p) #Attacker's public key W = 3 a = 1 b = 2 ... privateKey = -1 def maliciousDH(): global privateKey DH = pyDH.DiffieHellman() if privateKey == -1: privateKeyBytes = Crypto.Random.get_random_bytes(32) privateKey = int.from_bytes(privateKeyBytes, 'big') publicKey = pow(DH.g, privateKey, DH.p) return publicKey else: t = Crypto.Random.random.choice([0,1]) z1 = pow(DH.g, privateKey - W*t, DH.p) z2 = pow(Y, -a*privateKey - b, DH.p) z = z1 * z2 % DH.p privateKey = hashVal(z) publicKey = pow(DH.g, privateKey, DH.p) return publicKey def hashVal(value): valBytes = value.to_bytes((value.bit_length() + 7) // 8, 'big') hashBytes = hashlib.sha256(valBytes).digest() hashInt = int.from_bytes(hashBytes, 'big') return hashInt Please note the following: The case privateKey == -1 corresponds to the generation of the first key pair (MDH1), the other case to the generation of the following key pairs (MDHi). The private key is stored in the global variable privateKey. W, a, b are constants that are known to the attacker. X, Y are the key pair of the attacker. Only the attacker as owner of the private key X can perform the attack. W, a, b are freely choosable constants, which should introduce randomness. W is odd by definition. Cryptographic functions are used to generate random data (from Crypto.Random, e.g. the private keys) and the hashs (SHA256 digest). pyDH is only used to generate p and g. The following function now generates 5 consecutive key pairs for Alice: def maliciousDHRepeated(nRepeats): for repeat in range(nRepeats): publicKey = maliciousDH() print('Key Exchange: {0}\nPublic Key: {1}\nPrivate Key: {2}\n'.format(repeat, publicKey, privateKey)) maliciousDHRepeated(5) The output looks e.g. 
as follows: Key Exchange: 0 Public Key: 18226633224055651343513608182055895594759078768444742995197429721573909831828316605245608159842524932769748407369962509403625808125978764850049011735149830412617126856825222066673989542531319225049606268752217216534778109596553167314895529287398326587713050976475410688145977311375672549266099133534202232996468144930213166214281451969286299514333332818247602266349875280576154902929160410595469062077684241858299388027340353827453534708956747631487004964946083413862389303833607835673755108949997895120758537057516467051311896742665758073078276178999259778767868295638521495976727377437778558494902010641893884127920 Private Key: 4392204374130125010330067842931188140034970327696589536054104764110713347126 Key Exchange: 1 Public Key: 30139618311151172765747180096035363693813051643690049553112194419098573739435580694888705607377666692401242533649667511485491747154556435118981839182970647673078490062996731957675595514634816595774261281319221404554602729724229286827390637649730469857732523498876684012366691655212568572203566445090111040033177144082954609583224066018767573710168898588215102016371545497586869795312982374868713234724720605552587419481534907792549991537554874489150528107800132171517459832877225822636558667670295657035332169649489708322429766192381544866291328725439248413336010141524449750548289234620983542492600882034426335715286 Private Key: 3611479293587046962518596774086804037937636733424448476968655857365061813747 Key Exchange: 2 Public Key: 15021809215915928817738182897850696714304022153200417823361919535871968087042467291587809018574692005905960913634048699743462124350711726491325639829348819265101140044881197573825573242439657057004277508875703449827687125018726500056235788271729552163855744357971593116349805054557752316498471702822698997323082247192241750099101807453692393170567790930805933977981635528696056267034337822347299945659257479795868510784724622533533407893475292593560877530083021377556080457647318869173210614687548861303039851452268391725700391477968193268054391569885481465079263633084038436082726915496351243387434434747413479966869 Private Key: 60238983934252145167590500466393092258324199134626435320227945202690746633424 Key Exchange: 3 Public Key: 10734077925995936749728900841226052313744498030619019606720177499132029904239020745125405126713523340876577377685679745194099270648038862447581601078565944941187694038253454951671644736332158734087472188874069332741118722988900754423479460535064533867667442756344440676583179886192206646721969399316522205542274029421077750159152806910322245234676026617311998560439487358561468993386759527957631649439920242228063598908755800970876077082845023854156477810356816239577567741067576206713910926615601025551542922545468685517450134977861564984442071615928397542549964474043544099258656296307792809119600776707470658907443 Private Key: 62940568050867023135180138841026300273520492550529251098760141281800354913131 Key Exchange: 4 Public Key: 
2425486506974365273326155229800001628001265676036580545490312140179127686868492011994151785383963618955049941820322807563286674799447812191829716334313989776868220232473407985110168712017130778639844427996734182094558266926956379518534350404029678111523307272488057571760438620025027821267299005190378538083215345831756055838193240337363440449096741629258341463744397835411230218521658062737568519574165810330776930112569624066663275971997360960116063343238010922620695431389619027278076763449139206478745130163740678443228451977971659504896731844067323138748945493668050217811755122279988027033740720863980805941221 Private Key: 3330795034653139675928270510449092467425071094588264172648356254062467669676 Verification To verify the implementation, two tests are performed: Test 1: Are the generated key pairs Diffie-Hellman pairs? This can be verified by comparing the generated secrets, e.g. as follows (for Alice the key pair from exchange process 4 is taken): def determineSecrets(): # Bob's key pair DH = pyDH.DiffieHellman() privateKeyBob = DH.get_private_key() publicKeyBob = DH.gen_public_key() #Alice's key pair (from Key Exchange 4) privateKeyAlice = 3330795034653139675928270510449092467425071094588264172648356254062467669676 publicKeyAlice = 2425486506974365273326155229800001628001265676036580545490312140179127686868492011994151785383963618955049941820322807563286674799447812191829716334313989776868220232473407985110168712017130778639844427996734182094558266926956379518534350404029678111523307272488057571760438620025027821267299005190378538083215345831756055838193240337363440449096741629258341463744397835411230218521658062737568519574165810330776930112569624066663275971997360960116063343238010922620695431389619027278076763449139206478745130163740678443228451977971659504896731844067323138748945493668050217811755122279988027033740720863980805941221 #Secrets secretBob = pow(publicKeyAlice, privateKeyBob, DH.p) secretAlice = pow(publicKeyBob, privateKeyAlice, DH.p) print("Bob's secret: {0}\nAlices's secret: {1}\n".format(secretBob, secretAlice)) determineSecrets() The calculated secrets are identical: Bob's secret: 7003831476740338689134311867440050698619657722218522238000557307099433806548522159881608160975874841852430612290661550184838734726150744064473827597359598057583882560698588377500873394072081781357504452653998970161870108172814907873339750240946592215609078441859786431410312119968080615568505910664062291703601148542762668346870718638131670350107907779759989388216242619752036996919178837249552098220438246127095430336587506739324288803914290366560286806624611103226334708363046293511682782019638354540305524062643841864120561080971292493441027391819191342193393031588366711412191000779126089156632829354631140805980 Alices's secret: 7003831476740338689134311867440050698619657722218522238000557307099433806548522159881608160975874841852430612290661550184838734726150744064473827597359598057583882560698588377500873394072081781357504452653998970161870108172814907873339750240946592215609078441859786431410312119968080615568505910664062291703601148542762668346870718638131670350107907779759989388216242619752036996919178837249552098220438246127095430336587506739324288803914290366560286806624611103226334708363046293511682782019638354540305524062643841864120561080971292493441027391819191342193393031588366711412191000779126089156632829354631140805980 Test 2: Can the attacker determine Alice's private keys? The algorithm for deriving the keys is, s. here, sec. 
3.1: Determination of r according to r = mi-1a * gb mod p Determination of u according to u = mi-1/rX mod p. If mi = gH(u) mod p, then the private key is ci = H(u) and end Determination of v according to v = u/gW mod p. If mi = gH(v) mod p, then the private key is ci = H(v) Apart from the constants W, a, b and the private key X, which the attacker knows all, only the public keys mi and mi-1 are needed to determine the private key ci. A possible implementation of this algorithm is: def stealPrivateKey(currentPublicKey, previousPublicKey): r = pow(previousPublicKey, a, DH.p) * pow(DH.g, b, DH.p) % DH.p u = previousPublicKey * pow(r, -X, DH.p) % DH.p if currentPublicKey == pow(DH.g, hashVal(u), DH.p): return hashVal(u) v = u * pow(DH.g, -W, DH.p) % DH.p if currentPublicKey == pow(DH.g, hashVal(v), DH.p): return hashVal(v) return -1 For verification the public keys from the key exchange processes 3 and 4 are used: previousPublicKey = 10734077925995936749728900841226052313744498030619019606720177499132029904239020745125405126713523340876577377685679745194099270648038862447581601078565944941187694038253454951671644736332158734087472188874069332741118722988900754423479460535064533867667442756344440676583179886192206646721969399316522205542274029421077750159152806910322245234676026617311998560439487358561468993386759527957631649439920242228063598908755800970876077082845023854156477810356816239577567741067576206713910926615601025551542922545468685517450134977861564984442071615928397542549964474043544099258656296307792809119600776707470658907443 currentPublicKey = 2425486506974365273326155229800001628001265676036580545490312140179127686868492011994151785383963618955049941820322807563286674799447812191829716334313989776868220232473407985110168712017130778639844427996734182094558266926956379518534350404029678111523307272488057571760438620025027821267299005190378538083215345831756055838193240337363440449096741629258341463744397835411230218521658062737568519574165810330776930112569624066663275971997360960116063343238010922620695431389619027278076763449139206478745130163740678443228451977971659504896731844067323138748945493668050217811755122279988027033740720863980805941221 currentPrivateKey = stealPrivateKey(currentPublicKey, previousPublicKey) print(currentPrivateKey) The result is 3330795034653139675928270510449092467425071094588264172648356254062467669676 and thus corresponds to the private key from key exchange 4. Analysis of time behaviour To compare the timing behavior of the malicious and standard DH variant, an implementation of the standard DH variant is required: SDHi: For all generated key pairs the following applies: The private key ci is a random value smaller than p-1. The public key is calculated according to mi = gci mod p. The device provides Alice with the private key (ci) and the public key (mi). with e.g. 
the following implementation: def standardDH(): DH = pyDH.DiffieHellman() privateKeyBytes = Crypto.Random.get_random_bytes(32) privateKey = int.from_bytes(privateKeyBytes, 'big') publicKey = pow(DH.g, privateKey, DH.p) return publicKey The comparison between malicious and standard DH variant is performed with the following implementation: def plot(nTests = 1000, nKeyExPerTest = 10): global privateKey timesStandardDH = [] timesMaliciousDH = [] for test in range(nTests): for keyExPerTest in range(nKeyExPerTest): elapseTimeStandardDH = timeit.timeit(standardDH, number = 1) timesStandardDH += [int(round(elapseTimeStandardDH * pow(10, 3) ) )] privateKey = -1 for keyExPerTest in range(nKeyExPerTest): elapseTimeMaliciousDH = timeit.timeit(maliciousDH, number = 1) timesMaliciousDH += [int(round(elapseTimeMaliciousDH * pow(10, 3)) )] x_axis=[i for i in range(0, 50)] y_axisStandardDH = [timesStandardDH.count(i) for i in x_axis] y_axisMaliciousDH = [timesMaliciousDH.count(i) for i in x_axis] plt.plot(x_axis, y_axisStandardDH, x_axis, y_axisMaliciousDH) plt.show() plot() The following applies here: nTests = 1000 tests are performed. For each test nKeyExPerTest = 10 key exchange processes are performed. privateKey = -1 ensures that each test starts again with MDH1. For each key exchange process the duration is measured and a frequency distribution of the duration is created. All measurements were performed with an Intel Core i7-6700 processor (3.40 GHz), Cores/Threads: 4/8, 16 GB RAM, NVIDIA Geforce GTX 960/4GB under Win10/64 bit and Python 3.8. The following two figures show this frequency distribution of the duration of the key exchange process: Left: x-axis: x 1000-1 sec, right: x-axis: x 10000-1 sec The figures correspond qualitatively to the figures posted from the papers: As expected, the main peak of the malicious Diffie-Hellman variant is at higher time values than the main peak of the standard DH variant. The ratio is similar at about 3. The malicious Diffie-Hellman variant has a secondary peak at the main peak of the standard DH variant. The secondary peak of the malicious DH variant is significantly smaller than the main peak of the malicious DH variant. Deviations in the waveform are probably due to different implementations, different hardware etc. Explanation of the secondary peak of the malicious DH variant: In the case of the malicious DH variant, there are two different cases, MDH1 and MDHi. MDH1 corresponds practically to the case of the standard DH variant SDHi. This is the reason why the secondary peak of the malicious and the main peak of the standard DH variant coincide. However, MDH1 and MDHi occur at different frequencies. E.g. for the tests, 10 key exchange processes per test were defined, i.e. the ratio of MDH1 to MDHi exchanges is 1:9, which explains the significantly smaller secondary peak relative to the main peak. The code could easily be changed to randomly determine the number of key exchange processes per test, which would be more realistic. But with a fixed number of key exchange processes the relations are easier to illustrate. Why does the secondary peak not appear in the implementation of the malicious DH variant posted in the question? This is because this implementation combines the cases MDH1 and MDHi and implements them as one case, so that only a single value is measured. Finally, here is a link to a helpful work of Eindhoven University of Technology on this topic.
18
11
63,856,340
2020-9-12
https://stackoverflow.com/questions/63856340/vs-code-cant-open-ipynb-file
Has anyone already had this problem, where VS Code keeps loading all the time and won't open an ipynb file? I've tried using Python 3.7, but I had the same problem. I also tried reinstalling both VS Code and Anaconda, with no success. Here is my environment data: VS Code version: 1.49.0 Python extension version: v2020.8.108011 OS and version: Ubuntu 20.04 Python version (& distribution if applicable, e.g. Anaconda): Anaconda python 3.8.3 Type of virtual environment used: using conda base environment Value of the python.languageServer setting: "Pylance" ipython version: 7.16.1 jedi version: 0.17.1 ipykernel version: 5.3.2
On their official GitHub page, they are already tracking this issue. There is also a workaround (kind of) right now: you have to maximize the terminal panel below and then restore the panel size (basically max and min with the arrow button). Then the notebook loads and everything works fine. :D The workaround was in this comment: https://github.com/microsoft/vscode-python/issues/13901#issuecomment-691625412 It's not perfect, but at least all the features are there and I can work with my notebooks again :)
20
3
63,829,991
2020-9-10
https://stackoverflow.com/questions/63829991/qt-qpa-plugin-could-not-load-the-qt-platform-plugin-xcb-in-even-though-it
I have installed gqcnn, Pyrep and autolab_core. After that, I executed the code that my coworker wrote and, it ran fine on his computer. However, I cannot run the code. The occurred error was python3.7/site-packages/cv2/qt/plugins/platforms" ... QFactoryLoader::QFactoryLoader() looking at "/home/bak/anaconda3/envs/pyrep/lib/python3.7/site-packages/cv2/qt/plugins/platforms/libqxcb.so" Found metadata in lib /home/bak/anaconda3/envs/pyrep/lib/python3.7/site-packages/cv2/qt/plugins/platforms/libqxcb.so, metadata= { "IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3", "MetaData": { "Keys": [ "xcb" ] }, "archreq": 0, "className": "QXcbIntegrationPlugin", "debug": false, "version": 331520 } In /home/bak/anaconda3/envs/pyrep/lib/python3.7/site-packages/cv2/qt/plugins/platforms/libqxcb.so: Plugin uses incompatible Qt library (5.15.0) [release] "The plugin '/home/bak/anaconda3/envs/pyrep/lib/python3.7/site-packages/cv2/qt/plugins/platforms/libqxcb.so' uses incompatible Qt library. (5.15.0) [release]" not a plugin QFactoryLoader::QFactoryLoader() checking directory path "/home/bak/anaconda3/envs/pyrep/plugins/platforms" ... QFactoryLoader::QFactoryLoader() looking at "/home/bak/anaconda3/envs/pyrep/plugins/platforms/libqeglfs.so" Found metadata in lib /home/bak/anaconda3/envs/pyrep/plugins/platforms/libqeglfs.so, metadata= { "IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3", "MetaData": { "Keys": [ "eglfs" ] }, "className": "QEglFSIntegrationPlugin", "debug": false, "version": 329991 } Got keys from plugin meta data ("eglfs") QFactoryLoader::QFactoryLoader() looking at "/home/bak/anaconda3/envs/pyrep/plugins/platforms/libqminimal.so" Found metadata in lib /home/bak/anaconda3/envs/pyrep/plugins/platforms/libqminimal.so, metadata= { "IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3", "MetaData": { "Keys": [ "minimal" ] }, "className": "QMinimalIntegrationPlugin", "debug": false, "version": 329991 } Got keys from plugin meta data ("minimal") QFactoryLoader::QFactoryLoader() looking at "/home/bak/anaconda3/envs/pyrep/plugins/platforms/libqminimalegl.so" Found metadata in lib /home/bak/anaconda3/envs/pyrep/plugins/platforms/libqminimalegl.so, metadata= { "IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3", "MetaData": { "Keys": [ "minimalegl" ] }, "className": "QMinimalEglIntegrationPlugin", "debug": false, "version": 329991 } Got keys from plugin meta data ("minimalegl") QFactoryLoader::QFactoryLoader() looking at "/home/bak/anaconda3/envs/pyrep/plugins/platforms/libqoffscreen.so" Found metadata in lib /home/bak/anaconda3/envs/pyrep/plugins/platforms/libqoffscreen.so, metadata= { "IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3", "MetaData": { "Keys": [ "offscreen" ] }, "className": "QOffscreenIntegrationPlugin", "debug": false, "version": 329991 } Got keys from plugin meta data ("offscreen") QFactoryLoader::QFactoryLoader() looking at "/home/bak/anaconda3/envs/pyrep/plugins/platforms/libqvnc.so" Found metadata in lib /home/bak/anaconda3/envs/pyrep/plugins/platforms/libqvnc.so, metadata= { "IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3", "MetaData": { "Keys": [ "vnc" ] }, "className": "QVncIntegrationPlugin", "debug": false, "version": 329991 } Got keys from plugin meta data ("vnc") QFactoryLoader::QFactoryLoader() looking at "/home/bak/anaconda3/envs/pyrep/plugins/platforms/libqxcb.so" Found metadata in lib /home/bak/anaconda3/envs/pyrep/plugins/platforms/libqxcb.so, metadata= 
{ "IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3", "MetaData": { "Keys": [ "xcb" ] }, "className": "QXcbIntegrationPlugin", "debug": false, "version": 329991 } Got keys from plugin meta data ("xcb") QFactoryLoader::QFactoryLoader() checking directory path "/home/bak/anaconda3/envs/pyrep/bin/platforms" ... Cannot load library /home/bak/anaconda3/envs/pyrep/plugins/platforms/libqxcb.so: (/home/bak/anaconda3/envs/pyrep/plugins/platforms/../../lib/libQt5XcbQpa.so.5: symbol _ZN11QFontEngine14bitmapForGlyphEj6QFixedRK10QTransform version Qt_5_PRIVATE_API not defined in file libQt5Gui.so.5 with link time reference) QLibraryPrivate::loadPlugin failed on "/home/bak/anaconda3/envs/pyrep/plugins/platforms/libqxcb.so" : "Cannot load library /home/bak/anaconda3/envs/pyrep/plugins/platforms/libqxcb.so: (/home/bak/anaconda3/envs/pyrep/plugins/platforms/../../lib/libQt5XcbQpa.so.5: symbol _ZN11QFontEngine14bitmapForGlyphEj6QFixedRK10QTransform version Qt_5_PRIVATE_API not defined in file libQt5Gui.so.5 with link time reference)" qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/home/bak/anaconda3/envs/pyrep/lib/python3.7/site-packages/cv2/qt/plugins" even though it was found. This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem. Available platform plugins are: eglfs, minimal, minimalegl, offscreen, vnc, xcb. If you want to see the full error in detail, please see here. '/pyrep/' in above is my anaconda environment name. It seems to be caused by qt. But I cannot fix this problem. What should I do to solve this problem?
Finally, I found the solution! https://github.com/stepjam/PyRep/issues/76 The problem was loading Qt from the conda environment. When I typed qmake -version, the terminal showed me the Qt from Anaconda. After I followed the first answer at the above URL, I was able to fix the problem.
11
4
63,901,755
2020-9-15
https://stackoverflow.com/questions/63901755/customizing-the-flask-admin-row-actions
I want to add another button next to the edit and delete icons on the Flask-Admin list view. In addition, I want to send that row's data to a route as a POST request. I know that I have to edit the admin/model/list.html template, but I don't understand how to add this functionality. Can you provide any guidance?
You need to define custom action buttons for your view. This process is not described in the Flask-Admin tutorial but it is mentioned in the API description. POST method If you need to create a button for a POST method you should implement a jinja2 macro like this delete_row action. It may look like this (I named the file "custom_row_actions.html"): {% macro copy_row(action, row_id, row) %} <form class="icon" method="POST" action="{{ get_url('.copy_view') }}"> <input type="hidden" name="row_id" value="{{ get_pk_value(row) }}"/> <button type="submit" title="{{ _gettext('Copy record') }}"> <span class="glyphicon glyphicon-copy"></span> </button> </form> {% endmacro %} Then you create a template for your list of records and import the macro library in it (I named it "my_list.html"): {% extends 'admin/model/list.html' %} {% import 'custom_row_actions.html' as custom_row_actions with context %} After that you have to make a couple of changes in your view: from flask_admin import expose from flask_admin.contrib.sqla.view import ModelView from flask_admin.model.template import TemplateLinkRowAction class MyView(ModelView): list_template = "my_list.html" # Override the default template column_extra_row_actions = [ # Add a new action button TemplateLinkRowAction("custom_row_actions.copy_row", "Copy Record"), ] @expose("/copy", methods=("POST",)) def copy_view(self): """The method you need to call""" GET method Creating a button for a GET method is much simpler. You don't need to override templates, just add an action to your view: from flask_admin import expose from flask_admin.contrib.sqla.view import ModelView from flask_admin.model.template import EndpointLinkRowAction class MyView(ModelView): column_extra_row_actions = [ # Add a new action button EndpointLinkRowAction("glyphicon glyphicon-copy", ".copy_view"), ] @expose("/copy", methods=("GET",)) def copy_view(self): """The method you need to call""" Glyphicons Glyphicons is the icon library which is bundled with the Bootstrap v3 library which is used by the Flask-Admin. You can use it if you chose this Bootstrap version on Flask-Admin initialization: from flask_admin import Admin admin = Admin(template_mode="bootstrap3") You can look at the available icons in the Bootsrap v3 documentation.
10
16
63,877,261
2020-9-14
https://stackoverflow.com/questions/63877261/how-to-group-a-dataframe-by-4-time-periods-and-key
I have a dataset that looks something like this: date area_key total_units timeatend starthour timedifference vps 2020-01-15 08:22:39 0 9603 2020-01-15 16:32:39 8 29400.0 0.32663265306122446 2020-01-13 08:22:07 0 10273 2020-01-13 16:25:08 8 28981.0 0.35447362064801075 2020-01-23 07:16:55 3 5175 2020-01-23 14:32:44 7 26149.0 0.19790431756472524 2020-01-15 07:00:06 1 838 2020-01-15 07:46:29 7 2783.0 0.3011139058569889 2020-01-15 08:16:01 1 5840 2020-01-15 12:41:16 8 15915.0 0.3669494187873076 That is then being computed into this to create a kmeans cluster. def cluster_Volume(inputData): start_tot = time.time() Volume = inputData.groupby(['Startdtm'])['vehiclespersec'].sum().unstack() ## 4 Clusters model = clstr.MiniBatchKMeans(n_clusters=5) model.fit(Volume.fillna(0)) Volume['kmeans_4'] = model.predict(Volume.fillna(0)) end_tot = time.time() print("Completed in " + str(end_tot-start_tot)) ## 8 Clusters start_tot = time.time() model = clstr.KMeans(n_clusters=8) model.fit(Volume.fillna(0)) Volume['kmeans_8'] = model.predict(Volume.fillna(0)) end_tot = time.time() print("Completed in " + str(end_tot-start_tot)) ## Looking at hourly distribution. start_tot = time.time() Volume_Hourly = Volume.reset_index().set_index(['Startdtm']) Volume_Hourly['hour'] = Volume_Hourly.index.hour end_tot = time.time() print("Completed in " + str(end_tot-start_tot)) return Volume, Volume_Hourly What I want to do is to make those clusters relate to both time periods and keys. With the time periods - 7 am to 10 am, and 4 pm to 6 pm, and 12 pm to 2 pm, with 6 pm to 12 am, 12 am to 7 am, and 10 am to 12 pm, 2 pm to 4 pm as other time periods. And with the keys - showing how each cluster differently in a programmatic way. Desired Result The desired result will have a table similar to below, but feel free to develop it in the best way you can think of. Time period meaning, say 1 would be before 6 am, 2 - 6 am to 9 am, 3 - 9 to 11, 4 - 11 to 14, etc.. but feel free to change it as suits - just my thoughts I've tried a few approaches to this using groupby, but it doesn't seem to work super well - would love some guidance here. Amazing response, wow cheers. Made me realise I was approaching this incorrectly, but still super valuable for fixing my approach. This data is the individual occurences as an example. DateTimeStamp VS_ID VS_Summary_Id Hostname Vehicle_speed Lane Length 11/01/2019 8:22 1 1 place_uno 65 2 71 11/01/2019 8:22 2 1 place_uno 59 1 375 11/01/2019 8:22 3 1 place_uno 59 1 389 11/01/2019 8:22 4 1 place_duo 59 1 832 11/01/2019 8:22 5 1 place_duo 52 1 409 To get volumes I need to aggregate over time in smaller volume blocks (15 second or 15 minute, will post code below). Then essentially same idea. An additional, and greedy question, would be - how would i interpolate speed into this measurement? i.e., large amounts of volumes, but low speeds, would be good to also cater for. 
Awesome, amazing stuff with those volume calculations per 15 seconds. I want to do the clustering ON those, as the summary table is way too broad-based, but I think with what has been linked it should be fine to do that, even if I rejig it a bit. I had realised the summary table was too broad, so the time clustering wasn't going to work unless I used this data; the k-means over time is best with the new data, and the average speed of each cluster allows speed to be considered, if that makes sense. Thanks again for the amazing help. I will be fitting this into the code below, which I forgot to link earlier; it could help make the question more specific and valuable. Thanks guys!
First Data (Note: the further parts relates to the updates) Data is very limited, probably due to the complexity to simplify it, so I shall make some assumptions and write this as generic as possible, so you can customize it fast to your needs. Assumptions: You want to group by hours-windows ("hour_code") the data (therefore parameterized what data is grouped by as group_divide_set_by_column) For each hours-windows ("hour_code"), you want to cluster by location using K means algorithm Doing so allows you to investigate the clusters of vehicles for each hour-window separately, and learn what clustered areas are more active and need attention. Notes: Location column (although noted) is missing and required for the K-means algorithm (I used HostName_key but it's just a dummy so code would run, it not necessarily meaningful). Generally speaking, the K-means algorithm is for spaces with euclidean distance (Mathematically, this means partitioning the observations according to the Voronoi diagram generated by the means.) Here are some sources for k-means Python examples that are useful to further customize it: 1 2 3 4. Code: Let's define a function, which given a dataframe group-divides it by a given column, group_divide_set_by_column. This would allow us to group-divide by 'hour_code', and then cluster by location. def create_clusters_by_group(df, group_divide_set_by_column='hour_code', clusters_number_list=[2, 3]): # Divide et by hours divide_df_by_hours(df) lst_df_by_groups = {f'{group_divide_set_by_column}_{i}': d for i, (g, d) in enumerate(df.groupby(group_divide_set_by_column))} # For each group dataframe for group_df_name, group_df in lst_df_by_groups.items(): # Divide to desired amount of clusters for clusters_number in clusters_number_list: create_cluster(group_df, clusters_number) # Setting column types set_colum_types(group_df) return lst_df_by_groups The #1 function would use another function to convert hour to hour codes, in similar to how you phrased it: Time period meaning, say 1 would be before 6 am, 2 - 6 am to 9 am, 3 - 9 to 11, 4 - 11 to 14, etc.. 
def divide_df_by_hours(df): def get_hour_code(h, start_threshold=6, end_threshold=21, windows=3): """ Divide hours to groups: Hours: 1-5 => 1 6-8 => 2 9-11 => 3 12-14 => 4 15-17 => 5 18-20 => 6 21+ => 7 """ if h < start_threshold: return 1 elif h >= end_threshold: return (end_threshold // windows) return h // windows df['hour_code'] = df['starthour'].apply(lambda h : get_hour_code(h)) Moreover the #1 function would use the set_colum_types function that would convert columns to their matching types: def set_colum_types(df): types_dict = { 'Startdtm': 'datetime64[ns, Australia/Melbourne]', 'HostName_key': 'category', 'Totalvehicles': 'int32', 'Enddtm': 'datetime64[ns, Australia/Melbourne]', 'starthour': 'int32', 'timedelta': 'float', 'vehiclespersec': 'float', } for col, col_type in types_dict.items(): df[col] = df[col].astype(col_type) A dedicated timeit decorator is used to measure the time for each clustering, so boilerplate code is reduced Whole Code: import functools import pandas as pd from timeit import default_timer as timer import sklearn from sklearn.cluster import KMeans def timeit(func): @functools.wraps(func) def newfunc(*args, **kwargs): startTime = timer() func(*args, **kwargs) elapsedTime = timer() - startTime print('function [{}] finished in {} ms'.format( func.__name__, int(elapsedTime * 1000))) return newfunc def set_colum_types(df): types_dict = { 'Startdtm': 'datetime64[ns, Australia/Melbourne]', 'HostName_key': 'category', 'Totalvehicles': 'int32', 'Enddtm': 'datetime64[ns, Australia/Melbourne]', 'starthour': 'int32', 'timedelta': 'float', 'vehiclespersec': 'float', } for col, col_type in types_dict.items(): df[col] = df[col].astype(col_type) @timeit def create_cluster(df, clusters_number): # Create K-Means model model = KMeans(n_clusters=clusters_number, max_iter=600, random_state=9) # Fetch location # NOTE: Should be a *real* location, used another column as dummy location_df = df[['HostName_key']] kmeans = model.fit(location_df) # Divide to clusters df[f'kmeans_{clusters_number}'] = kmeans.labels_ def divide_df_by_hours(df): def get_hour_code(h, start_threshold=6, end_threshold=21, windows=3): """ Divide hours to groups: Hours: 1-5 => 1 6-8 => 2 9-11 => 3 12-14 => 4 15-17 => 5 18-20 => 6 21+ => 7 """ if h < start_threshold: return 1 elif h >= end_threshold: return (end_threshold // windows) return h // windows df['hour_code'] = df['starthour'].apply(lambda h : get_hour_code(h)) def create_clusters_by_group(df, group_divide_set_by_column='hour_code', clusters_number_list=[2, 3]): # Divide et by hours divide_df_by_hours(df) lst_df_by_groups = {f'{group_divide_set_by_column}_{i}': d for i, (g, d) in enumerate(df.groupby(group_divide_set_by_column))} # For each group dataframe for group_df_name, group_df in lst_df_by_groups.items(): # Divide to desired amount of clusters for clusters_number in clusters_number_list: create_cluster(group_df, clusters_number) # Setting column types set_colum_types(group_df) return lst_df_by_groups # Load data df = pd.read_csv('data.csv') # Print data print(df) # Create clusters lst_df_by_groups = create_clusters_by_group(df) # For each hostname-key dataframe for group_df_name, group_df in lst_df_by_groups.items(): print(f'Group {group_df_name} dataframe:') print(group_df) Example output: Startdtm HostName_key ... timedelta vehiclespersec 0 2020-01-15 08:22:39 0 ... 29400.0 0.326633 1 2020-01-13 08:22:07 2 ... 28981.0 0.354474 2 2020-01-23 07:16:55 3 ... 26149.0 0.197904 3 2020-01-15 07:00:06 4 ... 2783.0 0.301114 4 2020-01-15 08:16:01 1 ... 
15915.0 0.366949 5 2020-01-16 08:22:39 2 ... 29400.0 0.326633 6 2020-01-14 08:22:07 2 ... 28981.0 0.354479 7 2020-01-25 07:16:55 4 ... 26149.0 0.197904 8 2020-01-17 07:00:06 1 ... 2783.0 0.301114 9 2020-01-18 08:16:01 1 ... 15915.0 0.366949 [10 rows x 7 columns] function [create_cluster] finished in 10 ms function [create_cluster] finished in 11 ms function [create_cluster] finished in 10 ms function [create_cluster] finished in 11 ms function [create_cluster] finished in 10 ms function [create_cluster] finished in 11 ms Group hour_code_0 dataframe: Startdtm HostName_key ... kmeans_2 kmeans_3 0 2020-01-15 08:22:39+11:00 0 ... 1 1 1 2020-01-13 08:22:07+11:00 2 ... 0 0 2 2020-01-23 07:16:55+11:00 3 ... 0 2 [3 rows x 10 columns] Group hour_code_1 dataframe: Startdtm HostName_key ... kmeans_2 kmeans_3 3 2020-01-15 07:00:06+11:00 4 ... 1 1 4 2020-01-15 08:16:01+11:00 1 ... 0 0 5 2020-01-16 08:22:39+11:00 2 ... 0 2 [3 rows x 10 columns] Group hour_code_2 dataframe: Startdtm HostName_key ... kmeans_2 kmeans_3 6 2020-01-14 08:22:07+11:00 2 ... 1 2 7 2020-01-25 07:16:55+11:00 4 ... 0 0 8 2020-01-17 07:00:06+11:00 1 ... 1 1 9 2020-01-18 08:16:01+11:00 1 ... 1 1 [4 rows x 10 columns] Update : Second Data So, this time will make things a little different, as the updated objective is to understand how many vehicles are at each place and their speed. Again, things are written with great care for generically for the ease of adaptation. First, we divide the data set to groups based on, their location which is inferred by Hostname (parameterized for customization as dividing_colum). def divide_df_by_column(df, dividing_colum='Hostname'): df_by_groups = {f'{dividing_colum}_{g}': d for i, (g, d) in enumerate(df.groupby(dividing_colum))} return df_by_groups Now, we arrange the data for each hostname (dividing_colum) group: def arrange_groups_df(lst_df_by_groups): df_by_intervaled_group = dict() # For each group dataframe for group_df_name, group_df in lst_df_by_groups.items(): df_by_intervaled_group[group_df_name] = arrange_data(group_df) return df_by_intervaled_group 2.1. We group by intervals of 15 minutes, and after each hostname area data is divided into time intervals, we aggregate the amount of vehicles to column volume and investigate the average speed to column average_speed. def group_by_interval(df): df[DATE_COLUMN_NAME] = pd.to_datetime(df[DATE_COLUMN_NAME]) intervaled_df = df.groupby([pd.Grouper(key=DATE_COLUMN_NAME, freq=INTERVAL_WINDOW)]).agg({'Vehicle_speed' : 'mean', 'Hostname' : 'count'}).rename(columns={'Vehicle_speed' : 'average_speed', 'Hostname' : 'volume'}) return intervaled_df def arrange_data(df): df = group_by_interval(df) return df The end result for stage #2 is that each hostname data is divided into time windows of 15 minutes, and we know how many vehicles have passed each time and what is their average speed. By this, we achieve the objective: An additional, and greedy question, would be - how would i interpolate speed into this measurement? i.e., large amounts of volumes, but low speeds, would be good to also cater for. Again, all costumizable using [TIME_INTERVAL_COLUMN_NAME, DATE_COLUMN_NAME, INTERVAL_WINDOW]. 
The whole code: import functools import numpy import pandas as pd TIME_INTERVAL_COLUMN_NAME = 'time_interval' DATE_COLUMN_NAME = 'DateTimeStamp' INTERVAL_WINDOW = '15Min' def round_time(df): # Setting date_column_name to be of dateime df[DATE_COLUMN_NAME] = pd.to_datetime(df[DATE_COLUMN_NAME]) # Grouping by interval df[TIME_INTERVAL_COLUMN_NAME] = df[DATE_COLUMN_NAME].dt.round(INTERVAL_WINDOW) def group_by_interval(df): df[DATE_COLUMN_NAME] = pd.to_datetime(df[DATE_COLUMN_NAME]) intervaled_df = df.groupby([pd.Grouper(key=DATE_COLUMN_NAME, freq=INTERVAL_WINDOW)]).agg({'Vehicle_speed' : 'mean', 'Hostname' : 'count'}).rename(columns={'Vehicle_speed' : 'average_speed', 'Hostname' : 'volume'}) return intervaled_df def arrange_data(df): df = group_by_interval(df) return df def divide_df_by_column(df, dividing_colum='Hostname'): df_by_groups = {f'{dividing_colum}_{g}': d for i, (g, d) in enumerate(df.groupby(dividing_colum))} return df_by_groups def arrange_groups_df(lst_df_by_groups): df_by_intervaled_group = dict() # For each group dataframe for group_df_name, group_df in lst_df_by_groups.items(): df_by_intervaled_group[group_df_name] = arrange_data(group_df) return df_by_intervaled_group # Load data df = pd.read_csv('data2.csv') # Print data print(df) # Divide by column df_by_groups = divide_df_by_column(df) # Arrange data for each group df_by_intervaled_group = arrange_groups_df(df_by_groups) # For each hostname-key dataframe for group_df_name, intervaled_group_df in df_by_intervaled_group.items(): print(f'Group {group_df_name} dataframe:') print(intervaled_group_df) Example Output: We can now get valuable results from measuring the volumes (amount of vehicles) and average speed, for each individual hostname area. DateTimeStamp VS_ID VS_Summary_Id Hostname Vehicle_speed Lane Length 0 11/01/2019 8:22 1 1 place_uno 65 2 71 1 11/01/2019 8:23 2 1 place_uno 59 1 375 2 11/01/2019 8:25 3 1 place_uno 59 1 389 3 11/01/2019 8:26 4 1 place_duo 59 1 832 4 11/01/2019 8:40 5 1 place_duo 52 1 409 Group Hostname_place_duo dataframe: average_speed volume DateTimeStamp 2019-11-01 08:15:00 59 1 2019-11-01 08:30:00 52 1 Group Hostname_place_uno dataframe: average_speed volume DateTimeStamp 2019-11-01 08:15:00 61 3 Appendix Created also a round_time function, which allows to round to time intervals, without grouping: def round_time(df): # Setting date_column_name to be of dateime df[DATE_COLUMN_NAME] = pd.to_datetime(df[DATE_COLUMN_NAME]) # Grouping by interval df[TIME_INTERVAL_COLUMN_NAME] = df[DATE_COLUMN_NAME].dt.round(INTERVAL_WINDOW) Third Update So this time we want to reduce the number of rows in the result. We change the way we group the data, not only based on interval but also for the day in week, the result would allow us investigate in how traffic behaves for each day of the week and it's 15-minutes intervals. The group_by_interval function is now changed to group on the concise inteval thus, will be called group_by_concised_interval. We shall call the combination of [day-in-week, hour-minute] as "consice interval", again this is configurable with CONCISE_INTERVAL_FORMAT. def group_by_concised_interval(df): df[DATE_COLUMN_NAME] = pd.to_datetime(df[DATE_COLUMN_NAME]) # Rounding time round_time(df) # Adding concised interval add_consice_interval_columns(df) intervaled_df = df.groupby([TIME_INTERVAL_CONCISE_COLUMN_NAME]).agg({'Vehicle_speed' : 'mean', 'Hostname' : 'count'}).rename(columns={'Vehicle_speed' : 'average_speed', 'Hostname' : 'volume'}) return intervaled_df 1.1. 
The group_by_concised_interval first rounds time to the given 15-minutes interval (configurable via INTERVAL_WINDOW) using the round_time method. 1.2. After creating the time intervals for each date, we apply the add_consice_interval_columns function that given the rounded to inteval time stamp, extracts the concise form. def add_consice_interval_columns(df): # Adding columns for time interval in day-in-week and hour-minute resolution df[TIME_INTERVAL_CONCISE_COLUMN_NAME] = df[TIME_INTERVAL_COLUMN_NAME].apply(lambda x: x.strftime(CONCISE_INTERVAL_FORMAT)) The whole code is: import functools import numpy import pandas as pd TIME_INTERVAL_COLUMN_NAME = 'time_interval' TIME_INTERVAL_CONCISE_COLUMN_NAME = 'time_interval_concise' DATE_COLUMN_NAME = 'DateTimeStamp' INTERVAL_WINDOW = '15Min' CONCISE_INTERVAL_FORMAT = '%A %H:%M' def round_time(df): # Setting date_column_name to be of dateime df[DATE_COLUMN_NAME] = pd.to_datetime(df[DATE_COLUMN_NAME]) # Grouping by interval df[TIME_INTERVAL_COLUMN_NAME] = df[DATE_COLUMN_NAME].dt.round(INTERVAL_WINDOW) def add_consice_interval_columns(df): # Adding columns for time interval in day-in-week and hour-minute resolution df[TIME_INTERVAL_CONCISE_COLUMN_NAME] = df[TIME_INTERVAL_COLUMN_NAME].apply(lambda x: x.strftime(CONCISE_INTERVAL_FORMAT)) def group_by_concised_interval(df): df[DATE_COLUMN_NAME] = pd.to_datetime(df[DATE_COLUMN_NAME]) # Rounding time round_time(df) # Adding concised interval add_consice_interval_columns(df) intervaled_df = df.groupby([TIME_INTERVAL_CONCISE_COLUMN_NAME]).agg({'Vehicle_speed' : 'mean', 'Hostname' : 'count'}).rename(columns={'Vehicle_speed' : 'average_speed', 'Hostname' : 'volume'}) return intervaled_df def arrange_data(df): df = group_by_concised_interval(df) return df def divide_df_by_column(df, dividing_colum='Hostname'): df_by_groups = {f'{dividing_colum}_{g}': d for i, (g, d) in enumerate(df.groupby(dividing_colum))} return df_by_groups def arrange_groups_df(lst_df_by_groups): df_by_intervaled_group = dict() # For each group dataframe for group_df_name, group_df in lst_df_by_groups.items(): df_by_intervaled_group[group_df_name] = arrange_data(group_df) return df_by_intervaled_group # Load data df = pd.read_csv('data2.csv') # Print data print(df) # Divide by column df_by_groups = divide_df_by_column(df) # Arrange data for each group df_by_intervaled_group = arrange_groups_df(df_by_groups) # For each hostname-key dataframe for group_df_name, intervaled_group_df in df_by_intervaled_group.items(): print(f'Group {group_df_name} dataframe:') print(intervaled_group_df) Output: Group Hostname_place_duo dataframe: average_speed volume time_interval_concise Friday 08:30 59 1 Friday 08:45 52 1 Group Hostname_place_uno dataframe: average_speed volume time_interval_concise Friday 08:15 65 1 Friday 08:30 59 2 So now we can easily figure out how traffic behaves in each day of the week at all available time intervals.
7
5
63,880,119
2020-9-14
https://stackoverflow.com/questions/63880119/numpy-create-array-of-the-max-of-consecutive-pairs-in-another-array
I have a numpy array: A = np.array([8, 2, 33, 4, 3, 6]) What I want is to create another array B where each element is the max of each pair of consecutive elements in A, so I get: B = np.array([8, 33, 33, 4, 6]) Any ideas on how to implement this? And how would it be implemented for more than 2 elements (the same thing, but for n consecutive elements)? Edit: The answers gave me a way to solve this question, but for the n-size window case, is there a more efficient way that does not require loops? Edit2: It turns out that the question is equivalent to asking how to perform 1d max-pooling of a list with a window of size n. Does anyone know how to implement this efficiently?
A loop-free solution is to use max on the windows created by skimage.util.view_as_windows: list(map(max, view_as_windows(A, (2,)))) [8, 33, 33, 4, 6] Copy/pastable example: import numpy as np from skimage.util import view_as_windows A = np.array([8, 2, 33, 4, 3, 6]) list(map(max, view_as_windows(A, (2,))))
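If you'd rather stay inside NumPy for the general n-sized window case, here is a sketch using sliding_window_view (available in NumPy 1.20+; this is an addition for illustration, not part of the original answer):

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

A = np.array([8, 2, 33, 4, 3, 6])
n = 2  # window size; the same call works for any n <= len(A)
B = sliding_window_view(A, n).max(axis=1)  # vectorized 1d max-pooling with stride 1
print(B)  # [ 8 33 33  4  6]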
15
8
63,813,922
2020-9-9
https://stackoverflow.com/questions/63813922/what-is-the-difference-between-aiosqlite-and-sqlite-in-multi-threaded-mode
I'm trying to asynchronously process multiple files, and processing each file requires some reads and writes to an SQLite database. I've been looking at some options, and I found the aiosqlite module here. However, I was reading the SQLite documentation here, and it says that it supports multi-threaded mode. In fact, the default mode is "serialized", which means it "can be safely used by multiple threads with no restriction." I don't understand what the difference is. The aiosqlite documentation says: aiosqlite allows interaction with SQLite databases on the main AsyncIO event loop without blocking execution of other coroutines while waiting for queries or data fetches. It does this by using a single, shared thread per connection. I get that there is a difference between aiosqlite and the "multi-threaded" mode on sqlite because the multi-threaded mode requires only one connection per thread, whereas in aiosqlite, you can reuse this single connection across multiple threads. But isn't this the same as serialized mode, where it can be "used by multiple threads with no restriction"? Edit: My question right now is "Is my current understanding below correct?": 1. Sqlite in "serialized" mode can be used by multiple threads at one time, so this would be used if I used the threading module in python and spawned multiple threads. Here I have the options of either using a separate connection per thread or sharing the connection across multiple threads. 2. aiosqlite is used with asyncio. So since asyncio has multiple coroutines that share one thread, aiosqlite also works with one thread. So I create one connection that I share among all the coroutines. 3. Since aiosqlite is basically a wrapper for sqlite, I can combine the functionality of 1 and 2. So I can have multiple threads where each thread has an asyncio event loop with multiple coroutines. So the basic sqlite functionality will handle the multi-threading and aiosqlite will handle the coroutines.
First of all, about threads: "Sqlite ... can be used by multiple threads at one time" - because of the GIL it will never literally be at the same time: Python threads run concurrently (not in parallel), and with the GIL you also don't know when a thread will be interrupted. asyncio, on the other hand, lets you switch between coroutines "manually" (at await points) and while waiting for IO operations (like database communication). Let me explain the differences between the modes: Single-thread - all mutexes are disabled, so SQLite is unsafe to use in more than one thread at once. Multi-thread - safe to use from multiple threads, as long as no single database connection is used by two threads at the same time. Serialized - safe to use from multiple threads with no restriction; access to each connection is serialized internally with mutexes. Answering the questions in the update: Yes Sqlite in "serialized" mode can be used by multiple threads at one time, so this would be used if I used the threading module in python and spawned multiple threads. Here I have the options of either using a separate connection per thread or sharing the connection across multiple threads. Yes, it will share a single connection between them. aiosqlite is used with asyncio. So since asyncio has multiple coroutines that share one thread, aiosqlite also works with one thread. So I create one connection that I share among all the coroutines Yes. Since aiosqlite is basically a wrapper for sqlite, I can combine the functionality of 1 and 2. So I can have multiple threads where each thread has an asyncio event loop with multiple coroutines. So the basic sqlite functionality will handle the multi-threading and the aiosqlite will handle the coroutines.
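A minimal sketch of the aiosqlite case, sharing one connection between several coroutines on one event loop (the example.db file and items table here are made up for illustration):
import asyncio
import aiosqlite

async def insert_row(db, value):
    # Coroutines share the connection; aiosqlite funnels the actual
    # SQLite calls through its single background thread
    await db.execute("INSERT INTO items (value) VALUES (?)", (value,))

async def main():
    async with aiosqlite.connect("example.db") as db:
        await db.execute("CREATE TABLE IF NOT EXISTS items (value INTEGER)")
        await asyncio.gather(*(insert_row(db, i) for i in range(5)))
        await db.commit()
        async with db.execute("SELECT COUNT(*) FROM items") as cursor:
            print(await cursor.fetchone())

asyncio.run(main())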
11
7
63,881,231
2020-9-14
https://stackoverflow.com/questions/63881231/prefect-modulenotfounderror-when-running-from-ui
I'm following the Prefect tutorial available at: https://docs.prefect.io/core/tutorial/01-etl-before-prefect.html. The code can be downloaded from the git: https://github.com/PrefectHQ/prefect/tree/master/examples/tutorial The tutorials have a dependency on aircraftlib, which is a directory under tutorials. I can execute the Flows through the terminal with: python 02_etl_... and it executes perfectly! I've created a project, and added the Flow to that project. Through the Prefect Server UI I can run the Flow, but it fails with the error message: State Message: Failed to load and execute Flow's environment: ModuleNotFoundError("No module named 'aircraftlib'") How should I handle the dependency when executing the Flows through the Prefect Server UI?
This depends partially on the type of Flow Storage and Agent you are using. Since you are running with Prefect Server, I assume you are using Local Storage + a Local Agent; in this case, you need to make sure the aircraftlib directory is on your local importable Python PATH. There are a few ways of doing this: (1) run your Prefect Agent in the tutorial directory; your Local Agent's path will then be inherited by the flows it submits. (2) Manually add the tutorial/ directory to your global python path (I don't recommend this). (3) Add the tutorial/ directory to your Agent's path with the -p CLI flag; for example: prefect agent start -p ~/Developer/prefect/examples/tutorial (this is the approach I recommend).
10
17
63,906,805
2020-9-15
https://stackoverflow.com/questions/63906805/why-is-self-not-type-hinted-in-python
I've been looking into type hinting my code but noticed that Python programmers typically do not type hint self in their programs Even when I look at the docs, they do not seem to type hint self, see here. This is from version 3.10 post forward declarations def __init__(self, value: T, name: str, logger: Logger) -> None: I can understand why this is an issue before type annotations were introduced in 3.7 with Forward declarations More info here and here The reason this seems useful to me is mypy seems able to catch bugs with this problem example: from __future__ import annotations class Simple(object): def __init__(self: Simple): print(self.x) would return this from mypy mypy test.py test.py:5: error: "Simple" has no attribute "x" Found 1 error in 1 file (checked 1 source file) Which if you remove the type from self becomes Success: no issues found in 1 source file Is there a reason that self is not annotated or is this only convention? Are there trade offs I'm missing or is my annotation of self wrong for some reason?
mypy usually handles the type of self without needing an explicit annotation. You're running into a different problem - a method with no argument or return type annotations is not type-checked at all. For a method with no non-self arguments, you can avoid this by annotating self, but you can also avoid it by annotating the return type.
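For example (a sketch based on the question's own class), annotating only the return type is enough for mypy to check the body, and it infers the type of self by itself:
class Simple:
    def __init__(self) -> None:   # no annotation on self needed
        print(self.x)             # mypy: error: "Simple" has no attribute "x"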
7
9
63,901,790
2020-9-15
https://stackoverflow.com/questions/63901790/celery-how-to-get-task-name-by-task-id
Celery - bottom line: I want to get the task name by using the task id (I don't have a task object) Suppose I have this code: res = chain(add.s(4,5), add.s(10)).delay() cache.save_task_id(res.task_id) And then in some other place: task_id = cache.get_task_ids()[0] task_name = get_task_name_by_id(task_id) #how? print(f'Some information about the task status of: {task_name}') I know I can get the task name if I have a task object, like here: celery: get function name by task id?. But I don't have a task object (perhaps it can be created by the task_id or by some other way? I didn't see anything related to that in the docs). In addition, I don't want to save in the cache the task name. (Suppose I have a very long chain/other celery primitives, I don't want to save all their names/task_ids. Just the last task_id should be enough to get all the information regarding all the tasks, using .parents, etc) I looked at all the relevant methods of AsyncResult and AsyncResult.Backend objects. The only thing that seemed relevant is backend.get_task_meta(task_id), but that doesn't contain the task name. Thanks in advance PS: AsyncResult.name always returns None: result = AsyncResult(task_id, app=celery_app) result.name #Returns None result.args #Also returns None
Finally found an answer. For anyone wondering: You can solve this by enabling result_extended = True in your celery config. Then: result = AsyncResult(task_id, app=celery_app) result.task_name #tasks.add
10
9
63,903,668
2020-9-15
https://stackoverflow.com/questions/63903668/beautiful-soup-extract-everything-between-two-tags
I am using BeautifulSoup to extract data from HTML files. I want to get all of the information between two tags. This means that if I have an HTML section like this: <h1></h1> Text <i>here</i> has no tag <div>This is in a div</div> <h1></h1> Then if I wanted all of the information between the first h1 and the second h1, the output would look like this: Text <i>here</i> has no tag <div>This is in a div</div> I've tried nextsibling loops, but there always seems to be a catch. Is there a command in beautifulsoup that simply pulls everything (Text, newlines, divs, special characters) that is between element "A" and element "B"?
One solution is to .extract() all content in front of first <h1> and after second <h1> tag: from bs4 import BeautifulSoup html_doc = ''' This I <b>don't</b> want <h1></h1> Text <i>here</i> has no tag <div>This is in a div</div> <h1></h1> This I <b>don't</b> want too ''' soup = BeautifulSoup(html_doc, 'html.parser') for c in list(soup.contents): if c is soup.h1 or c.find_previous('h1') is soup.h1: continue c.extract() for h1 in soup.select('h1'): h1.extract() print(soup) Prints: Text <i>here</i> has no tag <div>This is in a div</div>
7
4
63,892,211
2020-9-14
https://stackoverflow.com/questions/63892211/do-i-need-apt-get-update-and-upgrade-in-my-python-dockerfile
I have a very minimalist Dockerfile for my production Django application: FROM python:3.8 ENV PYTHONDONTWRITEBYTECODE 1 ENV PYTHONUNBUFFERED 1 RUN apt-get update && apt-get -y upgrade WORKDIR /app COPY requirements.txt ./ RUN pip install --upgrade pip && \ pip install -r requirements.txt COPY . . EXPOSE 8000 CMD [ "gunicorn", "api.wsgi:application", "--bind=0.0.0.0" ] Do I need to run apt-get update && apt-get -y upgrade? From my understanding, the two commands (1) download the latest listing of available packages and (2) upgrade already installed packages. Why does the official python docker image not do this already? If I don't need to run them in this minimalist Dockerfile, when do I need to run them? I've noticed they're commonly run when installing other packages.
The base Docker Hub Linux distribution images like ubuntu:18.04 actually update themselves fairly regularly: if you docker pull ubuntu:18.04, wait a week, and repeat it, you will get a newer image. You're somewhat dependent on intermediate images, like python:3.8, doing the same thing. It is unusual, but not unheard-of, to run apt-get update and similar "upgrade everything" commands in a Dockerfile; it is more common to either assume the base image is up-to-date already, or to have specific provenance requirements and build everything from scratch on top of a base distribution image. If you're using a major-version or minor-version image tag (python:3, python:3.8, python:3.8-buster) you're probably okay, so long as you make sure to update your base image to something listed on the Docker Hub image page periodically. If you're forcing a specific patch level (python:3.8.4) you're at some risk, since these images stop getting updates once a newer upstream version is released. You do need to run apt-get update if you need to install any OS-level packages, and for Docker layer-caching reasons you should do it in the same RUN command as the corresponding apt-get install # Doesn't usually have an "upgrade" RUN apt-get update \ && DEBIAN_FRONTEND=noninteractive \ apt-get install --no-install-recommends --assume-yes \ a-package \ another-package \ more-packages
19
18
63,891,547
2020-9-14
https://stackoverflow.com/questions/63891547/how-to-connect-amls-to-adls-gen-2
I would like to register a dataset from ADLS Gen2 in my Azure Machine Learning workspace (azureml-core==1.12.0). Given that service principal information is not required in the Python SDK documentation for .register_azure_data_lake_gen2(), I successfully used the following code to register ADLS gen2 as a datastore: from azureml.core import Datastore adlsgen2_datastore_name = os.environ['adlsgen2_datastore_name'] account_name=os.environ['account_name'] # ADLS Gen2 account name file_system=os.environ['filesystem'] adlsgen2_datastore = Datastore.register_azure_data_lake_gen2( workspace=ws, datastore_name=adlsgen2_datastore_name, account_name=account_name, filesystem=file_system ) However, when I try to register a dataset, using from azureml.core import Dataset adls_ds = Datastore.get(ws, datastore_name=adlsgen2_datastore_name) data = Dataset.Tabular.from_delimited_files((adls_ds, 'folder/data.csv')) I get an error Cannot load any data from the specified path. Make sure the path is accessible and contains data. ScriptExecutionException was caused by StreamAccessException. StreamAccessException was caused by AuthenticationException. 'AdlsGen2-ReadHeaders' for '[REDACTED]' on storage failed with status code 'Forbidden' (This request is not authorized to perform this operation using this permission.), client request ID <CLIENT_REQUEST_ID>, request ID <REQUEST_ID>. Error message: [REDACTED] | session_id=<SESSION_ID> Do I need the to enable the service principal to get this to work? Using the ML Studio UI, it appears that the service principal is required even to register the datastore. Another issue I noticed is that AMLS is trying to access the dataset here: https://adls_gen2_account_name.**dfs**.core.windows.net/container/folder/data.csv whereas the actual URI in ADLS Gen2 is: https://adls_gen2_account_name.**blob**.core.windows.net/container/folder/data.csv
According to this documentation, you need to enable the service principal. 1. You need to register your application and grant the service principal Storage Blob Data Reader access. 2. Try this code:
adlsgen2_datastore = Datastore.register_azure_data_lake_gen2(workspace=ws,
                                                             datastore_name=adlsgen2_datastore_name,
                                                             account_name=account_name,
                                                             filesystem=file_system,
                                                             tenant_id=tenant_id,
                                                             client_id=client_id,
                                                             client_secret=client_secret
                                                             )

adls_ds = Datastore.get(ws, datastore_name=adlsgen2_datastore_name)
dataset = Dataset.Tabular.from_delimited_files((adls_ds,'sample.csv'))
print(dataset.to_pandas_dataframe())
Result: (screenshot of the returned dataframe omitted)
10
13
63,888,136
2020-9-14
https://stackoverflow.com/questions/63888136/checking-if-a-blob-exist-in-python-azure
Since the new update of azure-storage-blob, the blockblobservice is deprecated. How can I check that a blob exists? This answer is not working with the new version of azure-storage-blob: Faster Azure blob name search with python? I found this issue on GitHub: https://github.com/Azure/azure-sdk-for-python/issues/12744
Version 12.5.0 released on 2020-09-10 has now the exists method in the new SDK. For example, Sync: from azure.storage.blob import BlobClient blob = BlobClient.from_connection_string(conn_str="my_connection_string", container_name="mycontainer", blob_name="myblob") exists = blob.exists() print(exists) Async: import asyncio async def check(): from azure.storage.blob.aio import BlobClient blob = BlobClient.from_connection_string(conn_str="my_connection_string", container_name="mycontainer", blob_name="myblob") async with blob: exists = await blob.exists() print(exists)
8
16
63,889,627
2020-9-14
https://stackoverflow.com/questions/63889627/is-python-a-functional-programming-language-or-an-object-oriented-language
According to tutorialspoint.com, Python is a functional programming language. "Some of the popular functional programming languages include: Lisp, Python, Erlang, Haskell, Clojure, etc." https://www.tutorialspoint.com/functional_programming/functional_programming_introduction.htm But other sources say Python is an object-oriented programming language (you can create objects in Python). So is Python both? If so, if you're trying to program something that requires lots of mathematical computations, would Python still be a good choice (Since functional languages have concurrency, better syntax for math, and higher-level functions)?
Python, like many others, is a multi-paradigm language. You can use it as a fairly strictly imperative language, you can use it in a more object-oriented way, and you can use it in a more functional way. One important thing to note though is that functional is generally contrasted with imperative; object-oriented tends to exist at a different level and can be "layered over" a more imperative or a more functional core. However, Python is largely an imperative and object-oriented language: much of the builtins and standard library are really built around classes and objects, and it doesn't encourage the sort of thinking which functional languages generally drive the user to. In fact, going through the (fairly terrible) list the article you link to provides, Python lands on the OOP side of more or less all of them: it doesn't use immutable data much (it's not really possible to define immutable types in pure python, most of the collections are mutable, and the ones which are not are not designed for functional updates); its execution model is very imperative; it has limited support for parallel programming; its functions very much do have side-effects; flow control is absolutely not done using function calls; it's not a language which encourages recursion; and execution order is very relevant and quite strictly defined. Then again, much of the article is nonsense. If that is typical of that site, I'd recommend using something else. "If so, if you're trying to program something very mathematical and computational, would Python still be a good choice?" Well, Python is a pretty slow language in and of itself, but at the same time it has a very large and strong ecosystem of scientific libraries. It's probably not the premier language for abstract mathematics (it's rather bad at symbolic manipulation) but it tends to be a relatively good glue or prototyping tool. "As functional languages are more suitable for mathematical stuff" Not necessarily. But not knowing what you actually mean by "mathematical stuff" it's hard to judge. Do you mean symbolic manipulations? Statistics? Hard computations? Something else entirely?
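For a concrete feel of that "more functional way" (plain standard-library Python, not from the linked article):
from functools import reduce

nums = [1, 2, 3, 4, 5]

# Imperative style: mutate an accumulator in a loop
total = 0
for n in nums:
    if n % 2:
        total += n * n

# More functional style: no mutation, just expression composition
total_fp = reduce(lambda acc, x: acc + x, (n * n for n in nums if n % 2), 0)

assert total == total_fp == 35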
7
23
63,838,078
2020-9-10
https://stackoverflow.com/questions/63838078/plotting-networkx-graph-how-to-change-node-position-instead-of-resetting-every
I'm working on a project where I need to create a preview of nx.Graph() which allows to change position of nodes dragging them with a mouse. My current code is able to redraw whole figure immediately after each motion of mouse if it's clicked on specific node. However, this increases latency significantly. How can I update only artists needed, it is, clicked node, its label text and adjacent edges instead of refreshing every artist of plt.subplots()? Can I at least get a reference to all the artists that need to be relocated? I started from a standard way of displaying a graph in networkx: import networkx as nx import matplotlib.pyplot as plt import numpy as np import scipy.spatial def refresh(G): plt.axis((-4, 4, -1, 3)) nx.draw_networkx_labels(G, pos = nx.get_node_attributes(G, 'pos'), bbox = dict(fc="lightgreen", ec="black", boxstyle="square", lw=3)) nx.draw_networkx_edges(G, pos = nx.get_node_attributes(G, 'pos'), width=1.0, alpha=0.5) plt.show() nodes = np.array(['A', 'B', 'C', 'D', 'E', 'F', 'G']) edges = np.array([['A', 'B'], ['A', 'C'], ['B', 'D'], ['B', 'E'], ['C', 'F'], ['C', 'G']]) pos = np.array([[0, 0], [-2, 1], [2, 1], [-3, 2], [-1, 2], [1, 2], [3, 2]]) G = nx.Graph() # IG = InteractiveGraph(G) #>>>>> add this line in the next step G.add_nodes_from(nodes) G.add_edges_from(edges) nx.set_node_attributes(G, dict(zip(G.nodes(), pos.astype(float))), 'pos') fig, ax = plt.subplots() # fig.canvas.mpl_connect('button_press_event', lambda event: IG.on_press(event)) # fig.canvas.mpl_connect('motion_notify_event', lambda event: IG.on_motion(event)) # fig.canvas.mpl_connect('button_release_event', lambda event: IG.on_release(event)) refresh(G) # >>>>> replace it with IG.refresh() in the next step In the next step I changed 5 line of previous script (4 is uncommented and 1 replaced) plus used InteractiveGraph instance to make it interactive: class InteractiveGraph: def __init__(self, G, node_pressed=None, xydata=None): self.G = G self.node_pressed = node_pressed self.xydata = xydata def refresh(self, show=True): plt.clf() nx.draw_networkx_labels(self.G, pos = nx.get_node_attributes(self.G, 'pos'), bbox = dict(fc="lightgreen", ec="black", boxstyle="square", lw=3)) nx.draw_networkx_edges(self.G, pos = nx.get_node_attributes(self.G, 'pos'), width=1.0, alpha=0.5) plt.axis('off') plt.axis((-4, 4, -1, 3)) fig.patch.set_facecolor('white') if show: plt.show() def on_press(self, event): if event.inaxes is not None and len(self.G.nodes()) > 0: nodelist, coords = zip(*nx.get_node_attributes(self.G, 'pos').items()) kdtree = scipy.spatial.KDTree(coords) self.xydata = np.array([event.xdata, event.ydata]) close_idx = kdtree.query_ball_point(self.xydata, np.sqrt(0.1)) i = close_idx[0] self.node_pressed = nodelist[i] def on_motion(self, event): if event.inaxes is not None and self.node_pressed: new_xydata = np.array([event.xdata, event.ydata]) self.xydata += new_xydata - self.xydata #print(d_xy, self.G.nodes[self.node_pressed]) self.G.nodes[self.node_pressed]['pos'] = self.xydata self.refresh(show=False) event.canvas.draw() def on_release(self, event): self.node_pressed = None Related sources: Event handling Optimized removal of closest node
To expand on my comment above, in netgraph, your example can be reproduced with import numpy as np import matplotlib.pyplot as plt; plt.ion() import networkx as nx import netgraph nodes = np.array(['A', 'B', 'C', 'D', 'E', 'F', 'G']) edges = np.array([['A', 'B'], ['A', 'C'], ['B', 'D'], ['B', 'E'], ['C', 'F'], ['C', 'G']]) pos = np.array([[0, 0], [-2, 1], [2, 1], [-3, 2], [-1, 2], [1, 2], [3, 2]]) G = nx.Graph() G.add_nodes_from(nodes) G.add_edges_from(edges) I = netgraph.InteractiveGraph(G, node_positions=dict(zip(nodes, pos)), node_labels=dict(zip(nodes,nodes)), node_label_bbox=dict(fc="lightgreen", ec="black", boxstyle="square", lw=3), node_size=12, ) # move stuff with mouse Regarding the code you wrote, the kd-tree is unnecessary if you have handles of all the artists. In general, matplotlib artists have a contains method, such that when you log button press events, you can simply check artist.contains(event) to find out if the button press occurred over the artist. Of course, if you use networkx to do the plotting, you can't get the handles in a nice, query-able form (ax.get_children() is neither) so that is not possible.
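A small sketch of the artist.contains(event) idea mentioned above; the scatter artist here is only a hypothetical stand-in for whatever node artists you keep handles to:
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
points = ax.scatter([0, 1, 2], [0, 1, 2])

def on_press(event):
    # contains() returns (hit, details); for a scatter/PathCollection,
    # details["ind"] lists the indices of the points under the cursor
    hit, details = points.contains(event)
    if hit:
        print("picked point indices:", details["ind"])

fig.canvas.mpl_connect("button_press_event", on_press)
plt.show()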
7
4
63,873,082
2020-9-13
https://stackoverflow.com/questions/63873082/converting-a-simple-python-requests-post-to-rust-reqwest
I'm trying to use parts of this Python script (taken from here) in a Rust program I'm writing. How can I construct a reqwest request with the same content? def login(login_url, username, password=None, token=None): """Log in to Kattis. At least one of password or token needs to be provided. Returns a requests.Response with cookies needed to be able to submit """ login_args = {'user': username, 'script': 'true'} if password: login_args['password'] = password if token: login_args['token'] = token response = requests.post(login_url, data=login_args, headers=_HEADERS) return response def submit(submit_url, cookies, problem, language, files, mainclass='', tag=''): """Make a submission. The url_opener argument is an OpenerDirector object to use (as returned by the login() function) Returns the requests.Result from the submission """ data = {'submit': 'true', 'submit_ctr': 2, 'language': language, 'mainclass': mainclass, 'problem': problem, 'tag': tag, 'script': 'true'} sub_files = [] for f in files: with open(f) as sub_file: sub_files.append(('sub_file[]', (os.path.basename(f), sub_file.read(), 'application/octet-stream'))) return requests.post(submit_url, data=data, files=sub_files, cookies=cookies, headers=_HEADERS) (check out the link above for the rest of the code) Currently I've got this (I'm not sure if cookies are handled) let config = get_config().await?; let mut default_headers = header::HeaderMap::new(); default_headers.insert( header::USER_AGENT, header::HeaderValue::from_static("kattis-cli-submit"), ); let client = reqwest::ClientBuilder::new() .default_headers(default_headers) .cookie_store(true) .build()?; // Login let login_map = serde_json::json!({ "user": config.username.as_str(), "script": "true", "token": config.token.as_str(), }); let login_response = client .post(&config.login_url) .header("Content-Type", "application/x-www-form-urlencoded") .json(&login_map) .send() .await?; println!("{:?}", login_response); // Make a submission let submission_map = serde_json::json!({ "submit": "true", "submit_ctr": "2", "language": language, "mainclass": problem, "problem": problem, "script": "true", }); println!("{}", &submission_map); let mut form = multipart::Form::new(); let mut sub_file = multipart::Part::text(submission).file_name(submission_filename); sub_file = sub_file.mime_str("application/octet-stream").unwrap(); form = form.part("sub_file[]", sub_file); let submission_response = client .post(&config.submit_url) .json(&submission_map) .multipart(form) // .build(); .send() .await? 
.text() .await?; let config = get_config().await?; let mut default_headers = header::HeaderMap::new(); default_headers.insert( header::USER_AGENT, header::HeaderValue::from_static("kattis-cli-submit"), ); let client = reqwest::ClientBuilder::new() .default_headers(default_headers) .cookie_store(true) .build()?; // Login let login_map = serde_json::json!({ "user": config.username.as_str(), "script": "true", "token": config.token.as_str(), }); let login_response = client .post(&config.login_url) .header("Content-Type", "application/x-www-form-urlencoded") .json(&login_map) .send() .await?; println!("{:?}", login_response); // Make a submission let submission_map = serde_json::json!({ "submit": "true", "submit_ctr": "2", "language": language, "mainclass": problem, "problem": problem, "script": "true", }); println!("{}", &submission_map); let mut form = multipart::Form::new(); let mut sub_file = multipart::Part::text(submission).file_name(submission_filename); sub_file = sub_file.mime_str("application/octet-stream").unwrap(); form = form.part("sub_file[]", sub_file); let submission_response = client .post(&config.submit_url) .json(&submission_map) .multipart(form) // .build(); .send() .await? .text() .await?; println!("Submission response:\n{:?}", submission_response); Which for reference spits out {"user": {"username": Some("[username]"), "token": Some("[token]")}, "kattis": {"loginurl": Some("https://open.kattis.com/login"), "hostname": Some("open.kattis.com"), "submissionurl": Some("https://open.kattis.com/submit"), "submissionsurl": Some("https://open.kattis.com/submissions")}} Response { url: "https://open.kattis.com/login", status: 200, headers: {"date": "Sun, 13 Sep 2020 14:19:15 GMT", "content-type": "text/html; charset=UTF-8", "transfer-encoding": "chunked", "connection": "keep-alive", "set-cookie": "__cfduid=d0417cc7406c8d91b8659327fff8d5d9a1600006752; expires=Tue, 13-Oct-20 14:19:12 GMT; path=/; domain=.kattis.com; HttpOnly; SameSite=Lax", "set-cookie": "EduSiteCookie=75f873b9-5442-45be-b442-be08f349e09c; path=/; domain=.kattis.com; secure; HttpOnly", "expires": "Thu, 19 Nov 1981 08:52:00 GMT", "cache-control": "no-store, no-cache, must-revalidate", "pragma": "no-cache", "cf-cache-status": "DYNAMIC", "cf-request-id": "05296ea065000015fc7ca80200000001", "expect-ct": "max-age=604800, report-uri=\"https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct\"", "server": "cloudflare", "cf-ray": "5d22807a39b015fc-ARN", "alt-svc": "h3-27=\":443\"; ma=86400, h3-28=\":443\"; ma=86400, h3-29=\":443\"; ma=86400"} } {"language":"C++","mainclass":"ants","problem":"ants","script":"true","submit":"true","submit_ctr":"2"} Submission response: "<!DOCTYPE html>\n\n\n<html lang=\"en\">\n<head>\n <meta charset=\"UTF-8\" >\n <meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\">\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n <title>Log in or sign up for Kattis &ndash; Kattis, Kattis</title>\n\n <link href=\"//ajax.googleapis.com/ajax/libs/jqueryui/1.12.1/themes/base/jquery-ui.min.css\" rel=\"stylesheet\">\n\n <script src=\"//ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js\"></script>\n <script src=\"//ajax.googleapis.com/ajax/libs/jqueryui/1.12.1/jquery-ui.min.js\"></script>\n\n <!-- Fonts/Icons -->\n <link href=\"//cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css\" rel=\"stylesheet\">\n\n <link 
href=\"//fonts.googleapis.com/css?family=Open+Sans:400,300,300italic,400italic,600,600italic,700,800,700italic,800italic%7CMerriweather:400,400italic,700\" rel=\"stylesheet\" type=\"text/css\">\n\n <!-- Bootstrap CSS -->\n <link href=\"//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.4.1/css/bootstrap.min.css\" rel=\"stylesheet\">\n\n <!-- Bootstrap datetimepicker CSS-->\n <link href=\"//cdnjs.cloudflare.com/ajax/libs/bootstrap-datetimepicker/4.17.47/css/bootstrap-datetimepicker.min.css\" rel=\"stylesheet\">\n\n <!-- DateRangePicker CSS -->\n <link href=\"//cdn.jsdelivr.net/npm/daterangepicker/daterangepicker.css\" rel=\"stylesheet\">\n\n <!-- Editable and Select2 -->\n <link href=\"//cdnjs.cloudflare.com/ajax/libs/select2/3.5.4/select2.css\" rel=\"stylesheet\">\n\n <link rel=\"shortcut icon\" href=\"/favicon\" />\n\n <!-- Own CSS -->\n <link rel=\"stylesheet\" href=\"/css/system.css?03bf93=\">\n <style type=\"text/css\">\n .header {\n background-color: rgb(240,176,52);\n }\n .header .main-nav > ul > li.current:before {\n border-bottom-color: rgb(240,176,52);\n }\n\n div.page-content.clearfix.above-everything.alert.alert-danger { color: #31708f; background: #d9edf7; border-color: #bce8f1; }\r\ndiv.page-content.clearfix.above-everything.alert.alert-danger div.main-content { padding-bottom: 0; }\r\n\n </style>\n\n <script type=\"text/javascript\">\n window.page_loaded_at = new Date();\n jQuery.noConflict();\n </script>\n\n <script type=\"text/javascript\">\n jQuery.ns = function (namespace) {\n var parts = namespace.split(\'.\');\n var last = window;\n for (var i = 0; i < parts.length; i++) {\n last = last[parts[i]] || (last[parts[i]] = {});\n }\n return last;\n };\n</script>\n <script>\njQuery.extend(jQuery.ns(\'Kattis.error\'), (function () {\n var messages = {\"INTERNAL_SERVER_ERROR\":\"Internal server error.\",\"ACCESS_DENIED\":\"Access denied.\",\"NOT_AUTHENTICATED\":\"Not authenticated.\",\"METHOD_NOT_ALLOWED\":\"Method not allowed.\",\"INVALID_JSON\":\"JSON cannot be decoded or encoded data is deeper than the recursion limit.\",\"BAD_CSRF_TOKEN\":\"Token does not match session\'s csrf_token\",\"SESSION_NAME_EMPTY\":\"Session\'s name must be non empty.\",\"SESSION_START_TIME_EMPTY\":\"Session\'s start time must be non empty.\",\"SESSION_START_TIME_PASSED\":\"Session\'s start time has already passed.\",\"SESSION_DURATION_EMPTY\":\"Session\'s duration must be non empty.\",\"SESSION_DURATION_NEGATIVE\":\"Session\'s duration must be a positive number.\",\"SESSION_DURATION_EXCEEDED\":\"Maximum duration for the session was exceeded.\",\"SESSION_ALREADY_STARTED\":\"The session has already started.\",\"SESSION_ALREADY_FINISHED\":\"The session is already finished.\",\"USER_CREATED_SESSION_DURATION_EXCEEDED\":\"Contest cannot be longer than 168 hours.\",\"INVALID_PROBLEM_SCORE\":\"Invalid problem score.\",\"INVALID_SESSION_SHORTNAME\":\"Invalid shortname for the session.\",\"INVALID_SESSION_CUTOFF\":\"Invalid cutoff for the session.\",\"INVALID_USER_NAME\":\"Invalid username or email.\",\"SESSION_NOT_FOUND\":\"No such session.\",\"COURSE_NOT_FOUND\":\"No such course.\",\"OFFERING_NOT_FOUND\":\"No such offering.\",\"TEACHER_NOT_FOUND\":\"No such teacher.\",\"TEACHER_CANNOT_REMOVE_SELF\":\"You may not remove yourself as a teacher unless you are an administrator.\",\"AUTHOR_NOT_FOUND\":\"No such author.\",\"JUDGE_NOT_FOUND\":\"No such judge.\",\"JUDGE_ALREADY_EXIST\":\"The user is already a judge.\",\"TEACHER_ALREADY_EXIST\":\"The user is already a teacher.\",\"PROBLEM_NOT_FOUND\":\"No such 
problem.\",\"TEAM_NOT_FOUND\":\"No such team.\",\"SESSION_PROBLEM_ALREADY_EXIST\":\"The problem has been already added to the session.\",\"SESSION_PROBLEM_DOES_NOT_EXIST\":\"The problem does not relate to the session.\",\"PROBLEM_INDEX_NEGATIVE\":\"Problem index must be non negative.\",\"AUTHOR_IS_CURRENT_TEAM_MEMBER\":\"The user you tried to add is already a member of the current team.\",\"AUTHOR_IS_ANOTHER_TEAM_MEMBER\":\"The user you tried to add is already a member of another team in the current session.\",\"AUTHOR_IS_JUDGE\":\"The user you tried to add is a judge.\",\"AUTHOR_IS_NOT_TEAM_MEMBER\":\"The user you tried to remove is not a team member.\",\"JUDGE_IS_TEAM_MEMBER\":\"The user you tried to add is a session team member or invitee.\",\"SESSION_PUBLISHING_DENIED\":\"You do not have permission to publish this session.\",\"CANNOT_PUBLISH_HISTORICAL_SESSION\":\"You cannot publish a session with a historical start time.\",\"INVALID_TEAM_NAME_TOO_LONG\":\"The team name you are trying to add is too long\",\"TEAM_NAME_IS_NOT_VISIBLE\":\"The team name you are trying to add is not visible\"};\n\n return {\n get_msg: function (error_code) {\n return messages[error_code];\n },\n\n show_msg: function (base_message, error_code) {\n if (error_code) {\n alert(base_message + \": \" + this.get_msg(error_code));\n } else {\n alert(base_message);\n }\n },\n\n show_xhr_msg: function (elem, jqXHR) {\n var base_message = elem.data(\'fail-msg\');\n var code = jqXHR.responseJSON && jqXHR.responseJSON.error &&\n jqXHR.responseJSON.error.code;\n this.show_msg(base_message, code);\n }\n }\n})());\n</script>\n\n \n\n <script type=\"text/javascript\">\nvar rumMOKey=\"a854f3a6dd7ee5e3b7d1641570b79c34\";\n(function(){\nif(window.performance && window.performance.timing && window.performance.navigation) {\n\tvar site24x7_rum_beacon=document.createElement(\'script\');\n\tsite24x7_rum_beacon.async=true;\n\tsite24x7_rum_beacon.setAttribute(\'src\',\'//static.site24x7rum.eu/beacon/site24x7rum-min.js?appKey=\'+rumMOKey);\n\tdocument.getElementsByTagName(\'head\')[0].appendChild(site24x7_rum_beacon);\n}\n})(window)\n</script>\n\n \n</head>\n\n<body class=\"page-master-layout \">\n\n\n<div id=\"wrapper\">\n <header class=\"header\">\n <div class=\"background\">\n \n <div class=\"wrap\">\n <div class=\"fl\">\n <a href=\"/\"><img class=\"logo logo-open\" src=\"/images/site-logo\" alt=\"\" /></a>\n <div class=\"title-wrapper\">\n <div class=\"header-title\">Kattis</div>\n <nav class=\"main-nav\">\n <ul>\n \n <li class=\"\"><a href=\"/problems\">Problems</a></li>\n \n <li class=\"\"><a href=\"/contests\">Contests</a></li>\n \n <li class=\"\"><a href=\"/ranklist\">Ranklists</a></li>\n \n <li class=\"\"><a href=\"/jobs\">Jobs</a></li>\n \n <li class=\"\"><a href=\"/help\">Help</a></li>\n \n </ul>\n </nav>\n </div>\n </div>\n <div class=\"user-side fr\">\n\n <nav class=\"user-nav\">\n <ul class=\"user-nav-ul\">\n <li>\n <form action=\"/search\" class=\"site-search\" method=\"GET\">\n <input type=\"text\" name=\"q\" placeholder=\"Search Kattis\" />\n <a href=\"#\">\n <i class=\"fa fa-search\"></i>\n </a>\n </form>\n </li>\n \n <li><a class=\"btn dark-bg\" href=\"/login\">Log in</a></li>\n </ul>\n\n </nav>\n\n </div>\n </div>\n </div>\n</header>\n\n <!--[if IE]> <div class=\"alert alert-warning\" role=\"alert\">\n <strong>You are using an outdated browser!</strong> Some features might not look or work like expected. Kattis supports the last two versions of major browsers. Please consider upgrading to a recent version! 
</div>\n <![endif]-->\n\n \n \n <div class=\"wrap\">\n <div id=\"messages\">\n \n <div class=\"alert alert-dismissible alert-info\">\n <button type=\"button\" class=\"close\" data-dismiss=\"alert\" aria-label=\"Close\">\n <span aria-hidden=\"true\">&times;</span>\n </button>\n <strong>The page you are trying to access requires you to be logged in.</strong>\n </div>\n </div>\n </div>\n \n \n \n\n <div class=\"wrap\">\n \n\n\n\n\n\n\n\n\n\n \n \n\n <div class=\"page-content boxed clearfix\">\n <section class=\"box clearfix main-content\">\n \n \n\t\n <div class=\"page-headline clearfix\">\n <div style=\"text-align:center\">\n <h1>Log in or sign up for Kattis</h1>\n </div>\n </div>\n\n <br />\n\n <div class=\"login\">\n <div class=\"login-left\">\n <img src=\"/images/kattis/judge.png?7f7dbf=\" alt=\"\" />\n </div>\n\n <div class=\"login-right\">\n\n\t\n <div class=\"login-methods\">\n\n \t\t \n <form action=\"/oauth/Azure\" method=\"GET\" style=\"display:inline-block\">\n <button class=\"Azure\">\n\n <i class=\"fa fa-windows\"></i>\n \n Log in with Azure\n </button>\n </form>\n\n\t\t\t\t\t\t\t\t<br/> \n <form action=\"/oauth/Facebook\" method=\"GET\" style=\"display:inline-block\">\n <button class=\"Facebook\">\n\n <i class=\"fa fa-facebook\"></i>\n \n Log in with Facebook\n </button>\n </form>\n\n\t\t\t\t\t\t\t\t<br/> \n <form action=\"/oauth/Github\" method=\"GET\" style=\"display:inline-block\">\n <button class=\"Github\">\n\n <i class=\"fa fa-github\"></i>\n \n Log in with Github\n </button>\n </form>\n\n\t\t\t\t\t\t\t\t<br/> \n <form action=\"/oauth/Google\" method=\"GET\" style=\"display:inline-block\">\n <button class=\"Google\">\n\n <i class=\"fa fa-google\"></i>\n \n Log in with Google\n </button>\n </form>\n\n\t\t\t\t\t\t\t\t<br/> \n <form action=\"/oauth/LinkedIn\" method=\"GET\" style=\"display:inline-block\">\n <button class=\"LinkedIn\">\n\n <i class=\"fa fa-linkedin\"></i>\n \n Log in with LinkedIn\n </button>\n </form>\n\n\t\t\t\t\t\t\t\t<br/> \n\t\t\n\t\t\n <form action=\"/login/email\" method=\"GET\" style=\"display:inline-block\">\n <button class=\"email\">\n <i class=\"fa fa-envelope\"></i>\n Log in with e-mail </button>\n\n <input type=\"hidden\" name=\"todo\" value=\"redirect\" />\n </form>\n \n </div>\n\n\t<br/>\n\t<br/><a href=\"/login/more?todo=redirect\">More login methods</a>\t\n </div></div>\n\n\n </section>\n </div>\n </div>\n\n\n</div>\n\n\n<div id=\"footer\">\n <div class=\"container\">\n <div class=\"row\">\n <div class=\"footer-info col-md-2 \">\n \n </div>\n <div class=\"footer-powered col-md-8\">\n <h4>\n <a href=\"/rss/new-problems\"><i class=\"fa fa-rss-square\" style=\"color: orange\"></i>&nbsp;RSS feed for new problems</a> |\n Powered by&nbsp;Kattis | <a href=\"https://www.patreon.com/kattis\">Support Kattis on Patreon!</a>\n </h4>\n </div>\n </div>\n </div>\n</div>\n\n\n\n\n<script src=\"//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.4.1/js/bootstrap.min.js\"></script>\n<script src=\"//cdnjs.cloudflare.com/ajax/libs/moment.js/2.24.0/moment.min.js\"></script>\n<script src=\"//cdnjs.cloudflare.com/ajax/libs/bootstrap-datetimepicker/4.17.47/js/bootstrap-datetimepicker.min.js\"></script>\n<script src=\"//cdn.jsdelivr.net/npm/daterangepicker/daterangepicker.min.js\"></script>\n<script src=\"//cdnjs.cloudflare.com/ajax/libs/select2/3.5.4/select2.min.js\"></script>\n<script src=\"//cdnjs.cloudflare.com/ajax/libs/raphael/2.2.8/raphael.min.js\"></script>\n<script src=\"/js/system.js?203d73=\" type=\"text/javascript\"></script>\n\n\n\n\n</body>\n</html>\n" 
There's some disparity in the POST requests, but I can't figure out exactly what. I also think I'm able to login with the first request, but I'm not entirely sure the cookies carry over. Is there a general way to rewrite the Python requests POST in Rust? Specifically I think I need the files part to be included.
You are not using it, but with requests you'd use a session object to handle cookie persistence. You already found the equivalent in reqwest; a ClientBuilder has a cookie store method which enables the same functionality. Use the builder configured with this to create both requests, and any cookies on one response then are passed on to the next request (following the normal rules for cookie domains, paths and flags). Next, the requests.post() method combines fields passed to files and data into a single multipart form request body. This does not post JSON data, so don't use the RequestBuilder.json() method here. Just add those fields to the multipart request as a text field, using the Form.text() method. Your login function is also not sending JSON; a dictionary passed to data is handled as form fields instead. So this should work:
use std::path::Path;
use tokio::fs::File;

// UA string to pass to ClientBuilder.user_agent
let user_agent: &'static str = "kattis-cli-submit";

let config = get_config().await?;
let client = reqwest::ClientBuilder::new()
    .user_agent(user_agent)
    .cookie_store(true)
    .build()?;

// Login
// could also use a HashMap
let login_fields = [
    ("user", config.username.as_str()),
    ("script", "true"),
    ("token", config.token.as_str()),
];
let login_response = client
    .post(&config.login_url)
    .form(&login_fields)
    .send()
    .await?;
println!("{:?}", login_response);

// Make a submission
let mut form = reqwest::multipart::Form::new()
    .text("submit", "true")
    .text("submit_ctr", "2")
    .text("language", language)
    .text("mainclass", problem)
    .text("problem", problem)
    .text("script", "true");

// add a single file, and set the part filename to the base name of the file path
let path = Path::new(submission_filename);
let sub_file_contents = std::fs::read(path)?;
let sub_file_part = reqwest::multipart::Part::bytes(sub_file_contents)
    .file_name(path.file_name().unwrap().to_string_lossy())
    .mime_str("application/octet-stream")?;
form = form.part("sub_file[]", sub_file_part);

let submission_response = client
    .post(&config.submit_url)
    .multipart(form)
    .send()
    .await?
    .text()
    .await?;
println!("Submission response:\n{}", submission_response);
I've made use of the ClientBuilder.user_agent() method, rather than manually building a header map, to set the User-Agent string. Note that the code posts a single file, and reads the file contents into memory first; the multipart::Part::bytes() method produces a new Part that then is further configured by attaching the filename and the mimetype. I can heartily recommend that you try out posting to https://httpbin.org/post to see what exactly your code ends up sending, and compare that with the Python version. I’ve created repl.it demos of the code that use httpbin (with some adjustments to work without a config object, plus the code sets a cookie so we can verify that it is being propagated, uploads more than one file, and sets unique part names for the attached files so httpbin shows them properly): Python: https://repl.it/@mjpieters/so63873082-python Rust: https://repl.it/@mjpieters/so63873082-rust#so63873082/src/main.rs You can see there that the responses from httpbin are the same. The Python code reads each file into memory to post it; this is not that efficient and limits the file sizes that can be sent with this code.
That's probably fine for this script, but for larger files you want to stream the file data straight from disk to the network socket as you send the form data: use std::path::Path; use tokio::fs::File; use tokio_util::codec::{BytesCodec, FramedRead}; let path = Path::new(submission_filename); // Create a Stream for the attached file, wrapped in a reqwest::Body let file = File::open(path).await?; let reader = FramedRead::new(file, BytesCodec::new()); let sub_file_part = reqwest::multipart::Part::stream(Body::wrap_stream(reader)) .file_name(path.file_name().unwrap().to_string_lossy()) .mime_str("application/octet-stream")?; form = form.part(part_name, sub_file_part); You can see this in action at https://repl.it/@mjpieters/so63873082-rust-streams#so63873082/src/main.rs
8
16
63,854,588
2020-9-11
https://stackoverflow.com/questions/63854588/test-with-fastapi-testclient-returns-422-status-code
I'm trying to test an endpoint with the TestClient from FastAPI (which is basically the Starlette TestClient). The response code is always 422 Unprocessable Entity. This is my current code:
from typing import Dict, Optional
from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter()

class CreateRequest(BaseModel):
    number: int
    ttl: Optional[float] = None

@router.post("/create")
async def create_users(body: CreateRequest) -> Dict:
    return {
        "msg": f"{body.number} Users are created"
    }
As you can see I'm also passing the application/json header to the client to avoid a potential error. And this is my test:
from fastapi.testclient import TestClient
from metusa import app

def test_create_50_users():
    client = TestClient(app)
    client.headers["Content-Type"] = "application/json"
    body = {
        "number": 50,
        "ttl": 2.0
    }
    response = client.post('/v1/users/create', data=body)

    assert response.status_code == 200
    assert response.json() == {"msg": "50 Users created"}
I also found this error message in the Response object: b'{"detail":[{"loc":["body",0],"msg":"Expecting value: line 1 column 1 (char 0)","type":"value_error.jsondecode","ctx":{"msg":"Expecting value","doc":"number=50&ttl=2.0","pos":0,"lineno":1,"colno":1}}]}' Thank you for your support and time!
You don't need to set headers manually. You can use the json argument instead of data in the client.post method.
def test_create_50_users():
    client = TestClient(router)
    body = {
        "number": 50,
        "ttl": 2.0
    }
    response = client.post('/create', json=body)
If you still want to use the data argument, you need to use json.dumps:
def test_create_50_users():
    client = TestClient(router)
    client.headers["Content-Type"] = "application/json"
    body = {
        "number": 50,
        "ttl": 2.0
    }
    response = client.post('/create', data=json.dumps(body))
12
8
63,883,654
2020-9-14
https://stackoverflow.com/questions/63883654/typeerror-numpy-float64-object-is-not-callable-while-printing-f1-score
I am trying to run below code on Jupyter Notebook: lr = LogisticRegression(class_weight='balanced') lr.fit(X_train,y_train) y_pred = lr.predict(X_train) acc_log = round(lr.score(X_train, y_train) * 100, 2) prec_log = round(precision_score(y_train,y_pred) * 100,2) recall_log = round(recall_score(y_train,y_pred) * 100,2) f1_log = round(f1_score(y_train,y_pred) * 100,2) roc_auc_log = roc_auc_score(y_train,y_pred) When trying to execute this, I am getting the below error: --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-46-bcb2d9729eb6> in <module> 6 prec_log = round(precision_score(y_train,y_pred) * 100,2) 7 recall_log = round(recall_score(y_train,y_pred) * 100,2) ----> 8 f1_log = round(f1_score(y_train,y_pred) * 100,2) 9 roc_auc_log = roc_auc_score(y_train,y_pred) TypeError: 'numpy.float64' object is not callable Can't seem to figure out what I am doing wrong.
Somewhere in your code (not shown here), there is a line which says f1_score = ... (with the written type being numpy.float64) so you're overriding the method f1_score with a variable f1_score (which is not callable, hence the error message). Rename one of the two to resolve the error.
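The kind of rebinding described above would look something like this (a hypothetical reconstruction, reusing the question's y_train and y_pred):
from sklearn.metrics import f1_score

# The bug: rebinding the imported function name to its own (float) result
f1_score = round(f1_score(y_train, y_pred) * 100, 2)
# Any later call such as f1_score(y_train, y_pred) now raises
# TypeError: 'numpy.float64' object is not callable

# The fix: store the result under a different name instead, e.g.
# f1_log = round(f1_score(y_train, y_pred) * 100, 2)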
7
17
63,880,081
2020-9-14
https://stackoverflow.com/questions/63880081/how-to-convert-a-torch-tensor-into-a-byte-string
I'm trying to serialize a torch tensor using protobuf and it seems using BytesIO along with torch.save() doesn't work. I have tried: import torch import io x = torch.randn(size=(1,20)) buff = io.BytesIO() torch.save(x, buff) print(f'buffer: {buff.read()}') to no avail as it results in b'' in the output! How should I be going about this?
You need to seek to the beginning of the buffer before reading: import torch import io x = torch.randn(size=(1,20)) buff = io.BytesIO() torch.save(x, buff) buff.seek(0) # <-- this is what you were missing print(f'buffer: {buff.read()}') gives you this magnificent output: buffer: b'PK\x03\x04\x00\x00\x08\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00\x12\x00archive/data.pklFB\x0e\x00ZZZZZZZZZZZZZZ\x80\x02ctorch._utils\n_rebuild_tensor_v2\nq\x00((X\x07\x00\x00\x00storageq\x01ctorch\nFloatStorage\nq\x02X\x0f\x00\x00\x00140417054790352q\x03X\x03\x00\x00\x00cpuq\x04K\x14tq\x05QK\x00K\x01K\x14\x86q\x06K\x14K\x01\x86q\x07\x89ccollections\nOrderedDict\nq\x08)Rq\ttq\nRq\x0b.PK\x07\x08\xf3\x08u\x13\xa8\x00\x00\x00\xa8\x00\x00\x00PK\x03\x04\x00\x00\x08\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x1c\x00\x0e\x00archive/data/140417054790352FB\n\x00ZZZZZZZZZZ\xba\xf3x?\xb5\xe2\xc4=)R\x89\xbfM\x08\x19\xbfo%Y\xbf\x05\xc0_\xbf\x03N4\xbe\xdd_ \xc0&\xc4\xb5?\xa7\xfd\xc4?f\xf1$?Ll\xa6?\xee\x8e\x80\xbf\x88Uq?.<\xd8?{\x08\xb2?\xb3\xa3\xba>q\xcd\xbc?\xba\xe3h\xbd\xcan\x11\xc0PK\x07\x08A\xf3\xdc>P\x00\x00\x00P\x00\x00\x00PK\x03\x04\x00\x00\x08\x08\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0f\x003\x00archive/versionFB/\x00ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ3\nPK\x07\x08\xd1\x9egU\x02\x00\x00\x00\x02\x00\x00\x00PK\x01\x02\x00\x00\x00\x00\x08\x08\x00\x00\x00\x00\x00\x00\xf3\x08u\x13\xa8\x00\x00\x00\xa8\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00archive/data.pklPK\x01\x02\x00\x00\x00\x00\x08\x08\x00\x00\x00\x00\x00\x00A\xf3\xdc>P\x00\x00\x00P\x00\x00\x00\x1c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf8\x00\x00\x00archive/data/140417054790352PK\x01\x02\x00\x00\x00\x00\x08\x08\x00\x00\x00\x00\x00\x00\xd1\x9egU\x02\x00\x00\x00\x02\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xa0\x01\x00\x00archive/versionPK\x06\x06,\x00\x00\x00\x00\x00\x00\x00\x1e\x03-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\xc5\x00\x00\x00\x00\x00\x00\x00\x12\x02\x00\x00\x00\x00\x00\x00PK\x06\x07\x00\x00\x00\x00\xd7\x02\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00PK\x05\x06\x00\x00\x00\x00\x03\x00\x03\x00\xc5\x00\x00\x00\x12\x02\x00\x00\x00\x00'
10
19
63,813,378
2020-9-9
https://stackoverflow.com/questions/63813378/how-to-json-normalize-a-column-in-pandas-with-empty-lists-without-losing-record
I am using pd.json_normalize to flatten the "sections" field in this data into rows. It works fine except for rows where the "sections" is an empty list. This ID gets completely ignored and is missing from the final flattened dataframe. I need to make sure that I have at least one row per unique ID in the data (some IDs may have many rows up to one row per unique ID, per unique section_id, question_id, and answer_id as I unnest more fields in the data): {'_id': '5f48f708fe22ca4d15fb3b55', 'created_at': '2020-08-28T12:22:32Z', 'sections': []}] Sample data: sample = [{'_id': '5f48bee4c54cf6b5e8048274', 'created_at': '2020-08-28T08:23:00Z', 'sections': [{'comment': '', 'type_fail': None, 'answers': [{'comment': 'stuff', 'feedback': [], 'value': 10.0, 'answer_type': 'default', 'question_id': '5e59599c68369c24069630fd', 'answer_id': '5e595a7c3fbb70448b6ff935'}, {'comment': 'stuff', 'feedback': [], 'value': 10.0, 'answer_type': 'default', 'question_id': '5e598939cedcaf5b865ef99a', 'answer_id': '5e598939cedcaf5b865ef998'}], 'score': 20.0, 'passed': True, '_id': '5e59599c68369c24069630fe', 'custom_fields': []}, {'comment': '', 'type_fail': None, 'answers': [{'comment': '', 'feedback': [], 'value': None, 'answer_type': 'not_applicable', 'question_id': '5e59894f68369c2398eb68a8', 'answer_id': '5eaad4e5b513aed9a3c996a5'}, {'comment': '', 'feedback': [], 'value': None, 'answer_type': 'not_applicable', 'question_id': '5e598967cedcaf5b865efe3e', 'answer_id': '5eaad4ece3f1e0794372f8b2'}, {'comment': "stuff", 'feedback': [], 'value': 0.0, 'answer_type': 'default', 'question_id': '5e598976cedcaf5b865effd1', 'answer_id': '5e598976cedcaf5b865effd3'}], 'score': 0.0, 'passed': True, '_id': '5e59894f68369c2398eb68a9', 'custom_fields': []}]}, {'_id': '5f48f708fe22ca4d15fb3b55', 'created_at': '2020-08-28T12:22:32Z', 'sections': []}] Tests: df = pd.json_normalize(sample) df2 = pd.json_normalize(df.to_dict(orient="records"), meta=["_id", "created_at"], record_path="sections", record_prefix="section_") At this point I am now missing a row for ID "5f48f708fe22ca4d15fb3b55" which I still need. df3 = pd.json_normalize(df2.to_dict(orient="records"), meta=["_id", "created_at", "section__id", "section_score", "section_passed", "section_type_fail", "section_comment"], record_path="section_answers", record_prefix="") Can I alter this somehow to make sure that I get one row per ID at minimum? I'm dealing with millions of records and don't want to realize later that some IDs were missing from my final data. The only solution I can think of is to normalize each dataframe and then left join it to the original dataframe again.
The best way to resolve the issue, is fix the dict If sections is an empty list, fill it with [{'answers': [{}]}] for i, d in enumerate(sample): if not d['sections']: sample[i]['sections'] = [{'answers': [{}]}] df = pd.json_normalize(sample) df2 = pd.json_normalize(df.to_dict(orient="records"), meta=["_id", "created_at"], record_path="sections", record_prefix="section_") # display(df2) section_comment section_type_fail section_answers section_score section_passed section__id section_custom_fields _id created_at 0 NaN [{'comment': 'stuff', 'feedback': [], 'value': 10.0, 'answer_type': 'default', 'question_id': '5e59599c68369c24069630fd', 'answer_id': '5e595a7c3fbb70448b6ff935'}, {'comment': 'stuff', 'feedback': [], 'value': 10.0, 'answer_type': 'default', 'question_id': '5e598939cedcaf5b865ef99a', 'answer_id': '5e598939cedcaf5b865ef998'}] 20.0 True 5e59599c68369c24069630fe [] 5f48bee4c54cf6b5e8048274 2020-08-28T08:23:00Z 1 NaN [{'comment': '', 'feedback': [], 'value': None, 'answer_type': 'not_applicable', 'question_id': '5e59894f68369c2398eb68a8', 'answer_id': '5eaad4e5b513aed9a3c996a5'}, {'comment': '', 'feedback': [], 'value': None, 'answer_type': 'not_applicable', 'question_id': '5e598967cedcaf5b865efe3e', 'answer_id': '5eaad4ece3f1e0794372f8b2'}, {'comment': 'stuff', 'feedback': [], 'value': 0.0, 'answer_type': 'default', 'question_id': '5e598976cedcaf5b865effd1', 'answer_id': '5e598976cedcaf5b865effd3'}] 0.0 True 5e59894f68369c2398eb68a9 [] 5f48bee4c54cf6b5e8048274 2020-08-28T08:23:00Z 2 NaN NaN [{}] NaN NaN NaN NaN 5f48f708fe22ca4d15fb3b55 2020-08-28T12:22:32Z df3 = pd.json_normalize(df2.to_dict(orient="records"), meta=["_id", "created_at", "section__id", "section_score", "section_passed", "section_type_fail", "section_comment"], record_path="section_answers", record_prefix="") # display(df3) comment feedback value answer_type question_id answer_id _id created_at section__id section_score section_passed section_type_fail section_comment 0 stuff [] 10.0 default 5e59599c68369c24069630fd 5e595a7c3fbb70448b6ff935 5f48bee4c54cf6b5e8048274 2020-08-28T08:23:00Z 5e59599c68369c24069630fe 20 True NaN 1 stuff [] 10.0 default 5e598939cedcaf5b865ef99a 5e598939cedcaf5b865ef998 5f48bee4c54cf6b5e8048274 2020-08-28T08:23:00Z 5e59599c68369c24069630fe 20 True NaN 2 [] NaN not_applicable 5e59894f68369c2398eb68a8 5eaad4e5b513aed9a3c996a5 5f48bee4c54cf6b5e8048274 2020-08-28T08:23:00Z 5e59894f68369c2398eb68a9 0 True NaN 3 [] NaN not_applicable 5e598967cedcaf5b865efe3e 5eaad4ece3f1e0794372f8b2 5f48bee4c54cf6b5e8048274 2020-08-28T08:23:00Z 5e59894f68369c2398eb68a9 0 True NaN 4 stuff [] 0.0 default 5e598976cedcaf5b865effd1 5e598976cedcaf5b865effd3 5f48bee4c54cf6b5e8048274 2020-08-28T08:23:00Z 5e59894f68369c2398eb68a9 0 True NaN 5 NaN NaN NaN NaN NaN NaN 5f48f708fe22ca4d15fb3b55 2020-08-28T12:22:32Z NaN NaN NaN NaN NaN
8
8
63,873,363
2020-9-13
https://stackoverflow.com/questions/63873363/how-to-use-log-scale-for-the-axes-of-a-seaborn-relplot
I tried drawing a relplot with log scaled axes. Making use of previous answers, I tried: import matplotlib.pyplot as plt import seaborn as sns f, ax = plt.subplots(figsize=(7, 7)) ax.set(xscale="log", yscale="log") tips = sns.load_dataset("tips") sns.relplot(x="total_bill", y="tip", hue='smoker', data=tips) plt.show() However the axes were not changed in the result. How can I remedy this?
You can use scatterplot, and don't forget to pass your axes to the plot call:
import matplotlib.pyplot as plt
import seaborn as sns

f, ax = plt.subplots(figsize=(7, 7))
tips = sns.load_dataset("tips")
ax.set(xscale="log", yscale="log")
sns.scatterplot(x="total_bill", y="tip", hue='smoker', data=tips, ax=ax)
plt.show()
Edit - relplot is a figure-level function and does not accept the ax= parameter
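If you would rather keep relplot itself, one option (not part of the answer above) is to set the scales on the FacetGrid it returns; roughly:
import seaborn as sns

tips = sns.load_dataset("tips")
g = sns.relplot(x="total_bill", y="tip", hue="smoker", data=tips)
g.set(xscale="log", yscale="log")   # applied to every facet's axes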
12
1
63,872,530
2020-9-13
https://stackoverflow.com/questions/63872530/change-specific-values-in-dataframe-if-one-cell-in-a-row-is-null
I have the following dataframe in pandas:
>>>name    food    beverage  age
0  Ruth    Burger  Cola      23
1  Dina    Pasta   water     19
2  Joel    Tuna    water     28
3  Daniel  null    soda      30
4  Tomas   null    cola      10
I want to add a condition so that if the value in the food column is null, the age and beverage values will change into ' ' (blank as well). I have written this code for that:
if df[(df['food'].isna())]:
    df['beverage']=' '
    df['age']=' '
but I keep getting this error: ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). I have played with the location of the ([ but it didn't help. What am I doing wrong?
Try with mask df[['beverage','age']] = df[['beverage','age']].mask(df['food'].isna(),'') df Out[86]: name food beverage age 0 Ruth Burger Cola 23 1 Dina Pasta water 19 2 Joel Tuna water 28 3 Daniel NaN 4 Tomas NaN
11
8
63,853,813
2020-9-11
https://stackoverflow.com/questions/63853813/how-to-create-routes-with-fastapi-within-a-class
So I need to have some routes inside a class, but the route methods need to have the self attr (to access the class' attributes). However, FastAPI then assumes self is its own required argument and puts it in as a query param. This is what I've got:
app = FastAPI()

class Foo:
    def __init__(self, y: int):
        self.x = y

    @app.get("/somewhere")
    def bar(self):
        return self.x
However, this returns 422 unless you go to /somewhere?self=something. The issue with this is that self is then str, and thus useless. I need some way that I can still access self without having it as a required argument.
For creating class-based views you can use @cbv decorator from fastapi-utils. The motivation of using it: Stop repeating the same dependencies over and over in the signature of related endpoints. Your sample could be rewritten like this: from fastapi import Depends, FastAPI from fastapi_utils.cbv import cbv from fastapi_utils.inferring_router import InferringRouter def get_x(): return 10 app = FastAPI() router = InferringRouter() # Step 1: Create a router @cbv(router) # Step 2: Create and decorate a class to hold the endpoints class Foo: # Step 3: Add dependencies as class attributes x: int = Depends(get_x) @router.get("/somewhere") def bar(self) -> int: # Step 4: Use `self.<dependency_name>` to access shared dependencies return self.x app.include_router(router)
40
18
63,869,134
2020-9-13
https://stackoverflow.com/questions/63869134/converting-tensorflow-tensor-into-numpy-array
Problem Description I am trying to write a custom loss function in TensorFlow 2.3.0. To calculate the loss, I need the y_pred parameter to be converted to a numpy array. However, I can't find a way to convert it from <class 'tensorflow.python.framework.ops.Tensor'> to numpy array, even though there seem to TensorFlow functions to do so. Code Example def custom_loss(y_true, y_pred): print(type(y_pred)) npa = y_pred.make_ndarray() ... if __name__ == '__main__': ... model.compile(loss=custom_loss, optimizer="adam") model.fit(x=train_data, y=train_data, epochs=10) gives the error message: AttributeError: 'Tensor' object has no attribute 'make_ndarray after printing the type of the y_pred parameter: <class 'tensorflow.python.framework.ops.Tensor'> What I have tried so far Looking for a solution I found this seems to be a common issue and there a couple of suggestions, but they did not work for me so far: 1. " ... so just call .numpy() on the Tensor object.": How can I convert a tensor into a numpy array in TensorFlow? so I tried: def custom_loss(y_true, y_pred): npa = y_pred.numpy() ... giving me AttributeError: 'Tensor' object has no attribute 'numpy' 2. "Use tensorflow.Tensor.eval() to convert a tensor to an array": How to convert a TensorFlow tensor to a NumPy array in Python so I tried: def custom_loss(y_true, y_pred): npa = y_pred.eval(session=tf.compat.v1.Session()) ... giving me one of the longest trace of error messages I ever have seen with the core being: InvalidArgumentError: 2 root error(s) found. (0) Invalid argument: You must feed a value for placeholder tensor 'functional_1/conv2d_2/BiasAdd/ReadVariableOp/resource' with dtype resource [[node functional_1/conv2d_2/BiasAdd/ReadVariableOp/resource (defined at main.py:303) ]] [[functional_1/cropping2d/strided_slice/_1]] (1) Invalid argument: You must feed a value for placeholder tensor 'functional_1/conv2d_2/BiasAdd/ReadVariableOp/resource' with dtype resource [[node functional_1/conv2d_2/BiasAdd/ReadVariableOp/resource (defined at main.py:303) ]] also having to call TensorFlow Compatibility Functions from Version 1.x does not feel very future-proof, so I do not like this approach too much anyhow. 3. Looking at the TensorFlow Docs there seemed to be the function I needed just waiting: tf.make_ndarray Create a numpy ndarray from a tensor. so I tried: def custom_loss(y_true, y_pred): npa = tf.make_ndarray(y_pred) ... giving me AttributeError: 'Tensor' object has no attribute 'tensor_shape' Looking at the example in the TF documentation they use this on a proto_tensor, so I tried converting to a proto first: def custom_loss(y_true, y_pred): proto_tensor = tf.make_tensor_proto(y_pred) npa = tf.make_ndarray(proto_tensor) ... but already the tf.make_tensor_proto(y_pred) raises the error: TypeError: Expected any non-tensor type, got a tensor instead. Also trying to make a const tensor first gives the same error: def custom_loss(y_true, y_pred): a = tf.constant(y_pred) proto_tensor = tf.make_tensor_proto(a) npa = tf.make_ndarray(proto_tensor) ... There are many more posts around this but it seems they are all coming back to these three basic ideas. Looking forward to your suggestions!
y_pred.numpy() works in TF 2 but AttributeError: 'Tensor' object has no attribute 'make_ndarray indicates that there are parts of your code that you are not running in Eager mode as you would otherwise not have a Tensor object but an EagerTensor. To enable Eager Mode, put this at the beginning of your code before anything in the graph is built: tf.config.experimental_run_functions_eagerly(True) Second, when you compile your model, add this parameter: model.compile(..., run_eagerly=True, ...) Now you're executing in Eager Mode and all variables actually hold values that you can both print and work with. Be aware that switching to Eager mode might require additional adjustments to your code (see here for an overview).
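Putting those two pieces together, a minimal sketch of the custom loss under eager execution might look like the following (assuming TF 2.3, where the experimental switch is the available API; note that anything used for training should stay a TensorFlow expression so gradients can still flow):
import numpy as np
import tensorflow as tf

tf.config.experimental_run_functions_eagerly(True)  # TF 2.3-era switch

def custom_loss(y_true, y_pred):
    npa = y_pred.numpy()   # y_pred is now an EagerTensor, so .numpy() works
    print(np.mean(npa))    # NumPy is fine for logging/inspection ...
    # ... but return a TensorFlow expression for the actual loss value
    return tf.reduce_mean(tf.square(y_true - y_pred))

# model.compile(loss=custom_loss, optimizer="adam", run_eagerly=True)
# model.fit(x=train_data, y=train_data, epochs=10)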
10
9
63,816,481
2020-9-9
https://stackoverflow.com/questions/63816481/faster-method-for-creating-spatially-correlated-noise
In my current project, I am interested in calculating spatially correlated noise for a large model grid. The noise should be strongly correlated over short distances, and uncorrelated over large distances. My current approach uses multivariate Gaussians with a covariance matrix specifying the correlation between all cells. Unfortunately, this approach is extremely slow for large grids. Do you have a recommendation of how one might generate spatially correlated noise more efficiently? (It doesn't have to be Gaussian) import scipy.stats import numpy as np import scipy.spatial.distance import matplotlib.pyplot as plt # Create a 50-by-50 grid; My actual grid will be a LOT larger X,Y = np.meshgrid(np.arange(50),np.arange(50)) # Create a vector of cells XY = np.column_stack((np.ndarray.flatten(X),np.ndarray.flatten(Y))) # Calculate a matrix of distances between the cells dist = scipy.spatial.distance.pdist(XY) dist = scipy.spatial.distance.squareform(dist) # Convert the distance matrix into a covariance matrix correlation_scale = 50 cov = np.exp(-dist**2/(2*correlation_scale)) # This will do as a covariance matrix # Sample some noise !slow! noise = scipy.stats.multivariate_normal.rvs( mean = np.zeros(50**2), cov = cov) # Plot the result plt.contourf(X,Y,noise.reshape((50,50)))
Faster approach: Generate spatially uncorrelated noise. Blur with Gaussian filter kernel to make noise spatially correlated. Since the filter kernel is rather large, it is a good idea to use a convolution method based on Fast Fourier Transform. import numpy as np import scipy.signal import matplotlib.pyplot as plt # Compute filter kernel with radius correlation_scale (can probably be a bit smaller) correlation_scale = 50 x = np.arange(-correlation_scale, correlation_scale) y = np.arange(-correlation_scale, correlation_scale) X, Y = np.meshgrid(x, y) dist = np.sqrt(X*X + Y*Y) filter_kernel = np.exp(-dist**2/(2*correlation_scale)) # Generate n-by-n grid of spatially correlated noise n = 50 noise = np.random.randn(n, n) noise = scipy.signal.fftconvolve(noise, filter_kernel, mode='same') plt.contourf(np.arange(n), np.arange(n), noise) plt.savefig("fast.png") Sample output of this method: Sample output of slow method from question: Image size vs running time:
7
7
63,804,883
2020-9-9
https://stackoverflow.com/questions/63804883/including-and-distributing-third-party-libraries-with-a-python-c-extension
I'm building a C Python extension which makes use of a "third party" library— in this case, one that I've built using a separate build process and toolchain. Call this library libplumbus.dylib. Directory structure would be: grumbo/ include/ plumbus.h lib/ libplumbus.so grumbo.c setup.py My setup.py looks approximately like: from setuptools import Extension, setup native_module = Extension( 'grumbo', define_macros = [('MAJOR_VERSION', '1'), ('MINOR_VERSION', '0')], sources = ['grumbo.c'], include_dirs = ['include'], libraries = ['plumbus'], library_dirs = ['lib']) setup( name = 'grumbo', version = '1.0', ext_modules = [native_module] ) Since libplumbus is an external library, when I run import grumbo I get: Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: dlopen(/path/to/grumbo/grumbo.cpython-37m-darwin.so, 2): Library not loaded: lib/libplumbus.dylib Referenced from: /path/to/grumbo/grumbo.cpython-37m-darwin.so Reason: image not found What's the simplest way to set things up so that libplumbus is included with the distribution and properly loaded when grumbo is imported? (Note that this should work with a virtualenv). I have tried adding lib/libplumbus.dylib to package_data, but this doesn't work, even if I add -Wl,-rpath,@loader_path/grumbo/lib to the Extension's extra_link_args.
The goal of this post is to have a setup.py which would create a source distribution. That means after running python setup.py sdist the resulting dist/grumbo-1.0.tar.gz could be used for installation via pip install grumbo-1.0.tar.gz We will start for a setup.py for Linux/MacOS, but then tweak to make it work for Windows as well. The first step is to get the additional data (includes/library) into the distribution. I'm not sure it is really impossible to add data for a module, but setuptools offers functionality to add data for packages, so let's make a package from your module (which is probably a good idea anyway). The new structure of package grumbo looks as follows: src/ grumbo/ __init__.py # empty grumbo.c include/ plumbus.h lib/ libplumbus.so setup.py and changed setup.py: from setuptools import setup, Extension, find_packages native_module = Extension( name='grumbo.grumbo', sources = ["src/grumbo/grumbo.c"], ) kwargs = { 'name' : 'grumbo', 'version' : '1.0', 'ext_modules' : [native_module], 'packages':find_packages(where='src'), 'package_dir':{"": "src"}, } setup(**kwargs) It doesn't do much yet, but at least our package can be found by setuptools. The build fails, because the includes are missing. Now let's add the needed includes from the include-folder to the distribution via package-data: ... kwargs = { ..., 'package_data' : { 'grumbo': ['include/*.h']}, } ... With that our include-files are copied to the source distribution. However because it will be build "somewhere" we don't know yet, adding include_dirs = ['include'] to the Extension definition just doesn't cut it. There must be a better way (and less brittle) to find the right include path, but that is what I came up with: ... import os import sys import sysconfig def path_to_build_folder(): """Returns the name of a distutils build directory""" f = "{dirname}.{platform}-{version[0]}.{version[1]}" dir_name = f.format(dirname='lib', platform=sysconfig.get_platform(), version=sys.version_info) return os.path.join('build', dir_name, 'grumbo') native_module = Extension( ..., include_dirs = [os.path.join(path_to_build_folder(),'include')], ) ... Now, the extension is built, but cannot be yet loaded because it is not linked against shared-object libplumbus.so and thus some symbols are unresolved. Similar to the header files, we can add our library to the distribution: kwargs = { ..., 'package_data' : { 'grumbo': ['include/*.h', 'lib/*.so']}, } ... and add the right lib-path for the linker: ... native_module = Extension( ... libraries = ['plumbus'], library_dirs = [os.path.join(path_to_build_folder(), 'lib')], ) ... Now, we are almost there: the extension is built an put into site-packages/grumbo/ the extension depends on libplumbus.so as can be seen with help of ldd libplumbus.so is put into site-packages/grumbo/lib However, we still cannot import the extension, as import grumbo.grumbo leads to ImportError: libplumbus.so: cannot open shared object file: No such file or directory because the loader cannot find the needed shared object which resides in the folder .\lib relative to our extension. We could use rpath to "help" the loader: ... native_module = Extension( ... extra_link_args = ["-Wl,-rpath=$ORIGIN/lib/."], ) ... And now we are done: >>> import grumbo.grumbo # works! Also building and installing a wheel should work: python setup.py bdist_wheel and then: pip install grumbo-1.0-xxxx.whl The first mile stone is achieved. Now we extend it, so it works other platforms as well. 
Same source distribution for Linux and Macos: To be able to install the same source distribution on Linux and MacOS, both versions of the shared library (for Linux and MacOS) must be present. An option is to add a suffix to the names of shared objects: e.g. having libplumbus.linux.so and libplumbis.macos.so. The right shared object can be picked in the setup.py depending on the platform: ... import platform def pick_library(): my_system = platform.system() if my_system == 'Linux': return "plumbus.linux" if my_system == 'Darwin': return "plumbus.macos" if my_system == 'Windows': return "plumbus" raise ValueError("Unknown platform: " + my_system) native_module = Extension( ... libraries = [pick_library()], ... ) Tweaking for Windows: On Windows, dynamic libraries are dlls and not shared objects, so there are some differences that need to be taken into account: when the C-extension is built, it needs plumbus.lib-file, which we need to put into the lib-subfolder. when the C-extension is loaded during the run time, it needs plumbus.dll-file. Windows has no notion of rpath, thus we need to put the dll right next to the extension, so it can be found (see also this SO-post for more details). That means the folder structure should be as follows: src/ grumbo/ __init__.py grumbo.c plumbus.dll # needed for Windows include/ plumbus.h lib/ libplumbus.linux.so # needed on Linux libplumbus.macos.so # needed on Macos plumbus.lib # needed on Windows setup.py There are also some changes in the setup.py. First, extending the package_data so dll and lib are picked up: ... kwargs = { ... 'package_data' : { 'grumbo': ['include/*.h', 'lib/*.so', 'lib/*.lib', '*.dll', # for windows ]}, } ... Second, rpath can only be used on Linux/MacOS, thus: def get_extra_link_args(): if platform.system() == 'Windows': return [] else: return ["-Wl,-rpath=$ORIGIN/lib/."] native_module = Extension( ... extra_link_args = get_extra_link_args(), ) That it! The complete setup file (you might want to add macro-definition or similar, which I've skipped): from setuptools import setup, Extension, find_packages import os import sys import sysconfig def path_to_build_folder(): """Returns the name of a distutils build directory""" f = "{dirname}.{platform}-{version[0]}.{version[1]}" dir_name = f.format(dirname='lib', platform=sysconfig.get_platform(), version=sys.version_info) return os.path.join('build', dir_name, 'grumbo') import platform def pick_library(): my_system = platform.system() if my_system == 'Linux': return "plumbus.linux" if my_system == 'Darwin': return "plumbus.macos" if my_system == 'Windows': return "plumbus" raise ValueError("Unknown platform: " + my_system) def get_extra_link_args(): if platform.system() == 'Windows': return [] else: return ["-Wl,-rpath=$ORIGIN/lib/."] native_module = Extension( name='grumbo.grumbo', sources = ["src/grumbo/grumbo.c"], include_dirs = [os.path.join(path_to_build_folder(),'include')], libraries = [pick_library()], library_dirs = [os.path.join(path_to_build_folder(), 'lib')], extra_link_args = get_extra_link_args(), ) kwargs = { 'name' : 'grumbo', 'version' : '1.0', 'ext_modules' : [native_module], 'packages':find_packages(where='src'), 'package_dir':{"": "src"}, 'package_data' : { 'grumbo': ['include/*.h', 'lib/*.so', 'lib/*.lib', '*.dll', # for windows ]}, } setup(**kwargs)
7
14
63,858,511
2020-9-12
https://stackoverflow.com/questions/63858511/using-threads-in-combination-with-asyncio
I was looking for a way to spawn different threads (in my actual program the number of threads can change during execution) to perform a endless-running operation which would block my whole application for (at worst) a couple of seconds during their run. Because of this, I'm using the standard thread class and asyncio (because other parts of my program are using it). This seems to work good and according to this thread it seems to be okay, however when searching for asynchronous threading and asyncio I'm often stumbling across the suggestion of using ProcessPoolExecutor (e. g. in this stackoverflow post). Now I'm wondering, if the following way is really good practice (or even dangerous)? class Scanner: def __init__(self): # Start a new Scanning Thread self.scan_thread = Thread(target=self.doScan, args=()) self.scan_thread.start() def doScan(self): print("Started scanning") loop = asyncio.new_event_loop() loop.run_until_complete(self.connection()) print("Stopped scanning") list_of_scanner = [] list_of_scanner.append(Scanner()) list_of_scanner.append(Scanner()) Background: I started questioning this myself, because my program started crashing when spawning threads, mostly with the error message RuntimeError: Task <Task pending ...> attached to a different loop. I know that this is not directly linked to the example I gave you, but I guess I started messing up my asyncio coroutines by using these threads. Edit For clarification I want to add, why I'm using this weird construct of asyncio and threads. I'm using this parts of the project hbldh/bleak The part which would run as a thread is basically this: async def connection(): x = await client.is_connected() async with BleakClient(address, loop=loop) as client: while x: x = await client.is_connected() log.info("Connected: {0}".format(x)) What is endlessScan() doing? The name is a bit misleading and it's called different in my code (I've now changed that now). The new name is connection() The whole purpose is to establish a link to Bluetooth Devices and basically listen to incoming data (like we would do when using sockets) This means that loop.run_until_complete(self.connection()) will NEVER exit, unless the Bluetooth devices disconnects. Why can't I create one single event loop? As said, when established a link, this function runs endlessly. Each connected device runs such an endless loop. I want to do this in background. My main application should never have to wait for the routine to finish and must be responsive under all circumstances. This for me justified the usage of threads in combination with asyncio Edit 2: Added my testing code based on @user4815162342 suggestion. The execution seems to work fine. 
import asyncio from threading import Thread, Event, Lock import random class Scanner: def __init__(self, id, loop): print("INIT'D %s" % id) self.id = id self.submit_async(self.update_raw_data(), loop) self.raw_data = "" self.event = Event() self.data_lock = Lock() @property def raw_data(self): with self.data_lock: return self._raw_data @raw_data.setter def raw_data(self, raw_data): self._raw_data = raw_data def submit_async(self, awaitable, loop): return asyncio.run_coroutine_threadsafe(awaitable, loop) async def update_raw_data(self): while True: with self.data_lock: self._raw_data = random.random() print("Waken up %s with %s" % (self.id, self._raw_data)) await asyncio.sleep(self.id) def _start_async(): loop = asyncio.new_event_loop() t = Thread(target=loop.run_forever) t.daemon = True t.start() return loop _loop = _start_async() def stop_async(): _loop.call_soon_threadsafe(_loop.stop) ble_devices = [Scanner(1, _loop), Scanner(2, _loop), Scanner(4, _loop)] # This code never executes... for dev in ble_devices: print(dev.raw_data)
I would recommend creating a single event loop in a background thread and having it service all your async needs. It doesn't matter that your coroutines never end; asyncio is perfectly capable of executing multiple such functions in parallel. For example: def _start_async(): loop = asyncio.new_event_loop() threading.Thread(target=loop.run_forever).start() return loop _loop = _start_async() # Submits awaitable to the event loop, but *doesn't* wait for it to # complete. Returns a concurrent.futures.Future which *may* be used to # wait for and retrieve the result (or exception, if one was raised) def submit_async(awaitable): return asyncio.run_coroutine_threadsafe(awaitable, _loop) def stop_async(): _loop.call_soon_threadsafe(_loop.stop) With these tools in place (and possibly in a separate module), you can do things like this: class Scanner: def __init__(self): submit_async(self.connection()) # ... # ... What about the advice to use ProcessPoolExecutor? Those apply to running CPU-bound code in parallel processes to avoid the GIL. If you are actually running async code, you shouldn't care about ProcessPoolExecutor. What about the advice to use ThreadPoolExecutor? A ThreadPoolExecutor is simply a thread pool useful for classic multi-threaded applications. In Python it is used primarily to make the program more responsive, not to make it faster. It allows you to run CPU-bound or blocking code in parallel with interactive code with neither getting starved. It won't make things faster due to the GIL.
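As a small usage sketch on top of these helpers: run_coroutine_threadsafe hands back a concurrent.futures.Future, so a caller on the main thread can optionally block for a result (the coroutine below is hypothetical, purely for illustration):
import asyncio

async def probe():                    # hypothetical coroutine
    await asyncio.sleep(0.1)
    return 42

fut = submit_async(probe())           # concurrent.futures.Future
print(fut.result(timeout=1))          # 42 -- briefly blocks the calling thread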
7
12
63,862,118
2020-9-12
https://stackoverflow.com/questions/63862118/what-is-the-meaning-of-s-in-python
I see something like %(asctime)s in the logging module. What is the meaning of %()s instead of %s? I only know that %s means "string", and I can't find any other information about %()s on the internet.
This is a string formatting feature when using the % form of Python string formatting to insert values into a string. The case you're looking at allows named values to be taken from a dictionary by providing the dictionary and specifying keys into that dictionary in the format string. Here's an example: values = {'city': 'San Francisco', 'state': 'California'} s = "I live in %(city)s, %(state)s" % values print(s) Result: I live in San Francisco, California
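Tying this back to the logging module from the question: the format string is applied against the log record's attribute dictionary in exactly this way, which is why names like %(asctime)s work. A minimal sketch (the timestamp shown is illustrative):
import logging

logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s", level=logging.INFO)
logging.info("hello")
# e.g. 2020-09-13 12:00:00,123 INFO hello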
15
16
63,856,540
2020-9-12
https://stackoverflow.com/questions/63856540/how-to-check-to-make-sure-all-items-in-a-list-are-of-a-certain-type
I want to enforce that all items in a list are of type x. What would be the best way to do this? Currently I am doing an assert like the following: a = [1,2,3,4,5] assert len(a) == len([i for i in a if isinstance(i, int)]) Where int is the type I'm trying to enforce here. Is there a better way to do this?
I think you are making it a little too complex. You can just use all(): a = [1,2,3,4,5] assert all(isinstance(i, int) for i in a) a = [1,2,3,4,5.5] assert all(isinstance(i, int) for i in a) # AssertionError
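One caveat: isinstance(True, int) is also True, because bool is a subclass of int. A minimal stricter sketch, assuming booleans should be rejected:
a = [1, 2, True, 4, 5]
assert all(isinstance(i, int) for i in a)   # passes, bool counts as int
assert all(type(i) is int for i in a)       # AssertionError, True is rejected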
17
13
63,853,854
2020-9-11
https://stackoverflow.com/questions/63853854/python-requirements-txt-specify-module-with-two-version-ranges
I'd like to specify the versions of tensorflow in a Python module. The agreeable versions are: (version >= 1.14.0 and version < 2.0) or (version >= 2.2) Does anyone know how to express this strange situation in a requirements.txt file? I believe there's a syntax for forbidding specific versions of a module, but I haven't been able to find it...
From PEP 440 Version Specifiers: tensorflow >=1.14.0,!=2.0.*,!= 2.1.* The comma , represents a logical and. Note that requirements.txt files are used for pinning a deployment, I would generally only expect to ever see == specifiers used in those files.
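A minimal sketch for sanity-checking such a specifier with the packaging library (the same machinery pip uses internally), assuming it is installed:
from packaging.specifiers import SpecifierSet

spec = SpecifierSet(">=1.14.0,!=2.0.*,!=2.1.*")
print("1.14.0" in spec)   # True
print("2.0.1" in spec)    # False
print("2.2.0" in spec)    # True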
7
10
63,840,851
2020-9-11
https://stackoverflow.com/questions/63840851/compare-current-row-value-to-previous-row-values
I have login history data from User A for a day. My requirement is that at any point in time the User A can have only one valid login. As in the samples below, the user may have attempted to login successfully multiple times, while his first session was still active. So, any logins that happened during the valid session needs to be flagged as duplicate. Example 1: In the first sample data below, while the user was still logged in from 00:12:38 to 01:00:02 (index 0), there is another login from the user at 00:55:14 to 01:00:02 (index 1). Similarly, if we compare index 2 and 3, we can see that the record at index 3 is duplicate login as per requirement. start_time end_time 0 00:12:38 01:00:02 1 00:55:14 01:00:02 2 01:00:02 01:32:40 3 01:00:02 01:08:40 4 01:41:22 03:56:23 5 18:58:26 19:16:49 6 20:12:37 20:52:49 7 20:55:16 22:02:50 8 22:21:24 22:48:50 9 23:11:30 00:00:00 Expected output: start_time end_time isDup 0 00:12:38 01:00:02 0 1 00:55:14 01:00:02 1 2 01:00:02 01:32:40 0 3 01:00:02 01:08:40 1 4 01:41:22 03:56:23 0 5 18:58:26 19:16:49 0 6 20:12:37 20:52:49 0 7 20:55:16 22:02:50 0 8 22:21:24 22:48:50 0 9 23:11:30 00:00:00 0 These duplicate records need to be updated to 1 at column isDup. Example 2: Another sample of data as below. Here, while the user was still logged in between 13:36:10 and 13:50:16, there were 3 additional sessions too that needs to be flagged. start_time end_time 0 13:32:54 13:32:55 1 13:36:10 13:50:16 2 13:37:54 13:38:14 3 13:46:38 13:46:45 4 13:48:59 13:49:05 5 13:50:16 13:50:20 6 14:03:39 14:03:49 7 15:36:20 15:36:20 8 15:46:47 15:46:47 Expected output: start_time end_time isDup 0 13:32:54 13:32:55 0 1 13:36:10 13:50:16 0 2 13:37:54 13:38:14 1 3 13:46:38 13:46:45 1 4 13:48:59 13:49:05 1 5 13:50:16 13:50:20 0 6 14:03:39 14:03:49 0 7 15:36:20 15:36:20 0 8 15:46:47 15:46:47 0 What's the efficient way to compare the start time of the current record with previous records?
Map the time like values in columns start_time and end_time to pandas TimeDelta objects and subtract 1 seconds from the 00:00:00 timedelta values in end_time column. c = ['start_time', 'end_time'] s, e = df[c].astype(str).apply(pd.to_timedelta).to_numpy().T e[e == pd.Timedelta(0)] += pd.Timedelta(days=1, seconds=-1) Then for each pair of start_time and end_time in the dataframe df mark the corresponding duplicate intervals using numpy broadcasting: m = (s[:, None] >= s) & (e[:, None] <= e) np.fill_diagonal(m, False) df['isDupe'] = (m.any(1) & ~df[c].duplicated(keep=False)).view('i1') # example 1 start_time end_time isDupe 0 00:12:38 01:00:02 0 1 00:55:14 01:00:02 1 2 01:00:02 01:32:40 0 3 01:00:02 01:08:40 1 4 01:41:22 03:56:23 0 5 18:58:26 19:16:49 0 6 20:12:37 20:52:49 0 7 20:55:16 22:02:50 0 8 22:21:24 22:48:50 0 9 23:11:30 00:00:00 0 # example 2 start_time end_time isDupe 0 13:32:54 13:32:55 0 1 13:36:10 13:50:16 0 2 13:37:54 13:38:14 1 3 13:46:38 13:46:45 1 4 13:48:59 13:49:05 1 5 13:50:16 13:50:20 0 6 14:03:39 14:03:49 0 7 15:36:20 15:36:20 0 8 15:46:47 15:46:47 0
8
2
63,851,453
2020-9-11
https://stackoverflow.com/questions/63851453/typeerror-singleton-array-arraytrue-cannot-be-considered-a-valid-collection
I want split the dataset that I have into test/train while also ensuring that the distribution of classified labels are same in both test/train. To do this I am using the stratify option but it is throwing an error as follows: X_full_train, X_full_test, Y_full_train, Y_full_test = train_test_split(X_values_full, Y_values, test_size = 0.33, random_state = 42, stratify = True) Error message: TypeError Traceback (most recent call last) in 19 20 ---> 21 X_full_train, X_full_test, Y_full_train, Y_full_test = train_test_split(X_values_full, Y_values, test_size = 0.33, random_state = 42, stratify = True) 22 23 ~/anaconda3/lib/python3.8/site-packages/sklearn/model_selection/_split.py in train_test_split(*arrays, **options) 2150 random_state=random_state) 2151 -> 2152 train, test = next(cv.split(X=arrays[0], y=stratify)) 2153 2154 return list(chain.from_iterable((_safe_indexing(a, train), ~/anaconda3/lib/python3.8/site-packages/sklearn/model_selection/_split.py in split(self, X, y, groups) 1744 to an integer. 1745 """ -> 1746 y = check_array(y, ensure_2d=False, dtype=None) 1747 return super().split(X, y, groups) 1748 ~/anaconda3/lib/python3.8/site-packages/sklearn/utils/validation.py in inner_f(*args, **kwargs) 71 FutureWarning) 72 kwargs.update({k: arg for k, arg in zip(sig.parameters, args)}) ---> 73 return f(**kwargs) 74 return inner_f 75 ~/anaconda3/lib/python3.8/site-packages/sklearn/utils/validation.py in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, estimator) 647 648 if ensure_min_samples > 0: --> 649 n_samples = _num_samples(array) 650 if n_samples < ensure_min_samples: 651 raise ValueError("Found array with %d sample(s) (shape=%s) while a" ~/anaconda3/lib/python3.8/site-packages/sklearn/utils/validation.py in _num_samples(x) 194 if hasattr(x, 'shape') and x.shape is not None: 195 if len(x.shape) == 0: --> 196 raise TypeError("Singleton array %r cannot be considered" 197 " a valid collection." % x) 198 # Check that shape is returning an integer or default to len TypeError: Singleton array array(True) cannot be considered a valid collection. When I try to do this without the stratify option it does not give me an error. I thought that this was because my Y labels don't have the minimum number of samples required to distribute the labels evenly between test/train but: pp.pprint(Counter(Y_values)) gives: Counter({13: 1084, 1: 459, 7: 364, 8: 310, 38: 295, 15: 202, 4: 170, 37: 105, 3: 98, 0: 85, 24: 79, 20: 78, 35: 76, 2: 75, 12: 74, 39: 72, 22: 71, 9: 63, 26: 59, 11: 55, 18: 55, 32: 53, 19: 53, 33: 53, 5: 52, 30: 42, 29: 42, 25: 41, 10: 39, 23: 38, 21: 38, 6: 38, 27: 37, 14: 36, 36: 36, 34: 34, 28: 33, 17: 31, 31: 30, 16: 30})
Per the sklearn documentation: stratifyarray-like, default=None If not None, data is split in a stratified fashion, using this as the class labels. Thus, it does not accept a boolean value like True or False, but the class labels themselves. So, you need to change: X_full_train, X_full_test, Y_full_train, Y_full_test = train_test_split(X_values_full, Y_values, test_size = 0.33, random_state = 42, stratify = True) to: X_full_train, X_full_test, Y_full_train, Y_full_test = train_test_split(X_values_full, Y_values, test_size = 0.33, random_state = 42, stratify = Y_values)
20
45
63,823,043
2020-9-10
https://stackoverflow.com/questions/63823043/custom-names-for-pytest-parametrized-tests
I've got a pytest test that's parametrized with a @pytest.mark.parametrize decorator using a custom function load_test_cases() that loads the test cases from a yaml file. class SelectTestCase: def __init__(self, test_case): self.select = test_case['select'] self.expect = test_case['expect'] def __str__(self): # also tried __repr__() # Attempt to print the 'select' attribute in "pytest -v" output return self.select def load_test_cases(path): with open(path, "rt") as f: test_cases = yaml.safe_load_all(f) return [ SelectTestCase(test_case) for test_case in test_cases ] @pytest.mark.parametrize("test_case", load_test_cases("tests/select-test-cases.yaml")) def test_select_prefixes(test_case): # .. run the test It works well except that the tests when run with pytest -v are displayed with test_case0, test_case1, etc parameters. tests/test_resolver.py::test_select_prefixes[test_case0] PASSED [ 40%] tests/test_resolver.py::test_select_prefixes[test_case1] PASSED [ 60%] tests/test_resolver.py::test_select_prefixes[test_case2] PASSED [ 80%] tests/test_resolver.py::test_select_prefixes[test_case3] PASSED [100%] I would love to see the select attribute displayed instead, e.g. tests/test_resolver.py::test_select_prefixes["some query"] PASSED [ 40%] tests/test_resolver.py::test_select_prefixes["another query"] PASSED [ 60%] I tried to add __str__() and __repr__() methods to the SelectTestCase class but it didn't make any difference. Any idea how to do it?
You can define how your parametrized test names look using the ids parameter. This can be a list of strings, or a function that takes the current parameter as argument and returns the ID to be shown in the test name. So, in your case it is sufficient to use str as that function, as you have already implemented __str__ for the parameters (which are of type SelectTestCase). If you just write: @pytest.mark.parametrize("test_case", load_test_cases("tests/select-test-cases.yaml"), ids=str) def test_select_prefixes(test_case): # .. run the test you will get the desired behavior, e.g. tests/test_resolver.py::test_select_prefixes[some query] PASSED [ 40%] tests/test_resolver.py::test_select_prefixes[another query] PASSED [ 60%] apart from the apostrophes (which you can add by adapting __str__ accordingly).
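Equivalently, ids accepts any callable that maps a parameter value to its label, so a small sketch without defining __str__ could pull out the select attribute directly (reusing load_test_cases from the question):
import pytest

@pytest.mark.parametrize(
    "test_case",
    load_test_cases("tests/select-test-cases.yaml"),
    ids=lambda tc: tc.select,   # label each case by its query
)
def test_select_prefixes(test_case):
    ...   # run the test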
7
10
63,826,328
2020-9-10
https://stackoverflow.com/questions/63826328/torch-nn-functional-vs-torch-nn-pytorch
While adding a loss in PyTorch, I have the same function in torch.nn.functional as well as in torch.nn. What is the difference between torch.nn.CrossEntropyLoss() and torch.nn.functional.cross_entropy?
Quoting the same text from the PyTorch discussion forum, where @Alban D has given an answer to a similar question (F.cross_entropy vs torch.nn.CrossEntropyLoss): There isn’t much difference for losses. The main difference between the nn.functional.xxx and the nn.Xxx is that one has a state and one does not. This means that for a linear layer, for example, if you use the functional version, you will need to handle the weights yourself (including passing them to the optimizer or moving them to the GPU), while the nn.Xxx version will do all of that for you with .parameters() or .to(device). For loss functions, as no parameters are needed (in general), you won’t find much difference. One exception: if you use cross entropy with some weighting between your classes, the nn.CrossEntropyLoss() module lets you give your weights only once, when creating the module, and then reuse it, whereas with the functional version you need to pass the weights every single time you use it.
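A minimal sketch of that last point about class weights, using made-up tensors purely for illustration:
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(8, 3)                  # dummy predictions for 3 classes
target = torch.randint(0, 3, (8,))
weights = torch.tensor([1.0, 2.0, 0.5])

criterion = nn.CrossEntropyLoss(weight=weights)      # weights stored once in the module
loss_module = criterion(logits, target)

loss_functional = F.cross_entropy(logits, target, weight=weights)  # weights passed on every call

print(torch.allclose(loss_module, loss_functional))  # True, same computation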
14
16
63,821,633
2020-9-10
https://stackoverflow.com/questions/63821633/pandas-version-is-not-updated-after-installing-a-new-version-on-databricks
I am trying to solve a problem of pandas when I run python3.7 code on databricks. The error is: ImportError: cannot import name 'roperator' from 'pandas.core.ops' (/databricks/python/lib/python3.7/site-packages/pandas/core/ops.py) the pandas version: pd.__version__ 0.24.2 I run from pandas.core.ops import roperator well on my laptop with pandas 0.25.1 So, I tried to upgrade pandas on databricks. %sh pip uninstall -y pandas Successfully uninstalled pandas-1.1.2 %sh pip install pandas==0.25.1 Collecting pandas==0.25.1 Downloading pandas-0.25.1-cp37-cp37m-manylinux1_x86_64.whl (10.4 MB) Requirement already satisfied: python-dateutil>=2.6.1 in /databricks/conda/envs/databricks-ml/lib/python3.7/site-packages (from pandas==0.25.1) (2.8.0) Requirement already satisfied: numpy>=1.13.3 in /databricks/conda/envs/databricks-ml/lib/python3.7/site-packages (from pandas==0.25.1) (1.16.2) Requirement already satisfied: pytz>=2017.2 in /databricks/conda/envs/databricks-ml/lib/python3.7/site-packages (from pandas==0.25.1) (2018.9) Requirement already satisfied: six>=1.5 in /databricks/conda/envs/databricks-ml/lib/python3.7/site-packages (from python-dateutil>=2.6.1->pandas==0.25.1) (1.12.0) Installing collected packages: pandas ERROR: After October 2020 you may experience errors when installing or updating packages. This is because pip will change the way that it resolves dependency conflicts. We recommend you use --use-feature=2020-resolver to test your packages with the new resolver before it becomes the default. mlflow 1.8.0 requires alembic, which is not installed. mlflow 1.8.0 requires prometheus-flask-exporter, which is not installed. mlflow 1.8.0 requires sqlalchemy<=1.3.13, which is not installed. sklearn-pandas 2.0.1 requires numpy>=1.18.1, but you'll have numpy 1.16.2 which is incompatible. sklearn-pandas 2.0.1 requires pandas>=1.0.5, but you'll have pandas 0.25.1 which is incompatible. sklearn-pandas 2.0.1 requires scikit-learn>=0.23.0, but you'll have scikit-learn 0.20.3 which is incompatible. sklearn-pandas 2.0.1 requires scipy>=1.4.1, but you'll have scipy 1.2.1 which is incompatible. Successfully installed pandas-0.25.1 When I run: import pandas as pd pd.__version__ it is still: 0.24.2 Did I missed something ? thanks
It's really recommended to install libraries via cluster initialization script. The %sh command is executed only on the driver node, but not on the executor nodes. And it also doesn't affect Python instance that is already running. The correct solution will be to use dbutils.library commands, like this: dbutils.library.installPyPI("pandas", "1.0.1") dbutils.library.restartPython() this will install library to all places, but it will require restarting of the Python to pickup new libraries. Also, although it's possible to specify only package name, it's recommended to specify version explicitly, as some of the library version may not be compatible with runtime. Also, consider usage of the newer runtimes where library versions are already updated - check the release notes for runtimes to figure out the library versions installed out of the box. For newer Databricks runtimes you can use new magic commands: %pip and %conda to install dependencies. See the documentation for more details.
9
8
63,821,179
2020-9-10
https://stackoverflow.com/questions/63821179/extract-images-from-pdf-in-high-resolution-with-python
I have managed to extract images from several PDF pages with the below code, but the resolution is quite low. Is there a way to adjust that? import fitz pdffile = "C:\\Users\\me\\Desktop\\myfile.pdf" doc = fitz.open(pdffile) for page_index in range(doc.pageCount): page = doc.loadPage(page_index) pix = page.getPixmap() output = "image_page_" + str(page_index) + ".jpg" pix.writePNG(output) I have also tried using the code here and updated if pix.n < 5" to "if pix.n - pix.alpha < 4 but this didn't output any images in my case.
As stated in this issue for PyMuPDF, you have to use a matrix: issue on Github. The example given is: zoom = 2 # zoom factor mat = fitz.Matrix(zoom, zoom) pix = page.getPixmap(matrix = mat, <...>) Indicated in the issue is also that the default resolution is 72 dpi if you don't use a matrix which likely explains your getting low resolution.
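Folded back into the loop from the question, a minimal sketch could look like this (keeping the question's older PyMuPDF method names; the default render is 72 dpi, so zoom = 2 roughly doubles the resolution):
import fitz

pdffile = "C:\\Users\\me\\Desktop\\myfile.pdf"
doc = fitz.open(pdffile)
zoom = 2                       # increase for higher resolution output
mat = fitz.Matrix(zoom, zoom)

for page_index in range(doc.pageCount):
    page = doc.loadPage(page_index)
    pix = page.getPixmap(matrix=mat)   # render at 72 dpi * zoom
    pix.writePNG("image_page_" + str(page_index) + ".png")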
13
16
63,820,683
2020-9-9
https://stackoverflow.com/questions/63820683/with-pre-commit-how-to-use-some-hooks-before-commit-and-others-before-push
Some hooks can take a while to run, and I would like to run those before I push, but not before each particular commit (for example, pylint can be a bit slow). I've seen the following: Question: Using hooks at different stages mesos-commits mailing list archives Feature request: pre-commit or pre-push only hooks But it's still not clear to be how I'm supposed to set this up. Here is what I have tried: default_stages: [commit] repos: - repo: https://github.com/pre-commit/pre-commit-hooks rev: v3.1.0 hooks: - id: end-of-file-fixer - id: trailing-whitespace - repo: https://github.com/psf/black rev: 19.10b0 hooks: - id: black stages: [push] From that I'm expecting the first couple of hooks to run before a commit (which they do), but I'm expecting black to run before pushing, which it doesn't. To test that I have created the following file: """This is a docstring.""" print('this should be formatted') Which is certainly not being formatted by black.
your configuration is correct, except that the whitespace hooks in pre-commit/pre-commit-hooks set stages themselves so they won't be affected by default_stages adjusting your configuration slightly: repos: - repo: https://github.com/pre-commit/pre-commit-hooks rev: v3.1.0 hooks: - id: end-of-file-fixer stages: [commit] - id: trailing-whitespace stages: [commit] - repo: https://github.com/psf/black rev: 19.10b0 hooks: - id: black stages: [push] next you'll need to make sure both of the hook scripts are installed You can install both the pre-commit and pre-push commit at the same time using: pre-commit install --hook-type pre-commit --hook-type pre-push or you can run them separately: pre-commit install # installs .git/hooks/pre-commit pre-commit install --hook-type pre-push # installs .git/hooks/pre-push note that the second command comes directly from the documentation on using pre-push disclaimer: I'm the author of pre-commit and pre-commit-hooks
25
38
63,818,045
2020-9-9
https://stackoverflow.com/questions/63818045/python-frozen-dataclass-immutable-with-object-setattr
I used namedtuples for immutable data structures until I came across dataclasses, which I prefer in my use-cases (not relevant to the question). Now I learned that they are not immutable! At least not strictly speaking. setattr(frozen_dc_obj, "prop", "value") raises an exception. ok. But why does object.__setattr__(frozen_dc_obj,..) work? Compared to namedtuple, where it raises an exception! from collections import namedtuple from dataclasses import dataclass NTTest = namedtuple("NTTest", "id") nttest = NTTest(1) setattr(nttest, "id", 2) # Exception object.__setattr__(nttest, "id", 2) # Exception @dataclass(frozen=True) class DCTest: id: int dctest = DCTest(1) setattr(dctest, "id", 2) # Exception object.__setattr__(dctest, "id", 2) # WORKS
namedtuple defines __slots__ = () and hence you can't set any attribute (it doesn't have a __dict__). Frozen dataclasses on the other hand perform a manual check in their __setattr__ method and raise an exception if it's a frozen instance. Compare the following: >>> class Foo: ... __slots__ = () ... >>> f = Foo() >>> f.__dict__ # doesn't exist, so object.__setattr__ won't work Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'Foo' object has no attribute '__dict__' >>> @dataclass(frozen=True) ... class Bar: ... pass ... >>> b = Bar() >>> b.__dict__ # this exists, so object.__setattr__ works {}
7
6
63,808,915
2020-9-9
https://stackoverflow.com/questions/63808915/is-there-any-way-to-define-a-python-function-with-leading-optional-arguments
As we know, optional arguments must be at the end of the arguments list, like below: def func(arg1, arg2, ..., argN=default) I saw some exceptions in the PyTorch package. For example, we can find this in torch.randint. As shown, it has a leading optional argument among its positional arguments! How is this possible? Docstring: randint(low=0, high, size, \*, generator=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor How can we define a function in a similar way as above?
A single function is not allowed to have only leading optional parameters: 8.6. Function definitions [...] If a parameter has a default value, all following parameters up until the “*” must also have a default value — this is a syntactic restriction that is not expressed by the grammar. Note this excludes keyword-only parameters, which never receive arguments by position. If desired, one can emulate such behaviour by manually implementing the argument to parameter matching. For example, one can dispatch based on arity, or explicitly match variadic arguments. def leading_default(*args): # match arguments to "parameters" *_, low, high, size = 0, *args print(low, high, size) leading_default(1, 2) # 0, 1, 2 leading_default(1, 2, 3) # 1, 2, 3 A simple form of dispatch achieves function overloading by iterating signatures and calling the first matching one. import inspect class MatchOverload: """Overload a function via explicitly matching arguments to parameters on call""" def __init__(self, base_case=None): self.cases = [base_case] if base_case is not None else [] def overload(self, call): self.cases.append(call) return self def __call__(self, *args, **kwargs): failures = [] for call in self.cases: try: inspect.signature(call).bind(*args, **kwargs) except TypeError as err: failures.append(str(err)) else: return call(*args, **kwargs) raise TypeError(', '.join(failures)) @MatchOverload def func(high, size): print('two', 0, high, size) @func.overload def func(low, high, size): print('three', low, high, size) func(1, 2, size=3) # three 1 2 3 func(1, 2) # two 0 1 2 func(1, 2, 3, low=4) # TypeError: too many positional arguments, multiple values for argument 'low'
7
2
63,809,051
2020-9-9
https://stackoverflow.com/questions/63809051/how-to-disable-pylint-warnings-and-messages-in-visual-studio-code
I am using a Mac and programming with Python on VS Code. After installing pylint, I had a bunch of warnings and messages. How do I disable these? I know about adding some lines to the pylintrc file, but I don't know where to find it or how to create it on a Mac.
Fully disable the linting Here is a link that explain how to do it : Disable Linting on VsCode. To do so, type Command + Shift + P (or Ctrl + Shift + P on PC) in VsCode. This will open a command prompt at the top of the window. Then type the command Python: Enable Linting, and select off. Another option is to choose no linter. To do so, open the command prompt with Command + Shift + P (or Ctrl + Shift + P on PC), type Python: Select Linter, and choose the option Disable Linting. Disable warnings, but keep errors : If you want to keep the errors, but disable only the warnings, you can also configure pylint directly from VsCode. Go to the menu File -> Preferences -> Settings (Or open directly with Command + , or Ctrl + ,). Then in the search box at the top of the window, search for pylint Args. Click on the button Add item and add the line --disable=W.
21
29
63,756,623
2020-9-5
https://stackoverflow.com/questions/63756623/how-to-remove-or-hide-y-axis-ticklabels-from-a-plot
I made a plot that looks like this. I want to turn off the ticklabels along the y axis, and to do that I am using plt.tick_params(labelleft=False, left=False) And now the plot looks like this. Even though the labels are turned off, the scale 1e67 still remains. Turning off the scale 1e67 would make the plot look better. How do I do that?
seaborn is used to draw the plot, but it's just a high-level API for matplotlib. The functions called to remove the y-axis labels and ticks are matplotlib methods. After creating the plot, use .set(). .set(yticklabels=[]) should remove tick labels. This doesn't work if you use .set_title(), but you can use .set(title='') Do not use sns.boxplot(...).set(xticklabels=[]) because, while this works, the object type is changed from matplotlib.axes._axes.Axes for sns.boxplot(...), to list. .set(ylabel=None) should remove the axis label. .tick_params(left=False) will remove the ticks. Similarly, for the x-axis: How to remove or hide x-axis labels from a plot Tested in python 3.11, pandas 1.5.2, matplotlib 3.6.2, seaborn 0.12.1 Example 1 import seaborn as sns import matplotlib.pyplot as plt # load data exercise = sns.load_dataset('exercise') pen = sns.load_dataset('penguins') # create figures fig, ax = plt.subplots(2, 1, figsize=(8, 8)) # plot data g1 = sns.boxplot(x='time', y='pulse', hue='kind', data=exercise, ax=ax[0]) g2 = sns.boxplot(x='species', y='body_mass_g', hue='sex', data=pen, ax=ax[1]) plt.show() Remove Labels fig, ax = plt.subplots(2, 1, figsize=(8, 8)) g1 = sns.boxplot(x='time', y='pulse', hue='kind', data=exercise, ax=ax[0]) g1.set(yticklabels=[]) # remove the tick labels g1.set(title='Exercise: Pulse by Time for Exercise Type') # add a title g1.set(ylabel=None) # remove the axis label g2 = sns.boxplot(x='species', y='body_mass_g', hue='sex', data=pen, ax=ax[1]) g2.set(yticklabels=[]) g2.set(title='Penguins: Body Mass by Species for Gender') g2.set(ylabel=None) # remove the y-axis label g2.tick_params(left=False) # remove the ticks plt.tight_layout() plt.show() Example 2 import numpy as np import matplotlib.pyplot as plt import pandas as pd # sinusoidal sample data sample_length = range(1, 1+1) # number of columns of frequencies rads = np.arange(0, 2*np.pi, 0.01) data = np.array([(np.cos(t*rads)*10**67) + 3*10**67 for t in sample_length]) df = pd.DataFrame(data.T, index=pd.Series(rads.tolist(), name='radians'), columns=[f'freq: {i}x' for i in sample_length]) df.reset_index(inplace=True) # plot fig, ax = plt.subplots(figsize=(8, 8)) ax.plot('radians', 'freq: 1x', data=df) # or skip the previous two lines and plot df directly # ax = df.plot(x='radians', y='freq: 1x', figsize=(8, 8), legend=False) Remove Labels # plot fig, ax = plt.subplots(figsize=(8, 8)) ax.plot('radians', 'freq: 1x', data=df) # or skip the previous two lines and plot df directly # ax = df.plot(x='radians', y='freq: 1x', figsize=(8, 8), legend=False) ax.set(yticklabels=[]) # remove the tick labels ax.tick_params(left=False) # remove the ticks
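About the leftover 1e67 from the question: that is the axis offset/scale text rather than a tick label, so it needs its own call. A minimal sketch, assuming ax is the matplotlib Axes being cleaned up:
ax.set(yticklabels=[])                         # remove tick labels
ax.tick_params(left=False)                     # remove ticks
ax.yaxis.get_offset_text().set_visible(False)  # hide the 1e67 offset/scale text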
13
27
63,738,389
2020-9-4
https://stackoverflow.com/questions/63738389/pandas-sampling-from-a-dataframe-according-to-a-target-distribution
I have a Pandas DataFrame containing a dataset D of instances drawn from a distribution x. x may be a uniform distribution for example. Now, I want to draw n samples from D, sampled according to some new target_distribution, such as a gaussian, that is in general different than x. How can I do this efficiently? Right now, I sample a value x, subset D such that it contains all x +- eps and sample from that. But this is quite slow when the datasets get bigger. People must have come up with a better solution. Maybe the solution is already good but could be implemented more efficiently? I could split x into strata, which would be faster, but is there a solution without this? My current code, which works fine but is slow (1 min for 30k/100k, but I have 200k/700k or so.) import numpy as np import pandas as pd import numpy.random as rnd from matplotlib import pyplot as plt from tqdm import tqdm n_target = 30000 n_dataset = 100000 x_target_distribution = rnd.normal(size=n_target) # In reality this would be x_target_distribution = my_dataset["x"].sample(n_target, replace=True) df = pd.DataFrame({ 'instances': np.arange(n_dataset), 'x': rnd.uniform(-5, 5, size=n_dataset) }) plt.hist(df["x"], histtype="step", density=True) plt.hist(x_target_distribution, histtype="step", density=True) def sample_instance_with_x(x, eps=0.2): try: return df.loc[abs(df["x"] - x) < eps].sample(1) except ValueError: # fallback if no instance possible return df.sample(1) df_sampled_ = [sample_instance_with_x(x) for x in tqdm(x_target_distribution)] df_sampled = pd.concat(df_sampled_) plt.hist(df_sampled["x"], histtype="step", density=True) plt.hist(x_target_distribution, histtype="step", density=True)
Rather than generating new points and finding a closest neighbor in df.x, define the probability that each point should be sampled according to your target distribution. You can use np.random.choice. A million points are sampled from df.x in a second or so for a gaussian target distribution like this: x = np.sort(df.x) f_x = np.gradient(x)*np.exp(-x**2/2) sample_probs = f_x/np.sum(f_x) samples = np.random.choice(x, p=sample_probs, size=1000000) sample_probs is the key quantity, as it can be joined back to the dataframe or used as an argument to df.sample, e.g.: # sample df rows without replacement df_samples = df["x"].sort_values().sample( n=1000, weights=sample_probs, replace=False, ) The result of plt.hist(samples, bins=100, density=True): We can also try gaussian distributed x, uniform target distribution x = np.sort(np.random.normal(size=100000)) f_x = np.gradient(x)*np.ones(len(x)) sample_probs = f_x/np.sum(f_x) samples = np.random.choice(x, p=sample_probs, size=1000000) The tails would look more uniform if we increased the bin size; this is an artifact that D is sparse at the edges. comments This approach basically computes the probability of sampling any x_i as the span of x associated with x_i and the probability density in the neighborhood: prob(x_i) ~ delta_x*rho(x_i) A more robust treatment would be to integrate rho over the span delta_x associated with each x_i. Also note that there will be error if the delta_x term is ignored, as can be seen below. It would be much worse if the original x_i wasn't approximately uniformly sampled:
8
11
63,793,662
2020-9-8
https://stackoverflow.com/questions/63793662/how-to-give-a-pydantic-list-field-a-default-value
I want to create a Pydantic model in which there is a list field, which left uninitialized has a default value of an empty list. Is there an idiomatic way to do this? For Python's built-in dataclass objects you can use field(default_factory=list), however in my own experiments this seems to prevent my Pydantic models from being pickled. A naive implementation might be, something like this: from pydantic import BaseModel class Foo(BaseModel): defaulted_list_field: Sequence[str] = [] # Bad! But we all know not to use a mutable value like the empty-list literal as a default. So what's the correct way to give a Pydantic list-field a default value?
For pydantic you can use mutable default value, like: class Foo(BaseModel): defaulted_list_field: List[str] = [] f1, f2 = Foo(), Foo() f1.defaulted_list_field.append("hey!") print(f1) # defaulted_list_field=['hey!'] print(f2) # defaulted_list_field=[] It will be handled correctly (deep copy) and each model instance will have its own empty list. See "Fields with non-hashable default values" from the documentation. Pydantic also has default_factory parameter. In the case of an empty list, the result will be identical, it is rather used when declaring a field with a default value, you may want it to be dynamic (i.e. different for each model). from typing import List from pydantic import BaseModel, Field from uuid import UUID, uuid4 class Foo(BaseModel): defaulted_list_field: List[str] = Field(default_factory=list) uid: UUID = Field(default_factory=uuid4)
78
120
63,763,375
2020-9-6
https://stackoverflow.com/questions/63763375/python3-sqlalchemy-delete-duplicates
I'm using SQLAlchemy to manage a database and I'm trying to delete all rows that contain duplicates. The table has an id (primary key) and domain name. Example: ID| Domain 1 | example-1.com 2 | example-2.com 3 | example-1.com In this case I want to delete 1 instance of example-1.com. Sometimes I will need to delete more than 1 but in general the database should not have a domain more than once and if it does, only the first row should be kept and the others should be deleted.
Assuming your model looks something like this: import sqlalchemy as sa from sqlalchemy import orm Base = orm.declarative_base() class Domain(Base): __tablename__ = 'domain_names' id = sa.Column(sa.Integer, primary_key=True) domain = sa.Column(sa.String) Then you can delete the duplicates like this: # Create a query that identifies the row for each domain with the lowest id inner_q = session.query(sa.func.min(Domain.id)).group_by(Domain.domain) aliased = sa.alias(inner_q) # Select the rows that do not match the subquery q = session.query(Domain).filter(~Domain.id.in_(aliased)) # Delete the unmatched rows (SQLAlchemy generates a single DELETE statement from this loop) for domain in q: session.delete(domain) session.commit() # Show remaining rows for domain in session.query(Domain): print(domain) print() If you are not using the ORM, the core equivalent is: meta = sa.MetaData() domains = sa.Table('domain_names', meta, autoload=True, autoload_with=engine) inner_q = sa.select([sa.func.min(domains.c.id)]).group_by(domains.c.domain) aliased = sa.alias(inner_q) with engine.connect() as conn: conn.execute(domains.delete().where(~domains.c.id.in_(aliased))) This answer is based on the SQL provided in this answer. There are other ways of deleting duplicates, which you can see in the other answers on the link, or by googling "sql delete duplicates" or similar.
6
4
63,753,584
2020-9-5
https://stackoverflow.com/questions/63753584/django-rest-framework-list-object-has-no-attribute-values
I have the code and error stacktrace below. I am trying to access localhost:8000/fundamentals/ but I get the error 'list' object has no attribute 'values' error web_1 | Traceback (most recent call last): web_1 | File "/usr/local/lib/python3.7/site-packages/django/core/handlers/exception.py", line 47, in inner web_1 | response = get_response(request) web_1 | File "/usr/local/lib/python3.7/site-packages/django/core/handlers/base.py", line 202, in _get_response web_1 | response = response.render() web_1 | File "/usr/local/lib/python3.7/site-packages/django/template/response.py", line 105, in render web_1 | self.content = self.rendered_content web_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/response.py", line 70, in rendered_content web_1 | ret = renderer.render(self.data, accepted_media_type, context) web_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/renderers.py", line 724, in render web_1 | context = self.get_context(data, accepted_media_type, renderer_context) web_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/renderers.py", line 655, in get_context web_1 | raw_data_post_form = self.get_raw_data_form(data, view, 'POST', request) web_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/renderers.py", line 563, in get_raw_data_form web_1 | data = serializer.data.copy() web_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/serializers.py", line 562, in data web_1 | ret = super().data web_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/serializers.py", line 264, in data web_1 | self._data = self.get_initial() web_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/serializers.py", line 412, in get_initial web_1 | for field in self.fields.values() web_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/serializers.py", line 413, in <listcomp> web_1 | if not field.read_only web_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/serializers.py", line 412, in get_initial web_1 | for field in self.fields.values() web_1 | AttributeError: 'list' object has no attribute 'values' web_1 | [05/Sep/2020 11:42:59] "GET /fundamentals/ HTTP/1.1" 500 99118 models/fundamentals.py 7 class Fundamentals(models.Model): 8 balance_sheet = models.ForeignKey(BalanceSheet, on_delete=models.CASCADE) 9 ticker = models.ForeignKey(Stock, on_delete=models.CASCADE) 10 slug = models.SlugField(default="", editable=False) 11 12 def save(self, *args, **kwargs): 13 value = self.ticker 14 self.slug = slugify(value, allow_unicode=True) 15 super().save(*args, **kwargs) 16 17 def __str__(self): 18 return {f"{self.ticker} fundamentals"} 19 20 class Meta: 21 verbose_name = "fundamentals" 22 verbose_name_plural = "fundamentals" views.py 13 class FundamentalsViewSet(viewsets.ModelViewSet): 14 queryset = Fundamentals.objects.all() 15 serializer_class = FundamentalsSerializer 16 # lookup_url_kwarg = "ticker" 17 # lookup_field = "ticker__iexact" 18 19 def get_balance_sheets(self, requests, *args, **kwargs): 20 bs_qs = BalanceSheet.objects.filter(ticker=self.get_object()) 21 serializer = BalanceSheetSerializer(bs_qs) 22 return Response(serializer.data) serializers.py 307 class BalanceSheetSerializer(serializers.ModelSerializer): 308 assets = AssetsSerializer() 309 liab_and_stock_equity = LiabAndStockEquitySerializer() 310 311 fields = [ 312 "ticker", 313 "periodicity", 314 "assets", 315 "liab_and_stock_equity", 316 "end_date", 317 "start_date", 318 ] 321 class 
FundamentalsSerializer(serializers.ModelSerializer): 322 balance_sheet = BalanceSheetSerializer() 323 324 class Meta: 325 model = Fundamentals 326 fields = ["balance_sheet"] urls.py 17 router = DefaultRouter() 18 router.register(r"fundamentals", views.FundamentalsViewSet) 19 urlpatterns = router.urls
The issue here is with the BalanceSheetSerializer. The fields must be defined within class Meta instead of defining it as class variable. class BalanceSheetSerializer(serializers.ModelSerializer): class Meta: fields = [your_fields]
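A minimal sketch of the corrected serializer using the question's own field names (the model line is an assumption, since the question never shows which model this serializer targets):
class BalanceSheetSerializer(serializers.ModelSerializer):
    assets = AssetsSerializer()
    liab_and_stock_equity = LiabAndStockEquitySerializer()

    class Meta:
        model = BalanceSheet  # assumed from the question's views.py
        fields = [
            "ticker",
            "periodicity",
            "assets",
            "liab_and_stock_equity",
            "end_date",
            "start_date",
        ]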
9
22
63,710,551
2020-9-2
https://stackoverflow.com/questions/63710551/how-to-format-the-y-or-x-axis-labels-in-a-seaborn-facetgrid
I want to format y-axis labels in a seaborn FacetGrid plot, with a number of decimals, and/or with some text added. import seaborn as sns import matplotlib.pyplot as plt sns.set(style="ticks") exercise = sns.load_dataset("exercise") g = sns.catplot(x="time", y="pulse", hue="kind", col="diet", data=exercise) #g.xaxis.set_major_formatter(ticker.FuncFormatter(lambda x, pos: '{:,.2f}'.format(x) + 'K')) #g.set(xticks=['a','try',0.5]) g.yaxis.set_major_formatter(ticker.FuncFormatter(lambda x, pos: '{:,.2f}'.format(x) + 'K')) plt.show() Inspired from How to format seaborn/matplotlib axis tick labels from number to thousands or Millions? (125,436 to 125.4K) ax.xaxis.set_major_formatter(ticker.FuncFormatter(lambda x, pos: '{:,.2f}'.format(x) + 'K')) It results in the following error. AttributeError: 'FacetGrid' object has no attribute 'xaxis'
xaxis and yaxis are attributes of the plot axes, for a seaborn.axisgrid.FacetGrid type. In the linked answer, the type is matplotlib.axes._subplots.AxesSubplot p in the lambda expression is the tick label number. seaborn: Building structured multi-plot grids matplotlib: Creating multiple subplots Tested and working with the following versions: matplotlib v3.3.4 seaborn v0.11.1 import pandas as pd import seaborn as sns import matplotlib.pyplot as plt import matplotlib.ticker as tkr sns.set(style="ticks") # load data exercise = sns.load_dataset("exercise") # plot data g = sns.catplot(x="time", y="pulse", hue="kind", col="diet", data=exercise) # format the labels with f-strings for ax in g.axes.flat: ax.yaxis.set_major_formatter(tkr.FuncFormatter(lambda y, p: f'{y:.2f}: Oh baby, baby')) ax.xaxis.set_major_formatter(tkr.FuncFormatter(lambda x, p: f'{x}: Is that your best')) As noted in a comment by Patrick FitzGerald, the following code, without using tkr.FuncFormatter, also works to generate the previous plot. See matplotlib.axis.Axis.set_major_formatter # format the labels with f-strings for ax in g.axes.flat: ax.yaxis.set_major_formatter(lambda y, p: f'{y:.2f}: Oh baby, baby') ax.xaxis.set_major_formatter(lambda x, p: f'{x}: Is that your best')
13
22
63,792,528
2020-9-8
https://stackoverflow.com/questions/63792528/boxplot-custom-width-in-seaborn
I am trying to plot boxplots in seaborn whose widths depend upon the log of the value of x-axis. I am creating the list of widths and passing it to the widths=widths parameter of seaborn.boxplot. However, I am getting that raise ValueError(datashape_message.format("widths")) ValueError: List of boxplot statistics and `widths` values must have same the length When I debugged and checked there is just one dict in boxplot statistics, whereas I have 8 boxplots. Cannot Exactly figure out where the problem lies. I am using pandas data frame and seaborn for plotting.
Seaborn's boxplot doesn't seem to understand the widths= parameter. Here is a way to create a boxplot per x value via matplotlib's boxplot which does accept the width= parameter. The code below supposes the data is organized in a panda's dataframe. from matplotlib import pyplot as plt import numpy as np import pandas as pd import seaborn as sns df = pd.DataFrame({'x': np.random.choice([1, 3, 5, 8, 10, 30, 50, 100], 500), 'y': np.random.normal(750, 20, 500)}) xvals = np.unique(df.x) positions = range(len(xvals)) plt.boxplot([df[df.x == xi].y for xi in xvals], positions=positions, showfliers=False, boxprops={'facecolor': 'none'}, medianprops={'color': 'black'}, patch_artist=True, widths=[0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]) means = [np.mean(df[df.x == xi].y) for xi in xvals] plt.plot(positions, means, '--k*', lw=2) # plt.xticks(positions, xvals) # not needed anymore, as the xticks are set by the swarmplot sns.swarmplot('x', 'y', data=df) plt.show() A related question asked how to set the box's widths depending on group size. The widths can be calculated as some maximum width multiplied by each group's size compared to the size of the largest group. from matplotlib import pyplot as plt import numpy as np import pandas as pd import seaborn as sns y_true = np.random.normal(size=100) y_pred = y_true + np.random.normal(size=100) df = pd.DataFrame({'y_true': y_true, 'y_pred': y_pred}) df['y_true_bin'] = pd.cut(df['y_true'], range(-3, 4)) sns.set() fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 5)) sns.boxplot(x='y_true_bin', y='y_pred', data=df, color='lightblue', ax=ax1) bins, groups = zip(*df.groupby('y_true_bin')['y_pred']) lengths = np.array([len(group) for group in groups]) max_width = 0.8 ax2.boxplot(groups, widths=max_width * lengths / lengths.max(), patch_artist=True, boxprops={'facecolor': 'lightblue'}) ax2.set_xticklabels(bins) ax2.set_xlabel('y_true_bin') ax2.set_ylabel('y_pred') plt.tight_layout() plt.show()
9
5
63,748,542
2020-9-4
https://stackoverflow.com/questions/63748542/convert-docx-bytestream-to-pdf-bytestream-python
I currently have a program that generates a .docx document using the python-docx library. Upon completing the building of the .docx file I save it into a Bytestream as so file_stream = io.BytesIO() document.save(file_stream) file_stream.seek(0) Now, I need to convert this word document into a PDF. I have looked at a few different libraries for conversion such as docx2pdf or even doing it manually using comtypes as so import sys import os import comtypes.client wdFormatPDF = 17 in_file = "Input_file_path.docx" out_file = "output_file_path.pdf" word = comtypes.client.CreateObject('Word.Application') doc = word.Documents.Open(in_file) doc.SaveAs(out_file, FileFormat=wdFormatPDF) doc.Close() word.Quit() The problem is, I need to do this conversion in memory and cannot physically save the DOCX or the PDF to the machine. Every converter I've seen requires a filepath to the physical document on the machine and I do not have that. Is there a way I can convert the DOCX filestream into a PDF stream just in memory? Thanks
This method is a little convoluted, but it works entirely in memory, and you get the option to add custom CSS to style the final document. Convert the DOCX bytestream to HTML using mammoth, and the resulting HTML to PDF using pdfkit. Here's an example # create a dummy docx file from docx import Document document = Document() document.add_paragraph('Lorem ipsum dolor sit amet.') # create a bytestream import io file_stream = io.BytesIO() document.save(file_stream) file_stream.seek(0) # convert the docx to html import mammoth result = mammoth.convert_to_html(file_stream) # >>> result.value # >>> '<p>Lorem ipsum dolor sit amet.</p>' # convert html to pdf import pdfkit pdf = pdfkit.from_string(result.value) If you want to output the stream to a file, just do with open('test.pdf','wb') as file: file.write(pdf)
8
7
63,757,304
2020-9-5
https://stackoverflow.com/questions/63757304/resizing-video-using-opencv-and-saving-it
I'm trying to re-size the video using opencv and then save it back to my system.The code works and does not give any error but output video file is corrupted. The fourcc I am using is mp4v works well with .mp4 but still the output video is corrupted. Need Help. import numpy as np import cv2 import sys import re vid="" if len(sys.argv)==3: vid=sys.argv[1] compress=int(sys.argv[2]) else: print("File not mentioned or compression not given") exit() if re.search('.mp4',vid): print("Loading") else: exit() cap = cv2.VideoCapture(0) ret, frame = cap.read() def rescale_frame(frame, percent=75): width = int(frame.shape[1] * percent/ 100) height = int(frame.shape[0] * percent/ 100) dim = (width, height) return cv2.resize(frame, dim, interpolation =cv2.INTER_AREA) FPS= 15.0 FrameSize=(frame.shape[1], frame.shape[0]) fourcc = cv2.VideoWriter_fourcc(*'mp4v') out = cv2.VideoWriter('Video_output.mp4', fourcc, FPS, FrameSize, 0) while(cap.isOpened()): ret, frame = cap.read() # check for successfulness of cap.read() if not ret: break rescaled_frame=rescale_frame(frame,percent=compress) # Save the video out.write(rescaled_frame) cv2.imshow('frame',rescaled_frame) if cv2.waitKey(1) & 0xFF == ord('q'): break cap.release() out.release() cv2.destroyAllWindows()
The problem is the VideoWriter initialization. You initialized: out = cv2.VideoWriter('Video_output.mp4', fourcc, FPS, FrameSize, 0) The last parameter 0 means, isColor = False. You are telling, you are going to convert frames to the grayscale and then saves. But there is no conversion in your code. Also, you are resizing each frame in your code based on compress parameter. If I use the default compress parameter: cap = cv2.VideoCapture(0) if cap.isOpened(): ret, frame = cap.read() rescaled_frame = rescale_frame(frame) (h, w) = rescaled_frame.shape[:2] fourcc = cv2.VideoWriter_fourcc(*'mp4v') writer = cv2.VideoWriter('Video_output.mp4', fourcc, 15.0, (w, h), True) else: print("Camera is not opened") Now we have initialized the VideoWriter with the desired dimension. Full Code: import time import cv2 def rescale_frame(frame_input, percent=75): width = int(frame_input.shape[1] * percent / 100) height = int(frame_input.shape[0] * percent / 100) dim = (width, height) return cv2.resize(frame_input, dim, interpolation=cv2.INTER_AREA) cap = cv2.VideoCapture(0) if cap.isOpened(): ret, frame = cap.read() rescaled_frame = rescale_frame(frame) (h, w) = rescaled_frame.shape[:2] fourcc = cv2.VideoWriter_fourcc(*'mp4v') writer = cv2.VideoWriter('Video_output.mp4', fourcc, 15.0, (w, h), True) else: print("Camera is not opened") while cap.isOpened(): ret, frame = cap.read() rescaled_frame = rescale_frame(frame) # write the output frame to file writer.write(rescaled_frame) cv2.imshow("Output", rescaled_frame) key = cv2.waitKey(1) & 0xFF if key == ord("q"): break cv2.destroyAllWindows() cap.release() writer.release() Possible Question: I don't want to change my VideoWriter parameters, what should I do? Answer: Then you need to change your frames, to the gray image: while cap.isOpened(): # grab the frame from the video stream and resize it to have a # maximum width of 300 pixels ret, frame = cap.read() frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
6
9
63,752,613
2020-9-5
https://stackoverflow.com/questions/63752613/asyncio-improperly-warns-about-streams-objects-are-garbage-collected-call-stre
I was implementing asynchronous MySQL query execution using python3.8's inbuilt asyncio package and an installed aiomysql package. Even though I have closed properly all the open cursor and connection, an error message keep on appearing on my console as follows. An open stream object is being garbage collected; call "stream.close()" explicitly. A summary of the code is given below... #db.py import asyncio class AsyncMysqlSession: def __init__(self, loop, db_settings=DEFAULTDB): self.db_settings = db_settings self.loop = loop async def __aenter__(self): self.conn = await aiomysql.connect(host=self.db_settings['HOST'], port=self.db_settings['PORT'], user=self.db_settings['USER'], password=self.db_settings['PASSWORD'], db=self.db_settings['NAME'], loop=self.loop) self.cursor = await self.conn.cursor(aiomysql.cursors.DictCursor) return self async def __aexit__(self, exception, value, traceback): await self.cursor.close() self.conn.close() async def query(self, sql, *args): await self.cursor.execute(sql, values) await self.conn.commit() rows = await self.cursor.fetchall() return list(rows) async def aiomysql_query(sql, *args): """ Mysql asynchronous connection wrapper """ loop = asyncio.get_event_loop() async with AsyncMysqlSession(loop) as mysql: db_result = await mysql.query(sql, *args) return db_result aiomysql_query is imported in another file #views.py import asyncio ..... async def main(): ..... ..... await aiomysql_query(sql1, *args1) await aiomysql_query(sql2, *args2) ..... asyncio.run(main()) .... Am I doing something wrong here (?) or is it improperly shows the error message?. Any lead to resolve this issue will be appreciated... TIA!!
It seems like you may have just forgotten to close the event loop—in addition to await conn.wait_closed(), which @VPfB advised above. You must close the event loop when manually using lower level method calls such as asyncio.get_event_loop(). Specifically, self.loop.close() must be called. #db.py import asyncio class AsyncMysqlSession: def __init__(self, loop, db_settings=DEFAULTDB): self.db_settings = db_settings self.loop = loop async def __aenter__(self): self.conn = await aiomysql.connect(host=self.db_settings['HOST'], port=self.db_settings['PORT'], user=self.db_settings['USER'], password=self.db_settings['PASSWORD'], db=self.db_settings['NAME'], loop=self.loop) self.cursor = await self.conn.cursor(aiomysql.cursors.DictCursor) return self async def __aexit__(self, exception, value, traceback): await self.cursor.close() self.conn.close() self.loop.close() async def query(self, sql, *args): await self.cursor.execute(sql, values) await self.conn.commit() rows = await self.cursor.fetchall() return list(rows) async def aiomysql_query(sql, *args): """ Mysql asynchronous connection wrapper """ loop = asyncio.get_event_loop() async with AsyncMysqlSession(loop) as mysql: db_result = await mysql.query(sql, *args) return db_result References https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.shutdown_asyncgens https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.get_event_loop
7
1
63,737,969
2020-9-4
https://stackoverflow.com/questions/63737969/how-to-find-pid-of-a-process-by-python
friends: I am running a script in Linux: I can use the ps command get the process. ps -ef | grep "python test09.py&" but, how can I know the pid of the running script by given key word python test09.py& using python code? EDIT-01 I mean, I want to use the python script to find the running script python test09.py&'s pid. EDIT-02 When I run anali's method I will get this error: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/psutil/_psosx.py", line 363, in catch_zombie yield File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/psutil/_psosx.py", line 429, in cmdline return cext.proc_cmdline(self.pid) ProcessLookupError: [Errno 3] No such process (originated from sysctl) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "test11.py", line 29, in <module> print(get_pids_by_script_name('test09.py')) File "test11.py", line 15, in get_pids_by_script_name cmdline = proc.cmdline() File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/psutil/__init__.py", line 694, in cmdline return self._proc.cmdline() File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/psutil/_psosx.py", line 342, in wrapper return fun(self, *args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/psutil/_psosx.py", line 429, in cmdline return cext.proc_cmdline(self.pid) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/contextlib.py", line 77, in __exit__ self.gen.throw(type, value, traceback) File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/site-packages/psutil/_psosx.py", line 376, in catch_zombie raise AccessDenied(proc.pid, proc._name) psutil.AccessDenied: psutil.AccessDenied (pid=1)
If you just want the pid of the current script, then use os.getpid: import os pid = os.getpid() However, below is an example of using psutil to find the pids of python processes running a named python script. This could include the current process, but the main use case is for examining other processes, because for the current process it is easier just to use os.getpid as shown above. sleep.py #!/usr/bin/env python import time time.sleep(100) get_pids.py import os import psutil def get_pids_by_script_name(script_name): pids = [] for proc in psutil.process_iter(): try: cmdline = proc.cmdline() pid = proc.pid except psutil.NoSuchProcess: continue if (len(cmdline) >= 2 and 'python' in cmdline[0] and os.path.basename(cmdline[1]) == script_name): pids.append(pid) return pids print(get_pids_by_script_name('sleep.py')) Running it: $ chmod +x sleep.py $ cp sleep.py other.py $ ./sleep.py & [3] 24936 $ ./sleep.py & [4] 24937 $ ./other.py & [5] 24938 $ python get_pids.py [24936, 24937]
6
8
63,779,259
2020-9-7
https://stackoverflow.com/questions/63779259/enviroment-variables-in-pyenv-virtualenv
I have created a virtual environment with pyenv virtualenv 3.5.9 projectname for developing a django project. How can I set environment variables for my code to use? I tried to add the environment variable DATABASE_USER in /Users/developer/.pyenv/versions/projectname/bin/activate like this: export DATABASE_USER="dbuser" When I tried to echo $DATABASE_USER an empty string gets printed. Tried to install zsh-autoenv And now I can echo $DATABASE_USER and get the value set in the .autoenv.zsh file. But I can't seem to get the environment variable to be available to my django code: If I try to os.getenv('DATABASE_USER', '') in the python shell inside the virtualenv, I get '' What could be wrong? Is the zsh-autoenv variables just available for the zsh shell and not python manage.py shell ?
I was wondering a similar thing, and I stumbled across a reddit thread where someone else had asked the same question, and eventually followed up noting some interesting finds. As you noticed, pyenv doesn't seem to actually use the bin/activate file. They didn't say what the activation method is, but like you, adding environment variables there yielded no results. In the end, they wound up installing autoenv, which bills itself as directory-based environments. It allows you to create an .env file in your directory, and when you cd to that directory, it runs the .env file. You can use it for environment variables, or you could add anything else to it. I noticed on the autoenv page that they say you should probably use direnv instead, as it has better features and is higher quality software. Neither of these are Python or pyenv specific, and if you call your python code from outside of the directory, they may not work. Since you're using pyenv, you're probably running your code from within the directory anyway, so I think there's a good chance either one could work.
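As an illustration of the direnv route (the variable name is reused from the question; the layout itself is an assumption): put an .envrc file in the project root containing
export DATABASE_USER="dbuser"
run direnv allow once in that directory, and the variable becomes visible to anything started from there, including the Django project:
import os
database_user = os.environ.get('DATABASE_USER', '')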
7
4
63,775,893
2020-9-7
https://stackoverflow.com/questions/63775893/how-to-get-an-amazon-ecr-container-uri-for-a-specific-model-image-in-sagemaker
I want to know if it's possible to get an Amazon ECR container URI for a specific image programmatically (using the AWS CLI or Python). For example, if I need the URL for the latest linear-learner (built-in model) image for the eu-central-1 region. Expected result: 664544806723.dkr.ecr.eu-central-1.amazonaws.com/linear-learner:latest EDIT: I have found the solution with get_image_uri. It looks like this function will be deprecated and I don't know how to use ImageURIProvider instead.
The newer versions of SageMaker SDK have a more centralized API for getting the URIs: import sagemaker sagemaker.image_uris.retrieve("linear-learner", "eu-central-1") which gives the expected result: 664544806723.dkr.ecr.eu-central-1.amazonaws.com/linear-learner:1
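If a particular algorithm version is needed rather than the region default, retrieve() also accepts a version argument; the exact values below are illustrative, not taken from the answer:
import sagemaker
uri = sagemaker.image_uris.retrieve(framework="linear-learner", region="eu-central-1", version="1")
print(uri)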
6
4
63,783,154
2020-9-7
https://stackoverflow.com/questions/63783154/how-to-type-hint-a-matplotlib-axes-subplots-axessubplots-object-in-python3
I was wondering how is the "best" way to type-hint the axis-object of matplotlib-subplots. running from matplotlib import pyplot as plt f, ax = plt.subplots() print(type(ax)) returns <class 'matplotlib.axes._subplots.AxesSubplot'> and running from matplotlib import axes print(type(axes._subplots)) print(type(axes._subplots.AxesSubplot)) yields <class 'module'> AttributeError: module 'matplotlib.axes._subplots' has no attribute 'AxesSubplots' So far a solution for type-hinting that works is as follows: def multi_rocker( axy: type(plt.subplots()[1]), y_trues: np.ndarray, y_preds: np.ndarray, ): """ One-Vs-All ROC-curve: """ fpr = dict() tpr = dict() roc_auc = dict() n_classes = y_trues.shape[1] wanted = list(range(n_classes)) for i,x in enumerate(wanted): fpr[i], tpr[i], _ = roc_curve(y_trues[:, i], y_preds[:, i]) roc_auc[i] = round(auc(fpr[i], tpr[i]),2) extra = 0 for i in range(n_classes): axy.plot(fpr[i], tpr[i],) return And the problem with it is that it isn't clear enough for code-sharing
As described in Type hints for context manager : import matplotlib.pyplot as plt def plot_func(ax: plt.Axes): ...
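A slightly fuller sketch, assuming the usual plt.subplots() workflow; matplotlib.axes.Axes is the same class that plt.Axes points to, so either spelling works:
import matplotlib.pyplot as plt
from matplotlib.axes import Axes

def multi_rocker(axy: Axes) -> None:
    # the annotation covers the axes objects returned by plt.subplots()
    axy.plot([0, 1], [0, 1])

fig, ax = plt.subplots()
multi_rocker(ax)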
31
34
63,702,536
2020-9-2
https://stackoverflow.com/questions/63702536/jupyter-starting-a-kernel-in-a-docker-container
I want to switch my notebook easily between different kernels. One use case is to quickly test a piece of code in tensorflow 2, 2.2, 2.3, and there are many similar use cases. However, I prefer to define my environments as Docker images these days, rather than as different (conda) environments. Now I know that you can start jupyter in a container, but that is not what I want. I would like to just click Kernel > use kernel > TF 2.2 (docker), and let jupyter connect to a kernel running in this container. Is something like that around? I have used livy to connect to remote spark kernels via ssh, so it feels like this should be possible.
Full disclosure: I'm the author of Dockernel. By using Dockernel Put the following in a file called Dockerfile, in a separate directory. FROM python:3.7-slim-buster RUN pip install --upgrade pip ipython ipykernel CMD python -m ipykernel_launcher -f $DOCKERNEL_CONNECTION_FILE Then issue the following commands: docker build --tag my-docker-image /path/to/the/dockerfile/dir pip install dockernel dockernel install my-docker-image You should now see "my-docker-image" option when creating a new notebook in Jupyter. Manually It is possible to do this kind of thing without much additional implementation/tooling, it just requires a bit of manual work: Use the following Dockerfile: FROM python:3.7-slim-buster RUN pip install --upgrade pip ipython ipykernel Build the image using docker build --tag my-docker-image . Create a directory for your kernelspec, e.g. ~/.local/share/jupyter/kernels/docker_test (%APPDATA%\jupyter\kernels\docker_test on Windows) Put the following kernelspec into kernel.json file in the directory you created (Windows users might need to change argv a bit) { "argv": [ "/usr/bin/docker", "run", "--network=host", "-v", "{connection_file}:/connection-spec", "my-docker-image", "python", "-m", "ipykernel_launcher", "-f", "/connection-spec" ], "display_name": "docker-test", "language": "python" } Jupyter should now be able spin up a container using the docker image specified above.
11
19
63,728,800
2020-9-3
https://stackoverflow.com/questions/63728800/how-to-deal-with-different-state-space-size-in-reinforcement-learning
I'm working in A2C reinforcement learning where my environment has an increasing and decreasing in the number of agents. As a result of the increasing and decreasing the number of agents, the state space will also change. I have tried to solve the problem of changing the state space this way: If the state space exceeds the maximum state space that selected as n_input, the excess state space will be selected by np.random.choice where random choice provides a way of creating random samples from the state space after converting the state space into probabilities. If the state space is less than the maximum state I padded the state space with zeros. def get_state_new(state): n_features = n_input-len(get_state(env)) # print("state",len(get_state(env))) p = np.array(state) p = np.exp(p) if p.sum() != 1.0: p = p * (1. / p.sum()) if len(get_state(env)) > n_input: statappend = np.random.choice(state, size=n_input, p=p) # print(statappend) else: statappend = np.zeros(n_input) statappend[:state.shape[0]] = state return statappend It works but the results are not as expected and I don't know if this correct or not. My question Are there any reference papers that deal with such a problem and how to deal with the changing of state space?
I solve the problem using different solutions but I found that the encoding is the best solution for my problem Select the model with pre-estimate maximum state space and If the state space is less than the maximum state, we padded the state space with zeros Consider only the state of the agents itself without any sharing of the other state. As the paper [1] mentioned that the extra connected autonomous vehicles (CAVs) are not included in the state and if they are less than the max CAVs, the state is padded with zeros. We can select how many agents that we can share their state adding to the agent’s state. Encode the state where it will help us to process the input and compress the information into a fixed length. In the encoder, every cell in the LSTM layer or RNN with Gated Recurrent Units (GRU) returns a hidden state (Ht) and cell state (E’t). For the encoder, I use the Neural machine translation with attention code class Encoder(tf.keras.Model): def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz): super(Encoder, self).__init__() self.batch_sz = batch_sz self.enc_units = enc_units self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) self.gru = tf.keras.layers.GRU(self.enc_units, return_sequences=True, return_state=True, recurrent_initializer='glorot_uniform') def call(self, x, hidden): x = self.embedding(x) output, state = self.gru(x, initial_state = hidden) return output, state def initialize_hidden_state(self): return tf.zeros((self.batch_sz, self.enc_units)) LSTM zero paddings and mask where we pad the state with a special value to be masked (skipped) later. If we pad without masking, the padded value will be regarded as actual value, thus, it becomes noise in the state [2-4]. 1- Vinitsky, E., Kreidieh, A., Le Flem, L., Kheterpal, N., Jang, K., Wu, C., ... & Bayen, A. M. (2018, October). Benchmarks for reinforcement learning in mixed-autonomy traffic. In Conference on Robot Learning (pp. 399-409) 2- Kochkina, E., Liakata, M., & Augenstein, I. (2017). Turing at semeval-2017 task 8: Sequential approach to rumour stance classification with branch-lstm. arXiv preprint arXiv:1704.07221. 3- Ma, L., & Liang, L. (2020). Enhance CNN Robustness Against Noises for Classification of 12-Lead ECG with Variable Length. arXiv preprint arXiv:2008.03609. 4- How to feed LSTM with different input array sizes? 5- Zhao, X., Xia, L., Zhang, L., Ding, Z., Yin, D., & Tang, J. (2018, September). Deep reinforcement learning for page-wise recommendations. In Proceedings of the 12th ACM Conference on Recommender Systems (pp. 95-103).
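For the zero-padding and masking point, a minimal Keras sketch (the maximum number of agents, the per-agent state size and the mask value are all assumptions for illustration):
import tensorflow as tf

max_agents, state_dim = 10, 4
inputs = tf.keras.Input(shape=(max_agents, state_dim))
masked = tf.keras.layers.Masking(mask_value=0.0)(inputs)  # rows that are entirely zero (padding) are masked
encoded = tf.keras.layers.LSTM(32)(masked)                # the LSTM skips the masked timesteps
model = tf.keras.Model(inputs, encoded)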
10
2
63,724,890
2020-9-3
https://stackoverflow.com/questions/63724890/how-can-i-install-python-3-9-from-the-anaconda-prompt
Python 3.9.0rc1 has been released today, according to the official website. Is there a way I can use it in an Anaconda environment? I tried conda create --name python39 python==3.9 But it says: ERROR: Could not find a version that satisfies the requirement python==3.9 (from versions: none) ERROR: No matching distribution found for python==3.9 Edit: closing as duplicate rules out questions with no answer, and the self-accepted answer to the suggested duplicate does not answer the question. It says "use another distribution channel instead".
It's preferable to update Conda before installing Python 3.9: conda update -n base -c defaults conda Then install a Python 3.9 environment. This works now: conda create --name python39 python==3.9
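To verify the new environment afterwards (standard conda commands, nothing specific to this answer):
conda activate python39
python --version   # should report Python 3.9.x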
7
6
63,775,936
2020-9-7
https://stackoverflow.com/questions/63775936/keras-no-good-way-to-stop-and-resume-training
After a lot of research, it seems like there is no good way to properly stop and resume training using a Tensorflow 2 / Keras model. This is true whether you are using model.fit() or using a custom training loop. There seem to be 2 supported ways to save a model while training: Save just the weights of the model, using model.save_weights() or save_weights_only=True with tf.keras.callbacks.ModelCheckpoint. This seems to be preferred by most of the examples I've seen, however it has a number of major issues: The optimizer state is not saved, meaning training resumption will not be correct. Learning rate schedule is reset - this can be catastrophic for some models. Tensorboard logs go back to step 0 - making logging essentually useless unless complex workarounds are implemented. Save the entire model, optimizer, etc. using model.save() or save_weights_only=False. The optimizer state is saved (good) but the following issues remain: Tensorboard logs still go back to step 0 Learning rate schedule is still reset (!!!) It is impossible to use custom metrics. This doesn't work at all when using a custom training loop - custom training loops use a non-compiled model, and saving/loading a non-compiled model doesn't seem to be supported. The best workaround I've found is to use a custom training loop, manually saving the step. This fixes the tensorboard logging, and the learning rate schedule can be fixed by doing something like keras.backend.set_value(model.optimizer.iterations, step). However, since a full model save is off the table, the optimizer state is not preserved. I can see no way to save the state of the optimizer independently, at least without a lot of work. And messing with the LR schedule as I've done feels messy as well. Am I missing something? How are people out there saving/resuming using this API?
tf.keras.callbacks.experimental.BackupAndRestore API for resuming training from interruptions has been added for tensorflow>=2.3. It works great in my experience. Reference: https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/experimental/BackupAndRestore
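A minimal usage sketch, assuming an already-compiled model and a training dataset; the backup directory path is arbitrary:
import tensorflow as tf

backup_cb = tf.keras.callbacks.experimental.BackupAndRestore(backup_dir="/tmp/train_backup")
model.fit(train_ds, epochs=50, callbacks=[backup_cb])
# if training is interrupted, rerunning the same fit() call resumes from the last completed epoch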
7
5
63,723,514
2020-9-3
https://stackoverflow.com/questions/63723514/userwarning-fixedformatter-should-only-be-used-together-with-fixedlocator
I have used for a long time small subroutines to format axes of charts I'm plotting. A couple of examples: def format_y_label_thousands(): # format y-axis tick labels formats ax = plt.gca() label_format = '{:,.0f}' ax.set_yticklabels([label_format.format(x) for x in ax.get_yticks().tolist()]) def format_y_label_percent(): # format y-axis tick labels formats ax = plt.gca() label_format = '{:.1%}' ax.set_yticklabels([label_format.format(x) for x in ax.get_yticks().tolist()]) However, after an update to matplotlib yesterday, I get the following warning when calling any of these two functions: UserWarning: FixedFormatter should only be used together with FixedLocator ax.set_yticklabels([label_format.format(x) for x in ax.get_yticks().tolist()]) What is the reason for such a warning? I couldn't figure it out looking into matplotlib's documentation.
WORKAROUND: The way to avoid the warning is to use FixedLocator (that is part of matplotlib.ticker). Below I show a code to plot three charts. I format their axes in different ways. Note that the "set_ticks" silence the warning, but it changes the actual ticks locations/labels (it took me some time to figure out that FixedLocator uses the same info but keeps the ticks locations intact). You can play with the x/y's to see how each solution might affect the output. import matplotlib as mpl import matplotlib.pyplot as plt import numpy as np import matplotlib.ticker as mticker mpl.rcParams['font.size'] = 6.5 x = np.array(range(1000, 5000, 500)) y = 37*x fig, [ax1, ax2, ax3] = plt.subplots(1,3) ax1.plot(x,y, linewidth=5, color='green') ax2.plot(x,y, linewidth=5, color='red') ax3.plot(x,y, linewidth=5, color='blue') label_format = '{:,.0f}' # nothing done to ax1 as it is a "control chart." # fixing yticks with "set_yticks" ticks_loc = ax2.get_yticks().tolist() ax2.set_yticks(ax1.get_yticks().tolist()) ax2.set_yticklabels([label_format.format(x) for x in ticks_loc]) # fixing yticks with matplotlib.ticker "FixedLocator" ticks_loc = ax3.get_yticks().tolist() ax3.yaxis.set_major_locator(mticker.FixedLocator(ticks_loc)) ax3.set_yticklabels([label_format.format(x) for x in ticks_loc]) # fixing xticks with FixedLocator but also using MaxNLocator to avoid cramped x-labels ax3.xaxis.set_major_locator(mticker.MaxNLocator(3)) ticks_loc = ax3.get_xticks().tolist() ax3.xaxis.set_major_locator(mticker.FixedLocator(ticks_loc)) ax3.set_xticklabels([label_format.format(x) for x in ticks_loc]) fig.tight_layout() plt.show() OUTPUT CHARTS: Obviously, having a couple of idle lines of code like the one above (I'm basically getting the yticks or xticks and setting them again) only adds noise to my program. I would prefer that the warning was removed. However, look into some of the "bug reports" (from links on the comments above/below; the issue is not actually a bug: it is an update that is generating some issues), and the contributors that manage matplotlib have their reasons to keep the warning. OLDER VERSION OF MATPLOTLIB: If you use your Console to control critical outputs of your code (as I do), the warning messages might be problematic. Therefore, a way to delay having to deal with the issue is to downgrade matplotlib to version 3.2.2. I use Anaconda to manage my Python packages, and here is the command used to downgrade matplotlib: conda install matplotlib=3.2.2 Not all listed versions might be available. For instance, couldn't install matplotlib 3.3.0 although it is listed on matplotlib's releases page: https://github.com/matplotlib/matplotlib/releases
118
79
63,760,734
2020-9-6
https://stackoverflow.com/questions/63760734/valueerror-input-0-of-layer-sequential-is-incompatible-with-the-layer-expect
I'm working in a project that isolate vocal parts from an audio. I'm using the DSD100 dataset, but for doing tests I'm using the DSD100subset dataset from I only use the mixtures and the vocals. I'm basing this work on this article First I process the audios to extract a spectrogram and put it on a list, with all the audios forming four lists (trainMixed, trainVocals, testMixed, testVocals). Like this: def to_spec(wav, n_fft=1024, hop_length=256): return librosa.stft(wav, n_fft=n_fft, hop_length=hop_length) def prepareData(filename, sr=22050, hop_length=256, n_fft=1024): audio_wav = librosa.load(filename, sr=sr, mono=True, duration=30)[0] audio_spec=to_spec(audio_wav, n_fft=n_fft, hop_length=hop_length) audio_spec_mag = np.abs(audio_spec) maxVal = np.max(audio_spec_mag) return audio_spec_mag, maxVal # FOR EVERY LIST (trainMixed, trainVocals, testMixed, testVocals) trainMixed = [] trainMixedNum = 0 for (root, dirs, files) in walk('./Dev-subset-mix/Dev/'): for d in dirs: filenameMix = './Dev-subset-mix/Dev/'+d+'/mixture.wav' spec_mag, maxVal = prepareData(filenameMix, n_fft=1024, hop_length=256) trainMixed.append(spec_mag/maxVal) Next i build the model: import keras from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D from keras.optimizers import SGD from keras.layers.advanced_activations import LeakyReLU model = Sequential() model.add(Conv2D(16, (3,3), padding='same', input_shape=(513, 25, 1))) model.add(LeakyReLU()) model.add(Conv2D(16, (3,3), padding='same')) model.add(LeakyReLU()) model.add(MaxPooling2D(pool_size=(3,3))) model.add(Dropout(0.25)) model.add(Conv2D(16, (3,3), padding='same')) model.add(LeakyReLU()) model.add(Conv2D(16, (3,3), padding='same')) model.add(LeakyReLU()) model.add(MaxPooling2D(pool_size=(3,3))) model.add(Dropout(0.25)) model.add(Flatten()) model.add(Dense(64)) model.add(LeakyReLU()) model.add(Dropout(0.5)) model.add(Dense(1, activation='sigmoid')) sgd = SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True) model.compile(loss=keras.losses.binary_crossentropy, optimizer=sgd, metrics=['accuracy']) And run the model: model.fit(trainMixed, trainVocals,epochs=10, validation_data=(testMixed, testVocals)) But I'm getting this result: ValueError: in user code: /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:806 train_function * return step_function(self, iterator) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:796 step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:1211 run return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2585 call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2945 _call_for_each_replica return fn(*args, **kwargs) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:789 run_step ** outputs = model.train_step(data) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:747 train_step y_pred = self(x, training=True) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:976 __call__ self.name) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/input_spec.py:158 assert_input_compatibility ' input tensors. 
Inputs received: ' + str(inputs)) ValueError: Layer sequential_1 expects 1 inputs, but it received 2 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=(None, 2584) dtype=float32>, <tf.Tensor 'IteratorGetNext:1' shape=(None, 2584) dtype=float32>] I am new to this topic, thanks for the help provided in advance.
It's probably an issue with specifying input data to Keras' fit() function. I would recommend using a tf.data.Dataset as input to fit() like so: import tensorflow as tf train_data = tf.data.Dataset.from_tensor_slices((trainMixed, trainVocals)) valid_data = tf.data.Dataset.from_tensor_slices((testMixed, testVocals)) model.fit(train_data, epochs=10, validation_data=valid_data) You can then also use functions like shuffle() and batch() on the TF datasets. EDIT: It also seems like your input shapes are incorrect. The input_shape you specified for the first conv layer is (513, 25, 1), so the input should be a batch tensor of shape (batch_size, 513, 25, 1), whereas you're inputting the shape (batch_size, 2584). So you'll need to reshape and probably cut your inputs to the specified shape, or specify a new shape.
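For example, the shuffle() and batch() calls mentioned above could be chained like this (buffer and batch sizes are arbitrary):
train_data = train_data.shuffle(buffer_size=1000).batch(16)
valid_data = valid_data.batch(16)
model.fit(train_data, epochs=10, validation_data=valid_data)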
13
12
63,743,839
2020-9-4
https://stackoverflow.com/questions/63743839/infinite-scroll-bar-is-not-working-with-django
It has been a long time since I asked this question and I still haven't received an answer. I am trying to add infinite scrolling with Django, but it is not working with the following code. I paginate posts by 10, and then it just shows me the loading icon; nothing loads when I scroll down. Can you figure out what is wrong here? views.py class PostListView(ListView): model = Post context_object_name = 'post_list' paginate_by = 10 def get_queryset(self): return Post.objects.filter(create_date__lte=timezone.now()).order_by('-create_date') postlist.html {% extends 'base.html' %} {% block content %} <div class="container"> <div class="row infinite-container"> {% for post in post_list%} <div class="col-md-6 infinite-item"> <div class="card mb-4 shadow-sm"> <img class="img-thumbnail" src="{{post.image.url}}"/> <div class="card-body"> <h5>{{post.title}}</h5> <p class="card-text"> {{post.description|truncatewords:20}} </p> </div> </div> </div> {% endfor %} </div> {% if page_obj.has_next %} true #this shows true, which also means that there is a next page. <a class="infinite-more-link" href="?page={{page_obj.next_page_number}}"></a> {% endif %} <div class="d-flex justify-content-center" style="display:none;"> <div class="spinner-border" role="status"> <span class="sr-only">Loading...</span> </div> </div> </div> <script src="/static/js/jquery-2.2.4.min.js"></script> <script src="/static/js/jquery.waypoints.min.js"></script> <script src="/static/js/infinite.min.js"></script> <script> var infinite = new Waypoint.Infinite({ element: $('.infinite-container')[0], handler: function(direction) { }, offset: 'bottom-in-view', onBeforePageLoad: function () { $('.spinner-border').show(); }, onAfterPageLoad: function () { $('.spinner-border').hide(); } }); </script> {% endblock content %} If more information is required, tell me in a comment and I will update my question with that information.
I was missing the {% load static %} tag. Add {% load static %} below the content block and reference the scripts through the static tag: <script src="{% static '/static/js/jquery-2.2.4.min.js'%}"></script> <script src="{% static '/static/js/jquery.waypoints.min.js'%}"></script> <script src="{% static '/static/js/infinite.min.js'%}"></script>
6
4
63,705,803
2020-9-2
https://stackoverflow.com/questions/63705803/merge-related-words-in-nlp
I'd like to define a new word which includes the count values from two (or more) different words. For example: Words Frequency 0 mom 250 1 2020 151 2 the 124 3 19 82 4 mother 81 ... ... ... 10 London 6 11 life 6 12 something 6 I would like to define mother as mom + mother: Words Frequency 0 mother 331 1 2020 151 2 the 124 3 19 82 ... ... ... 9 London 6 10 life 6 11 something 6 This is a way to alternatively define a group of words as carrying some shared meaning (at least for my purpose). Any suggestion would be appreciated.
UPDATE 10-21-2020 I decided to build a Python module to handle the tasks that I outlined in this answer. The module is called wordhoard and can be downloaded from pypi I have attempted to use Word2vec and WordNet in projects where I needed to determine the frequency of a keyword (e.g. healthcare) and the keyword's synonyms (e.g., wellness program, preventive medicine). I found that most NLP libraries didn't produce the results that I needed, so I decided to build my own dictionary with custom keywords and synonyms. This approached has worked for both analyzing and classification text in multiple projects. I'm sure that someone that is versed in NLP technology might have a more robust solution, but the one below is similar ones that have worked for me time and time again. I coded my answer to match the Words Frequency data you had in your question, but it can be modified to use any keyword and synonyms dataset. import string # Python Dictionary # I manually created these word relationship - primary_word:synonyms word_relationship = {"father": ['dad', 'daddy', 'old man', 'pa', 'pappy', 'papa', 'pop'], "mother": ["mamma", "momma", "mama", "mammy", "mummy", "mommy", "mom", "mum"]} # This input text is from various poems about mothers and fathers input_text = 'The hand that rocks the cradle also makes the house a home. It is the prayers of the mother ' \ 'that keeps the family strong. When I think about my mum, I just cannot help but smile; The beauty of ' \ 'her loving heart, the easy grace in her style. I will always need my mom, regardless of my age. She ' \ 'has made me laugh, made me cry. Her love will never fade. If I could write a story, It would be the ' \ 'greatest ever told. I would write about my daddy, For he had a heart of gold. For my father, my friend, ' \ 'This to me you have always been. Through the good times and the bad, Your understanding I have had.' # converts the input text to lowercase and splits the words based on empty space. wordlist = input_text.lower().split() # remove all punctuation from the wordlist remove_punctuation = [''.join(ch for ch in s if ch not in string.punctuation) for s in wordlist] # list for word frequencies wordfreq = [] # count the frequencies of a word for w in remove_punctuation: wordfreq.append(remove_punctuation.count(w)) word_frequencies = (dict(zip(remove_punctuation, wordfreq))) word_matches = [] # loop through the dictionaries for word, frequency in word_frequencies.items(): for keyword, synonym in word_relationship.items(): match = [x for x in synonym if word == x] if word == keyword or match: match = ' '.join(map(str, match)) # append the keywords (mother), synonyms(mom) and frequencies to a list word_matches.append([keyword, match, frequency]) # used to hold the final keyword and frequencies final_results = {} # list comprehension to obtain the primary keyword and its frequencies synonym_matches = [(keyword[0], keyword[2]) for keyword in word_matches] # iterate synonym_matches and output total frequency count for a specific keyword for item in synonym_matches: if item[0] not in final_results.keys(): frequency_count = 0 frequency_count = frequency_count + item[1] final_results[item[0]] = frequency_count else: frequency_count = frequency_count + item[1] final_results[item[0]] = frequency_count print(final_results) # output {'mother': 3, 'father': 2} Other Methods Below are some other methods and their out-of-box output. NLTK WORDNET In this example, I looked up the synonyms for the word 'mother.' 
Note that WordNet does not have the synonyms 'mom' or 'mum' linked to the word mother. These two words are within my sample text above. Also note that the word 'father' is listed as a synonym for 'mother.' from nltk.corpus import wordnet synonyms = [] word = 'mother' for synonym in wordnet.synsets(word): for item in synonym.lemmas(): if word != synonym.name() and len(synonym.lemma_names()) > 1: synonyms.append(item.name()) print(synonyms) ['mother', 'female_parent', 'mother', 'fuss', 'overprotect', 'beget', 'get', 'engender', 'father', 'mother', 'sire', 'generate', 'bring_forth'] PyDictionary In this example, I looked up the synonyms for the word 'mother' using PyDictionary, which queries synonym.com. The synonyms in this example include the words 'mom' and 'mum.' This example also includes additional synonyms that WordNet did not generate. BUT, PyDictionary also produced a synonym list for 'mum.' Which has nothing to do with the word 'mother.' It seems that PyDictionary pulled this list from the adjective section of the page instead of the noun section. It's hard for a computer to distinguish between the adjective mum and the noun mum. from PyDictionary import PyDictionary dictionary_mother = PyDictionary('mother') print(dictionary_mother.getSynonyms()) # output [{'mother': ['mother-in-law', 'female parent', 'supermom', 'mum', 'parent', 'mom', 'momma', 'para I', 'mama', 'mummy', 'quadripara', 'mommy', 'quintipara', 'ma', 'puerpera', 'surrogate mother', 'mater', 'primipara', 'mammy', 'mamma']}] dictionary_mum = PyDictionary('mum') print(dictionary_mum.getSynonyms()) # output [{'mum': ['incommunicative', 'silent', 'uncommunicative']}] Some of the other possible approaches are using the Oxford Dictionary API or querying thesaurus.com. Both these methods also have pitfalls. For instance the Oxford Dictionary API requires an API key and a paid subscription based on query numbers. And thesaurus.com is missing potential synonyms that could be useful in grouping words. https://www.thesaurus.com/browse/mother synonyms: mom, parent, ancestor, creator, mommy, origin, predecessor, progenitor, source, child-bearer, forebearer, procreator UPDATE Producing a precise synonym lists for each potential word in your corpus is hard and will require a multiple prong approach. The code below using WordNet and PyDictionary to create a superset of synonyms. Like all the other answers, this combine methods also leads to some over counting of word frequencies. I've been trying to reduce this over-counting by combining key and value pairs within my final dictionary of synonyms. The latter problem is much harder than I anticipated and might require me to open my own question to solve. In the end, I think that based on your use case you need to determine, which approach works best and will likely need to combine several approaches. Thanks for posting this question, because it allowed me to look at other methods for solving a complex problem. from string import punctuation from nltk.corpus import stopwords from nltk.corpus import wordnet from PyDictionary import PyDictionary input_text = """The hand that rocks the cradle also makes the house a home. It is the prayers of the mother that keeps the family strong. When I think about my mum, I just cannot help but smile; The beauty of her loving heart, the easy grace in her style. I will always need my mom, regardless of my age. She has made me laugh, made me cry. Her love will never fade. If I could write a story, It would be the greatest ever told. 
I would write about my daddy, For he had a heart of gold. For my father, my friend, This to me you have always been. Through the good times and the bad, Your understanding I have had.""" def normalize_textual_information(text): # split text into tokens by white space token = text.split() # remove punctuation from each token table = str.maketrans('', '', punctuation) token = [word.translate(table) for word in token] # remove any tokens that are not alphabetic token = [word.lower() for word in token if word.isalpha()] # filter out English stop words stop_words = set(stopwords.words('english')) # you could add additional stops like this stop_words.add('cannot') stop_words.add('could') stop_words.add('would') token = [word for word in token if word not in stop_words] # filter out any short tokens token = [word for word in token if len(word) > 1] return token def generate_word_frequencies(words): # list to hold word frequencies word_frequencies = [] # loop through the tokens and generate a word count for each token for word in words: word_frequencies.append(words.count(word)) # aggregates the words and word_frequencies into tuples and coverts them into a dictionary word_frequencies = (dict(zip(words, word_frequencies))) # sort the frequency of the words from low to high sorted_frequencies = {key: value for key, value in sorted(word_frequencies.items(), key=lambda item: item[1])} return sorted_frequencies def get_synonyms_internet(word): dictionary = PyDictionary(word) synonym = dictionary.getSynonyms() return synonym words = normalize_textual_information(input_text) all_synsets_1 = {} for word in words: for synonym in wordnet.synsets(word): if word != synonym.name() and len(synonym.lemma_names()) > 1: for item in synonym.lemmas(): if word != item.name(): all_synsets_1.setdefault(word, []).append(str(item.name()).lower()) all_synsets_2 = {} for word in words: word_synonyms = get_synonyms_internet(word) for synonym in word_synonyms: if word != synonym and synonym is not None: all_synsets_2.update(synonym) word_relationship = {**all_synsets_1, **all_synsets_2} frequencies = generate_word_frequencies(words) word_matches = [] word_set = {} duplication_check = set() for word, frequency in frequencies.items(): for keyword, synonym in word_relationship.items(): match = [x for x in synonym if word == x] if word == keyword or match: match = ' '.join(map(str, match)) if match not in word_set or match not in duplication_check or word not in duplication_check: duplication_check.add(word) duplication_check.add(match) word_matches.append([keyword, match, frequency]) # used to hold the final keyword and frequencies final_results = {} # list comprehension to obtain the primary keyword and its frequencies synonym_matches = [(keyword[0], keyword[2]) for keyword in word_matches] # iterate synonym_matches and output total frequency count for a specific keyword for item in synonym_matches: if item[0] not in final_results.keys(): frequency_count = 0 frequency_count = frequency_count + item[1] final_results[item[0]] = frequency_count else: frequency_count = frequency_count + item[1] final_results[item[0]] = frequency_count # do something with the final results
23
15
63,715,045
2020-9-3
https://stackoverflow.com/questions/63715045/how-to-catch-the-stop-button-in-pycharm-on-windows
I want to create a program that does some cleanup when someone terminates the script by clicking the stop button in PyCharm. I tried from signal import signal, SIGINT from sys import exit def handler(signal_received, frame): # Handle any cleanup here print('SIGINT or CTRL-C detected. Exiting gracefully') exit(0) if __name__ == '__main__': signal(SIGINT, handler) print('Running. Press CTRL-C to exit.') while True: # Do nothing and hog CPU forever until SIGINT received. pass from https://www.devdungeon.com/content/python-catch-sigint-ctrl-c. I tried it on both Mac and Windows. On the Mac, PyCharm behaved as expected: when I click the stop button it catches the SIGINT. But on Windows, I did exactly the same thing and it just immediately returns Process finished with exit code -1. Is there something I can do to make Windows behave like the Mac? Any help is appreciated!
I don't think it's a strange question at all. On unix systems, pycham sends a SIGTERM, waits one second, then send a SIGKILL. On windows, it does something else to end the process, something that seems untrappable. Even during development you need a way to cleanly shut down a process that uses native resources. In my case, there is a CAN controller that, if not shut down properly, can't ever be opened again. My work around was to build a simple UI with a stop button that shuts the process down cleanly. The problem is, out of habit, from using pycharm, goland, and intellij, is to just hit the red, square button. Every time I do that I have to reboot the development system. So I think it is clearly also a development time question.
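One possible shape for that kind of clean-shutdown workaround, as a purely illustrative sketch (the sentinel-file approach and all names here are assumptions, not the answer's actual UI code):
import os
import time

STOP_FILE = "stop.flag"  # hypothetical: created by a small UI button or by hand

def run():
    try:
        while not os.path.exists(STOP_FILE):
            time.sleep(0.5)  # real work would go here
    finally:
        # release native resources cleanly here, e.g. close the CAN controller
        print("shutting down cleanly")

if __name__ == "__main__":
    run()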
6
4
63,749,267
2020-9-5
https://stackoverflow.com/questions/63749267/how-to-efficiently-find-the-indices-of-max-values-in-a-multidimensional-array-of
Background It is common in machine learning to deal with data of a high dimensionality. For example, in a Convolutional Neural Network (CNN) the dimensions of each input image may be 256x256, and each image may have 3 color channels (Red, Green, and Blue). If we assume that the model takes in a batch of 16 images at a time, the dimensionality of the input going into our CNN is [16,3,256,256]. Each individual convolutional layer expects data in the form [batch_size, in_channels, in_y, in_x], and all of these quantities often change layer-to-layer (except batch_size). The term we use for the matrix made up of the [in_y, in_x] values is feature map, and this question is concerned with finding the maximum value, and its index, in every feature map at a given layer. Why do I want to do this? I want to apply a mask to every feature map, and I want to apply that mask centered at the max value in each feature map, and to do that I need to know where each max value is located. This mask application is done during both training and testing of the model, so efficiency is vitally important to keep computational times down. There are many Pytorch and Numpy solutions for finding singleton max values and indices, and for finding the maximum values or indices along a single dimension, but no (that I could find) dedicated and efficient built-in functions for finding the indices of maximum values along 2 or more dimensions at a time. Yes, we can nest functions that operate on a single dimension, but these are some of the least efficient approaches. What I've Tried I've looked at this Stackoverflow question, but the author is dealing with a special-case 4D array which is trivially squeezed to a 3D array. The accepted answer is specialized for this case, and the answer pointing to TopK is misguided because it not only operates on a single dimension, but would necessitate that k=1 given the question asked, thus devlolving to a regular torch.max call. I've looked at this Stackoverflow question, but this question, and its answer, focus on looking through a single dimension. I have looked at this Stackoverflow question, but I already know of the answer's approach as I independently formulated it in my own answer here (where I amended that the approach is very inefficient). I have looked at this Stackoverflow question, but it does not satisfy the key part of this question, which is concerned with efficiency. I have read many other Stackoverflow questions and answers, as well as the Numpy documentation, Pytorch documentation, and posts on the Pytorch forums. I've tried implementing a LOT of varying approaches to this problem, enough that I have created this question so that I can answer it and give back to the community, and anyone who goes looking for a solution to this problem in the future. Standard of Performance If I am asking a question about efficiency I need to detail expectations clearly. I am trying to find a time-efficient solution (space is secondary) for the problem above without writing C code/extensions, and which is reasonably flexible (hyper specialized approaches aren't what I'm after). The approach must accept an [a,b,c,d] Torch tensor of datatype float32 or float64 as input, and output an array or tensor of the form [a,b,2] of datatype int32 or int64 (because we are using the output as indices). 
Solutions should be benchmarked against the following typical solution: max_indices = torch.stack([torch.stack([(x[k][j]==torch.max(x[k][j])).nonzero()[0] for j in range(x.size()[1])]) for k in range(x.size()[0])])
The Approach We are going to take advantage of the Numpy community and libraries, as well as the fact that Pytorch tensors and Numpy arrays can be converted to/from one another without copying or moving the underlying arrays in memory (so conversions are low cost). From the Pytorch documentation: Converting a torch Tensor to a Numpy array and vice versa is a breeze. The torch Tensor and Numpy array will share their underlying memory locations, and changing one will change the other. Solution One We are first going to use the Numba library to write a function that will be just-in-time (JIT) compiled upon its first usage, meaning we can get C speeds without having to write C code ourselves. Of course, there are caveats to what can get JIT-ed, and one of those caveats is that we work with Numpy functions. But this isn't too bad because, remember, converting from our torch tensor to Numpy is low cost. The function we create is: @njit(cache=True) def indexFunc(array, item): for idx, val in np.ndenumerate(array): if val == item: return idx This function is from another Stackoverflow answer located here (this was the answer which introduced me to Numba). The function takes an N-Dimensional Numpy array and looks for the first occurrence of a given item. It immediately returns the index of the found item on a successful match. The @njit decorator is short for @jit(nopython=True), and tells the compiler that we want it to compile the function using no Python objects, and to throw an error if it is not able to do so (Numba is the fastest when no Python objects are used, and speed is what we are after). With this speedy function backing us, we can get the indices of the max values in a tensor as follows: import numpy as np x = x.numpy() maxVals = np.amax(x, axis=(2,3)) max_indices = np.zeros((n,p,2),dtype=np.int64) for index in np.ndindex(x.shape[0],x.shape[1]): max_indices[index] = np.asarray(indexFunc(x[index], maxVals[index]),dtype=np.int64) max_indices = torch.from_numpy(max_indices) We use np.amax because it can accept a tuple for its axis argument, allowing it to return the max values of each 2D feature map in the 4D input. We initialize max_indices with np.zeros ahead of time because appending to numpy arrays is expensive, so we allocate the space we need ahead of time. This approach is much faster than the Typical Solution in the question (by an order of magnitude), but it also uses a for loop outside the JIT-ed function, so we can improve...
It is worth noting that there is now a raise RuntimeError in the indexFunc. We need to include this, otherwise the Numba compiler will try to infer the return type of the function and infer that it will either be an array or None. This doesn't jive with our usage in indexFunc2, so the compiler would throw an error. Of course, from our setup we know that indexFunc will always return an array, so we can simply raise an error in the other logical branch. This approach is functionally identical to Solution One, but changes the iteration using np.ndindex into two for loops using prange. This approach is about 4x faster than Solution One. Solution Three Solution Two is fast, but it is still finding the max values using regular Python. Can we speed this up using a more comprehensive JIT-ed function? @njit(cache=True) def indexFunc(array, item): for idx, val in np.ndenumerate(array): if val == item: return idx raise RuntimeError @njit(cache=True, parallel=True) def indexFunc3(x): maxVals = np.zeros((x.shape[0],x.shape[1]),dtype=np.float32) for i in prange(x.shape[0]): for j in prange(x.shape[1]): maxVals[i][j] = np.max(x[i][j]) max_indices = np.zeros((x.shape[0],x.shape[1],2),dtype=np.int64) for i in prange(x.shape[0]): for j in prange(x.shape[1]): max_indices[i,j] = np.asarray(indexFunc(x[i,j], maxVals[i,j]),dtype=np.int64) return max_indices max_indices = torch.from_numpy(indexFunc3(x)) It might look like there is a lot more going on in this solution, but the only change is that instead of calculating the maximum values of each feature map using np.amax, we have now parallelized the operation. This approach is marginally faster than Solution Two. Solution Four This solution is the best I've been able to come up with: @njit(cache=True, parallel=True) def indexFunc4(x): max_indices = np.zeros((x.shape[0],x.shape[1],2),dtype=np.int64) for i in prange(x.shape[0]): for j in prange(x.shape[1]): maxTemp = np.argmax(x[i][j]) max_indices[i][j] = [maxTemp // x.shape[2], maxTemp % x.shape[2]] return max_indices max_indices = torch.from_numpy(indexFunc4(x)) This approach is more condensed and also the fastest at 33% faster than Solution Three and 50x faster than the Typical Solution. We use np.argmax to get the index of the max value of each feature map, but np.argmax only returns the index as if each feature map were flattened. That is, we get a single integer telling us which number the element is in our feature map, not the indices we need to be able to access that element. The math [maxTemp // x.shape[2], maxTemp % x.shape[2]] is to turn that singular int into the [row,column] that we need. Benchmarking All approaches were benchmarked together against a random input of shape [32,d,64,64], where d was incremented from 5 to 245. For each d, 15 samples were gathered and the times were averaged. An equality test ensured that all solutions provided identical values. An example of the benchmark output is: A plot of the benchmarking times as d increased is (leaving out the Typical Solution so the graph isn't squashed): Woah! What is going on at the start with those spikes? Solution Five Numba allows us to produce Just-In-Time compiled functions, but it doesn't compile them until the first time we use them; it then caches the result for when we call the function again. This means the very first time we call our JIT-ed functions we get a spike in compute time as the function is compiled. 
Luckily, there is a way around this- if we specify ahead of time what our function's return type and argument types will be, the function will be eagerly compiled instead of compiled just-in-time. Applying this knowledge to Solution Four we get: @njit('i8[:,:,:](f4[:,:,:,:])',cache=True, parallel=True) def indexFunc4(x): max_indices = np.zeros((x.shape[0],x.shape[1],2),dtype=np.int64) for i in prange(x.shape[0]): for j in prange(x.shape[1]): maxTemp = np.argmax(x[i][j]) max_indices[i][j] = [maxTemp // x.shape[2], maxTemp % x.shape[2]] return max_indices max_indices6 = torch.from_numpy(indexFunc4(x)) And if we restart our kernel and rerun our benchmark, we can look at the first result where d==5 and the second result where d==10 and note that all of the JIT-ed solutions were slower when d==5 because they had to be compiled, except for Solution Four, because we explicitly provided the function signature ahead of time: There we go! That's the best solution I have so far for this problem. EDIT #1 Solution Six An improved solution has been developed which is 33% faster than the previously posted best solution. This solution only works if the input array is C-contiguous, but this isn't a big restriction since numpy arrays or torch tensors will be contiguous unless they are reshaped, and both have functions to make the array/tensor contiguous if needed. This solution is the same as the previous best, but the function decorator which specifies the input and return types are changed from @njit('i8[:,:,:](f4[:,:,:,:])',cache=True, parallel=True) to @njit('i8[:,:,::1](f4[:,:,:,::1])',cache=True, parallel=True) The only difference is that the last : in each array typing becomes ::1, which signals to the numba njit compiler that the input arrays are C-contiguous, allowing it to better optimize. The full solution six is then: @njit('i8[:,:,::1](f4[:,:,:,::1])',cache=True, parallel=True) def indexFunc5(x): max_indices = np.zeros((x.shape[0],x.shape[1],2),dtype=np.int64) for i in prange(x.shape[0]): for j in prange(x.shape[1]): maxTemp = np.argmax(x[i][j]) max_indices[i][j] = [maxTemp // x.shape[2], maxTemp % x.shape[2]] return max_indices max_indices7 = torch.from_numpy(indexFunc5(x)) The benchmark including this new solution confirms the speedup:
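For anyone wanting to reproduce this, a rough benchmarking harness along the lines described above (the shapes, sample count and of course the resulting timings are assumptions; it simply times each candidate on the same random input and checks that they agree):
import timeit
import numpy as np

def benchmark(solutions, d=64, samples=15):
    # solutions: dict mapping a label to a callable that takes the float32 array
    x = np.random.rand(32, d, 64, 64).astype(np.float32)
    results, timings = {}, {}
    for name, fn in solutions.items():
        fn(x)  # warm-up call so JIT compilation is not part of the measurement
        timings[name] = min(timeit.repeat(lambda: fn(x), number=1, repeat=samples))
        results[name] = fn(x)
    reference = next(iter(results.values()))
    assert all(np.array_equal(reference, r) for r in results.values())
    return timings

# e.g. benchmark({'solution 4': indexFunc4, 'solution 6': indexFunc5})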
6
5
63,780,573
2020-9-7
https://stackoverflow.com/questions/63780573/trying-to-understand-fb-prophet-cross-validation
I have a dataset with 84 monthly sales (from 01/2013 to 12/2019) - just months, not days. Month 01 | Sale 1 Month 02 | Sale 2 Month 03 | Sale 3 .... | ... Month 84 | Sale 84 From the visualization it looks like the model fits very well... but I need to check it.... What I understood is that cross-validation does not support months, so I converted the data to use it with days (although there is no day info in my original df)... I wanted to fit my model on the first five years (60 months) and leave the 2 remaining years (24 months) to see how well the model predicts.... So I did something like: cv_results = cross_validation( model = prophet, initial='1825 days', period='30 days', horizon = '60 days') Does this make sense? I did not get the concept of cutoff dates and forecast periods
I struggled with this for a while as well. But here is how it works. The initial model will be trained on the first 1,825 days of data. It will forecast the next 60 days of data (because horizon is set to 60). The model will then train on the initial period + the period (1,825 + 30 days in this case) and forecast the next 60 days. It will continue like this, adding another 30 days to the training data and then forecasting for the next 60 until there is no longer enough data to do this. In summary, period is how much data to add to the training data set in every iteration of cross-validation, and horizon is how far out it will forecast.
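As a rough illustration (assuming the fbprophet API of that time, and that df holds the 84 monthly rows as ds/y with a real date per month), the cutoffs and the per-horizon errors can be inspected like this:
from fbprophet import Prophet
from fbprophet.diagnostics import cross_validation, performance_metrics

m = Prophet()
m.fit(df)  # df with columns ds (date) and y (sale) is assumed to exist

# Train on ~5 years, forecast 60 days ahead, sliding the cutoff forward 30 days at a time.
df_cv = cross_validation(m, initial='1825 days', period='30 days', horizon='60 days')

print(df_cv['cutoff'].unique())                          # the cutoff dates actually used
print(performance_metrics(df_cv)[['horizon', 'rmse']])   # error as a function of horizon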
17
52
63,785,105
2020-9-7
https://stackoverflow.com/questions/63785105/how-to-setup-two-pypi-indices
I have a local GitLab installation that comes with a local PyPI server to store company internal Python packages. How can I configure pip to search packages in both index servers? I read about .pypirc / pip/pip.ini and found various settings but no solution so far. Most solutions permanently switch all searches to the other index server. But I want to be able to install and update packages from pypi.org as normal while some packages come from the local index. Setting multiple index servers with credentials seems to be limited to distutils (used e.g. by twine) only, and is not used by pip. There is confusion about whether to configure index servers in [global] or [install]. I assume the latter is a rule subset for pip install. (The documentation is unclear here.) While twine can reference a repository entry in the config file, e.g. -r gitlab refers to a [gitlab] section, such a named reference can't be used by pip... So what I want to achieve: pip should be able to install and update regular packages from pypi.org like colorama pip should be able to install and update packages from gitlab.company.com authentication with username (__token__) and password (7a3b62342c784d87) must work Experiment so far: [global] [install] find-links = https://pypi.org https://gitlab.company.de/api/v4/projects/2142423/packages/pypi trusted-host = https://pypi.org https://gitlab.company.de/api/v4/projects/2142423/packages/pypi [distutils] index-servers = gitlab [gitlab] repository = https://gitlab.company.de/api/v4/projects/2142423/packages/pypi username = __token__ password = geheim
Goal pip install should install/update packages from GitLab as well as the PyPI repo. If the same package is present in both, PyPI is preferred. pip install should support authentication. Preferably, it should read the credentials from a config file so that we don't need to specify them repeatedly. Theory pip install supports --extra-index-url to specify additional PyPI indexes. The same can also be provided via the pip.conf file. pip uses requests, which supports ~/.netrc as a config file (docs). Steps Create a pip.conf (pip.ini if on Windows) in any of the locations suggested by pip config -v list. Add your GitLab PyPI index URL to pip.conf. [install] extra-index-url = https://gitlab.com/api/v4/projects/12345678/packages/pypi/simple Create or update your ~/.netrc file and add your auth details for GitLab. machine gitlab.com login <token-name> password <token-pass> We can now install packages as simply as pip install <package-name>. pip will now look at both indexes to find your packages, with preference given to the one pointed to by index-url. Additional info The same could have been possible for pip search too, had there been support for multiple indexes. Till then, one needs to manually specify which PyPI index URL should be used. GitLab does not seem to support pip search since it throws 415 Client Error: Unsupported Media Type when specified as the PyPI index. As for your doubts, each section in pip.conf applies to that particular command: [install] provides configuration for pip install, [search] for pip search and so on. [global] probably refers to parameters that can be specified for all the commands, be it pip install or pip search. The .pypirc file is made specifically for configuring package indexes related to uploads (used by twine/flit), whereas pip.conf is associated with configuring pip, which manages Python packages on your local system.
12
11
63,716,543
2020-9-3
https://stackoverflow.com/questions/63716543/plotly-how-to-update-redraw-a-plotly-express-figure-with-new-data
During debugging or computationally heavy loops, I would like to see how my data processing evolves (for example in a line plot or an image). In matplotlib the code can redraw / update the figure with plt.cla() and then plt.draw() or plt.pause(0.001), so that I can follow the progress of my computation in real time or while debugging. How do I do that in plotly express (or plotly)?
So I think I essentially figured it out. The trick is to not use go.Figure() to create a figure, but go.FigureWidget(), which is optically the same thing, but behind the scenes it's not. documentation youtube video demonstration Those FigureWidgets are exactly there to be updated as new data comes in. They stay dynamic, and later calls can modify them. A FigureWidget can be made from a Figure: figure = go.Figure(data=data, layout=layout) f2 = go.FigureWidget(figure) f2 #display the figure This is practical, because it makes it possible to use the simplified plotly express interface to create a Figure and then use this to construct a FigureWidget out of it. Unfortunately plotly express does not seem to have its own simplified FigureWidget module. So one needs to use the more complicated go.FigureWidget.
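A small sketch of the live-update pattern in a notebook (the loop and the random data are made up; the point is that assigning to the displayed widget's traces redraws it in place):
import time
import numpy as np
import plotly.graph_objects as go
from IPython.display import display

fw = go.FigureWidget(data=[go.Scatter(y=[])])
display(fw)  # show the (still empty) widget

for step in range(1, 21):
    # Updating the trace in place redraws the already-displayed widget.
    fw.data[0].y = np.random.randn(step).cumsum()
    time.sleep(0.5)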
14
16
63,754,359
2020-9-5
https://stackoverflow.com/questions/63754359/correct-way-to-mock-patch-smtplib-smtp
Trying to mock.patch a call to smtplib.SMTP.sendmail in a unittest. The sendmail method appears to be successfully mocked and we can query it as a MagicMock, but the called and call_args attributes of the sendmail mock are not correctly updated. It seems likely I'm not applying the patch correctly. Here's a simplified example of what I'm trying: import unittest.mock with unittest.mock.patch('smtplib.SMTP', autospec=True) as mock: import smtplib smtp = smtplib.SMTP('localhost') smtp.sendmail('me', 'me', 'hello world\n') mock.assert_called() # <--- this succeeds mock.sendmail.assert_called() # <--- this fails This example generates: AssertionError: Expected 'sendmail' to have been called. If I alter the patch to smtplib.SMTP.sendmail, e.g.: with unittest.mock.patch('smtplib.SMTP.sendmail', autospec=True) as mock: ... I can successfully access the call_args and called attributes of the mock in this case, but because the smtplib.SMTP initialization was allowed to take place, an actual SMTP session is established with a host. This is unit testing, and I'd prefer no actual networking take place.
I had the same issue today and forgot that I'm using a context, so just change mock.sendmail.assert_called() to mock.return_value.__enter__.return_value.sendmail.assert_called() That looks messy but here's my example: msg = EmailMessage() msg['From'] = '[email protected]' msg['To'] = '[email protected]' msg['Subject'] = 'subject' msg.set_content('content'); with patch('smtplib.SMTP', autospec=True) as mock_smtp: misc.send_email(msg) mock_smtp.assert_called() context = mock_smtp.return_value.__enter__.return_value context.ehlo.assert_called() context.starttls.assert_called() context.login.assert_called() context.send_message.assert_called_with(msg)
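A self-contained sketch of the same pattern, in case it helps; the send_email helper here is made up purely for illustration:
import smtplib
from email.message import EmailMessage
from unittest.mock import patch

def send_email(msg):
    # hypothetical function under test, using SMTP as a context manager
    with smtplib.SMTP('localhost') as smtp:
        smtp.send_message(msg)

def test_send_email():
    msg = EmailMessage()
    msg['From'], msg['To'], msg['Subject'] = 'me', 'you', 'hello'
    msg.set_content('hello world')
    with patch('smtplib.SMTP', autospec=True) as mock_smtp:
        send_email(msg)
        # The object used inside the with-block is __enter__'s return value.
        context = mock_smtp.return_value.__enter__.return_value
        context.send_message.assert_called_once_with(msg)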
9
14
63,751,319
2020-9-5
https://stackoverflow.com/questions/63751319/django-rest-framework-get-field-of-related-model-in-serializer
I'm new to Django Rest Framework. I'm trying to get my ListAPI to show various fields of my Quiz (and related) models. It's working fine, except for my attempt_number field. I'm getting the right queryset, but I'm not sure how to get only the relevant value for every query. Users can take every quiz as many times as they want, and I want to show the queryset for each attempt, since the score etc. will be different. My model setup is as follows: class Quiz(models.Model): title = models.CharField(max_length=15) slug = models.SlugField(blank=True) questions_count = models.IntegerField(default=0) class Question(models.Model): quiz = models.ForeignKey(Quiz, on_delete=models.CASCADE) label = models.CharField(max_length=1000) class Choice(models.Model): question = models.ForeignKey(Question, on_delete=models.CASCADE) answer = models.CharField(max_length=100) is_correct = models.BooleanField('Correct answer', default=False) class QuizTaker(models.Model): user = models.ForeignKey(User, on_delete=models.CASCADE) quiz = models.ForeignKey(Quiz, on_delete=models.CASCADE) correct_answers = models.IntegerField(default=0) completed = models.BooleanField(default=False) attempt_number = models.PositiveIntegerField(default=0) My serializer for the ListAPI looks as follows: class MyQuizListSerializer(serializers.ModelSerializer): attempt = serializers.SerializerMethodField() # etc.. class Meta: model = Quiz fields = "__all__" def get_attempt(self, obj): try: quiztaker = QuizTaker.objects.filter(user=self.context['request'].user, quiz=obj) for attempt in quiztaker: attempt_number = attempt.attempt_number return attempt_number If I do it like this, I always get the last value for attempt_number (because the loop overwrites the value). So then I tried to append it to a list instead, like this: a = [] for attempt in quiztaker: attempt_number = attempt.attempt_number a.append(attempt_number) return a But then I get the list of attempts for every query, instead of the attempt number for each query. I.e. I get the following three times (because in this case there are three attempts): { "id": 4, "attempt": [ 1, 2, 3 ] }, But instead what I want is (and the same for attempt 2 and 3 etc.): { "id": 4, "attempt": 1 }, So I tried doing it like this: return a[attempt_number-1] Hoping it would give me index zero for attempt number 1, 1 for 2, etc. But then I still just get the last attempt number (3 in this case). How can I solve this? I also tried just using an IntegerField instead of a SerializerMethodField as follows: attempt = serializers.IntegerField(read_only=True, source='quiztaker.attempt_number') But it returned nothing.
If I correctly understood you, you want the list of attempts added to each quiz object. { "id": 4, "attempts": [{ "id": 1, "attempt_number": 1, }, { "id": 2, "attempt_number": 2, }...] } In that case, you should have a separate serializer for the QuizTaker model and serialize the objects in the SerializerMethodField. class QuizTakerSerializer(serializers.ModelSerializer): class Meta: model = QuizTaker fields = ('id', 'attempt_number') class MyQuizListSerializer(serializers.ModelSerializer): attempts = serializers.SerializerMethodField() # etc.. class Meta: model = Quiz fields = "__all__" def get_attempts(self, obj): quiztakers = QuizTaker.objects.filter(user=self.context['request'].user,quiz=obj) return QuizTakerSerializer(quiztakers, many=True).data Honestly, your question is not very clear and it would help to edit it and make it clearer, giving the JSON structure you want to achieve. I also suspect your intended use of queryset isn't the actual Django meaning for a container of ORM objects.
6
6
63,713,575
2020-9-2
https://stackoverflow.com/questions/63713575/pytest-issues-with-a-session-scoped-fixture-and-asyncio
I have multiple test files, each has an async fixture that looks like this: @pytest.fixture(scope="module") def event_loop(request): loop = asyncio.get_event_loop_policy().new_event_loop() yield loop loop.close() @pytest.fixture(scope="module") async def some_fixture(): return await make_fixture() I'm using xdist for parallelization. In addition I have this decorator: @toolz.curry def throttle(limit, f): semaphore = asyncio.Semaphore(limit) @functools.wraps(f) async def wrapped(*args, **kwargs): async with semaphore: return await f(*args, **kwargs) return wrapped and I have a function that uses it: @throttle(10) def f(): ... Now f is being called from multiple test files, and I'm getting an exception telling me that I can't use the semaphore from different event loops. I tried moving to a session-level event loop fixture: @pytest.fixture(scope="session", autouse=True) def event_loop(request): loop = asyncio.get_event_loop_policy().new_event_loop() yield loop loop.close() But this only gave me: ScopeMismatch: You tried to access the 'function' scoped fixture 'event_loop' with a 'module' scoped request object, involved factories Is it even possible to have xdist + async fixture + semaphore working together?
Eventually got it to work using the following conftest.py: import asyncio import pytest @pytest.fixture(scope="session") def event_loop(): return asyncio.get_event_loop()
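For illustration, a hypothetical test module that would then share that session-scoped loop (make_fixture is the helper from the question and is assumed to exist):
import pytest

@pytest.fixture(scope="module")
async def some_fixture():
    return await make_fixture()  # assumed helper from the question

@pytest.mark.asyncio
async def test_uses_some_fixture(some_fixture):
    assert some_fixture is not None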
8
7
63,763,809
2020-9-6
https://stackoverflow.com/questions/63763809/error-when-converting-xml-files-to-tfrecord-files
I am following the TensorFlow 2 Object Detection API Tutorial on a Macbook Here's what I got when running the given script for converting xmls to TFrecords Traceback (most recent call last): File "generate_tfrecord.py", line 62, in <module> label_map_dict = label_map_util.get_label_map_dict(label_map) File "/usr/local/lib/python3.8/site-packages/object_detection/utils/label_map_util.py", line 164, in get_label_map_dict label_map = load_labelmap(label_map_path) File "/usr/local/lib/python3.8/site-packages/object_detection/utils/label_map_util.py", line 133, in load_labelmap label_map_string = fid.read() File "/usr/local/lib/python3.8/site-packages/tensorflow/python/lib/io/file_io.py", line 116, in read self._preread_check() File "/usr/local/lib/python3.8/site-packages/tensorflow/python/lib/io/file_io.py", line 78, in _preread_check self._read_buf = _pywrap_file_io.BufferedInputStream( TypeError: __init__(): incompatible constructor arguments. The following argument types are supported: 1. tensorflow.python._pywrap_file_io.BufferedInputStream(arg0: str, arg1: int) Invoked with: item { name: "cat" id: 1 } , 524288 My label map file contains the following item { id: 1 name: 'cat' }
It seems the problem can be resolved by replacing label_map = label_map_util.load_labelmap(args.labels_path) label_map_dict = label_map_util.get_label_map_dict(label_map) with label_map_dict = label_map_util.get_label_map_dict(args.labels_path)
6
17
63,738,900
2020-9-4
https://stackoverflow.com/questions/63738900/pylint-raise-missing-from
I have a pylint message (w0707) on this piece of code (from https://www.django-rest-framework.org/tutorial/3-class-based-views/): class SnippetDetail(APIView): """ Retrieve, update or delete a snippet instance. """ def get_object(self, pk): try: return Snippet.objects.get(pk=pk) except Snippet.DoesNotExist: raise Http404 the message is: Consider explicitly re-raising using the 'from' keyword I don't quite understand how to act to correct the problem.
The link in the comment on your question above outlines the issue and provides a solution, but for the clarity of those landing straight on this page like myself, without having to go off to another thread, read and gain context, here is the answer to your specific problem: TL;DR; This is simply solved by aliasing the Exception you are 'excepting' and referring to it in your second raise. Taking your code snippet above, see the bottom two lines; I've added 'under-carets' to denote what I've added. class SnippetDetail(APIView): """ Retrieve, update or delete a snippet instance. """ def get_object(self, pk): try: return Snippet.objects.get(pk=pk) except Snippet.DoesNotExist as snip_no_exist: # ^^^^^^^^^^^^^^^^ raise Http404 from snip_no_exist # ^^^^^^^^^^^^^^^^^^ Note: The alias can be any valid identifier.
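Worth noting (not part of the answer above, but standard Python): if you deliberately want to hide the original exception rather than chain it, raising from None is also an explicit re-raise and should satisfy the same pylint check:
def get_object(self, pk):
    try:
        return Snippet.objects.get(pk=pk)
    except Snippet.DoesNotExist:
        # Explicitly suppress the implicit exception chaining.
        raise Http404 from None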
103
160
63,788,083
2020-9-8
https://stackoverflow.com/questions/63788083/how-to-check-if-a-cookie-is-set-in-fastapi
I defined an optional cookie parameter and now want to check if the cookie was set. Unfortunately, the variable does not equal to None but to an empty Cookie object. How can I check the cookie object if it is set? Here's how I defined the cookie parameter: @app.route("/graphcall") def graphcall(request: Request, ads_id: Optional[str] = Cookie(None)): if ads_id: # Do stuff if the ads_id is set
I assume you tried this via SwaggerUI. Setting Cookie values currently does not work via SwaggerUI due to browser security restrictions. @app.get("/items/") async def read_items(ads_id: Optional[str] = Cookie(None)): if ads_id: answer = "set to %s" % ads_id else: answer = "not set" return {"ads_id": answer} works perfectly from command line with Fastapi 0.61.0 $ curl -X GET "http://127.0.0.1:8000/items/" -H "accept: application/json" -H "Cookie: ads_id=foobar" {"ads_id":"set to foobar"} $ curl -X GET "http://127.0.0.1:8000/items/" -H "accept: application/json" {"ads_id":"not set"}
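If you'd rather check it from Python than curl, a quick sketch with FastAPI's TestClient (reusing the read_items example above; app is assumed to be the FastAPI instance):
from fastapi.testclient import TestClient

client = TestClient(app)  # app from the example above

def test_cookie_present():
    response = client.get("/items/", cookies={"ads_id": "foobar"})
    assert response.json() == {"ads_id": "set to foobar"}

def test_cookie_absent():
    response = client.get("/items/")
    assert response.json() == {"ads_id": "not set"}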
7
10
63,802,819
2020-9-8
https://stackoverflow.com/questions/63802819/hide-a-line-on-plotly-line-graph
Imagine I have lines A, B, C, D, and E. I want lines A, B, and C to appear on the plotly line chart. I want the user to have the option to add lines D and E but D and E should be hidden by default. Any suggestions on how to do this? Example, how would I hide Australia by default. import plotly.express as px df = px.data.gapminder().query("continent=='Oceania'") fig = px.line(df, x="year", y="lifeExp", color='country') fig.show()
You need to play with the parameter visible setting it as legendonly within every trace import plotly.express as px countries_to_hide = ["Australia"] df = px.data.gapminder().query("continent=='Oceania'") fig = px.line(df, x="year", y="lifeExp", color='country') fig.for_each_trace(lambda trace: trace.update(visible="legendonly") if trace.name in countries_to_hide else ()) fig.show()
8
20
63,796,920
2020-9-8
https://stackoverflow.com/questions/63796920/nested-list-of-dictionary-with-nested-list-of-dictionary-into-a-pandas-dataframe
I need help with converting a nested list of dictionaries with a nested list of dictionaries inside of it to a dataframe. At the end, I want something that looks like (the dots are for other columns in between): id | isbn | isbn13 | .... | average_rating| 30278752 |1594634025|9781594634024| .... |3.92 | 34006942 |1501173219|9781501173219| .... |4.33 | review_stat =[{'books': [{'id': 30278752, 'isbn': '1594634025', 'isbn13': '9781594634024', 'ratings_count': 4832, 'reviews_count': 8435, 'text_reviews_count': 417, 'work_ratings_count': 2081902, 'work_reviews_count': 3313007, 'work_text_reviews_count': 109912, 'average_rating': '3.92'}]}, {'books': [{'id': 34006942, 'isbn': '1501173219', 'isbn13': '9781501173219', 'ratings_count': 4373, 'reviews_count': 10741, 'text_reviews_count': 565, 'work_ratings_count': 1005504, 'work_reviews_count': 2142280, 'work_text_reviews_count': 75053, 'average_rating': '4.33'}]}]
If your key is always books: pd.concat([pd.DataFrame(i['books']) for i in review_stat]) id isbn isbn13 ratings_count reviews_count text_reviews_count work_ratings_count work_reviews_count work_text_reviews_count average_rating 0 30278752 1594634025 9781594634024 4832 8435 417 2081902 3313007 109912 3.92 0 34006942 1501173219 9781501173219 4373 10741 565 1005504 2142280 75053 4.33 You can always reset the index if you need to.
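For completeness, two small variations (not from the answer above): resetting the index as mentioned, and pandas' json_normalize, assuming every element really carries a 'books' list:
import pandas as pd

# concat approach with the index reset
df = pd.concat([pd.DataFrame(i['books']) for i in review_stat]).reset_index(drop=True)

# equivalent one-liner using json_normalize
df2 = pd.json_normalize(review_stat, record_path='books')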
10
7
63,785,319
2020-9-7
https://stackoverflow.com/questions/63785319/pytorch-torch-no-grad-versus-requires-grad-false
I'm following a PyTorch tutorial which uses the BERT NLP model (feature extractor) from the Huggingface Transformers library. There are two pieces of interrelated code for gradient updates that I don't understand. (1) torch.no_grad() The tutorial has a class where the forward() function creates a torch.no_grad() block around a call to the BERT feature extractor, like this: bert = BertModel.from_pretrained('bert-base-uncased') class BERTGRUSentiment(nn.Module): def __init__(self, bert): super().__init__() self.bert = bert def forward(self, text): with torch.no_grad(): embedded = self.bert(text)[0] (2) param.requires_grad = False There is another portion in the same tutorial where the BERT parameters are frozen. for name, param in model.named_parameters(): if name.startswith('bert'): param.requires_grad = False When would I need (1) and/or (2)? If I want to train with a frozen BERT, would I need to enable both? If I want to train to let BERT be updated, would I need to disable both? Additionally, I ran all four combinations and found: with torch.no_grad requires_grad = False Parameters Ran ------------------ --------------------- ---------- --- a. Yes Yes 3M Successfully b. Yes No 112M Successfully c. No Yes 3M Successfully d. No No 112M CUDA out of memory Can someone please explain what's going on? Why am I getting CUDA out of memory for (d) but not (b)? Both have 112M learnable parameters.
This is an older discussion, which has changed slightly over the years (mainly due to the purpose of with torch.no_grad() as a pattern). An excellent answer that kind of answers your question as well can be found on Stackoverflow already. However, since the original question is vastly different, I'll refrain from marking as duplicate, especially due to the second part about the memory. An initial explanation of no_grad is given here: with torch.no_grad() is a context manager and is used to prevent calculating gradients [...]. requires_grad on the other hand is used to freeze part of your model and train the rest [...]. Source: again, the SO post. Essentially, with requires_grad you are just disabling parts of a network, whereas no_grad will not store any gradients at all, since you're likely using it for inference and not training. To analyze the behavior of your combinations of parameters, let us investigate what is happening: a) and b) do not store any gradients at all, which means that you have vastly more memory available to you, no matter the number of parameters, since you're not retaining them for a potential backward pass. c) has to store the forward pass for later backpropagation, however, only a limited number of parameters (3 million) are stored, which makes this still manageable. d), however, needs to store the forward pass for all 112 million parameters, which causes you to run out of memory.
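To make the difference concrete, a tiny hedged sketch (the two Linear layers are stand-ins for the BERT encoder and the trainable head; this is not the tutorial's code):
import torch
import torch.nn as nn

feature_extractor = nn.Linear(10, 10)   # stand-in for the frozen BERT encoder
head = nn.Linear(10, 2)                 # stand-in for the trainable classifier
x = torch.randn(4, 10)

# (1) torch.no_grad(): nothing is recorded for this part of the forward pass,
#     so no activations are kept for it and no gradients can flow into it.
with torch.no_grad():
    feats = feature_extractor(x)
head(feats).sum().backward()            # only head's parameters get gradients

# (2) requires_grad = False: the parameters are frozen, receive no gradients,
#     and a filtered optimizer will skip them entirely.
for p in feature_extractor.parameters():
    p.requires_grad = False
head(feature_extractor(x)).sum().backward()

print(feature_extractor.weight.grad)    # None in both cases
print(head.weight.grad.shape)           # torch.Size([2, 10])

optimizer = torch.optim.Adam(p for p in head.parameters() if p.requires_grad)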
14
13
63,757,763
2020-9-5
https://stackoverflow.com/questions/63757763/timeit-and-its-default-timer-completely-disagree
I benchmarked these two functions (they unzip pairs back into source lists, came from here): n = 10**7 a = list(range(n)) b = list(range(n)) pairs = list(zip(a, b)) def f1(a, b, pairs): a[:], b[:] = zip(*pairs) def f2(a, b, pairs): for i, (a[i], b[i]) in enumerate(pairs): pass Results with timeit.timeit (five rounds, numbers are seconds): f1 1.06 f2 1.57 f1 0.96 f2 1.69 f1 1.00 f2 1.85 f1 1.11 f2 1.64 f1 0.95 f2 1.63 So clearly f1 is a lot faster than f2, right? But then I also measured with timeit.default_timer and got a completely different picture: f1 7.28 f2 1.92 f1 5.34 f2 1.66 f1 6.46 f2 1.70 f1 6.82 f2 1.59 f1 5.88 f2 1.63 So clearly f2 is a lot faster, right? Sigh. Why do the timings totally differ like that, and which timing method should I believe? Full benchmark code: from timeit import timeit, default_timer n = 10**7 a = list(range(n)) b = list(range(n)) pairs = list(zip(a, b)) def f1(a, b, pairs): a[:], b[:] = zip(*pairs) def f2(a, b, pairs): for i, (a[i], b[i]) in enumerate(pairs): pass print('timeit') for _ in range(5): for f in f1, f2: t = timeit(lambda: f(a, b, pairs), number=1) print(f.__name__, '%.2f' % t, end=' ') print() print('default_timer') for _ in range(5): for f in f1, f2: t0 = default_timer() f(a, b, pairs) t = default_timer() - t0 print(f.__name__, '%.2f' % t, end=' ') print()
As Martijn commented, the difference is Python's garbage collection, which timeit.timeit disables during its run. And zip creates 10 million iterator objects, one for each of the 10 million iterables it's given. So, garbage-collecting 10 million objects simply takes a lot of time, right? Mystery solved! Well... no. That's not really what happens, and it's way more interesting than that. And there's a lesson to be learned to make such code faster in real life. Python's main way to discard objects no longer needed is reference counting. The garbage collector, which is being disabled here, is for reference cycles, which the reference counting won't catch. And there aren't any cycles here, so it's all discarded by reference counting and the garbage collector doesn't actually collect any garbage. Let's look at a few things. First, let's reproduce the much faster time by disabling the garbage collector ourselves. Common setup code (all further blocks of code should be run directly after this in a fresh run, don't combine them): import gc from timeit import default_timer as timer n = 10**7 a = list(range(n)) b = list(range(n)) pairs = list(zip(a, b)) Timing with garbage collection enabled (the default): t0 = timer() a[:], b[:] = zip(*pairs) t1 = timer() print(t1 - t0) I ran it three times, took 7.09, 7.03 and 7.09 seconds. Timing with garbage collection disabled: t0 = timer() gc.disable() a[:], b[:] = zip(*pairs) gc.enable() t1 = timer() print(t1 - t0) Took 0.96, 1.02 and 0.99 seconds. So now we know it's indeed the garbage collection that somehow takes most of the time, even though it's not collecting anything. Here's something interesting: Already just the creation of the zip iterator is responsible for most of the time: t0 = timer() z = zip(*pairs) t1 = timer() print(t1 - t0) That took 6.52, 6.51 and 6.50 seconds. Note that I kept the zip iterator in a variable, so there isn't even anything to discard yet, neither by reference counting nor by garbage collecting! What?! Where does the time go, then? Well... as I said, there are no reference cycles, so the garbage collector won't actually collect any garbage. But the garbage collector doesn't know that! In order to figure that out, it needs to check! Since the iterators could become part of a reference cycle, they're registered for garbage collection tracking. Let's see how many more objects get tracked due to the zip creation (doing this just after the common setup code): gc.collect() tracked_before = len(gc.get_objects()) z = zip(*pairs) print(len(gc.get_objects()) - tracked_before) The output: 10000003 new objects tracked. I believe that's the zip object itself, its internal tuple to hold the iterators, its internal result holder tuple, and the 10 million iterators. Ok, so the garbage collector tracks all these objects. But what does that mean? Well, every now and then, after a certain number of new object creations, the collector goes through the tracked objects to see whether some are garbage and can be discarded. The collector keeps three "generations" of tracked objects. New objects go into generation 0. If they survive a collection run there, they're moved into generation 1. If they survive a collection there, they're moved into generation 2. If they survive further collection runs there, they remain in generation 2. 
Let's check the generations before and after: gc.collect() print('collections:', [stats['collections'] for stats in gc.get_stats()]) print('objects:', [len(gc.get_objects(i)) for i in range(3)]) z = zip(*pairs) print('collections:', [stats['collections'] for stats in gc.get_stats()]) print('objects:', [len(gc.get_objects(i)) for i in range(3)]) Output (each line shows values for the three generations): collections: [13111, 1191, 2] objects: [17, 0, 13540] collections: [26171, 2378, 20] objects: [317, 2103, 10011140] The 10011140 shows that most of the 10 million iterators were not just registered for tracking, but are already in generation 2. So they were part of at least two garbage collection runs. And the number of generation 2 collections went up from 2 to 20, so our millions of iterators were part of up to 20 garbage collection runs (two to get into generation 2, and up to 18 more while already in generation 2). We can also register a callback to count more precisely: checks = 0 def count(phase, info): if phase == 'start': global checks checks += len(gc.get_objects(info['generation'])) gc.callbacks.append(count) z = zip(*pairs) gc.callbacks.remove(count) print(checks) That told me 63,891,314 checks total (i.e., on average, each iterator was part of over 6 garbage collection runs). That's a lot of work. And all this just to create the zip iterator, before even using it. Meanwhile, the loop for i, (a[i], b[i]) in enumerate(pairs): pass creates almost no new objects at all. Let's check how much tracking enumerate causes: gc.collect() tracked_before = len(gc.get_objects()) e = enumerate(pairs) print(len(gc.get_objects()) - tracked_before) Output: 3 new objects tracked (the enumerate iterator object itself, the single iterator it creates for iterating over pairs, and the result tuple it'll use (code here)). I'd say that answers the question "Why do the timings totally differ like that?". The zip solution creates millions of objects that go through multiple garbage collection runs, while the loop solution doesn't. So disabling the garbage collector helps the zip solution tremendously, while the loop solution doesn't care. Now about the second question: "Which timing method should I believe?". Here's what the documentation has to say about it (emphasis mine): By default, timeit() temporarily turns off garbage collection during the timing. The advantage of this approach is that it makes independent timings more comparable. The disadvantage is that GC may be an important component of the performance of the function being measured. If so, GC can be re-enabled as the first statement in the setup string. For example: timeit.Timer('for i in range(10): oct(i)', 'gc.enable()').timeit() In our case here, the cost of garbage collection doesn't stem from some other unrelated code. It's directly caused by the zip call. And you do pay this price in reality, when you run that. So in this case, I do consider it an "important component of the performance of the function being measured". To directly answer the question as asked: Here I'd believe the default_timer method, not the timeit method. Or put differently: Here the timeit method should be used with enabling garbage collection as suggested in the documentation. Or... alternatively, we could actually disable garbage collection as part of the solution (not just for benchmarking): def f1(a, b, pairs): gc.disable() a[:], b[:] = zip(*pairs) gc.enable() But is that a good idea? 
Here's what the gc documentation says: Since the collector supplements the reference counting already used in Python, you can disable the collector if you are sure your program does not create reference cycles. Sounds like it's an ok thing to do. But I'm not sure I don't create reference cycles elsewhere in my program, so I finish with gc.enable() to turn garbage collection back on after I'm done. At that point, all those temporary objects have already been discarded thanks to reference counting. So all I'm doing is avoiding lots of pointless garbage collection checks. I find this a valuable lesson and I might actually do that in the future, if I know I only temporarily create a lot of objects. Finally, I highly recommend reading the gc module documentation and the Design of CPython’s Garbage Collector in Python's developer guide. Most of it is easy to understand, and I found it quite interesting and enlightening.
53
61