Dataset columns:
question_id: int64 (range 59.5M to 79.4M)
creation_date: string (length 8 to 10)
link: string (length 60 to 163)
question: string (length 53 to 28.9k)
accepted_answer: string (length 26 to 29.3k)
question_vote: int64 (range 1 to 410)
answer_vote: int64 (range -9 to 482)
68,454,202
2021-7-20
https://stackoverflow.com/questions/68454202/how-to-use-maxlen-of-typing-annotation-of-python-3-9
I'm aware there's this new typing format Annotated where you can specify some metadata to the entry variables of a function. From the docs, you could specify the maximum length of a incoming list such as: Annotated can be used with nested and generic aliases: T = TypeVar('T') Vec = Annotated[list[tuple[T, T]], MaxLen(10)] V = Vec[int] V == Annotated[list[tuple[int, int]], MaxLen(10)] But I cannot finish to comprehend what MaxLen is. Are you supposed to import a class from somewhere else? I've tried importing typing.MaxLen but doesn't seems to exists (I'm using Python 3.9.6, which I think it should exist here...?). Example code of what I imagined it should have worked: from typing import List, Annotated, MaxLen def function(foo: Annotated[List[int], MaxLen(10)]): # ... return True Where can one find MaxLen? EDIT: It seems like MaxLen is some sort of class you have to create. The problem is that I cannot see how you should do it. Are there public examples? How can someone implement this function?
As stated by AntiNeutronicPlasma, MaxLen is just an example, so you'll need to create it yourself. Here's an example of how to create and parse a custom annotation such as MaxLen to get you started. First, we define the annotation class itself. It's a very simple class; we only need to store the relevant metadata, in this case, the max value: class MaxLen: def __init__(self, value): self.value = value Now, we can define a function that uses this annotation, such as the following: def sum_nums(nums: Annotated[List[int], MaxLen(10)]): return sum(nums) But it's going to be of little use if nobody checks for it. So, one option could be to implement a decorator that checks your custom annotations at runtime. The functions get_type_hints, get_origin and get_args from the typing module are going to be your best friends. Below is an example of such a decorator, which parses and enforces the MaxLen annotation on list types: from functools import wraps from typing import get_type_hints, get_origin, get_args, Annotated def check_annotations(func): @wraps(func) def wrapped(**kwargs): # perform runtime annotation checking # first, get type hints from function type_hints = get_type_hints(func, include_extras=True) for param, hint in type_hints.items(): # only process annotated types if get_origin(hint) is not Annotated: continue # get base type and additional arguments hint_type, *hint_args = get_args(hint) # if a list type is detected, process the args if hint_type is list or get_origin(hint_type) is list: for arg in hint_args: # if MaxLen arg is detected, process it if isinstance(arg, MaxLen): max_len = arg.value actual_len = len(kwargs[param]) if actual_len > max_len: raise ValueError(f"Parameter '{param}' cannot have a length " f"larger than {max_len} (got length {actual_len}).") # execute function once all checks passed return func(**kwargs) return wrapped (Note that this particular example only works with keyword arguments, but you could probably find a way to make it work for normal arguments too.) Now, you can apply this decorator to any function, and your custom annotation will get parsed: from typing import Annotated, List @check_annotations def sum_nums_strict(nums: Annotated[List[int], MaxLen(10)]): return sum(nums) Below is an example of the code in action: >>> sum_nums(nums=list(range(5))) 10 >>> sum_nums(nums=list(range(15))) 105 >>> sum_nums_strict(nums=list(range(5))) 10 >>> sum_nums_strict(nums=list(range(15))) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "annotated_test.py", line 29, in wrapped raise ValueError(f"Parameter '{param}' cannot have a length " ValueError: Parameter 'nums' cannot have a length larger than 10 (got length 15).
10
12
68,409,249
2021-7-16
https://stackoverflow.com/questions/68409249/how-to-download-pdf-files-with-playwright-python
I'm trying to automate the download of a PDF file using Playwright, I've the code working with Selenium, but some features in Playwright got my attention. The real problem the documentation isn't helpful. When I click on download I get this: And I cant change the directory of the download, it also delete the "file" when the browser/context are closed. Using Playwright I can achieve a nice download automation? Code: def run(playwright): browser = playwright.chromium.launch(headless=False) context = browser.new_context(accept_downloads=True) # Open new page page = context.new_page() # Go to http://xcal1.vodafone.co.uk/ page.goto("http://xcal1.vodafone.co.uk/") # Click text=Extra Small File 5 MB A high quality 5 minute MP3 music file 30secs @ 2 Mbps 10s >> img with page.expect_download() as download_info: page.click("text=Extra Small File 5 MB A high quality 5 minute MP3 music file 30secs @ 2 Mbps 10s >> img") download = download_info.value path = download.path() download.save_as(path) print(path) # --------------------- context.close() browser.close() with sync_playwright() as playwright: run(playwright)
The download.path() in Playwright is just a random GUID (globally unique identifier). It's designed to validate that the download works - not to keep the file. Playwright is a testing tool; imagine running tests across every major browser on every code change - any downloads would quickly take up a lot of space, and it would hack people off if they had to manually clear them out. Good news is you are very close - if you want to keep the file, you just need to give it a name in save_as. Instead of this: download.save_as(path) use this: download.save_as(download.suggested_filename) That saves the file in the same location as the script.
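A minimal sketch of the corrected script, assuming the same Playwright sync API as in the question; the download directory name and the shortened selector are placeholders, not taken from the original.

import os
from playwright.sync_api import sync_playwright

DOWNLOAD_DIR = "downloads"  # placeholder: any writable directory

def run(playwright):
    browser = playwright.chromium.launch(headless=False)
    context = browser.new_context(accept_downloads=True)
    page = context.new_page()
    page.goto("http://xcal1.vodafone.co.uk/")
    with page.expect_download() as download_info:
        page.click("text=Extra Small File 5 MB >> img")  # selector shortened here
    download = download_info.value
    os.makedirs(DOWNLOAD_DIR, exist_ok=True)
    # suggested_filename is the name the browser proposes for the downloaded file
    target = os.path.join(DOWNLOAD_DIR, download.suggested_filename)
    download.save_as(target)
    print("saved to", target)
    context.close()
    browser.close()

with sync_playwright() as playwright:
    run(playwright)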
12
10
68,461,155
2021-7-20
https://stackoverflow.com/questions/68461155/different-results-on-anomaly-detection-bettween-pycaret-and-h2o
I'm working on detecting anomalies in the following data: It comes from a processed signal of a hydraulic system; from there I know that the dots in the red boxes are anomalies that happen when the system fails. I'm using the first 3k records to train a model, both in PyCaret and H2O. These 3k records cover 5 cycles of data, as shown in the image below: To train the model in PyCaret I'm using the following code: from pycaret.anomaly import * from pycaret.datasets import get_data import pandas as pd exp_ano101 = setup(df[["Pressure_median_mw_2500_ac"]][0:3000], normalize = True, session_id = 123) iforest = create_model('iforest') unseen_predictions = predict_model(iforest, data=df[["Pressure_median_mw_2500_ac"]]) unseen_predictions = unseen_predictions.reset_index() The results I get from PyCaret are pretty good: And with a bit of post processing I can get the following, which is quite close to the ideal: On the other hand, using H2O with the following code: import pandas as pd from h2o.estimators import H2OIsolationForestEstimator, H2OGenericEstimator import tempfile ifr = H2OIsolationForestEstimator() ifr.train(x="Pressure_median_mw_2500_ac",training_frame=hf) th = df["mean_length"][0:3000].quantile(0.05) df["anomaly"] = df["mean_length"].apply(lambda x: "1" if x> th else "0") I get this: Which is a huge difference, since it is not detecting this block as anomalies: My doubt is: how can I get results similar to the ones I get from PyCaret, given that I'm using the same algorithm, which is Isolation Forest? Even using SVM in PyCaret I get closer results than using Isolation Forest in H2O.
TL;DR: your problem would be massively simplified by making the instances on which you detect anomalies be cycles, not individual sensor samples. The differences between the methods you applied are probably due to differences in hyperparameters, and the sensitivity to hyperparameters is due to the less-than-ideal problem specification. This is a time series, and your anomalies seem to be stateful - an anomaly starts to occur, affects many time steps, then recovers again. However, you appear to be trying to detect anomalies in individual time steps / samples, which will not work well, because in the anomalous condition the highest values are still within the normal range of individual data points from the normal condition. Furthermore, there are strong temporal patterns in your data for the normal condition, and these cannot be modeled with such an approach. That different software packages give different not-so-good results is expected, as tradeoffs have to be made, and different hyperparameters will influence this. What you should do is transform your original time series to get instances that are more meaningful than individual point samples. The best option for this kind of cyclic process with strong similarities between cycles is to transform it into one time series per cycle. This requires knowing (or reliably detecting) when a cycle starts. If the cycle start is not available, one can instead use a sliding-window approach, where the window is long enough to cover one or more cycles. Once you have such a set of windows, you can think about doing anomaly detection on them. Start by computing basic statistics that summarize each window (mean, std, min, max, max-min, etc.). The anomalies you have shown as an example will be trivially separable by the mean value of the cycle (or the max or min). You don't even need an isolation forest; a Gaussian Mixture Model will do just fine and allows for more interpretable results. This should work across a wide range of models and hyperparameters. Once a basic solution that captures such large discrepancies is in place, one can consider going further. A sequence-model autoencoder, for example, would be able to pick up much smaller deviations, given enough data.
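The windowing idea can be sketched in a few lines. This is a minimal illustration on synthetic data; in the real case the windows would come from the Pressure_median_mw_2500_ac column and the window length from the known cycle length, both of which are assumed here.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
cycle_len = 600                                  # assumption: ~600 samples per cycle
signal = np.tile(np.sin(np.linspace(0, 2 * np.pi, cycle_len)), 10)
signal = signal + rng.normal(0, 0.05, signal.size)
signal[7 * cycle_len:8 * cycle_len] += 0.8       # inject one anomalous cycle

def window_features(x, w):
    # summarize each non-overlapping window with basic statistics
    chunks = x[:(len(x) // w) * w].reshape(-1, w)
    return np.column_stack([chunks.mean(1), chunks.std(1), chunks.min(1),
                            chunks.max(1), chunks.max(1) - chunks.min(1)])

features = window_features(signal, cycle_len)
train = features[:5]                             # first 5 cycles are known-normal
gmm = GaussianMixture(n_components=1, covariance_type="diag", random_state=0).fit(train)
scores = gmm.score_samples(features)             # log-likelihood per window
for i, s in enumerate(scores):
    print(f"cycle {i}: score {s:.1f}")           # the injected cycle scores far below the rest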
4
3
68,414,632
2021-7-16
https://stackoverflow.com/questions/68414632/pickle-load-fails-on-protocol-4-objects-from-python-3-7-when-using-python-3-8
Python changed its pickle protocol to 4 in python 3.4 to 3.7 and again changed it to protocol=5 in python 3.8. How do I open older pickled files in python 3.8? I tried: >>> with open('data_frame_111.pkl','rb') as pfile: ... x1 = pickle.load(pfile) ... Traceback (most recent call last): File "<stdin>", line 2, in <module> AttributeError: Can't get attribute 'new_block' on <module 'pandas.core.internals.blocks' from '/opt/anaconda3/lib/python3.8/site- packages/pandas/core/internals/blocks.py'> and >>> with open('data_frame_111.pkl','rb') as pfile: ... x1 = unpkl.load(pfile, protocol=4) but whereas protocol is a keyword in pickle.dump it is not part of pickle.load. Instantiating pickle.Unpickler() also doesn't work. But obviously there should be a way. In python 3.7, I would import pickle5 and use that to open newer pickles, but can't find documentation on doing the reverse in python 3.8.
You need to upgrade pandas to the latest version (1.3.1 worked for me). Or, to be more precise, the pandas version used when you did pickle.dump(...) should be the same pandas version used when you do pickle.load(...).
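A quick sketch of the version check implied here, reusing the file name from the question:

import pickle
import pandas as pd

print(pd.__version__)   # if this is older than the pandas that wrote the pickle,
                        # upgrade first, e.g.: python3.8 -m pip install --upgrade pandas

with open("data_frame_111.pkl", "rb") as pfile:
    df = pickle.load(pfile)        # pd.read_pickle("data_frame_111.pkl") also works
print(df.shape)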
13
9
68,394,091
2021-7-15
https://stackoverflow.com/questions/68394091/fastapi-sqlalchemy-pydantic-%e2%86%92-how-to-process-many-to-many-relations
I have editors and articles. Many editors may be related to many articles and many articles may have many editors at same time. My DB tables are Article id subject text 1 New Year Holidays In this year... etc etc etc Editor id name email 1 John Smith some@email EditorArticleRelation editor_id article_id 1 1 My models are from sqlalchemy import Boolean, Column, Integer, String, ForeignKey from sqlalchemy.orm import relationship from database import Base class Editor(Base): __tablename__ = "editor" id = Column(Integer, primary_key=True, index=True) name = Column(String(32), unique=False, index=False, nullable=True) email = Column(String(115), unique=True, index=True) articles = relationship("Article", secondary=EditorArticleRelation, back_populates="articles", cascade="all, delete") class Article(Base): __tablename__ = "article" id = Column(Integer, primary_key=True, index=True) subject = Column(String(32), unique=True, index=False) text = Column(String(256), unique=True, index=True, nullable=True) editors = relationship("Editor", secondary=EditorArticleRelation, back_populates="editors", cascade="all, delete") EditorArticleRelation = Table('editorarticlerelation', Base.metadata, Column('editor_id', Integer, ForeignKey('editor.id')), Column('article_id', Integer, ForeignKey('article.id')) ) My schemas are from typing import Optional, List from pydantic import BaseModel class EditorBase(BaseModel): name: Optional[str] email: str class EditorCreate(EditorBase): pass class Editor(EditorBase): id: int class Config: orm_mode = True class ArticleBase(BaseModel): subject: str text: str class ArticleCreate(ArticleBase): # WHAT I NEED TO SET HERE??? editor_ids: List[int] = [] class Article(ArticleBase): id: int editors: List[Editor] = [] class Config: orm_mode = True My crud def create_article(db: Session, article_data: schema.ArticleCreate): db_article = model.Article(subject=article_data.subject, text=article_data.text, ??? HOW TO SET EDITORS HERE ???) db.add(db_article) db.commit() db.refresh(db_article) return db_article My route @app.post("/articles/", response_model=schema.Article) def create_article(article_data: schema.ArticleCreate, db: Session = Depends(get_db)): db_article = crud.get_article_by_name(db, name=article_data.name) if db_article: raise HTTPException(status_code=400, detail="article already registered") if len(getattr(article_data, 'editor_ids', [])) > 0: ??? WHAT I NEED TO SET HERE??? return crud.create_article(db=db, article_data=article_data) What I want β†’ I want to post data for article creation API and automatically resolve and add editor relations, or raise error if some of editors doesn't exist: { "subject": "Fresh news" "text": "Today is ..." "editor_ids": [1, 2, ...] } Questions are: How to correctly set crud operations (HOW TO SET EDITORS HERE place)? How to correctly set create/read schemas and relation fields (especially WHAT I NEED TO SET HERE place)? How to correctly set route code (especially WHAT I NEED TO SET HERE place)? If here is no possible to resolve relations automatically, what place will be better to resolve relations (check if editor exists, etc)? route or crud? Maybe my way is bad at all? If you know any examples how to handle many-to-many relations with pydantic and sqlalchemy, any information will be welcome
Not sure if my solution is the most effective, but I did it this way: route (same as in question): ... @app.post("/articles/", response_model=schema.Article) def create_article(article_data: schema.ArticleCreate, db: Session = Depends(get_db)): db_article = crud.get_article_by_name(db, name=article_data.name) if db_article: raise HTTPException(status_code=400, detail="article already registered") return crud.create_article(db=db, article_data=article_data) ... schema (same as in question): ... class ArticleCreate(ArticleBase): editor_ids: List[int] = [] ... crud (solution is here): def create_article(db: Session, article_data: schema.ArticleCreate): db_article = model.Article(subject=article_data.subject, text=article_data.text) if (editors := db.query(model.Editor).filter(model.Editor.id.in_(article_data.editor_ids))).count() == len(article_data.editor_ids): db_article.editors.extend(editors) else: # if even one editor is not found, an error is raised # if existence does not matter, you can skip this check and add relations only for the existing editors raise HTTPException(status_code=404, detail="editor not found") db.add(db_article) db.commit() db.refresh(db_article) return db_article Any better ideas are welcome
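A hedged usage sketch for the route above, assuming the FastAPI app is running locally on port 8000 and that editors 1 and 2 already exist in the database:

import requests

payload = {
    "subject": "Fresh news",
    "text": "Today is ...",
    "editor_ids": [1, 2],
}
resp = requests.post("http://localhost:8000/articles/", json=payload)
print(resp.status_code)  # 200 with the created article (editors included),
print(resp.json())       # 404 if any editor_id does not exist, 400 if the article already exists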
10
6
68,446,601
2021-7-19
https://stackoverflow.com/questions/68446601/pandas-class-with-pandas-pipe
@pd.api.extensions.register_dataframe_accessor("data_cleaner") class DataCleaner: def __init__(self, pandas_obj): self._obj = pandas_obj def multiply(self, col): self._obj[col] = self._obj[col] * self._obj[col] return self._obj def square(self, col): self._obj[col] = self._obj[col]**2 return self._obj def add_strings(self, col): self._obj[col] = self._obj[col] + self._obj[col] return self._obj def process_all(self): self._obj.pipe( self.multiply(col='A'), self.square(col='B') self.add_strings(col='C') ) class DataProcessor(DataCleaner): data = [ [1, 1.5, "AABB"], [2, 2.5, "BBCC"], [3, 3.5, "CCDD"], [4, 4.5, "DDEE"], [5, 5.5, "EEFF"], [6, 6.5, "FFGG"], ] def __init__(self): self.df = pd.DataFrame(data=DataProcessor.data, columns=['A', 'B', 'C']) def get_data(self): return self.df def clean_the_df(self, obj): obj = obj.data_cleaner.multiply(col='A') obj = obj.data_cleaner.square(col='B') obj = obj.data_cleaner.add_strings(col='C') return obj def process_all(self, obj): obj = obj.data_cleaner.process_all() if __name__ == '__main__': data = DataProcessor().get_data() # this works print(DataProcessor().clean_the_df(data)) # this does not work print(DataProcessor().process_all(data)) I want to use pandas .pipe() function with the dataframe accessor to chain methods together. In the DataCleaner class I have a method process_all that contains other cleaning methods inside the class. I want to chain them together and process the dataframe with multiple methods in one go. It would be nice to keep this chaining method inside the DataCleaner class so all I have to do is call it one time from another Class or file, e.g. process_all inside DataProcessor. That way I do not have to individually write out each method to process the dataframe one at a time, for example in DataProcessor.clean_the_df(). The problem is that process_all is complaining: TypeError: 'DataFrame' object is not callable So my question is, how do I use the pandas dataframe accessor, self.obj, with .pipe() to chain together multiple cleaning methods inside one function so that I can call that function from another class and process a dataframe with multiple methods in one go? Desired output with process_all: A B C 0 1 2.25 AABBAABB 1 4 6.25 BBCCBBCC 2 9 12.25 CCDDCCDD 3 16 20.25 DDEEDDEE 4 25 30.25 EEFFEEFF 5 36 42.25 FFGGFFGG
The question here is that .pipe expects a function that takes a DataFrame, a Series, or a GroupBy object. The documentation is quite clear with regards to that: https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.pipe.html. On top of that, the DataCleaner.process_all function is not implementing .pipe correctly. In order to chain several functions, the expected syntax is: >>>(df.pipe(h) ... .pipe(g, arg1=a) ... .pipe(func, arg2=b, arg3=c) ... ) which is equivalent to >>>func(g(h(df), arg1=a), arg2=b, arg3=c) In order to combine the data frame accessor with .pipe you need to define static methods within your DataCleaner class that take a DataFrame and a column as arguments. Here is an example that fixes your problem: @pd.api.extensions.register_dataframe_accessor("data_cleaner") class DataCleaner: def __init__(self, pandas_obj): self._obj = pandas_obj @staticmethod def multiply(df, col): df[col] = df[col] * df[col] return df @staticmethod def square(df, col): df[col] = df[col]**2 return df @staticmethod def add_strings(df, col): df[col] = df[col] + df[col] return df def process_all(self): self._obj = (self._obj.pipe(self.multiply, col='A') .pipe(self.square, col='B') .pipe(self.add_strings, col='C')) return self._obj class DataProcessor(DataCleaner): data = [ [1, 1.5, "AABB"], [2, 2.5, "BBCC"], [3, 3.5, "CCDD"], [4, 4.5, "DDEE"], [5, 5.5, "EEFF"], [6, 6.5, "FFGG"], ] def __init__(self): self.df = pd.DataFrame(data=DataProcessor.data, columns=['A', 'B', 'C']) def get_data(self): return self.df def clean_the_df(self, obj): obj = obj.data_cleaner.multiply(obj, col='A') # modified to use static method obj = obj.data_cleaner.square(obj, col='B') obj = obj.data_cleaner.add_strings(obj, col='C') return obj def process_all(self, obj): obj = obj.data_cleaner.process_all() return obj Using this code, running this should yield: >>>data = data = DataProcessor().get_data() >>>print(DataProcessor().process_all(data)) A B C 0 1 2.25 AABBAABB 1 4 6.25 BBCCBBCC 2 9 12.25 CCDDCCDD 3 16 20.25 DDEEDDEE 4 25 30.25 EEFFEEFF 5 36 42.25 FFGGFFGG
5
2
68,419,632
2021-7-17
https://stackoverflow.com/questions/68419632/apply-function-only-on-slice-of-array-under-jit
I am using JAX, and I want to perform an operation like @jax.jit def fun(x, index): x[:index] = other_fun(x[:index]) return x This cannot be performed under jit. Is there a way of doing this with jax.ops or jax.lax? I thought of using jax.ops.index_update(x, idx, y) but I cannot find a way of computing y without incurring in the same problem again.
The previous answer by @rvinas using dynamic_slice works well if your index is static, but you can also accomplish this with a dynamic index using jnp.where. For example: import jax import jax.numpy as jnp def other_fun(x): return x + 1 @jax.jit def fun(x, index): mask = jnp.arange(x.shape[0]) < index return jnp.where(mask, other_fun(x), x) x = jnp.arange(5) print(fun(x, 3)) # [1 2 3 3 4]
7
7
68,460,396
2021-7-20
https://stackoverflow.com/questions/68460396/contractnotfound-no-contract-deployed-at
I have been involved in the chainlink bootcamp and trying to finishing the final 'Exercise 3: Putting it all together'. However, I am stuck running: brownie run scripts/price_exercise_scripts/01_deploy_price_exercise.py --network kovan ContractNotFound: No contract deployed at 0xF4030086511a5bEEa4966F8cA5B36dbC97BeE88c Printed contract_type._name is a mock address returned from `MockV3Aggregator which also doesn't make sense, why the code calls this logic. def get_contract(contract_name): contract_type = contract_to_mock[contract_name] if network.show_active() in NON_FORKED_LOCAL_BLOCKCHAIN_ENVIRONMENTS: if len(contract_type) <= 0: deploy_mocks() contract = contract_type[-1] else: try: contract_address = config["networks"][network.show_active()][contract_name] contract = Contract.from_abi( contract_type._name, contract_address, contract_type.abi ) except KeyError: print( f"{network.show_active()} address not found, perhaps you should add it to the config or deploy mocks?") print( f"brownie run scripts/deploy_mocks.py --network {network.show_active()}" ) return contract I am struggling to understand this error message, should this command not be deploying contracts? I.e they should already already exist on the kovan network? Any insights welcome!
Problem - I was using the ethereum mainnet address instead of the correct kovan network address for btc / usd price. Changing the btc_usd_price_feed value to 0x6135b13325bfC4B00278B4abC5e20bbce2D6580e in the config.yml fixed this issue for me. price feed addresses
4
3
68,460,544
2021-7-20
https://stackoverflow.com/questions/68460544/how-to-add-uirevision-directly-to-figure-in-plotly-dash-for-automatic-updates
I have a Plotly figure built in Python that updates automatically. I want to preserve dashboard zooms even with automatic updates. The documentation in Plotly says this can be done using the layout uirevision field, per the this community writeup. The docs give this as an example of the return dictionary: return { 'data': data, 'layout': { # `uirevsion` is where the magic happens # this key is tracked internally by `dcc.Graph`, # when it changes from one update to the next, # it resets all of the user-driven interactions # (like zooming, panning, clicking on legend items). # if it remains the same, then that user-driven UI state # doesn't change. # it can be equal to anything, the important thing is # to make sure that it changes when you want to reset the user # state. # # in this example, we *only* want to reset the user UI state # when the user has changed their dataset. That is: # - if they toggle on or off reference, don't reset the UI state # - if they change the color, then don't reset the UI state # so, `uirevsion` needs to change when the `dataset` changes: # this is easy to program, we'll just set `uirevision` to be the # `dataset` value itself. # # if we wanted the `uirevision` to change when we add the "reference" # line, then we could set this to be `'{}{}'.format(dataset, reference)` 'uirevision': dataset, 'legend': {'x': 0, 'y': 1} } } However, my figure is built more like this: import plotly.express as px @app.callback( Output("graph", "figure"), [Input("interval-component", "n_intervals")]) def display_graph(n_intervals): # Logic for obtaining data/processing is not shown my_figure = px.line(my_data_frame, x=my_data_frame.index, y=['line_1', 'line_2'], title='Some Title', template='plotly_dark') return my_figure In other words, since I am not returning a dictionary, but a plotly express figure directly, how can I directly access the uirevision value so that UI changes from the user are preserved?
Use the figure dictionary, which can be accessed like so: my_figure['layout']['uirevision'] = 'some_value' This can also be used to access other useful aspects of the figure, such as changing the line color of a specific line entry: my_figure['data'][2]['line']['color'] = '#FFFF00' To see the other entry options, print out my_figure in a Python session. Note: since the uirevision option isn't documented very well (at least, not in my searching online), I thought it worth posting this as an option.
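A minimal sketch of the answer applied to the callback from the question; the DataFrame here is a stand-in for the real data source, and the uirevision value is an arbitrary constant.

import pandas as pd
import plotly.express as px

my_data_frame = pd.DataFrame({"line_1": [1, 3, 2, 5], "line_2": [2, 2, 4, 3]})

def display_graph(n_intervals):
    my_figure = px.line(my_data_frame, x=my_data_frame.index,
                        y=["line_1", "line_2"],
                        title="Some Title", template="plotly_dark")
    # keep this value constant across updates so zoom/pan is preserved;
    # change it only when the user-driven UI state should be reset
    my_figure["layout"]["uirevision"] = "keep-ui-state"
    # equivalent attribute form: my_figure.update_layout(uirevision="keep-ui-state")
    return my_figure

print(display_graph(0).layout.uirevision)  # keep-ui-state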
6
2
68,453,051
2021-7-20
https://stackoverflow.com/questions/68453051/decode-a-uint8array-into-a-json
I am fetching data from an API in order to show sales and finance reports, but I receive a type gzip file which I managed to convert into a Uint8Array. I'd like to somehow parse-decode this into a JSON file that I can use to access data and create charts in my frontend with. I was trying with different libraries (pako and cborg seemed to be the ones with the closest use cases), but I ultimately get an error Error: CBOR decode error: unexpected character at position 0 This is the code as I have it so far: let req = https.request(options, function (res) { console.log("Header: " + JSON.stringify(res.headers)); res.setEncoding("utf8"); res.on("data", function (body) { const deflatedBody = pako.deflate(body); console.log("DEFLATED DATA -----> ", typeof deflatedBody, deflatedBody); console.log(decode(deflatedBody)); }); res.on("error", function (error) { console.log("connection could not be made " + error.message); }); }); req.end(); }; I hope anyone has stumbled upon this already and has some idea. Thanks a lot!
Please visit this answer https://stackoverflow.com/a/12776856/16315663 to retrieve GZIP data from the response. Assuming, You have already retrieved full data as UInt8Array. You just need the UInt8Array as String const jsonString = Buffer.from(dataAsU8Array).toString('utf8') const parsedData = JSON.parse(jsonString) console.log(parsedData) Edit Here is what worked for me const {request} = require("https") const zlib = require("zlib") const parseGzip = (gzipBuffer) => new Promise((resolve, reject) =>{ zlib.gunzip(gzipBuffer, (err, buffer) => { if (err) { reject(err) return } resolve(buffer) }) }) const fetchJson = (url) => new Promise((resolve, reject) => { const r = request(url) r.on("response", (response) => { if (response.statusCode !== 200) { reject(new Error(`${response.statusCode} ${response.statusMessage}`)) return } const responseBufferChunks = [] response.on("data", (data) => { console.log(data.length); responseBufferChunks.push(data) }) response.on("end", async () => { const responseBuffer = Buffer.concat(responseBufferChunks) const unzippedBuffer = await parseGzip(responseBuffer) resolve(JSON.parse(unzippedBuffer.toString())) }) }) r.end() }) fetchJson("https://wiki.mozilla.org/images/f/ff/Example.json.gz") .then((result) => { console.log(result) }) .catch((e) => { console.log(e) })
7
14
68,455,515
2021-7-20
https://stackoverflow.com/questions/68455515/different-results-in-idle-and-python-shell-using-is
I am exploring python is vs ==, when I was exploring it, I find out if I write following; >>> a = 10.24 >>> b = 10.24 in a python shell and on typing >> a is b, it gives me output as false. But when I write the following code in a python editor and run it I get true. a = 10.24 b = 10.24 print(a is b) Can anyone explain why I am getting two different results of the same variables and expression?
You should not rely on is when you want to test equality of values. The is keyword compares the ids of the objects the variables refer to, i.e. it checks whether they are the same object; this is not the same as checking whether they hold the same value. Relying on it is only safe for small integers in the range [-5, 256] in CPython, because those are cached as singletons (the same cached object is reused instead of a new one being created each time). See What's with the integer cache maintained by the interpreter? As for why it behaves differently in a REPL environment versus a script, see Different behavior in python script and python idle?. The gist of it is that a script is compiled as a whole file, while a REPL environment like IPython or the IDLE shell compiles one statement at a time. In the shell, a = 10.24 and b = 10.24 are executed in different contexts, so they produce two separate float objects; in a script, the compiler can reuse a single constant for both.
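A small demonstration of the distinction: == compares values, is compares object identity, and whether two equal literals end up as one object depends on how the code was compiled.

a = 10.24
b = 10.24
print(a == b)         # True: the values are equal
print(a is b)         # True in a script (one shared constant), typically False line-by-line in a shell
print(id(a), id(b))   # equal only when both names point at the same object

x, y = 256, 256
print(x is y)         # True in CPython: ints in [-5, 256] are cached singletons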
5
5
68,444,252
2021-7-19
https://stackoverflow.com/questions/68444252/multiple-training-with-huggingface-transformers-will-give-exactly-the-same-resul
I have a function that will load a pre-trained model from huggingface and fine-tune it for sentiment analysis then calculates the F1 score and returns the result. The problem is when I call this function multiple times with the exact same arguments, it will give the exact same metric score which is expected, except for the first time which is different, how is that possible? This is my function which is written based on this tutorial in huggingface: import uuid import numpy as np from datasets import ( load_dataset, load_metric, DatasetDict, concatenate_datasets ) from transformers import ( AutoTokenizer, AutoModelForSequenceClassification, DataCollatorWithPadding, TrainingArguments, Trainer, ) CHECKPOINT = "distilbert-base-uncased" SAVING_FOLDER = "sst2" def custom_train(datasets, checkpoint=CHECKPOINT, saving_folder=SAVING_FOLDER): model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2) tokenizer = AutoTokenizer.from_pretrained(checkpoint) def tokenize_function(example): return tokenizer(example["sentence"], truncation=True) tokenized_datasets = datasets.map(tokenize_function, batched=True) data_collator = DataCollatorWithPadding(tokenizer=tokenizer) saving_folder = f"{SAVING_FOLDER}_{str(uuid.uuid1())}" training_args = TrainingArguments(saving_folder) trainer = Trainer( model, training_args, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["validation"], data_collator=data_collator, tokenizer=tokenizer, ) trainer.train() predictions = trainer.predict(tokenized_datasets["test"]) print(predictions.predictions.shape, predictions.label_ids.shape) preds = np.argmax(predictions.predictions, axis=-1) metric_fun = load_metric("f1") metric_result = metric_fun.compute(predictions=preds, references=predictions.label_ids) return metric_result And then I will run this function several times with the same datasets, and append the result of the returned F1 score each time: raw_datasets = load_dataset("glue", "sst2") small_datasets = DatasetDict({ "train": raw_datasets["train"].select(range(100)).flatten_indices(), "validation": raw_datasets["validation"].select(range(100)).flatten_indices(), "test": raw_datasets["validation"].select(range(100, 200)).flatten_indices(), }) results = [] for i in range(4): result = custom_train(small_datasets) results.append(result) And then when I check the results list: [{'f1': 0.7755102040816325}, {'f1': 0.5797101449275361}, {'f1': 0.5797101449275361}, {'f1': 0.5797101449275361}] Something that may come to mind is that when I load a pre-trained model, the head will be initialized with random weights and that is why the results are different, if that is the case, why only the first one is different and the others are exactly the same?
Sylvain Gugger answered this question here: https://discuss.huggingface.co/t/multiple-training-will-give-exactly-the-same-result-except-for-the-first-time/8493 You need to set the seed before instantiating your model, otherwise the random head is not initialized the same way, that’s why the first run will always be different. The subsequent runs are all the same because the seed has been set by the Trainer in the train method. To set the seed: from transformers import set_seed set_seed(42)
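A sketch of where that call would go in the custom_train() function from the question, so every run starts from the same randomly initialized head (the default argument values are inlined here for brevity):

from transformers import set_seed, AutoModelForSequenceClassification, AutoTokenizer

def custom_train(datasets, checkpoint="distilbert-base-uncased", saving_folder="sst2"):
    set_seed(42)  # must run before from_pretrained() creates the new classification head
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    ...  # the rest of the function is unchanged from the question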
8
16
68,436,511
2021-7-19
https://stackoverflow.com/questions/68436511/tabula-py-read-pdf-with-template-method
I am trying to read a particular portion of a document as a table. It is structured as a table but there are no dividing lines between cells, rows or columns. I had success with using the read_pdf() method with the area and column arguments. I could specify exactly where the table starts and ends and where the columns divide. But my pdf has multiple different sizes of tables on each page with no clear markers to identify them and I have to use these arguments. I found out about the read_pdf_with_template() method in the Github repo issues here, and a bit more about it in the documentation, pull request and the example notebook. But nowhere is it mentioned how to structure the template.json and which arguments I could use or what they mean. I tried inserting the area coordinates into x1, y1, x2, y2, passing the columns list in the method argument and the height, width arguments with the size of the table. But it is picking up a top centre section of the pdf which does not equate to any of the coordinates I inserted when I reverse calculated everything. Here's the page I'm trying to read (I've deleted some sensitive data) and here are the code snippets import tabula tables = tabula.read_pdf_with_template(input_path = "test.pdf", template_path = "template.json", columns=[195, 310, 380]) print(tables[0]) [ { "page": 1, "extraction_method": "stream", "x1": 225, "x2": 35, "y1": 375, "y2": 565, "width": 525, "height": 400 } ]
I was just being a dum-dum. Templates are not something that you generate manually. They are supposed to be generated by the tabula app as mentioned here. Just download tabula from the official website. Once you launch the app, it's fairly simple. Manually click and drag on each table on each page and click on the download template button on top.
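For reference, once the template has been drawn and downloaded from the Tabula app, reading the tables is a single call; the file names here are placeholders.

import tabula

tables = tabula.read_pdf_with_template(
    input_path="test.pdf",
    template_path="tabula-template.json",  # the JSON exported by the Tabula app
)
for i, table in enumerate(tables):
    print(f"table {i}: {table.shape}")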
6
10
68,449,103
2021-7-20
https://stackoverflow.com/questions/68449103/tf-keras-preprocessing-image-dataset-from-directory-value-error-no-images-found
Below is my code to ensure that the folder has images, but tf.keras.preprocessing.image_dataset_from_directory returns no images found. What did I do wrong? Thanks. DATASET_PATH = pathlib.Path('C:\\Users\\xxx\\Documents\\images') image_count = len(list(DATASET_PATH.glob('.\\*.jpg'))) print(image_count) output = 2715 batch_size = 4 img_height = 32 img_width = 32 train_ds = tf.keras.preprocessing.image_dataset_from_directory( DATASET_PATH.name, validation_split=0.8, subset="training", seed=123, image_size=(img_height, img_width), batch_size=batch_size) output: Found 0 files belonging to 0 classes. Using 0 files for training. Traceback (most recent call last): File ".\tensorDataPreProcessed.py", line 23, in <module> batch_size=batch_size) File "C:\Users\xxx\Anaconda3\envs\xxx\lib\site-packages\tensorflow\python\keras\preprocessing\image_dataset.py", line 200, in image_dataset_from_directory raise ValueError('No images found.') ValueError: No images found.
There are two issues here. Firstly, image_dataset_from_directory requires a subfolder for each of the classes within the directory; this way it can automatically identify and assign class labels to images. So the standard folder structure for TF is: data | |___train | |___class_1 | |___class_2 | |___validation | |___class_1 | |___class_2 | |___test(optional) |___class_1 |___class_2 The other issue is that you are attempting to create a model using only one class, which is not a viable approach. The model needs to be able to differentiate the class you are trying to generate with the GAN from other images, but to do this it needs a sample of images that do not belong to this class.
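A hedged sketch of the fix, assuming the images have been moved into one subfolder per class under a common root; the paths and class layout here are placeholders, not taken from the original question.

import tensorflow as tf

ROOT = "C:/Users/xxx/Documents/images_root"   # contains e.g. images_root/class_a/*.jpg

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    ROOT,                  # pass the full path, not just the folder name
    labels="inferred",     # class labels are taken from the subfolder names
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(32, 32),
    batch_size=4,
)
# For a genuinely unlabeled, single-class set (e.g. raw GAN training data),
# pass label_mode=None so that no class labels are expected.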
6
12
68,418,727
2021-7-17
https://stackoverflow.com/questions/68418727/pip-install-py-find-1st-fails-on-ubuntu20-centos-with-python3-9
This is my process. I start a new aws t2.micro ec2 on ubuntu20 and run this script sudo apt-get update sudo apt-get install gcc sudo apt-get install python3.9 sudo apt-get install python3.9-venv curl --silent --show-error --retry 5 https://bootstrap.pypa.io/get-pip.py > get-pip.py python3.9 get-pip.py sudo apt-get install python-dev sudo apt-get install python3-dev python3.9 -m pip install --upgrade pip python3.9 -m pip install --upgrade pip setuptools python3.9 -m pip install --upgrade wheel python3.9 -m pip install testresources sudo apt-get install build-essential sudo apt-get install libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev libffi-dev wget liblzma-dev lzma sudo apt-get install python3-devel python3.9 -m pip install p5py python3.9 -m pip install PEP517 python3.9 -m pip install py-find-1st The last line is the problem I get this output (error is at the bottom, I included more output incase it's helpful, had to trim output a little bit) Preparing to unpack .../python3.9-venv_3.9.5-3~20.04.1_amd64.deb ... Unpacking python3.9-venv (3.9.5-3~20.04.1) ... Setting up python-pip-whl (20.0.2-5ubuntu1.5) ... Setting up python3.9-venv (3.9.5-3~20.04.1) ... Defaulting to user installation because normal site-packages is not writeable Collecting pip Downloading pip-21.1.3-py3-none-any.whl (1.5 MB) |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1.5 MB 5.0 MB/s Collecting wheel Downloading wheel-0.36.2-py2.py3-none-any.whl (35 kB) Installing collected packages: wheel, pip WARNING: The script wheel is installed in '/home/ubuntu/.local/bin' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. WARNING: The scripts pip, pip3 and pip3.9 are installed in '/home/ubuntu/.local/bin' which is not on PATH. Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. Successfully installed pip-21.1.3 wheel-0.36.2 Reading package lists... Done Building dependency tree Reading state information... Done Note, selecting 'python-dev-is-python2' instead of 'python-dev' The following additional packages will be installed: libexpat1-dev libpython2-dev libpython2-stdlib libpython2.7 libpython2.7-dev libpython2.7-minimal libpython2.7-stdlib python-is-python2 python2 python2-dev python2-minimal python2.7 python2.7-dev python2.7-minimal Suggested packages: python2-doc python-tk python2.7-doc binfmt-support The following NEW packages will be installed: libexpat1-dev libpython2-dev libpython2-stdlib libpython2.7 libpython2.7-dev libpython2.7-minimal libpython2.7-stdlib python-dev-is-python2 python-is-python2 python2 python2-dev python2-minimal python2.7 python2.7-dev python2.7-minimal 0 upgraded, 15 newly installed, 0 to remove and 78 not upgraded. Need to get 7744 kB of archives. After this operation, 35.1 MB of additional disk space will be used. Do you want to continue? 
[Y/n] Y Get:1 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/universe amd64 libpython2.7-minimal amd64 2.7.18-1~20.04.1 [335 kB] Get:2 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/universe amd64 python2.7-minimal amd64 2.7.18-1~20.04.1 [1285 kB] Get:3 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/universe amd64 python2-minimal amd64 2.7.17-2ubuntu4 [27.5 kB] Get:4 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/universe amd64 libpython2.7-stdlib amd64 2.7.18-1~20.04.1 [1887 kB] Get:5 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/universe amd64 python2.7 amd64 2.7.18-1~20.04.1 [248 kB] Get:6 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/universe amd64 libpython2-stdlib amd64 2.7.17-2ubuntu4 [7072 B] Get:7 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/universe amd64 python2 amd64 2.7.17-2ubuntu4 [26.5 kB] Get:8 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/main amd64 libexpat1-dev amd64 2.2.9-1build1 [116 kB] Get:9 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/universe amd64 libpython2.7 amd64 2.7.18-1~20.04.1 [1038 kB] Get:10 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/universe amd64 libpython2.7-dev amd64 2.7.18-1~20.04.1 [2475 kB] Get:11 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/universe amd64 libpython2-dev amd64 2.7.17-2ubuntu4 [7140 B] Get:12 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/universe amd64 python-is-python2 all 2.7.17-4 [2496 B] Get:13 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/universe amd64 python2.7-dev amd64 2.7.18-1~20.04.1 [287 kB] Get:14 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/universe amd64 python2-dev amd64 2.7.17-2ubuntu4 [1268 B] Get:15 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/universe amd64 python-dev-is-python2 all 2.7.17-4 [1396 B] Fetched 7744 kB in 0s (46.3 MB/s) Selecting previously unselected package libpython2.7-minimal:amd64. (Reading database ... 65082 files and directories currently installed.) Preparing to unpack .../0-libpython2.7-minimal_2.7.18-1~20.04.1_amd64.deb ... Unpacking libpython2.7-minimal:amd64 (2.7.18-1~20.04.1) ... Selecting previously unselected package python2.7-minimal. Preparing to unpack .../1-python2.7-minimal_2.7.18-1~20.04.1_amd64.deb ... Unpacking python2.7-minimal (2.7.18-1~20.04.1) ... Selecting previously unselected package python2-minimal. Preparing to unpack .../2-python2-minimal_2.7.17-2ubuntu4_amd64.deb ... Unpacking python2-minimal (2.7.17-2ubuntu4) ... Selecting previously unselected package libpython2.7-stdlib:amd64. Preparing to unpack .../3-libpython2.7-stdlib_2.7.18-1~20.04.1_amd64.deb ... Unpacking libpython2.7-stdlib:amd64 (2.7.18-1~20.04.1) ... Selecting previously unselected package python2.7. Preparing to unpack .../4-python2.7_2.7.18-1~20.04.1_amd64.deb ... Unpacking python2.7 (2.7.18-1~20.04.1) ... Selecting previously unselected package libpython2-stdlib:amd64. Preparing to unpack .../5-libpython2-stdlib_2.7.17-2ubuntu4_amd64.deb ... Unpacking libpython2-stdlib:amd64 (2.7.17-2ubuntu4) ... Setting up libpython2.7-minimal:amd64 (2.7.18-1~20.04.1) ... Setting up python2.7-minimal (2.7.18-1~20.04.1) ... Linking and byte-compiling packages for runtime python2.7... Setting up python2-minimal (2.7.17-2ubuntu4) ... Selecting previously unselected package python2. (Reading database ... 65829 files and directories currently installed.) Preparing to unpack .../0-python2_2.7.17-2ubuntu4_amd64.deb ... Unpacking python2 (2.7.17-2ubuntu4) ... 
Selecting previously unselected package libexpat1-dev:amd64. Preparing to unpack .../1-libexpat1-dev_2.2.9-1build1_amd64.deb ... Unpacking libexpat1-dev:amd64 (2.2.9-1build1) ... Selecting previously unselected package libpython2.7:amd64. Preparing to unpack .../2-libpython2.7_2.7.18-1~20.04.1_amd64.deb ... Unpacking libpython2.7:amd64 (2.7.18-1~20.04.1) ... Selecting previously unselected package libpython2.7-dev:amd64. Preparing to unpack .../3-libpython2.7-dev_2.7.18-1~20.04.1_amd64.deb ... Unpacking libpython2.7-dev:amd64 (2.7.18-1~20.04.1) ... Selecting previously unselected package libpython2-dev:amd64. Preparing to unpack .../4-libpython2-dev_2.7.17-2ubuntu4_amd64.deb ... Unpacking libpython2-dev:amd64 (2.7.17-2ubuntu4) ... Selecting previously unselected package python-is-python2. Preparing to unpack .../5-python-is-python2_2.7.17-4_all.deb ... Unpacking python-is-python2 (2.7.17-4) ... Selecting previously unselected package python2.7-dev. Preparing to unpack .../6-python2.7-dev_2.7.18-1~20.04.1_amd64.deb ... Unpacking python2.7-dev (2.7.18-1~20.04.1) ... Selecting previously unselected package python2-dev. Preparing to unpack .../7-python2-dev_2.7.17-2ubuntu4_amd64.deb ... Unpacking python2-dev (2.7.17-2ubuntu4) ... Selecting previously unselected package python-dev-is-python2. Preparing to unpack .../8-python-dev-is-python2_2.7.17-4_all.deb ... Unpacking python-dev-is-python2 (2.7.17-4) ... Setting up libpython2.7-stdlib:amd64 (2.7.18-1~20.04.1) ... Setting up libexpat1-dev:amd64 (2.2.9-1build1) ... Setting up libpython2.7:amd64 (2.7.18-1~20.04.1) ... Setting up libpython2.7-dev:amd64 (2.7.18-1~20.04.1) ... Setting up python2.7 (2.7.18-1~20.04.1) ... Setting up libpython2-stdlib:amd64 (2.7.17-2ubuntu4) ... Setting up python2 (2.7.17-2ubuntu4) ... Setting up libpython2-dev:amd64 (2.7.17-2ubuntu4) ... Setting up python-is-python2 (2.7.17-4) ... Setting up python2.7-dev (2.7.18-1~20.04.1) ... Setting up python2-dev (2.7.17-2ubuntu4) ... Setting up python-dev-is-python2 (2.7.17-4) ... Processing triggers for libc-bin (2.31-0ubuntu9.2) ... Processing triggers for man-db (2.9.1-1) ... Processing triggers for mime-support (3.64ubuntu1) ... Reading package lists... Done Building dependency tree Reading state information... Done The following additional packages will be installed: libpython3-dev libpython3.8 libpython3.8-dev libpython3.8-minimal libpython3.8-stdlib python3.8 python3.8-dev python3.8-minimal zlib1g-dev Suggested packages: python3.8-venv python3.8-doc binfmt-support The following NEW packages will be installed: libpython3-dev libpython3.8-dev python3-dev python3.8-dev zlib1g-dev The following packages will be upgraded: libpython3.8 libpython3.8-minimal libpython3.8-stdlib python3.8 python3.8-minimal 5 upgraded, 5 newly installed, 0 to remove and 73 not upgraded. Need to get 10.9 MB of archives. After this operation, 21.2 MB of additional disk space will be used. Do you want to continue? 
[Y/n] Y Get:1 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/main amd64 python3.8 amd64 3.8.10-0ubuntu1~20.04 [387 kB] Get:2 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/main amd64 libpython3.8 amd64 3.8.10-0ubuntu1~20.04 [1625 kB] Get:3 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/main amd64 libpython3.8-stdlib amd64 3.8.10-0ubuntu1~20.04 [1675 kB] Get:4 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/main amd64 python3.8-minimal amd64 3.8.10-0ubuntu1~20.04 [1898 kB] Get:5 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/main amd64 libpython3.8-minimal amd64 3.8.10-0ubuntu1~20.04 [717 kB] Get:6 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/main amd64 libpython3.8-dev amd64 3.8.10-0ubuntu1~20.04 [3943 kB] Get:7 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/main amd64 libpython3-dev amd64 3.8.2-0ubuntu2 [7236 B] Get:8 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/main amd64 zlib1g-dev amd64 1:1.2.11.dfsg-2ubuntu1.2 [155 kB] Get:9 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/main amd64 python3.8-dev amd64 3.8.10-0ubuntu1~20.04 [510 kB] Get:10 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/main amd64 python3-dev amd64 3.8.2-0ubuntu2 [1212 B] Fetched 10.9 MB in 0s (42.5 MB/s) (Reading database ... 66030 files and directories currently installed.) Preparing to unpack .../0-python3.8_3.8.10-0ubuntu1~20.04_amd64.deb ... Unpacking python3.8 (3.8.10-0ubuntu1~20.04) over (3.8.5-1~20.04.2) ... Preparing to unpack .../1-libpython3.8_3.8.10-0ubuntu1~20.04_amd64.deb ... Unpacking libpython3.8:amd64 (3.8.10-0ubuntu1~20.04) over (3.8.5-1~20.04.2) ... Preparing to unpack .../2-libpython3.8-stdlib_3.8.10-0ubuntu1~20.04_amd64.deb ... Unpacking libpython3.8-stdlib:amd64 (3.8.10-0ubuntu1~20.04) over (3.8.5-1~20.04.2) ... Preparing to unpack .../3-python3.8-minimal_3.8.10-0ubuntu1~20.04_amd64.deb ... Unpacking python3.8-minimal (3.8.10-0ubuntu1~20.04) over (3.8.5-1~20.04.2) ... Preparing to unpack .../4-libpython3.8-minimal_3.8.10-0ubuntu1~20.04_amd64.deb ... Unpacking libpython3.8-minimal:amd64 (3.8.10-0ubuntu1~20.04) over (3.8.5-1~20.04.2) ... Selecting previously unselected package libpython3.8-dev:amd64. Preparing to unpack .../5-libpython3.8-dev_3.8.10-0ubuntu1~20.04_amd64.deb ... Unpacking libpython3.8-dev:amd64 (3.8.10-0ubuntu1~20.04) ... Selecting previously unselected package libpython3-dev:amd64. Preparing to unpack .../6-libpython3-dev_3.8.2-0ubuntu2_amd64.deb ... Unpacking libpython3-dev:amd64 (3.8.2-0ubuntu2) ... Selecting previously unselected package zlib1g-dev:amd64. Preparing to unpack .../7-zlib1g-dev_1%3a1.2.11.dfsg-2ubuntu1.2_amd64.deb ... Unpacking zlib1g-dev:amd64 (1:1.2.11.dfsg-2ubuntu1.2) ... Selecting previously unselected package python3.8-dev. Preparing to unpack .../8-python3.8-dev_3.8.10-0ubuntu1~20.04_amd64.deb ... Unpacking python3.8-dev (3.8.10-0ubuntu1~20.04) ... Selecting previously unselected package python3-dev. Preparing to unpack .../9-python3-dev_3.8.2-0ubuntu2_amd64.deb ... Unpacking python3-dev (3.8.2-0ubuntu2) ... Setting up libpython3.8-minimal:amd64 (3.8.10-0ubuntu1~20.04) ... Setting up zlib1g-dev:amd64 (1:1.2.11.dfsg-2ubuntu1.2) ... Setting up python3.8-minimal (3.8.10-0ubuntu1~20.04) ... Setting up libpython3.8-stdlib:amd64 (3.8.10-0ubuntu1~20.04) ... Setting up python3.8 (3.8.10-0ubuntu1~20.04) ... Setting up libpython3.8:amd64 (3.8.10-0ubuntu1~20.04) ... Setting up libpython3.8-dev:amd64 (3.8.10-0ubuntu1~20.04) ... 
Setting up python3.8-dev (3.8.10-0ubuntu1~20.04) ... Setting up libpython3-dev:amd64 (3.8.2-0ubuntu2) ... Setting up python3-dev (3.8.2-0ubuntu2) ... Processing triggers for libc-bin (2.31-0ubuntu9.2) ... Processing triggers for man-db (2.9.1-1) ... Processing triggers for mime-support (3.64ubuntu1) ... Defaulting to user installation because normal site-packages is not writeable Requirement already satisfied: pip in ./.local/lib/python3.9/site-packages (21.1.3) Defaulting to user installation because normal site-packages is not writeable Requirement already satisfied: pip in ./.local/lib/python3.9/site-packages (21.1.3) Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (45.2.0) Collecting setuptools Downloading setuptools-57.4.0-py3-none-any.whl (819 kB) |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 819 kB 5.0 MB/s Installing collected packages: setuptools ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. launchpadlib 1.10.13 requires testresources, which is not installed. Successfully installed setuptools-57.4.0 Defaulting to user installation because normal site-packages is not writeable Requirement already satisfied: wheel in ./.local/lib/python3.9/site-packages (0.36.2) Reading package lists... Done Building dependency tree Reading state information... Done The following additional packages will be installed: dpkg-dev fakeroot g++ g++-9 libalgorithm-diff-perl libalgorithm-diff-xs-perl libalgorithm-merge-perl libdpkg-perl libfakeroot libfile-fcntllock-perl libstdc++-9-dev make Suggested packages: debian-keyring g++-multilib g++-9-multilib gcc-9-doc bzr libstdc++-9-doc make-doc The following NEW packages will be installed: build-essential dpkg-dev fakeroot g++ g++-9 libalgorithm-diff-perl libalgorithm-diff-xs-perl libalgorithm-merge-perl libdpkg-perl libfakeroot libfile-fcntllock-perl libstdc++-9-dev make 0 upgraded, 13 newly installed, 0 to remove and 73 not upgraded. Need to get 11.4 MB of archives. After this operation, 52.2 MB of additional disk space will be used. Do you want to continue? 
[Y/n] Y Get:1 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/main amd64 libstdc++-9-dev amd64 9.3.0-17ubuntu1~20.04 [1714 kB] Get:2 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/main amd64 g++-9 amd64 9.3.0-17ubuntu1~20.04 [8405 kB] Get:3 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/main amd64 g++ amd64 4:9.3.0-1ubuntu2 [1604 B] Get:4 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/main amd64 make amd64 4.2.1-1.2 [162 kB] Get:5 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/main amd64 libdpkg-perl all 1.19.7ubuntu3 [230 kB] Get:6 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/main amd64 dpkg-dev all 1.19.7ubuntu3 [679 kB] Get:7 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/main amd64 build-essential amd64 12.8ubuntu1.1 [4664 B] Get:8 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/main amd64 libfakeroot amd64 1.24-1 [25.7 kB] Get:9 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/main amd64 fakeroot amd64 1.24-1 [62.6 kB] Get:10 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/main amd64 libalgorithm-diff-perl all 1.19.03-2 [46.6 kB] Get:11 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/main amd64 libalgorithm-diff-xs-perl amd64 0.04-6 [11.3 kB] Get:12 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/main amd64 libalgorithm-merge-perl all 0.08-3 [12.0 kB] Get:13 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/main amd64 libfile-fcntllock-perl amd64 0.22-3build4 [33.1 kB] Fetched 11.4 MB in 0s (54.8 MB/s) Selecting previously unselected package libstdc++-9-dev:amd64. (Reading database ... 66237 files and directories currently installed.) Preparing to unpack .../00-libstdc++-9-dev_9.3.0-17ubuntu1~20.04_amd64.deb ... Unpacking libstdc++-9-dev:amd64 (9.3.0-17ubuntu1~20.04) ... Selecting previously unselected package g++-9. Preparing to unpack .../01-g++-9_9.3.0-17ubuntu1~20.04_amd64.deb ... Unpacking g++-9 (9.3.0-17ubuntu1~20.04) ... Selecting previously unselected package g++. Preparing to unpack .../02-g++_4%3a9.3.0-1ubuntu2_amd64.deb ... Unpacking g++ (4:9.3.0-1ubuntu2) ... Selecting previously unselected package make. Preparing to unpack .../03-make_4.2.1-1.2_amd64.deb ... Unpacking make (4.2.1-1.2) ... Selecting previously unselected package libdpkg-perl. 
Preparing to unpack .../04-libdpkg-perl_1.19.7ubuntu3_all.deb Get:87 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/main amd64 tcl8.6 amd64 8.6.10+dfsg-1 [14.8 kB] Get:88 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/universe amd64 tcl amd64 8.6.9+1 [5112 B] Get:89 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/main amd64 tcl8.6-dev amd64 8.6.10+dfsg-1 [905 kB] Get:90 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/universe amd64 tcl-dev amd64 8.6.9+1 [5760 B] Get:91 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/main amd64 tk8.6 amd64 8.6.10-1 [12.5 kB] Get:92 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/universe amd64 tk amd64 8.6.9+1 [3240 B] Get:93 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/main amd64 tk8.6-dev amd64 8.6.10-1 [711 kB] Get:94 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/universe amd64 tk-dev amd64 8.6.9+1 [3076 B] Get:95 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/main amd64 libglvnd0 amd64 1.3.2-1~ubuntu0.20.04.1 [51.4 kB] Get:96 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/main amd64 libglx0 amd64 1.3.2-1~ubuntu0.20.04.1 [32.6 kB] Get:97 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/main amd64 libgl1 amd64 1.3.2-1~ubuntu0.20.04.1 [86.9 kB] Get:98 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/main amd64 x11-utils amd64 7.7+5 [199 kB] Get:99 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/main amd64 xbitmaps all 1.1.1-2 [28.1 kB] Get:100 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/universe amd64 xterm amd64 353-1ubuntu1.20.04.2 [765 kB] Get:101 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/main amd64 libffi-dev amd64 3.3-4 [57.0 kB] Get:102 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal-updates/main amd64 liblzma-dev amd64 5.2.4-1ubuntu1 [147 kB] Get:103 http://us-east-2.ec2.archive.ubuntu.com/ubuntu focal/main amd64 libreadline-gplv2-dev amd64 5.2+dfsg-3build3 [125 kB] Fetched 47.5 MB in 1s (53.2 MB/s) Extracting templates from packages: 100% Preconfiguring packages ... Created wheel for p5py: filename=p5py-1.0.0-py2.py3-none-any.whl size=2333 sha256=e7417e00f0c9701b6889208ba42a352140b10d38748267029e0b3f5adb557857 Stored in directory: /home/ubuntu/.cache/pip/wheels/72/c3/3e/d2e21f7f687d90134f4774eee0b36f1b3303ef35d4ebf832c7 Successfully built p5py Installing collected packages: p5py Successfully installed p5py-1.0.0 Defaulting to user installation because normal site-packages is not writeable Collecting PEP517 Downloading pep517-0.11.0-py2.py3-none-any.whl (19 kB) Collecting tomli Downloading tomli-1.0.4-py3-none-any.whl (11 kB) Installing collected packages: tomli, PEP517 Successfully installed PEP517-0.11.0 tomli-1.0.4 Defaulting to user installation because normal site-packages is not writeable Collecting py-find-1st Downloading py_find_1st-1.1.5.tar.gz (8.8 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing wheel metadata ... done Collecting numpy>=1.13.0 Downloading numpy-1.21.1-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.8 MB) |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 15.8 MB 6.0 MB/s Building wheels for collected packages: py-find-1st Building wheel for py-find-1st (PEP 517) ... 
error ERROR: Command errored out with exit status 1: command: /usr/bin/python3.9 /home/ubuntu/.local/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmp47_3kk6k cwd: /tmp/pip-install-pvu0gblm/py-find-1st_36eb63373254485b8448e2b46d13c71f Complete output (20 lines): /tmp/pip-build-env-qm53q7xb/overlay/lib/python3.9/site-packages/setuptools/dist.py:697: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead warnings.warn( running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-3.9 creating build/lib.linux-x86_64-3.9/utils_find_1st copying utils_find_1st/__init__.py -> build/lib.linux-x86_64-3.9/utils_find_1st running build_ext check for clang compiler ... no building 'find_1st' extension creating build/temp.linux-x86_64-3.9 creating build/temp.linux-x86_64-3.9/utils_find_1st x86_64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DNPY_NO_DEPRECATED_API=NPY_1_13_API_VERSION -I/tmp/pip-build-env-qm53q7xb/overlay/lib/python3.9/site-packages/numpy/core/include -I/usr/include/python3.9 -c utils_find_1st/find_1st.cpp -o build/temp.linux-x86_64-3.9/utils_find_1st/find_1st.o utils_find_1st/find_1st.cpp:3:10: fatal error: Python.h: No such file or directory 3 | #include "Python.h" | ^~~~~~~~~~ compilation terminated. error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1 ---------------------------------------- ERROR: Failed building wheel for py-find-1st Failed to build py-find-1st ERROR: Could not build wheels for py-find-1st which use PEP 517 and cannot be installed directly If you would like to reproduce the error, you can start a t2.micro ec2 running ubuntu20 and run the script listed above
This seems to be an issue with the module you are trying to install, not with the header itself. py-find-1st is a rather exotic module (9 stars on GitHub at the time of writing) and a build problem of this sort has already been reported. Solutions: Install libpython3.9: sudo apt install libpython3.9-dev Edit: this solution worked; the OP was missing the include/python3.9/Python.h header that ships with libpython3.9-dev. Forget about that module. The module implements one function for "finding first indices without requiring to read the full array", which can (partially) be implemented as:
import numpy as np

def find_1st(X):
    ind = np.flatnonzero(X < 0.)
    if len(ind):
        return ind[0]
    else:
        return -1
Problem is, this reads the full array. If your arrays are too large, you might consider running this in a loop (i.e. in chunks), as sketched below.
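A rough sketch of that chunked variant (the chunk size below is an arbitrary choice for illustration, not something from the original answer):
import numpy as np

def find_1st_chunked(X, chunk=1_000_000):
    # scan the array block by block so only one block's boolean mask exists at a time
    for start in range(0, len(X), chunk):
        hits = np.flatnonzero(X[start:start + chunk] < 0.)
        if len(hits):
            return start + hits[0]
    return -1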
5
4
68,444,765
2021-7-19
https://stackoverflow.com/questions/68444765/why-is-it-not-possible-to-unpack-lists-inside-a-list-comprehension
Starred expressions raise a SyntaxError when used in list or generator comprehensions. I'm curious about the reason behind this; is it an implementation choice, or are there technical constraints that would prevent this operation? I've found a lot about the contexts that don't allow unpacking iterables, but nothing about why. Example:
lis = [1, 2, 3, 4, 5]
listcomp = [*lis for i in range(3)]
I thought maybe I could use this to get [1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5] as a result, but it raises SyntaxError("Iterable unpacking cannot be used in comprehension")
This was proposed in PEP 448 -- Additional Unpacking Generalizations but ultimately not accepted due to concerns about readability: Earlier iterations of this PEP allowed unpacking operators inside list, set, and dictionary comprehensions as a flattening operator over iterables of containers: [...] This was met with a mix of strong concerns about readability and mild support. In order not to disadvantage the less controversial aspects of the PEP, this was not accepted with the rest of the proposal. Notably, the possibility to add this at a later point has not been ruled out. This PEP does not include unpacking operators inside list, set and dictionary comprehensions although this has not been ruled out for future proposals.
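For completeness, a few equivalent spellings that are accepted today (standard library only; none of this comes from the PEP itself):
import itertools

lis = [1, 2, 3, 4, 5]

flat = [x for i in range(3) for x in lis]                            # nested comprehension
flat2 = list(itertools.chain.from_iterable(lis for i in range(3)))   # itertools.chain
flat3 = lis * 3                                                      # or simply repetition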
4
7
68,444,335
2021-7-19
https://stackoverflow.com/questions/68444335/why-does-print-with-end-doesnt-appear-until-new-line
I have been making a program in Python 3.9 with this code:
# Print 3 dots at the interval shown
def dots(t):
    t *= 3
    sleep(t)
    print('.', end='')
    sleep(t)
    print('.', end='')
    sleep(t)
    print('.')
And this calling it:
# These are completely aesthetic
sleep(0.25)
print("Defining Functions", end='')
dots(0.4)
I expected the program to print Defining Functions and then, every 1.2 seconds, add a dot (.) at the end, three times. What really happened is that after 3.85 seconds it printed the whole thing at once, dots included (Defining Functions...). So nothing was printed until a newline was added (by the 3rd dot). Sorry if it's messy, I don't know how to word these questions well
Python buffers output to stdout. This is because writing larger pieces of text at a time is more efficient (less syscalls). By default, if stdout is connected to a terminal, the output will be line-buffered. Thus printing a newline flushes the buffer and you see the output immediately. If stdout is redirected into a pipe or file, Python buffers even more aggressively and will not flush on newlines, only when the buffer is full. To avoid this problem, flush the buffer explicitly before sleeping: print('.', end='', flush=True)
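Applied to the dots helper from the question (the sleep import is added here for completeness):
from time import sleep

def dots(t):
    t *= 3
    sleep(t)
    print('.', end='', flush=True)
    sleep(t)
    print('.', end='', flush=True)
    sleep(t)
    print('.')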
6
13
68,439,799
2021-7-19
https://stackoverflow.com/questions/68439799/typeerror-missing-1-required-positional-argument-while-using-pytest-fixture
I have written my test classes in a file and I am trying to use pytest fixtures so that I don't have to create the same input data in each test functions. Below is the minimal working example. import unittest import pytest @pytest.fixture def base_value(): return 5 class Test(unittest.TestCase): def test_add_two(self, base_value): result = base_value + 2 self.assertEqual(result, 7, "Result doesn't match") However, when I test this using pytest-3, I get the following error: TypeError: test_add_two() missing 1 required positional argument: 'base_value' This is confusing for me since the base_value is clearly given as one of the arguments to test_add_two. Any help is highly appreciated.
This is because you are mixing pytest and unittest. Try @pytest.fixture def base_value(): return 5 class Test: def test_add_two(self, base_value): result = base_value + 2 assert result == 7, "Result doesn't match" And in case of failure the error will be def test_add_two(self, base_value): result = base_value + 2 > assert result == 8, "Result doesn't match" E AssertionError: Result doesn't match E assert 7 == 8 But isn't pytest compatible with unittest? Only on a limited basis. From Pytest unittest.TestCase Support pytest features in unittest.TestCase subclasses The following pytest features work in unittest.TestCase subclasses: Marks: skip, skipif, xfail; Auto-use fixtures; The following pytest features do not work, and probably never will due to different design philosophies: Fixtures (except for autouse fixtures, see below); Parametrization; Custom hooks; Third party plugins may or may not work well, depending on the plugin and the test suite.
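If keeping the unittest.TestCase base class is a requirement, an autouse fixture that stashes the value on the test instance is one workaround (a sketch, not part of the original answer):
import unittest
import pytest

class Test(unittest.TestCase):
    @pytest.fixture(autouse=True)
    def _base_value(self):
        # runs automatically before each test method
        self.base_value = 5

    def test_add_two(self):
        result = self.base_value + 2
        self.assertEqual(result, 7, "Result doesn't match")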
27
37
68,398,033
2021-7-15
https://stackoverflow.com/questions/68398033/svg-figures-hidden-in-jupyterlab-after-some-time
I recently found I could make all my matplotlib figures with SVG by default in my jupyterlab notebooks with import matplotlib.pyplot as plt %matplotlib inline %config InlineBackend.figure_formats = ['svg'] However, if I refresh the page, the figures disappear, leaving behind <Figure size 864x576 with 1 Axes> This effect does not occur without changing the inline backend. My plotly figures also still show up after a refresh. I would prefer not to have to re-run the cells. Looking at the actual ipynb file, the SVG is right there in the actual file. How can I get this figure to show without re-running the cell?
The behaviour looks in line with the security model of Jupyter. Untrusted HTML is always sanitized Untrusted Javascript is never executed HTML and Javascript in Markdown cells are never trusted Outputs generated by the user are trusted Any other HTML or Javascript (in Markdown cells, output generated by others) is never trusted The central question of trust is β€œDid the current user do this?” Because SVG can have <script> tags, there's an attack surface. Hence, you have two options to display the SVG: Re-run the cell that generates an SVG (last point above). Explicitly trust the notebook, jupyter trust /path/to/notebook.ipynb. Notes: Plotly saves image/png version in the output of the cell. It's displayed when the notebook is loaded. But the loaded (not reran) cell also has interactivity. Not sure how that exemption works. As a side note, current JupyterLab's behaviour varies with different forms of SVG display (see issue#10464), but that's not the issue here.
4
7
68,432,070
2021-7-18
https://stackoverflow.com/questions/68432070/can-i-distinguish-positional-and-keyword-arguments-from-inside-the-called-functi
I would like to deprecate an old syntax for a function in a Python library. In order to effectively detect whether someone is using the old syntax, I need to know whether an argument is called positionally or through a keyword. Is there a way to detect this? As an example, consider this function: def store(name='', value=0): # Some functionality here... Can this function know whether it has been called like this: store('ben', 5) or like this? store(name='ben', value=5)
You could add *args to your function's arguments and check if that contains any arguments - if yes, the user passed positional arguments to your function that should have been passed as keyword arguments: def store(*args, name='', value=0): if args: # args is not empty - user passed deprecated positional arguments print(f"Warning: you passed the arguments {' and '.join(map(str, args))} as positional arguments.") print("This is deprecated - please pass them as keyword arguments") name = args[0] if len(args) >= 2: value = args[1] store('ben', 5) store(name='ben', value=5) Tested in the interactive Python console: >>> store('ben', 5) Warning: you passed the arguments ben and 5 as positional arguments. This is deprecated - please pass them as keyword arguments >>> store('ben', value=5) Warning: you passed the arguments ben as positional arguments. This is deprecated - please pass them as keyword arguments >>> store(name='ben', value=5) >>>
4
3
68,396,513
2021-7-15
https://stackoverflow.com/questions/68396513/problem-in-lr-find-in-pytorch-fastai-course
While following the Jupyter notebooks for the course I hit upon an error when these lines are run. I know that the cnn_learner line has got no errors whatsoever, The problem lies in the lr_find() part It seems that learn.lr_find() does not want to return two values! Although its documentation says that it returns a tuple. That is my problem. These are the lines of code: learn = cnn_learner(dls, resnet34, metrics=error_rate) lr_min,lr_steep = learn.lr_find() The error says: not enough values to unpack (expected 2, got 1) for the second line. Also, I get this graph with one 'marker' which I suppose is either one of the values of lr_min or lr_steep This is the graph When I run learn.lr_find() only, i.e. do not capture the output in lr_min, lr_steep; it runs well but then I do not get the min and steep learning rates (which is really important for me) I read through what lr_find does and it is clear that it returns a tuple. Its docstring says Launch a mock training to find a good learning rate and return suggestions based on suggest_funcs as a named tuple I had duplicated the original notebook, and when I hit this error, I ran the original notebook, with the same results. I update the notebooks as well, but no change! Wherever I have searched for this online, any sort of error hasn't popped up. The only relevant thing I found is that lr_find() returns different results of the learning rates after every run, which is perfectly fine.
I was having the same problem and found that the lr_find() output has been updated. You can replace the second line with lrs = learn.lr_find(suggest_funcs=(minimum, steep, valley, slide)), and then use lrs.minimum and lrs.steep wherever you were using lr_min and lr_steep respectively; this should work fine and solve your problem. If you want to read more about it, see this post in the fastai forum.
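For reference, a minimal sketch of the updated call. It assumes a fastai 2.x release where the suggestion helpers live in fastai.callback.schedule (that import location is an assumption; the star import below should also bring them into scope):
from fastai.vision.all import *
from fastai.callback.schedule import minimum, steep, valley, slide  # assumed module path

learn = cnn_learner(dls, resnet34, metrics=error_rate)
lrs = learn.lr_find(suggest_funcs=(minimum, steep, valley, slide))
print(lrs.minimum, lrs.steep)  # fields of the returned named tuple match the suggest_funcs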
5
12
68,427,977
2021-7-18
https://stackoverflow.com/questions/68427977/how-does-the-magic-store-commands-for-dataframe-work
I created two dataframes (df1 and df2) in two jupyter notebooks (N1 and N2) respectively. On day 1, I used the below store command to use the df1 and its variables in N2 jupyter notebook %store -r df1 But on day 25, I created a new jupyter notebook N3 and used the below store command again %store -r df1 And it seemed to easily pull all the details of dataframe df1 into N3 jupyter notebook easily? How does this work? Aren't they valid only for that specific jupyter notebook session? Then instead of storing all dataframes as files, can we just execute a store command and store/retrieve them easily anytime?
Storemagic is an IPython feature that "Stores variables, aliases and macros in IPython’s database". Because it is an IPython feature rather than exclusive to Jupyter you can store and restore variables across many IPython and Jupyter sessions. In my environment (IPython 7.19.0) the variables get stored in the directory: $HOME/.ipython/profile_default/db/autorestore They are stored one per file when stored using %store <name>. The files themselves are the pickled representation of the stored variables. You could manually load the variables by using the following: import pickle # Name of the previously stored variable stored_var = 'test' # myvar will contain the variable previously stored with "%store test" myvar_filename = get_ipython().ipython_dir + '/profile_default/db/autorestore/' + stored_var with open(myvar_filename, 'rb') as f: myvar = pickle.load(f)
4
7
68,429,055
2021-7-18
https://stackoverflow.com/questions/68429055/checking-if-a-list-contains-any-one-of-multiple-characters
I'm new to Python. I want to check if the given list A contains any character among ('0', '2', '4', '6', '8') or not, where '0' <= A[i] <= '9'. I can do this as: if not ('0' in A or '2' in A or '4' in A or '6' in A or '8' in A): return False but, is there any shorter way to do this? Thanks.
You can use any() with a generator expression:
A = [...]
chars = ('0', '2', '4', '6', '8')
return any(c in A for c in chars)
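A set-based variant of the same idea (not from the original answer): sets can report whether they share any element with another iterable in one call.
A = ['1', '3', '8']
evens = {'0', '2', '4', '6', '8'}
has_even = not evens.isdisjoint(A)   # True here, because '8' is in A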
5
5
68,428,331
2021-7-18
https://stackoverflow.com/questions/68428331/is-validation-split-0-2-in-keras-a-cross-validation
I'm a self-taught Python user. In Python codes, model.fit(x_train, y_train, verbose=1, validation_split=0.2, shuffle=True, epochs=20000) Then, 80% of the data is used for training and 20% is used for validation, and the epoch is repeated 20,000 times for training. And, shuffle=True So, I think this code is a cross-validation, or more specifically, a k-divisional cross-validation with k=5. I was wondering if this is correct, because when I looked up the Keras code for k-fold cross-validation, I found some code that uses Scikit-learn's Kfold. I apologize for the rudimentary nature of this question, but I would appreciate it if you could help me.
With validation_split, Keras does not shuffle before splitting: the validation set is taken from the last 20% of the samples as provided, before any shuffling, and shuffle=True only shuffles the remaining training data between epochs. That same fixed validation set is reused for every one of the 20,000 epochs, so this is a single hold-out validation split, not k-fold cross-validation. If you want k-fold cross-validation, you have to build the folds yourself (for example with scikit-learn's KFold) and train a fresh model on each fold, as sketched below.
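A rough sketch of that manual k-fold loop, assuming x_train and y_train are NumPy arrays (build_model is a hypothetical helper returning a freshly compiled model; it is not from the question):
from sklearn.model_selection import KFold

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, val_idx in kf.split(x_train):
    model = build_model()   # hypothetical factory returning a new compiled Keras model
    model.fit(x_train[train_idx], y_train[train_idx],
              validation_data=(x_train[val_idx], y_train[val_idx]),
              epochs=20, verbose=1)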
7
8
68,426,892
2021-7-18
https://stackoverflow.com/questions/68426892/why-i-get-this-error-on-python-firebase-admin-initialize-app
when I was trying to connect google firebase real time database, I got this error: ValueError: The default Firebase app already exists. This means you called initialize_app() more than once without providing an app name as the second argument. In most cases you only need to call initialize_app() once. But if you do want to initialize multiple apps, pass a second argument to initialize_app() to give each app a unique name. Here is my code: import firebase_admin from firebase_admin import credentials from firebase_admin import db cred = credentials.Certificate('firebase-sdk.json') firebase_admin.initialize_app(cred, { 'databaseURL': 'https://test-139a6-default-rtdb.firebaseio.com/' })
You only need to initialize (create) the app once. When you have created the app, use get_app instead: # The default app's name is "[DEFAULT]" firebase_admin.get_app(name='[DEFAULT]')
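A common pattern is to try get_app() first and only initialize when it raises (get_app raises a ValueError if the default app has not been created yet); a sketch using the names from the question:
import firebase_admin
from firebase_admin import credentials

try:
    app = firebase_admin.get_app()
except ValueError:
    cred = credentials.Certificate('firebase-sdk.json')
    app = firebase_admin.initialize_app(cred, {
        'databaseURL': 'https://test-139a6-default-rtdb.firebaseio.com/'
    })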
5
4
68,422,297
2021-7-17
https://stackoverflow.com/questions/68422297/batch-matrix-multiplication-in-numpy
I have two numpy arrays a and b of shape [5, 5, 5] and [5, 5], respectively. For both a and b the first entry in the shape is the batch size. When I perform matrix multiplication option, I get an array of shape [5, 5, 5]. An MWE is as follows. import numpy as np a = np.ones((5, 5, 5)) b = np.random.randint(0, 10, (5, 5)) c = a @ b # c.shape is (5, 5, 5) Suppose I were to run a loop over the batch size, i.e. a[0] @ b[0].T, it would result in an array of shape [5, 1]. Finally, if I concatenate all the results along axis 1, I would get a resultant array with shape [5, 5]. The code below better describes these lines. a = np.ones((5, 5, 5)) b = np.random.randint(0, 10, (5, 5)) c = [] for i in range(5): c.append(a[i] @ b[i].T) c = np.concatenate([d[:, None] for d in c], axis=1).T # c.shape evaluates to be (5, 5) Can I get the above functionality without using loop? For example, PyTorch provides a function called torch.bmm to compute this. Thanks.
You can work this out using numpy einsum.
c = np.einsum('BNi,Bi->BN', a, b)
PyTorch also provides an einsum function with a slight change in syntax, so you can easily port this. It handles other shapes as well, and you don't have to worry about transpose or squeeze operations. It can also save memory because no copy of the existing matrices is created internally.
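An equivalent spelling with plain batched matmul (a sketch, not from the original answer): add a trailing unit dimension to b so each batch entry becomes a matrix-vector product.
import numpy as np

a = np.ones((5, 5, 5))
b = np.random.randint(0, 10, (5, 5))

c = (a @ b[:, :, None])[:, :, 0]   # (5,5,5) @ (5,5,1) -> (5,5,1), then drop the unit axis
assert c.shape == (5, 5)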
5
5
68,424,586
2021-7-17
https://stackoverflow.com/questions/68424586/set-of-sets-and-in-operator
I was doing some coding exercises and I ended up using a set of frozensets. Here is the code: cities = 4 roads = [[0, 1], [1, 2], [2, 0]] roads = set([frozenset(road) for road in roads]) output = [] for i in range(cities-1): for j in range(i+1, cities): if set([i,j]) not in roads: output.append([i,j]) As you can see, the if in the nested for tests for the presence of the set in the set of sets. However, it was my understanding that in this case, hashables need to be used with the in operator. If I replace set([i,j]) with [i,j], I do get the following error: TypeError: unhashable type: 'list' So, here is my question: why does it work with the set, which is not (as far as I know) hashable and not with the list? Should it not also throw an error, what am I missing?
From my reading of the CPython source it appears that the test for contains checks if the key is found in the set; if not, and if the key is a set object, an attempt is made to convert the key to a frozenset, and then that key is tested. The same behavior exists for operations like remove, as seen here: >>> s = set([frozenset([1,2])]) >>> s {frozenset({1, 2})} >>> s.remove(set([1,2])) >>> s set() The code in question in the interpreter is the set_contains() function in Objects/setobject.c.
7
5
68,422,739
2021-7-17
https://stackoverflow.com/questions/68422739/how-to-write-type-hints-for-a-function-returning-itself
from typing import Callable def f() -> Callable: return f How to explicitly define f's type? like Callable[[], Callable] I think it is slightly like a linked list, but I can't implement it. from typing import Union class Node: def __init__(self, val): self.val = val self.next: Union[Node, None] = None
I think @chepner's answer is great. If you really do want to express this as a recursive Callable type, then you could restructure the function as a callable class and do something like this: from __future__ import annotations class F: def __call__(self) -> F: return self f = F() You can test this with mypy to see that it maintains its type on future calls: g = f() h = g(1) # Too many arguments for "__call__" of "F" i = h() j = i(2) # Too many arguments for "__call__" of "F" k = j()
4
5
68,422,590
2021-7-17
https://stackoverflow.com/questions/68422590/python-get-the-last-element-from-generator-items
I'm super amazed using the generator instead of list. But I can't find any solution for this question. What is the efficient way to get the first and last element from generator items? Because with list we can just do lst[0] and lst[-1] Thanks for the help. I can't provide any codes since it's clearly that's just what I want to know :)
You have to iterate through the whole thing. Say you have this generator: def foo(): yield 0 yield 1 yield 2 yield 3 The easiest way to get the first and last value would be to convert the generator into a list. Then access the values using list lookups. data = list(foo()) print(data[0], data[-1]) If you want to avoid creating a container, you could use a for-loop to exhaust the generator. gen = foo() first = last = next(gen) for last in gen: pass print(first, last) Note: You'll want to special case this when there are no values produced by the generator.
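One way to handle that empty-generator special case is a sentinel default for next(); a sketch:
_missing = object()

gen = foo()
first = last = next(gen, _missing)
if first is _missing:
    first = last = None          # or raise, depending on what you need
else:
    for last in gen:
        pass
print(first, last)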
5
8
68,400,851
2021-7-15
https://stackoverflow.com/questions/68400851/how-to-rotate-xtick-label-bar-chart-plotly-express
How can I rotate the team names (x-axis) to 90° in Plotly Express? They are not turned the right way. Here is my code.
fig = px.bar(stacked_ratio, y="percent", x="team", color="outcome",
             color_discrete_map=colors, title="Long-Form Input")
fig.show()
Here is how it looks:
You should be able to update your x-axis from a figure object with the update_xaxes method: fig = px.bar(stacked_ratio, y="percent", x="team", color="outcome", color_discrete_map=colors, title="Long-Form Input") fig.update_xaxes(tickangle=90) fig.show() You can see all options for fig.update_xaxes on the plotly website here: https://plotly.com/python/reference/layout/xaxis/
19
38
68,408,552
2021-7-16
https://stackoverflow.com/questions/68408552/how-to-overwrite-a-file-using-google-drive-api-with-python
I want to create a simple script which will upload a file to my Drive every 5 minutes using cronjob. This is the code I have so far using the boilerplate code I extracted from different locations (mainly 2: getting started page & create page): from __future__ import print_function from apiclient import errors import pickle import os.path from googleapiclient.discovery import build from google_auth_oauthlib.flow import InstalledAppFlow from google.auth.transport.requests import Request from googleapiclient.http import MediaFileUpload def activateService(): creds = None if os.path.exists('token.pickle'): with open('token.pickle', 'rb') as token: creds = pickle.load(token) if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( 'credentials.json', SCOPES) creds = flow.run_local_server(port=0) with open('token.pickle', 'wb') as token: pickle.dump(creds, token) return build('drive', 'v3', credentials=creds) SCOPES = ['https://www.googleapis.com/auth/drive.metadata.readonly', 'https://www.googleapis.com/auth/drive.file'] myservice = activateService() file_metadata = {'name': 'myFile.txt'} media = MediaFileUpload("myFile.txt", mimetype="text/plain") file = myservice.files().create(body=file_metadata, media_body=media, fields='id').execute() The above code creates the file successfully in the "root" location, but now how I can make it so that it overwrites the previously created file instead of creating new versions everytime? I think I need to use the update API call (https://developers.google.com/drive/api/v3/reference/files/update) but there's no example code on this documentation page which has brought me to a roadblock. Any help trying to decipher that API page to create Python code would be much appreciated, thank you!
Your code uses myservice.files().create, which will create a new file every time. You need to use Files: update instead. The only difference is that you need to pass the file id.
file = service.files().update(fileId=file_id, media_body=media_body).execute()
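If you do not keep the file id around between runs, a hedged sketch of one approach is to look the id up by name first and fall back to create (the q query string is an assumption about how you want to match the file; adjust it to your needs):
from googleapiclient.http import MediaFileUpload

media = MediaFileUpload("myFile.txt", mimetype="text/plain")
result = myservice.files().list(q="name = 'myFile.txt' and trashed = false",
                                fields="files(id)").execute()
files = result.get('files', [])
if files:
    # overwrite the existing file's content in place
    myservice.files().update(fileId=files[0]['id'], media_body=media).execute()
else:
    # first run: create it
    myservice.files().create(body={'name': 'myFile.txt'},
                             media_body=media, fields='id').execute()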
5
8
68,402,691
2021-7-16
https://stackoverflow.com/questions/68402691/adding-dropping-column-instance-into-a-pipeline
In general, we will df.drop('column_name', axis=1) to remove a column in a DataFrame. I want to add this transformer into a Pipeline Example: numerical_transformer = Pipeline(steps=[('imputer', SimpleImputer(strategy='mean')), ('scaler', StandardScaler(with_mean=False)) ]) How can I do it?
You can encapsulate your Pipeline into a ColumnTransformer, which allows you to select the data that is processed through the pipeline, as follows:
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.compose import make_column_selector, make_column_transformer

col_to_exclude = 'A'
df = pd.DataFrame({'A': [0]*10, 'B': [1]*10, 'C': [2]*10})

numerical_transformer = make_pipeline(
    SimpleImputer(strategy='mean'),
    StandardScaler(with_mean=False)
)
transform = make_column_transformer(
    (numerical_transformer, make_column_selector(pattern=f'^(?!{col_to_exclude})'))
)
transform.fit_transform(df)
NOTE: I am using a regex pattern (a negative lookahead) here to exclude the column A.
12
4
68,396,403
2021-7-15
https://stackoverflow.com/questions/68396403/kernel-density-estimation-using-scipys-gaussian-kde-and-sklearns-kerneldensity
I created some data from two superposed normal distributions and then applied sklearn.neighbors.KernelDensity and scipy.stats.gaussian_kde to estimate the density function. However, using the same bandwith (1.0) and the same kernel, both methods produce a different outcome. Can someone explain me the reason for this? Thanks for help. Below you can find the code to reproduce the issue: import matplotlib.pyplot as plt import numpy as np from scipy.stats import gaussian_kde import seaborn as sns from sklearn.neighbors import KernelDensity n = 10000 dist_frac = 0.1 x1 = np.random.normal(-5,2,int(n*dist_frac)) x2 = np.random.normal(5,3,int(n*(1-dist_frac))) x = np.concatenate((x1,x2)) np.random.shuffle(x) eval_points = np.linspace(np.min(x), np.max(x)) kde_sk = KernelDensity(bandwidth=1.0, kernel='gaussian') kde_sk.fit(x.reshape([-1,1])) y_sk = np.exp(kde_sk.score_samples(eval_points.reshape(-1,1))) kde_sp = gaussian_kde(x, bw_method=1.0) y_sp = kde_sp.pdf(eval_points) sns.kdeplot(x) plt.plot(eval_points, y_sk) plt.plot(eval_points, y_sp) plt.legend(['seaborn','scikit','scipy']) If I change the scipy bandwith to 0.25, the result of both methods look approximately the same.
What is meant by bandwidth in scipy.stats.gaussian_kde and sklearn.neighbors.KernelDensity is not the same. Scipy.stats.gaussian_kde uses a bandwidth factor https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gaussian_kde.html. For a 1-D kernel density estimation the following formula is applied: the bandwidth of sklearn.neighbors.KernelDensity = bandwidth factor of the scipy.stats.gaussian_kde * standard deviation of the sample For your estimation this probably means that your standard deviation equals 4. I would like to refer to Getting bandwidth used by SciPy's gaussian_kde function for more information.
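So, as a rough sketch, you can make the two estimators agree by dividing the sklearn bandwidth by the sample standard deviation before handing it to scipy (1-D data assumed; ddof=1 is an assumption about which standard deviation scipy uses internally):
bandwidth = 1.0
kde_sp = gaussian_kde(x, bw_method=bandwidth / x.std(ddof=1))
y_sp = kde_sp.pdf(eval_points)   # now comparable to the sklearn curve with bandwidth=1.0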
6
6
68,407,112
2021-7-16
https://stackoverflow.com/questions/68407112/merging-dataframes-with-multi-indexes-and-column-value
I have two dataframes with multi indexes and dates as a columns: df1 df1 = pd.DataFrame.from_dict({('group', ''): {0: 'A', 1: 'A', 2: 'A', 3: 'A', 4: 'A', 5: 'A', 6: 'A', 7: 'A', 8: 'B', 9: 'B', 10: 'B', 11: 'B', 12: 'B', 13: 'B', 14: 'B', 15: 'B', 16: 'C', 17: 'C', 18: 'C', 19: 'C', 20: 'C', 21: 'C', 22: 'C', 23: 'C', 24: 'D', 25: 'D', 26: 'D', 27: 'D', 28: 'D', 29: 'D', 30: 'D'}, ('category', ''): {0: 'Apple', 1: 'Amazon', 2: 'Google', 3: 'Netflix', 4: 'Facebook', 5: 'Uber', 6: 'Tesla', 7: 'total', 8: 'Apple', 9: 'Amazon', 10: 'Google', 11: 'Netflix', 12: 'Facebook', 13: 'Uber', 14: 'Tesla', 15: 'total', 16: 'Apple', 17: 'Amazon', 18: 'Google', 19: 'Netflix', 20: 'Facebook', 21: 'Uber', 22: 'Tesla', 23: 'total', 24: 'Apple', 25: 'Amazon', 26: 'Google', 27: 'Netflix', 28: 'Uber', 29: 'Tesla', 30: 'total'}, (pd.Timestamp('2021-06-28 00:00:00'), 'total_orders'): {0: 88.0, 1: 66.0, 2: 191.0, 3: 558.0, 4: 12.0, 5: 4.0, 6: 56.0, 7: 975.0, 8: 90.0, 9: 26.0, 10: 49.0, 11: 250.0, 12: 7.0, 13: 2.0, 14: 44.0, 15: 468.0, 16: 36.0, 17: 52.0, 18: 94.0, 19: 750.0, 20: 10.0, 21: 0.0, 22: 52.0, 23: 994.0, 24: 16.0, 25: 22.0, 26: 5.0, 27: 57.0, 28: 3.0, 29: 33.0, 30: 136.0}, (pd.Timestamp('2021-06-28 00:00:00'), 'total_sales'): {0: 4603.209999999999, 1: 2485.059999999998, 2: 4919.39999999998, 3: 6097.77, 4: 31.22, 5: 155.71, 6: 3484.99, 7: 17237.35999999996, 8: 561.54, 9: 698.75, 10: 1290.13, 11: 4292.68000000001, 12: 947.65, 13: 329.0, 14: 2889.65, 15: 9989.4, 16: 330.8899999999994, 17: 2076.26, 18: 2982.270000000004, 19: 11978.62000000002, 20: 683.0, 21: 0.0, 22: 3812.16999999999, 23: 20963.21000000002, 24: 234.4900000000002, 25: 896.1, 26: 231.0, 27: 893.810000000001, 28: 129.0, 29: 1712.329999999998, 30: 4106.729999999996}, (pd.Timestamp('2021-07-05 00:00:00'), 'total_orders'): {0: 109.0, 1: 48.0, 2: 174.0, 3: 592.0, 4: 13.0, 5: 5.0, 6: 43.0, 7: 984.0, 8: 62.0, 9: 13.0, 10: 37.0, 11: 196.0, 12: 8.0, 13: 1.0, 14: 3.0, 15: 30.0, 16: 76.0, 17: 5.0, 18: 147.0, 19: 88.0, 20: 8.0, 21: 1.0, 22: 78.0, 23: 1248.0, 24: 1.0, 25: 18.0, 26: 23.0, 27: 83.0, 28: 0.0, 29: 29.0, 30: 154.0}, (pd.Timestamp('2021-07-05 00:00:00'), 'total_sales'): {0: 3453.02, 1: 17868.730000000003, 2: 44707.82999999999, 3: 61425.99, 4: 1261.0, 5: 1914.6000000000001, 6: 24146.09, 7: 154777.25999999998, 8: 6201.489999999999, 9: 5513.960000000001, 10: 9645.87, 11: 25086.785, 12: 663.0, 13: 448.61, 14: 26332.7, 15: 73892.415, 16: 556.749999999999, 17: 1746.859999999997, 18: 4103.219999999994, 19: 15571.52000000008, 20: 86.0, 21: 69.0, 22: 5882.759999999995, 23: 26476.11000000004, 24: 53.0, 25: 801.220000000001, 26: 684.56, 27: 1232.600000000002, 28: 0.0, 29: 15902.1, 30: 43943.48}, (pd.Timestamp('2021-07-12 00:00:00'), 'total_orders'): {0: 32.0, 1: 15.0, 2: 89.0, 3: 239.0, 4: 2.0, 5: 3.0, 6: 20.0, 7: 400.0, 8: 0.0, 9: 0.0, 10: 0.0, 11: 0.0, 12: 0.0, 13: 0.0, 14: 0.0, 15: 0.0, 16: 21.0, 17: 14.0, 18: 58.0, 19: 281.0, 20: 3.0, 21: 3.0, 22: 33.0, 23: 413.0, 24: 7.0, 25: 6.0, 26: 4.0, 27: 13.0, 28: 0.0, 29: 18.0, 30: 48.0}, (pd.Timestamp('2021-07-12 00:00:00'), 'total_sales'): {0: 2147.7000000000003, 1: 4767.3, 2: 2399.300000000003, 3: 3137.440000000002, 4: 178.0, 5: 866.61, 6: 10639.03, 7: 73235.38, 8: 0.0, 9: 0.0, 10: 0.0, 11: 0.0, 12: 0.0, 13: 0.0, 14: 0.0, 15: 0.0, 16: 220.94, 17: 727.5199999999995, 18: 2500.96999999999, 19: 4414.00999999998, 20: 15.0, 21: 196.71, 22: 2170.1, 23: 9745.24999999997, 24: 126.55, 25: 290.2, 26: 146.01, 27: 233.0, 28: 0.0, 29: 973.18, 30: 1658.940000000002}}).set_index(['group','category']) df2 df2 = 
pd.DataFrame.from_dict({'group': {0: 'total_full', 1: 'total_full', 2: 'A', 3: 'A', 4: 'B', 5: 'B', 6: 'C', 7: 'C', 8: 'D', 9: 'D', 10: 'Apple_total', 11: 'Apple_total', 12: 'A', 13: 'A', 14: 'B', 15: 'B', 16: 'C', 17: 'C', 18: 'D', 19: 'D', 20: 'Amazon_total', 21: 'Amazon_total', 22: 'A', 23: 'A', 24: 'B', 25: 'B', 26: 'C', 27: 'C', 28: 'D', 29: 'D', 30: 'Google_total', 31: 'Google_total', 32: 'A', 33: 'A', 34: 'B', 35: 'B', 36: 'C', 37: 'C', 38: 'D', 39: 'D', 40: 'Facebook_total', 41: 'Facebook_total', 42: 'A', 43: 'A', 44: 'B', 45: 'B', 46: 'C', 47: 'C', 48: 'D', 49: 'D', 50: 'Netflix_total', 51: 'Netflix_total', 52: 'A', 53: 'A', 54: 'B', 55: 'B', 56: 'C', 57: 'C', 58: 'D', 59: 'D', 60: 'Tesla_total', 61: 'Tesla_total', 62: 'A', 63: 'A', 64: 'B', 65: 'B', 66: 'C', 67: 'C', 68: 'D', 69: 'D', 70: 'Uber_total', 71: 'Uber_total', 72: 'A', 73: 'A', 74: 'B', 75: 'B', 76: 'C', 77: 'C', 78: 'D', 79: 'D'}, 'category': {0: 'total_full', 1: 'total_full', 2: 'group_total', 3: 'group_total', 4: 'group_total', 5: 'group_total', 6: 'group_total', 7: 'group_total', 8: 'group_total', 9: 'group_total', 10: 'Apple_total', 11: 'Apple_total', 12: 'Apple', 13: 'Apple', 14: 'Apple', 15: 'Apple', 16: 'Apple', 17: 'Apple', 18: 'Apple', 19: 'Apple', 20: 'Amazon_total', 21: 'Amazon_total', 22: 'Amazon', 23: 'Amazon', 24: 'Amazon', 25: 'Amazon', 26: 'Amazon', 27: 'Amazon', 28: 'Amazon', 29: 'Amazon', 30: 'Google_total', 31: 'Google_total', 32: 'Google', 33: 'Google', 34: 'Google', 35: 'Google', 36: 'Google', 37: 'Google', 38: 'Google', 39: 'Google', 40: 'Facebook_total', 41: 'Facebook_total', 42: 'Facebook', 43: 'Facebook', 44: 'Facebook', 45: 'Facebook', 46: 'Facebook', 47: 'Facebook', 48: 'Facebook', 49: 'Facebook', 50: 'Netflix_total', 51: 'Netflix_total', 52: 'Netflix', 53: 'Netflix', 54: 'Netflix', 55: 'Netflix', 56: 'Netflix', 57: 'Netflix', 58: 'Netflix', 59: 'Netflix', 60: 'Tesla_total', 61: 'Tesla_total', 62: 'Tesla', 63: 'Tesla', 64: 'Tesla', 65: 'Tesla', 66: 'Tesla', 67: 'Tesla', 68: 'Tesla', 69: 'Tesla', 70: 'Uber_total', 71: 'Uber_total', 72: 'Uber', 73: 'Uber', 74: 'Uber', 75: 'Uber', 76: 'Uber', 77: 'Uber', 78: 'Uber', 79: 'Uber'}, 'type': {0: 'Sales_1', 1: 'Sales_2', 2: 'Sales_1', 3: 'Sales_2', 4: 'Sales_1', 5: 'Sales_2', 6: 'Sales_1', 7: 'Sales_2', 8: 'Sales_1', 9: 'Sales_2', 10: 'Sales_1', 11: 'Sales_2', 12: 'Sales_1', 13: 'Sales_2', 14: 'Sales_1', 15: 'Sales_2', 16: 'Sales_1', 17: 'Sales_2', 18: 'Sales_1', 19: 'Sales_2', 20: 'Sales_1', 21: 'Sales_2', 22: 'Sales_1', 23: 'Sales_2', 24: 'Sales_1', 25: 'Sales_2', 26: 'Sales_1', 27: 'Sales_2', 28: 'Sales_1', 29: 'Sales_2', 30: 'Sales_1', 31: 'Sales_2', 32: 'Sales_1', 33: 'Sales_2', 34: 'Sales_1', 35: 'Sales_2', 36: 'Sales_1', 37: 'Sales_2', 38: 'Sales_1', 39: 'Sales_2', 40: 'Sales_1', 41: 'Sales_2', 42: 'Sales_1', 43: 'Sales_2', 44: 'Sales_1', 45: 'Sales_2', 46: 'Sales_1', 47: 'Sales_2', 48: 'Sales_1', 49: 'Sales_2', 50: 'Sales_1', 51: 'Sales_2', 52: 'Sales_1', 53: 'Sales_2', 54: 'Sales_1', 55: 'Sales_2', 56: 'Sales_1', 57: 'Sales_2', 58: 'Sales_1', 59: 'Sales_2', 60: 'Sales_1', 61: 'Sales_2', 62: 'Sales_1', 63: 'Sales_2', 64: 'Sales_1', 65: 'Sales_2', 66: 'Sales_1', 67: 'Sales_2', 68: 'Sales_1', 69: 'Sales_2', 70: 'Sales_1', 71: 'Sales_2', 72: 'Sales_1', 73: 'Sales_2', 74: 'Sales_1', 75: 'Sales_2', 76: 'Sales_1', 77: 'Sales_2', 78: 'Sales_1', 79: 'Sales_2'}, '2021-06-28': {0: 67.5277641202152, 1: 82.7854700135998, 2: 21.50082266792856, 3: 22.03644997199996, 4: 64.460440147, 5: 10.1060499896, 6: 65.1530371974946, 7: 50.6429700519999, 8: 
56.413464107792045, 9: 0, 10: 17.48074540313092, 11: 26.8376199976, 12: 52.172, 13: 61.16600000040001, 14: 20.9447844, 15: 40.69122000000001, 16: 83.55718929717925, 17: 14.98039999719995, 18: 20.806771705951697, 19: np.nan, 20: 18.3766353690825, 21: 12.82565001479992, 22: 52.425508769690694, 23: 25.661999978399994, 24: 17.88071596, 25: 24.384659998799997, 26: 91.10086982794643, 27: 12.77899003759993, 28: 16.969540811445366, 29: np.nan, 30: 18.8795397517309, 31: 26.73017999840005, 32: 53.52039700062155, 33: 58.81199999639999, 34: 12.1243325, 35: 24.0544100028, 36: 55.94068246571674, 37: 133.86376999920006, 38: 7.294127785392621, 39: np.nan, 40: 6.07807089184563, 41: 7.27483001599998, 42: 2.300470581874837, 43: 30.71300000639998, 44: 5.810764652, 45: 12.333119997600003, 46: 25.475930745418292, 47: 64.228710012, 48: 9.490904912552498, 49: np.nan, 50: 8.184780211399392, 51: 24.59321999400001, 52: 6.807138946302334, 53: 12.0879999972, 54: 0.869207661, 55: 0.324, 56: 0.5084336040970575, 57: 12.181219996800007, 58: 0, 59: np.nan, 60: 9.293956915067886, 61: 11.171379993599999, 62: 6.384936971649232, 63: 3.657999996, 64: 0.913782413, 65: 1.9992000012000002, 66: 1.5322078073061867, 67: 5.514179996399999, 68: 0.4630297231124678, 69: np.nan, 70: 36.23403557795798, 71: 53.35258999919999, 72: 21.890370397789923, 73: 9.937449997200002, 74: 5.916852561, 75: 6.319439989199998, 76: 7.03772344983066, 77: 37.095700012799995, 78: 1.3890891693374032, 79: np.nan}, '2021-07-05': {0: 65.4690491915759, 1: 98.5235100112003, 2: 21.4573181155924, 3: 241.06741999679997, 4: 67.481716829, 5: 11.60325000040002, 6: 27.5807099999998, 7: 65.8528400140003, 8: 58.949304246983736, 9: 0.0, 10: 185.65887577993723, 11: 318.9965699964001, 12: 54.517, 13: 66.55265999039996, 14: 21.92632044, 15: 43.67116000320002, 16: 87.47349898707688, 17: 208.7727500028001, 18: 21.742056352860352, 19: np.nan, 20: 16.6038963173654, 21: 25.28952001920013, 22: 54.7820864335212, 23: 36.75802000560001, 24: 18.71872129, 25: 30.1634600016, 26: 95.37075040035738, 27: 138.3680400120001, 28: 17.73233819348684, 29: np.nan, 30: 14.80302342121337, 31: 251.83851001200003, 32: 55.926190956481534, 33: 72.4443400032, 34: 12.69221484, 35: 26.032340003999998, 36: 58.56261169338368, 37: 153.36183000480003, 38: 7.622005931348156, 39: np.nan, 40: 72.24367956241771, 41: 14.83083001279999, 42: 29.5726042895728, 43: 38.723000005199985, 44: 6.083562133, 45: 12.845630001599998, 46: 26.66998281055652, 47: 63.26220000600001, 48: 9.917530329288393, 49: np.nan, 50: 8.555606693927, 51: 23.802009994800002, 52: 7.113126469779095, 53: 7.206999998399999, 54: 0.910216433, 55: 1.4089999991999997, 56: 0.5322637911479053, 57: 15.186009997200001, 58: 0.0, 59: np.nan, 60: 9.716385738295367, 61: 14.7327399948, 62: 6.671946105284065, 63: 5.691999996, 64: 0.956574175, 65: 1.0203399996, 66: 1.6040220980113027, 67: 8.020399999199999, 68: 0.4838433599999999, 69: np.nan, 70: 37.88758167841983, 71: 59.03332998119994, 72: 22.874363860953647, 73: 13.690399997999998, 74: 6.194107518, 75: 6.4613199911999954, 76: 7.367580219466185, 77: 38.881609991999944, 78: 1.4515300799999995, 79: np.nan}, '2021-07-12': {0: 607.2971827405001, 1: 88.9671100664001, 2: 21.26749278974862, 3: 17.1524199804, 4: 64.471138092, 5: 89.84481002279999, 6: 26.2044999999998, 7: 51.9698800632001, 8: 5.354051858751745, 9: 0.0, 10: 177.42361595891452, 11: 287.5395700032, 12: 52.117, 13: 47.388199995600004, 14: 20.94835038, 15: 41.4250800048, 16: 83.57340667555117, 17: 198.72629000280003, 18: 20.784858903363354, 19: np.nan, 20: 
178.323907459086, 21: 185.83897002839998, 22: 52.37029646474982, 23: 27.87144997800001, 24: 17.88339044, 25: 23.645340010799984, 26: 91.11855133792106, 27: 134.3221800396, 28: 16.95166921641509, 29: np.nan, 30: 128.82813286243115, 31: 192.6867300156, 32: 53.46403160619618, 33: 41.412320006399995, 34: 12.1261155, 35: 11.840830002000002, 36: 55.95153983444301, 37: 139.43358000720002, 38: 7.286445921791947, 39: np.nan, 40: 69.04410667683521, 41: 93.877410018, 42: 28.270665735943805, 43: 27.512680004399986, 44: 5.811656147, 45: 5.2319800032, 46: 25.480875296710053, 47: 61.132750010400024, 48: 9.480909497181356, 49: np.nan, 50: 8.178601399067174, 51: 17.6743199976, 52: 6.7999699585309585, 53: 6.131999998799999, 54: 0.870099156, 55: 0.6185600004, 56: 0.5085322845362154, 57: 10.923759998400003, 58: 0.0, 59: np.nan, 60: 9.287042311133577, 61: 19.966500000000007, 62: 6.378212628950804, 63: 6.524999997600001, 64: 0.913782413, 65: 1.9303400016, 66: 1.5325051891827732, 67: 11.511160000800006, 68: 0.4625420799999998, 69: np.nan, 70: 36.21177607303267, 71: 51.3836100036, 72: 21.86731639537707, 73: 10.310769999600003, 74: 5.917744056, 75: 5.152679999999999, 76: 7.039089381655591, 77: 35.920160003999996, 78: 1.3876262399999995, 79: np.nan}}).set_index(['group','category','type']) I am trying to merge df2 on df1 by group, category, date (date is a column) so that my output would look like this: I omitted the values from df2 of sales_1 & sales_2 in my desired output example, but those rows should be filled with the corresponding group and category values from df2. 2021-06-28 2021-07-05 2021-07-12 total_orders total_sales sales_1 sales_2 total_orders total_sales sales_1 sales_2 total_orders total_sales sales_1 sales_2 group category A Apple 88.000 4,603.210 Amazon 66.000 2,485.060 Google 191.000 4,919.400 Netflix 558.000 6,097.770 Facebook 12.000 31.220 Uber 4.000 155.710 Tesla 56.000 3,484.990 total 975.000 17,237.360 B Apple 90.000 561.540 Amazon 26.000 698.750 Google 49.000 1,290.130 Netflix 250.000 4,292.680 Facebook 7.000 947.650 Uber 2.000 329.000 Tesla 44.000 2,889.650 total 468.000 9,989.400 C Apple 36.000 330.890 Amazon 52.000 2,076.260 Google 94.000 2,982.270 Netflix 750.000 11,978.620 Facebook 10.000 683.000 Uber 0.000 0.000 Tesla 52.000 3,812.170 total 994.000 20,963.210 D Apple 16.000 234.490 Amazon 22.000 896.100 Google 5.000 231.000 Netflix 57.000 893.810 Uber 3.000 129.000 Tesla 33.000 1,712.330 total 136.000 4,106.730 So that sales_1 & sales_2 are merged on group & category and are on the same date column. The total_x from df2 can be ignored as it can be calculated from the fields. The total_values are not used in merge, only the ones after it. What I've tried: df1.reset_index().merge(df2.reset_index(), left_on=['group', 'category'], right_on=['group', 'category']) Which throws a warning: UserWarning: merging between different levels can give an unintended result (2 levels on the left,1 on the right) And is not how I expect it to merge. How could I achieve my desired output? Using df = df1.merge(df2.unstack(), left_index=True, right_index=True) Produces: Is it then just reordering of columns as I want to have a unique date and 4 columns for it? Or it might be that one date has 00:00:00 to it?
Create DatetimeIndex in column in df2 first, then unstack and merge by MultiIndexes: f = lambda x: pd.to_datetime(x) df = (df1.merge(df2.rename(columns=f).unstack(), left_index=True, right_index=True) .sort_index(axis=1)) print (df.head()) 2021-06-28 2021-07-05 \ Sales_1 Sales_2 total_orders total_sales Sales_1 group category A Apple 52.172000 61.166 88.0 4603.21 54.517000 Amazon 52.425509 25.662 66.0 2485.06 54.782086 Google 53.520397 58.812 191.0 4919.40 55.926191 Netflix 6.807139 12.088 558.0 6097.77 7.113126 Facebook 2.300471 30.713 12.0 31.22 29.572604 2021-07-12 \ Sales_2 total_orders total_sales Sales_1 Sales_2 group category A Apple 66.55266 109.0 3453.02 52.117000 47.38820 Amazon 36.75802 48.0 17868.73 52.370296 27.87145 Google 72.44434 174.0 44707.83 53.464032 41.41232 Netflix 7.20700 592.0 61425.99 6.799970 6.13200 Facebook 38.72300 13.0 1261.00 28.270666 27.51268 total_orders total_sales group category A Apple 32.0 2147.70 Amazon 15.0 4767.30 Google 89.0 2399.30 Netflix 239.0 3137.44 Facebook 2.0 178.00
5
4
68,407,031
2021-7-16
https://stackoverflow.com/questions/68407031/telethon-cannot-sign-into-accounts-with-two-step-verfication
I'm trying to log into telegram using telethon with a number with two-step verification. I use this code, client = TelegramClient(f'sessions/1', API_ID, API_HASH) client.connect() phone = input('phone ; ') y = client.send_code_request(phone) x = client.sign_in(phone=phone, password=input('password : '), code=input('code :')) But It still says that the account is two-step protected. Is there any easier way to do this without this method or... how can I properly use this method? I want to log into the account fully from the code without typing anything in the terminal (Here I used inputs just for testing. I will connect a GUI later where users can enter the details) so I don't think client.start() will work. and I'm a little confused when it comes to passing the parameters to client.start() method. Any help would be really appreciated. Thank you.
You also need to pass the phone_code_hash returned from client.send_code_request(phone). You could try (see the function call of sign_in with phone_code_hash and send_code_request): y = client.send_code_request(phone) client.sign_in(phone=phone, password=input('password : '), code=input('code :'), phone_code_hash=y.phone_code_hash)
4
4
68,381,733
2021-7-14
https://stackoverflow.com/questions/68381733/error-module-keras-optimizers-has-no-attribute-rmsprop
I am running this code below and it returned an error AttributeError: module 'keras.optimizers' has no attribute 'RMSprop'. I download tensorflow using pip install tensorflow. from keras import layers from keras import models model = models.Sequential() model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3))) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(128, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(128, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Flatten()) model.add(layers.Dense(512, activation='relu')) model.add(layers.Dense(1, activation='sigmoid')) model.summary() from keras import optimizers model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(lr=1e-4), metrics=['acc']) Could anyone please help explain to me what's wrong with this? Thank you for your time.
As you said, you installed tensorflow (which includes keras) via pip install tensorflow, and not keras directly. Installing keras via pip install keras is not recommended anymore (see also the instructions here). This means that keras is available through tensorflow.keras. Instead of importing via from keras import optimizers, you should use from tensorflow.keras import optimizers.
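With that change, the compile call from the question becomes (learning_rate is the current argument name in tf.keras; the older lr spelling still works in most TF 2.x releases as a deprecated alias):
from tensorflow.keras import optimizers

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(learning_rate=1e-4),
              metrics=['acc'])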
11
17
68,399,161
2021-7-15
https://stackoverflow.com/questions/68399161/why-not-use-runserver-for-production-at-django
Everywhere I look, uWSGI and Gunicorn are recommended by everyone for production mode. However, they are a lot more painful to operate; python manage.py runserver is faster to get running, simpler, and the logging is also more visible if something goes wrong. Still, why is the "python manage.py runserver" command not recommended for live production?
The runserver management command is optimized for different things from a web-server. Here are some things it does that are great for local development but would add unnecessary overhead in a production environment (source): The development server automatically reloads Python code for each request, as needed When you start the server, and each time you change Python code while the server is running, the system check framework will check your entire Django project for some common errors Serves static files if the staticfiles contrib app is enabled (in a manner the docs describe as "grossly inefficient and probably insecure") Meanwhile, production web-servers are designed to handle massively parallel workloads and are also under much higher security standards as they are the entry-point for all port 80/443 traffic to the server
8
3
68,399,376
2021-7-15
https://stackoverflow.com/questions/68399376/add-a-column-in-a-dataframe-with-the-date-of-today-like-the-today-function-in-ex
I have a dataframe df with lots of columns and I would love to add another column with the name date that contains today's date, like the TODAY function in Excel. How can I do this?
Assuming your dataframe is named df, you can use the datetime library: from datetime import datetime df['new_column']= datetime.today().strftime('%Y-%m-%d') @Henry Ecker raised the point that the same is possible in native pandas using pandas.Timestamp.today df['new_column'] = pd.Timestamp.today().strftime('%Y-%m-%d')
6
10
68,385,648
2021-7-14
https://stackoverflow.com/questions/68385648/does-pyspark-support-the-short-circuit-evaluation-of-conditional-statements
I want to create a new boolean column in my dataframe that derives its value from the evaluation of two conditional statements on other columns in the same dataframe: columns = ["id", "color_one", "color_two"] data = spark.createDataFrame([(1, "blue", "red"), (2, "red", None)]).toDF(*columns) data = data.withColumn('is_red', data.color_one.contains("red") | data.color_two.contains("red")) This works fine unless either color_one or color_two is NULL in a row. In cases like these, is_red is also set to NULL for that row instead of true or false: +-------+----------+------------+-------+ |id |color_one |color_two |is_red | +-------+----------+------------+-------+ | 1| blue| red| true| | 2| red| NULL| NULL| +-------+----------+------------+-------+ This means that PySpark is evaluating all of the clauses of the conditional statement rather than exiting early (via short-circuit evaluation) if the first condition happens to be true (like in row 2 of my example above). Does PySpark support the short-circuit evaluation of conditional statements? In the meantime, here is a workaround I have come up with to null-check each column: from pyspark.sql import functions as F color_one_is_null = data.color_one.isNull() color_two_is_null = data.color_two.isNull() data = data.withColumn('is_red', F.when(color_two_is_null, data.color_one.contains("red")) .otherwise(F.when(color_one_is_null, data.color_two.contains("red")) .otherwise(F.when(color_one_is_null & color_two_is_null, F.lit(False)) .otherwise(data.color_one.contains("red") | data.color_two.contains("red")))) )
I don't think Spark support short-circuit evaluation on conditionals as stated here https://docs.databricks.com/spark/latest/spark-sql/udf-python.html#:~:text=Spark%20SQL%20(including,short-circuiting%E2%80%9D%20semantics.: Spark SQL (including SQL and the DataFrame and Dataset API) does not guarantee the order of evaluation of subexpressions. In particular, the inputs of an operator or function are not necessarily evaluated left-to-right or in any other fixed order. For example, logical AND and OR expressions do not have left-to-right β€œshort-circuiting” semantics. Another alternative way would be creating an array of column_one and column_two, then evaluate if the array contains 'red' using SQL EXISTS data = data.withColumn('is_red', F.expr("EXISTS(array(color_one, color_two), x -> x = 'red')")) data.show() +---+---------+---------+------+ | id|color_one|color_two|is_red| +---+---------+---------+------+ | 1| blue| red| true| | 2| red| null| true| | 3| null| green| false| | 4| yellow| null| false| | 5| null| red| true| | 6| null| null| false| +---+---------+---------+------+
4
9
68,386,130
2021-7-14
https://stackoverflow.com/questions/68386130/how-to-type-hint-a-callable-of-a-function-with-default-arguments
I'm trying to Type Hint the function bar, but I got the Too few arguments error when I run mypy. from typing import Callable, Optional def foo(arg: int = 123) -> float: return arg+0.1 def bar(foo: Callable[[int], float], arg: Optional[int] = None) -> float: if arg: return foo(arg) return foo() print(bar(foo)) print(bar(foo, 90)) I have also tried: Callable[[], float] (got Too many arguments error) Callable[[Optional[int]], float] (got another error) So, how should I do the Type Hinting of the bar function?
Define this: class Foo(Protocol): def __call__(self, x: int = ..., /) -> float: ... then type hint foo as Foo instead of Callable[[int], float]. Callback protocols allow you to: define flexible callback types that are hard (or even impossible) to express using the Callable[...] syntax and optional arguments are one of those impossible things to express with a normal Callable. The / at the end of __call__'s signature makes x a positional-only parameter, which allows any passed function to bar to have a parameter name that is not x (your specific example of foo calls it arg instead). If you removed /, then not only would the types have to line up as expected, but the names would have to line up too because you would be implying that Foo could be called with a keyword argument. Because bar doesn't call foo with keyword arguments, opting into that behavior by omitting the / imposes inflexibility on the user of bar (and would make your current example still fail because "arg" != "x").
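Putting it together with the bar function from the question (the positional-only / syntax needs Python 3.8+; on older versions Protocol itself has to come from typing_extensions):
from typing import Optional, Protocol

class Foo(Protocol):
    def __call__(self, x: int = ..., /) -> float: ...

def foo(arg: int = 123) -> float:
    return arg + 0.1

def bar(foo: Foo, arg: Optional[int] = None) -> float:
    if arg is not None:
        return foo(arg)
    return foo()

print(bar(foo))
print(bar(foo, 90))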
46
57
68,391,621
2021-7-15
https://stackoverflow.com/questions/68391621/zappa-deploy-fails-with-attributeerror-template-object-has-no-attribute-add
Since a few days ago, zappa deploy fails with the following error (zappa version 0.50.0): Traceback (most recent call last): File "/root/repo/venv/lib/python3.6/site-packages/zappa/cli.py", line 2785, in handle sys.exit(cli.handle()) File "/root/repo/venv/lib/python3.6/site-packages/zappa/cli.py", line 510, in handle self.dispatch_command(self.command, stage) File "/root/repo/venv/lib/python3.6/site-packages/zappa/cli.py", line 557, in dispatch_command self.update(self.vargs['zip'], self.vargs['no_upload']) File "/root/repo/venv/lib/python3.6/site-packages/zappa/cli.py", line 993, in update endpoint_configuration=self.endpoint_configuration File "/root/repo/venv/lib/python3.6/site-packages/zappa/core.py", line 2106, in create_stack_template self.cf_template.add_description('Automatically generated with Zappa') AttributeError: 'Template' object has no attribute 'add_description'
Since version 3.0.0, the package troposphere removed the deprecated Template methods (see the changelog). Breaking changes: * Python 3.6+ (Python 2.x and earlier Python 3.x support is now deprecated due to Python EOL) * Remove previously deprecated Template methods. The above issue can be fixed by adding troposphere<3 in the requirements file.
12
18
68,389,791
2021-7-15
https://stackoverflow.com/questions/68389791/how-to-fix-attributeerror-wherenode-object-has-no-attribute-select-format
There are many similar questions on SO, but this specific error message did not turn up in any of my searches: AttributeError: 'WhereNode' object has no attribute 'select_format' This was raised when trying to annotate() a Django queryset with the (boolean) result of a comparison, such as the gt lookup in the following simplified example: Score.objects.annotate(positive=Q(value__gt=0)) The model looks like this: class Score(models.Model): value = models.FloatField() ... How to fix this?
This case can be fixed using ExpressionWrapper():
from django.db.models import BooleanField, ExpressionWrapper, Q

Score.objects.annotate(
    positive=ExpressionWrapper(Q(value__gt=0), output_field=BooleanField()))
From the docs: ExpressionWrapper is necessary when using arithmetic on F() expressions with different types ... The same apparently holds for Q objects, although I could not find any explicit reference in the docs.
7
15
68,380,572
2021-7-14
https://stackoverflow.com/questions/68380572/displayed-video-in-jupyter-notebook-is-unplayable
I'm trying to embed a video on my local drive in Jupyter Notebook. The file name is "openaigym.video.6.7524.video000000.mp4" and it is in a folder "gym-results". Using the following code produces nothing whatsoever: from IPython.display import Video Video("./gym-results/openaigym.video.4.7524.video000000.mp4",embed =True) If I try to use HTML directly (which I got from here), it produces an unplayable video: from base64 import b64encode def video(fname, mimetype): from IPython.display import HTML video_encoded = b64encode(open(fname, "rb").read()) video_tag = '<video controls alt="test" src="data:video/{0};base64,{1}">'.format(mimetype, video_encoded) return HTML(data=video_tag) path= f"./gym-results/openaigym.video.6.7524.video000000.mp4" video(path, "mp4") That is, it produces the following: Which cannot be started. How do I solve this?
First method: it worked for me! Second method: you could try this as well:
from ipywidgets import Video
Video.from_file("./play_video_test.mp4", width=320, height=320)
Third method: change the type of the cell from code to Markdown and use
<video controls src="./play_video_test.mp4">animation</video>
If none of these solutions work for you, I recommend updating your Jupyter notebook with conda update jupyter or pip install -U jupyter and then going through each solution once more.
8
3
68,387,192
2021-7-15
https://stackoverflow.com/questions/68387192/what-is-np-uint8
Is np.uint9 possible? Why use it? red_lower = np.array([136, 87, 111], np.uint9)
https://numpy.org/doc/stable/reference/arrays.scalars.html#unsigned-integer-types class numpy.ubyte[source] Unsigned integer type, compatible with C unsigned char. Character code 'B' Alias on this platform (Linux x86_64) numpy.uint8: 8-bit unsigned integer (0 to 255). Most often this is used for arrays representing images, with the 3 color channels having small integer values (0 to 255).
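A quick way to inspect what this dtype actually gives you (note that np.uint9 does not exist; NumPy only provides unsigned integer widths of 8, 16, 32 and 64 bits):
import numpy as np

red_lower = np.array([136, 87, 111], np.uint8)
print(red_lower.dtype)       # uint8
print(np.iinfo(np.uint8))    # value range: min = 0, max = 255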
5
6
68,381,803
2021-7-14
https://stackoverflow.com/questions/68381803/cumulative-sum-but-conditionally-excluding-earlier-rows
I have a DataFrame like this: df = pd.DataFrame({ 'val_a': [3, 3, 3, 2, 2, 2, 1, 1, 1], 'val_b': [3, np.nan, 2, 2, 2, 0, 1, np.nan, 0], 'quantity': [1, 4, 2, 8, 5, 7, 1, 4, 2] }) It looks like this: | | val_a | val_b | quantity | |---:|--------:|--------:|-----------:| | 0 | 3 | 3 | 1 | | 1 | 3 | nan | 4 | | 2 | 3 | 2 | 2 | | 3 | 2 | 2 | 8 | | 4 | 2 | 2 | 5 | | 5 | 2 | 0 | 7 | | 6 | 1 | 1 | 1 | | 7 | 1 | nan | 4 | | 8 | 1 | 0 | 2 | It is ordered by val_a. I'd like to take a cumulative sum for the total quantity for each val_a. So: df.groupby('val_a', sort=False).sum().cumsum().drop(columns='val_b') which gives | val_a | quantity | |--------:|-----------:| | 3 | 7 | | 2 | 27 | | 1 | 34 | However, here's the tricky part. I'd like to exclude rows such that the value of val_b is greater than the key val_a. I'll clarify with an example: when calculating the total for when val_a is 3, none of the rows have val_b greater than val_a. So the cumulative total for when val_a is 3 is 7; when calculating the total for when val_a is 2, then row 0 has val_b greater than 2. That row has quantity 1. So, excluding that row, the cumulative total for when val_a is 2 is 27 - 1, i.e. 26; when calculating the total for when val_a is 1, then rows 0, 2, 3, 4 have val_b greater than 1,. That row has quantity 1. So, excluding that row, the cumulative total for when val_a is 1 is 34 - 1 - 2 - 8 - 5, i.e. 18; Here's the desired output: | val_a | quantity | |--------:|-----------:| | 3 | 7 | | 2 | 26 | | 1 | 18 |
With the help of NumPy: # sum without conditions raw_sum = df.groupby("val_a", sort=False).quantity.sum().cumsum() # comparing each `val_b` against each unique `val_a` via `gt.outer` sub_mask = np.greater.outer(df.val_b.to_numpy(), df.val_a.unique()) # selecting values to subtract from `quantity` and summing per `val_a` to_sub = (sub_mask * df.quantity.to_numpy()[:, np.newaxis]).sum(axis=0) # subtracting from the raw sum result = raw_sum - to_sub to get >>> result.reset_index() val_a quantity 0 3 7 1 2 26 2 1 18
5
2
68,380,123
2021-7-14
https://stackoverflow.com/questions/68380123/cythonized-function-with-a-single-positional-argument-is-not-possible-to-call-us
Long story short, I want to cythonize my python code and build .so files to hide it from the customer. Take this simple function: def one_positional_argument(a): print(a) and my setup.py which builds the .so file from setuptools import setup, find_packages from Cython.Build import cythonize setup( name='tmp', version='1.0.0', packages=find_packages(), nthreads=3, ext_modules=cythonize( ["a.py"], compiler_directives={'language_level': 3}, build_dir="output", ), ) When I import the .so file and try to execute my function here is what happens: one_positional_argument(1) # this prints 1 and works fine one_positional_argument(a=1) # throws TypeError: one_positional_argument() takes no keyword arguments There are multiple workarounds to this but I would like to know if I am doing anything wrong Additional info: If I have a function with 2 positional arguments, or one positional and one with default value, everything works fine. The issue is present only with 1 positional argument.
You need the always_allow_keywords compiler directive (https://cython.readthedocs.io/en/latest/src/userguide/source_files_and_compilation.html#compiler-directives). Not allowing them by default is a deliberate compatibility/speed trade-off. However, in the forthcoming Cython v3 (the alpha version is usable now...) that will change: https://github.com/cython/cython/issues/3090
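For reference, a hedged sketch of how the directive could be switched on in the setup.py from the question — either globally via compiler_directives as below, or per file with a "# cython: always_allow_keywords=True" comment at the top of a.py:

from setuptools import setup, find_packages
from Cython.Build import cythonize

setup(
    name='tmp',
    version='1.0.0',
    packages=find_packages(),
    ext_modules=cythonize(
        ["a.py"],
        compiler_directives={
            'language_level': 3,
            # keep keyword-argument calls working for single-argument functions
            'always_allow_keywords': True,
        },
        build_dir="output",
    ),
)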
4
8
68,379,495
2021-7-14
https://stackoverflow.com/questions/68379495/is-there-a-pyqt5-method-to-convert-a-python-string-to-a-qbytearray
this is probably a very simple question but I haven't been able to find a good answer to it yet. I've found answers for converting QByteArrays into python strings, but not vice versa. Is there a pyqt5 method that allows me to simply convert a python string into a QByteArray (so that it can be sent over a serial connection using QSerialPort.write()). I reckon it's likely that there is a nice built-in feature in pyqt5 to do this without manually extracting the bytes from the string and building a QByteArray from them?
You have to convert the string to bytes: >>> from PyQt5.QtCore import QByteArray >>> s = "hello world" >>> ba = QByteArray(s.encode()) >>> print(ba) b'hello world'
4
8
68,375,133
2021-7-14
https://stackoverflow.com/questions/68375133/as-a-pip-install-user-am-i-supposed-to-have-wheel-installed
Consider the usual scenario - I want to create a virtual environment and install some packages. Say python3 -m venv venv source venv/bin/activate pip install databricks-cli During the installation, I get an error Building wheels for collected packages: databricks-cli Building wheel for databricks-cli (setup.py) ... error ERROR: Command errored out with exit status 1: command: /home/paulius/Documents/wheeltest/venv/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-m7jmyh1m/databricks-cli/setup.py'"'"'; __file__='"'"'/tmp/pip-install-m7jmyh1m/databricks-cli/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-maxix98x cwd: /tmp/pip-install-m7jmyh1m/databricks-cli/ Complete output (8 lines): /tmp/pip-install-m7jmyh1m/databricks-cli/setup.py:24: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses import imp usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...] or: setup.py --help [cmd1 cmd2 ...] or: setup.py --help-commands or: setup.py cmd --help error: invalid command 'bdist_wheel' ---------------------------------------- ERROR: Failed building wheel for databricks-cli While it is benign (the installation actually works), it is still annoying. I know that pip install wheel resolves this, but wheel does not come with the virtual environment by default. So should I always add it to my requirements.txt, or maybe this is something that can be solved by the package maintainer (in this case databricks-cli) and hence I should open an issue in their Github? Update: note that the wheel is not necessary to install wheels, in this example bunch of dependencies get successfully downloaded and installed as wheels. The only databricks-cli package gets the error, as it does not have a wheel, but for some reason, pip tries to build it.
This was a pip bug, and the solution is to upgrade pip. With the newest version things look fine: (venv) paulius@xps:~/Documents/wheeltest$ pip install databricks-cli Collecting databricks-cli Using cached databricks-cli-0.14.3.tar.gz (54 kB) Collecting click>=6.7 Using cached click-8.0.1-py3-none-any.whl (97 kB) Collecting requests>=2.17.3 Using cached requests-2.26.0-py2.py3-none-any.whl (62 kB) Collecting six>=1.10.0 Using cached six-1.16.0-py2.py3-none-any.whl (11 kB) Collecting tabulate>=0.7.7 Using cached tabulate-0.8.9-py3-none-any.whl (25 kB) Collecting idna<4,>=2.5 Using cached idna-3.2-py3-none-any.whl (59 kB) Collecting certifi>=2017.4.17 Using cached certifi-2021.5.30-py2.py3-none-any.whl (145 kB) Collecting urllib3<1.27,>=1.21.1 Using cached urllib3-1.26.6-py2.py3-none-any.whl (138 kB) Collecting charset-normalizer~=2.0.0 Using cached charset_normalizer-2.0.1-py3-none-any.whl (35 kB) Using legacy 'setup.py install' for databricks-cli, since package 'wheel' is not installed. Installing collected packages: urllib3, idna, charset-normalizer, certifi, tabulate, six, requests, click, databricks-cli Running setup.py install for databricks-cli ... done Successfully installed certifi-2021.5.30 charset-normalizer-2.0.1 click-8.0.1 databricks-cli-0.14.3 idna-3.2 requests-2.26.0 six-1.16.0 tabulate-0.8.9 urllib3-1.26.6 Note the Using legacy 'setup.py install' ... line. This is a related issue in the pip GitHub: https://github.com/pypa/pip/issues/8302. Not exactly that, but there is an explanation in the comments about what the wheel-building logic is supposed to be.
14
4
65,550,168
2021-1-3
https://stackoverflow.com/questions/65550168/get-number-of-documents-in-collection-firestore
Is it possible to count how many documents a collection in Firestore has, using python? I just found this code functions.firestore.document('collection/{documentUid}') .onWrite((change, context) => { if (!change.before.exists) { // New document Created : add one to count db.doc(docRef).update({numberOfDocs: FieldValue.increment(1)}); } else if (change.before.exists && change.after.exists) { // Updating existing document : Do nothing } else if (!change.after.exists) { // Deleting document : subtract one from count db.doc(docRef).update({numberOfDocs: FieldValue.increment(-1)}); } return; }); How I can do this with python?
Firestore now has limited support for aggregation queries. Previous answer below, as this still applies for cases that are not supported. Outside of the built-in count operation, if you want to determine the number of documents, you have two main options: Read all documents, and then count them in the client. Keep the document count in the database itself, and then update it with every add/delete operation. While the first option is simpler, it is less scalable, as you'll end up with clients reading all documents just to determine the count. That's why you'll find most questions/articles about counting documents focusing on the second approach. For more on this, see: How should I handle aggregated values in Firestore The Firestore documentation on aggregation queries. How fast is counting documents in Cloud Firestore? The Firestore documentation on distributed counters, which you'll need to consider if your count changes more frequently than about once per second.
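To tie this back to Python, a rough sketch of both options with the google-cloud-firestore client — the collection and counter-document names are placeholders, and the counter variant assumes you bump it on every add/delete just like the Cloud Function in the question:

from google.cloud import firestore

db = firestore.Client()

# Option 1: read the documents and count client-side (simple, not scalable)
count = sum(1 for _ in db.collection("my_collection").stream())

# Option 2: maintain a counter document and update it atomically on each write
db.document("counters/my_collection").update({"numberOfDocs": firestore.Increment(1)})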
8
2
65,549,855
2021-1-3
https://stackoverflow.com/questions/65549855/openapi-specification-yml-yaml-all-refs-replace-or-expand-to-its-definition
I am looking for some solution or maybe some script that can help me to replace($ref) or expand its definitions within the YML file with schema validation. (For detail please find below example) **Example: Input with $ref ** /pets/{petId}: get: summary: Info for a specific pet operationId: showPetById tags: - pets parameters: - name: petId in: path required: true description: The id of the pet to retrieve schema: type: string responses: '200': description: Expected response to a valid request content: application/json: schema: $ref: "#/components/schemas/Pet" default: description: unexpected error content: application/json: schema: $ref: "#/components/schemas/Error" components: schemas: Pet: type: object required: - id - name properties: id: type: integer format: int64 name: type: string tag: type: string Pets: type: array items: $ref: "#/components/schemas/Pet" Error: type: object required: - code - message properties: code: type: integer format: int32 message: type: string Output: All $ref replace or expanded to its definition (with schema validation) /pets/{petId}: get: summary: Info for a specific pet operationId: showPetById tags: - pets parameters: - name: petId in: path required: true description: The id of the pet to retrieve schema: type: string responses: '200': description: Expected response to a valid request content: application/json: schema: type: object required: - id - name properties: id: type: integer format: int64 name: type: string tag: type: string default: description: unexpected error content: application/json: schema: type: object required: - code - message properties: code: type: integer format: int32 message: type: string components: schemas: Pet: type: object required: - id - name properties: id: type: integer format: int64 name: type: string tag: type: string Pets: type: array items: type: object required: - id - name properties: id: type: integer format: int64 name: type: string tag: type: string Error: type: object required: - code - message properties: code: type: integer format: int32 message: type: string Can you please suggest?
Here are some tools that can claim to be able dereference internal $refs in addition to external ones. Be aware of potential issues with circular $refs. CLI: https://github.com/APIDevTools/swagger-cli swagger-cli bundle --dereference <file> Redocly OpenAPI CLI redocly bundle --dereferenced --output <outputName> --ext <ext> [entrypoints...] Libraries: Java: Swagger Parser with the resolvefully option Node.js: oas-resolver with the resolveInternal option
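Since this is a Python context, a Python-side option may also be worth noting: the prance package can resolve the $refs and hand back a plain dict that you can dump as YAML again. A hedged sketch (file names are placeholders, and circular $refs can still be an issue):

import yaml
from prance import ResolvingParser

parser = ResolvingParser("openapi.yaml")  # resolves internal and external $refs
with open("openapi.dereferenced.yaml", "w") as f:
    yaml.safe_dump(parser.specification, f, sort_keys=False)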
8
6
65,509,313
2020-12-30
https://stackoverflow.com/questions/65509313/tortoise-orm-filter-with-logical-operators
I have two tables class User(models.Model): id = fields.BigIntField(pk=True) name = CharField(max_length=100) tags: fields.ManyToManyRelation["Tag"] = fields.ManyToManyField( "models.Tag", related_name="users", through="user_tags" ) class Tag(models.Model): id = fields.BigIntField(pk=True) name = fields.CharField(max_length=100) value = fields.CharField(max_length=100) users: fields.ManyToManyRelation[User] Let's assume this dummy data #users bob = await User.create(name="bob") alice = await User.create(name="alice") #tags foo = await Tag.create(name="t1", value="foo") bar = await Tag.create(name="t2", value="bar") #m2m await bob.tags.add(foo) await alice.tags.add(foo, bar) Now I want to count users who have both tags foo and bar, which is alice in this case, so it should be 1. The below query will give me a single level of filtering, but how do I specify that the user should have both foo and bar in their tags ? u = await User.filter(tags__name="t1", tags__value="foo").count()
Tortoise-ORM provides Q objects for complicated queries with logical operators like |(or) and &(and). Your query could be made like this: u = await User.filter(Q(tags__name="t1") & (Q(tags__value="foo") | Q(tags__value="bar"))).count()
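As a small addition, Q is imported from Tortoise itself — in recent versions from tortoise.expressions, in older ones from tortoise.query_utils — so a fuller (hedged) version of the snippet would look like:

from tortoise.expressions import Q  # older releases: from tortoise.query_utils import Q

u = await User.filter(
    Q(tags__name="t1") & (Q(tags__value="foo") | Q(tags__value="bar"))
).count()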
5
9
65,515,182
2020-12-31
https://stackoverflow.com/questions/65515182/are-multiple-objectives-possible-or-tools-constraint-programming
I have a problem where I have a set of warehouses with a given production capacity that send some product to a list of customers at a given cost. I'm trying to minimize the total cost of sending the products so that each customer's demand is satisfied. That part is sorted. Now I need to add a new objective (or constraint) where I try to satisfy all the clients demand at a minimum cost but also using the minimum number of warehouses possible. Say start with 5 warehouses, if the problem is impossible then try 6, 7, 8 etc. until a solution is found were I satisfy all the demand using the minimum number of warehouses possible. How could I go about this using or-tool constraint programming module? Is it even possible? I've had a good look at the documentation but couldn't find any constraint or function that seemed to cater for this idea.
Solve with the first objective, constraint the objective with the solution, hint and solve with the new objective. from ortools.sat.python import cp_model model = cp_model.CpModel() solver = cp_model.CpSolver() x = model.NewIntVar(0, 10, "x") y = model.NewIntVar(0, 10, "y") # Maximize x model.Maximize(x) solver.Solve(model) print("x", solver.Value(x)) print("y", solver.Value(y)) print() # Hint (speed up solving) model.AddHint(x, solver.Value(x)) model.AddHint(y, solver.Value(y)) # Maximize y (and constraint prev objective) model.Add(x == round(solver.ObjectiveValue())) # use <= or >= if not optimal model.Maximize(y) solver.Solve(model) print("x", solver.Value(x)) print("y", solver.Value(y)) Reference (github issue)
5
13
65,575,796
2021-1-5
https://stackoverflow.com/questions/65575796/why-does-the-flask-bool-query-parameter-always-evaluate-to-true
I have odd behavior in one of the endpoints of my Flask application which accepts a boolean query parameter. No matter what I pass to it, such as asfsdfd or true or false, it is considered true. Only by leaving it empty does it become false. full_info = request.args.get("fullInfo", default=False, type=bool) if full_info: # do stuff It seems that any input is considered to be true. Is there any way to make this work with Flask's intended way of defining the type, or do I need to accept a string and compare it?
The type parameter of request.args.get is not for specifying the value's type, but for specifying a callable: type – A callable that is used to cast the value in the MultiDict. If a ValueError is raised by this callable the default value is returned. It accepts a callable (ex. a function), applies that callable to the query parameter value, and returns the result. So the code request.args.get("fullInfo", default=False, type=bool) calls bool(value) where value is the query parameter value. In Flask, the query parameter values are always stored as strings. And calling bool() on a non-empty string will always be True: In [10]: bool('true') Out[10]: True In [11]: bool('false') Out[11]: True In [12]: bool('any non-empty will be true') Out[12]: True In [13]: bool('') Out[13]: False Instead of bool, you can pass a function that explicitly checks if the string is equal to the literal string true (or whichever value your API rules consider as true-ish): def is_it_true(value): return value.lower() == 'true' @app.route("/test") def test(): full_info = request.args.get('fullInfo', default=False, type=is_it_true) return jsonify({'full_info': full_info}) $ curl -XGET http://localhost:5000/test?fullInfo=false {"full_info":false} $ curl -XGET http://localhost:5000/test?fullInfo=adasdasd {"full_info":false} $ curl -XGET http://localhost:5000/test?fullInfo=11431423 {"full_info":false} $ curl -XGET http://localhost:5000/test?fullInfo= {"full_info":false} $ curl -XGET http://localhost:5000/test?fullInfo=true {"full_info":true} $ curl -XGET http://localhost:5000/test?fullInfo=TRUE {"full_info":true} $ curl -XGET http://localhost:5000/test {"full_info":false}
24
51
65,523,844
2020-12-31
https://stackoverflow.com/questions/65523844/colormap-diverging-from-black-instead-of-white
I would like a diverging colormap that has another colour than white (preferably black) as it center color. Neither matplotlib or cmocean seems to have such a colormap. Is my best option to create an own colormap, or are there existing ones?
@JohanC put an obvious answer in their comment and I didn't see it because of the accepted answer, which requires a lesser-known package: Seaborn supports arbitrary diverging palettes with a black center. import seaborn as sns hue_neg, hue_pos = 250, 15 cmap = sns.diverging_palette(hue_neg, hue_pos, center="dark", as_cmap=True)
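A quick usage sketch with matplotlib, assuming some arbitrary 2-D data:

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

data = np.random.randn(20, 20)
cmap = sns.diverging_palette(250, 15, center="dark", as_cmap=True)
plt.imshow(data, cmap=cmap, vmin=-3, vmax=3)  # zero maps to the black center
plt.colorbar()
plt.show()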
10
1
65,549,588
2021-1-3
https://stackoverflow.com/questions/65549588/shap-treeexplainer-for-randomforest-multiclass-what-is-shap-valuesi
I am trying to plot SHAP This is my code rnd_clf is a RandomForestClassifier: import shap explainer = shap.TreeExplainer(rnd_clf) shap_values = explainer.shap_values(X) shap.summary_plot(shap_values[1], X) I understand that shap_values[0] is negative and shap_values[1] is positive. But what about for multiple class RandomForestClassifier? I have the rnd_clf classifying one of: ['Gusto','Kestrel 200 SCI Older Road Bike', 'Vilano Aluminum Road Bike 21 Speed Shimano', 'Fixie']. How do I determine which index of shap_values[i] corresponds to which class of my output?
How do I determine which index of shap_values[i] corresponds to which class of my output? shap_values[i] are SHAP values for i'th class. What is an i'th class is more a question of an encoding schema you use: LabelEncoder, pd.factorize, etc. You may try the following as a clue: from sklearn.preprocessing import LabelEncoder labels = [ "Gusto", "Kestrel 200 SCI Older Road Bike", "Vilano Aluminum Road Bike 21 Speed Shimano", "Fixie", ] le = LabelEncoder() y = le.fit_transform(labels) encoding_scheme = dict(zip(y, labels)) pprint(encoding_scheme) {0: 'Fixie', 1: 'Gusto', 2: 'Kestrel 200 SCI Older Road Bike', 3: 'Vilano Aluminum Road Bike 21 Speed Shimano'} So, eg shap_values[3] for this particular case is for 'Vilano Aluminum Road Bike 21 Speed Shimano' To further understand how to interpret SHAP values let's prepare a synthetic dataset for multiclass classification with 100 features and 10 classes: from sklearn.datasets import make_classification from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split from shap import TreeExplainer from shap import summary_plot X, y = make_classification(1000, 100, n_informative=8, n_classes=10) X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) print(X_train.shape) (750, 100) At this point we have train dataset with 750 rows, 100 features, and 10 classes. Let's train RandomForestClassifier and feed it to TreeExplainer: clf = RandomForestClassifier(n_estimators=100, max_depth=3) clf.fit(X_train, y_train) explainer = TreeExplainer(clf) shap_values = np.array(explainer.shap_values(X_train)) print(shap_values.shape) (10, 750, 100) 10 : number of classes. All SHAP values are organized into 10 arrays, 1 array per class. 750 : number of datapoints. We have local SHAP values per datapoint. 100 : number of features. We have SHAP value per every feature. For example, for Class 3 you'll have: print(shap_values[3].shape) (750, 100) 750: SHAP values for every datapoint 100: SHAP value contributions for every feature Finally, you can run a sanity check to make it sure real predictions from model are the same as those predicted by shap. To do so, we'll (1) swap the first 2 dimensions of shap_values, (2) sum up SHAP values per class for all features, (3) add SHAP values to base values: shap_values_ = shap_values.transpose((1,0,2)) np.allclose( clf.predict_proba(X_train), shap_values_.sum(2) + explainer.expected_value ) True Then you may proceed to summary_plot that will show feature rankings based on SHAP values on a per class basis. For class 3 this will be: summary_plot(shap_values[3],X_train) Which is interpreted as follows: For class 3 most influential features based on SHAP contributions are 44, 64, 17 For features 64 and 17 lower values tend to result in higher SHAP values (hence higher probability of the class label) Features 92, 6, 53 are least influential out of 20 displayed
6
11
65,512,844
2020-12-30
https://stackoverflow.com/questions/65512844/how-to-generate-apple-authorization-token-client-secret
How can I generate an authorization code/client secret in python for apple sign in and device check?
First of all, we need to generate an app-specific p8 file (PEM-formatted private key). Do the following for this: go to your apple developer portal, under Certificates, Identifiers & Profiles => Keys click the + sign and create a key with the services you want to use it for then download the p8 file (be cautious not to lose it, you cannot download it again) also copy the key id, you will need it later in python install pyjwt and do the following: create a payload dict: data = { "iss": "team_id", # team id of your developer account this can be found in your apple developer portal => identifier of your app => "App ID prefix" "iat": timestamp_now, # creation timestamp in seconds "exp": timestamp_exp, # expiration timestamp in seconds (max 20 mins) "aud": "https://appleid.apple.com", "sub": client_id # your bundle } open and read the private key (you downloaded in step 1) into a variable with open("filename.p8", "r") as f: private_key = f.read() generate your signed jwt token: token = jwt.encode(payload=data, key=private_key, algorithm="ES256", headers={ "kid":key_id # the key id is the id you saved in step 1 }).decode() jwt.encode returns bytes on PyJWT 1.x, so if you want it as a string you need to decode it as I did (on PyJWT 2.x it already returns a str, so drop the .decode()). The complete code will look like this: import jwt import time def generate_token(): with open("filename.p8", "r") as f: private_key = f.read() team_id = "teamid" client_id = "bundle.id" key_id = "keyid" validity_minutes = 20 timestamp_now = int(time.time()) timestamp_exp = timestamp_now + (60 * validity_minutes) data = { "iss": team_id, "iat": timestamp_now, "exp": timestamp_exp, "aud": "https://appleid.apple.com", "sub": client_id } token = jwt.encode(payload=data, key=private_key, algorithm="ES256", headers={"kid": key_id}).decode() return token
6
15
65,562,875
2021-1-4
https://stackoverflow.com/questions/65562875/migration-admin-0001-initial-is-applied-before-its-dependency-app-0001-initial-o
I am trying to make custom made user model for my project in Django. My models.py: class myCustomeUser(AbstractUser): id = models.AutoField(primary_key=True) username = models.CharField(max_length=20, unique="True", blank=False) password = models.CharField(max_length=20, blank=False) is_Employee = models.BooleanField(default=False) is_Inspector = models.BooleanField(default=False) is_Industry = models.BooleanField(default=False) is_Admin = models.BooleanField(default=False) class Industry(models.Model): user = models.OneToOneField(myCustomeUser, on_delete=models.CASCADE, primary_key=True, related_name='industry_releted_user') name = models.CharField(max_length=200, blank=True) owner = models.CharField(max_length=200, blank=True) license = models.IntegerField(null=True, unique=True) industry_extrafield = models.TextField(blank=True) class Employee(models.Model): user = models.OneToOneField(myCustomeUser, on_delete=models.CASCADE, primary_key=True, related_name='employee_releted_user') industry = models.OneToOneField(Industry, on_delete=models.CASCADE, related_name='employee_releted_industry') i_id = models.IntegerField(null=True, blank=False, unique=True) name = models.CharField(max_length=200, blank=False, null=True) gmail = models.EmailField(null=True, blank=False, unique=True) rank = models.CharField(max_length=20, blank=False, null=True) employee_varified = models.BooleanField(default=False) class Inspector(models.Model): user = models.OneToOneField(myCustomeUser, on_delete=models.CASCADE, primary_key=True, related_name='inspector_releted_user') inspector_extrafield = models.TextField(blank=True) class Admin(models.Model): user = models.OneToOneField(myCustomeUser, on_delete=models.CASCADE, primary_key=True, related_name='admin_releted_user') admin_extrafield = models.TextField(blank=True) in settings.py: AUTH_USER_MODEL = 'app.myCustomeUser' Here admin.site.register is also done in admin.py. Now it shows the following message in the terminal while I try to migrate or makemigrations: Traceback (most recent call last): File "manage.py", line 21, in <module> main() File "manage.py", line 17, in main execute_from_command_line(sys.argv) File "G:\Python\lib\site-packages\django\core\management\__init__.py", line 401, in execute_from_command_line utility.execute() File "G:\Python\lib\site-packages\django\core\management\__init__.py", line 395, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "G:\Python\lib\site-packages\django\core\management\base.py", line 328, in run_from_argv self.execute(*args, **cmd_options) File "G:\Python\lib\site-packages\django\core\management\base.py", line 369, in execute output = self.handle(*args, **options) File "G:\Python\lib\site-packages\django\core\management\base.py", line 83, in wrapped res = handle_func(*args, **kwargs) File "G:\Python\lib\site-packages\django\core\management\commands\makemigrations.py", line 101, in handle loader.check_consistent_history(connection) File "G:\Python\lib\site-packages\django\db\migrations\loader.py", line 295, in check_consistent_history raise InconsistentMigrationHistory( django.db.migrations.exceptions.InconsistentMigrationHistory: Migration admin.0001_initial is applied before its dependency app.0001_initial on database 'default'. What does it mean? And I also don't want to set the default value of username & password in this myCustomeUser model. And also please suggest me that, is this a correct way to make usermodel?
This error will usually happen if you've done your first initial migration without including your custom user model migration file. Exactly as the message says: "admin.0001_initial is applied before its dependency custom_user_app_label.0001_initial on database 'default'" Since beginners will always do their initial migration and then create a custom user afterward. In the first migration, it will migrate all built-in Django apps including admin. Now with the custom user model, Django wanted it to be the first initial migration to be executed. See Django docs Changing to a custom user model mid-project. Due to limitations of Django’s dynamic dependency feature for swappable models, the model referenced by AUTH_USER_MODEL must be created in the first migration of its app (usually called 0001_initial); otherwise, you’ll have dependency issues. In my case, I have Django v4.1 installed. This is how I reproduce the issue: I had initially migrated my application previously. Then Created a custom User. Added the app into INSTALLED_APPS. Created migration files python manage.py makemigrations Set AUTH_USER_MODEL to the new custom User by AUTH_USER_MODEL = 'myapp_label.User' Got the migration error issue as the OP. This is how I resolve the issue: Drop the database but DO NOT delete the User initial migration. Make sure in your 0001_initial.py -> class Migration.initial is set to True. Run migration python manage.py migrate Your User initial migration should be the first migration file in the order of migration execution. See this migration output as example: Running migrations: Applying myapp_label.0001_initial... OK Applying contenttypes.0001_initial... OK Applying admin.0001_initial... As you can see it was executed first followed by the contenttypes then the admin.
17
17
65,514,544
2020-12-30
https://stackoverflow.com/questions/65514544/why-do-i-get-an-infinite-while-loop-when-changing-initial-constant-assignment-to
I'm trying to understand the walrus assignment operator. Classic while loop breaks when condition is reassigned to False within the loop. x = True while x: print('hello') x = False Why doesn't this work using the walrus operator? It ignores the reassignment of x producing an infinite loop. while x := True: print('hello') x = False
You seem to be under the impression that that assignment happens once before the loop is entered, but that isn't the case. The reassignment happens before the condition is checked, and that happens on every iteration. x := True will always be true, regardless of any other code, which means the condition will always evaluate to true.
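To illustrate, the walrus operator pays off when the re-evaluated expression can actually change between iterations — for example, reading a file in chunks (the file name is just a placeholder):

with open("data.txt") as f:
    while chunk := f.read(1024):  # f.read() returns '' at EOF, which is falsy and ends the loop
        print(len(chunk))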
23
35
65,577,396
2021-1-5
https://stackoverflow.com/questions/65577396/create-random-list-of-given-length-from-a-list-of-strings-in-python
I have a list of strings: lst = ["orange", "yellow", "green"] and I want to randomly repeat the values of strings for a given length. This is my code: import itertools lst = ["orange", "yellow", "green"] list(itertools.chain.from_iterable(itertools.repeat(x, 2) for x in lst)) This implementation repeats but not randomly and also it repeats equally, whereas it should be random as well with the given length.
You can use a list comprehension: import random lst = ["orange", "yellow", "green"] [lst[random.randrange(len(lst))] for i in range(100)] Explanation: random.randrange(n) returns an integer in the range 0 to n-1 inclusive. The list comprehension repeatedly picks a random element from lst, 100 times. Change 100 to whatever number of elements you wish to obtain.
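For comparison, the standard library can do the same sampling with replacement in a single call via random.choices:

import random

lst = ["orange", "yellow", "green"]
result = random.choices(lst, k=100)  # 100 elements drawn with replacement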
7
5
65,568,841
2021-1-4
https://stackoverflow.com/questions/65568841/how-to-make-a-typeddict-with-integer-keys
Is it possible to use an integer key with TypedDict (similar to dict?). Trying a simple example: from typing import TypedDict class Moves(TypedDict): 0: int=1 1: int=2 Throws: SyntaxError: illegal target for annotation It seems as though only Mapping[str, int] is supported but I wanted to confirm. It wasn't specifically stated in the Pep docs.
The intent of TypedDict is explicit in the PEP's abstract (emphasis added): This PEP proposes a type constructor typing.TypedDict to support the use case where a dictionary object has a specific set of string keys, each with a value of a specific type. and given the intended use cases are all annotatable in class syntax, implicitly applies only to dicts keyed by strings that constitute valid identifiers (things you could use as attribute or keyword argument names), not even strings in general. So as intended, int keys aren't a thing, this is just for enabling a class that uses dict-like syntax to access the "attributes" rather than attribute access syntax. While the alternative, backwards compatible syntax, allowed for compatibility with pre-3.6 Python, allows this (as well as allowing strings that aren't valid Python identifiers), e.g.: Moves = TypedDict('Moves', {0: int, 1: int}) you could only construct it with dict literals (e.g. Moves({0: 123, 1: 456})) because the cleaner keyword syntax like Moves(0=123, 1=456) doesn't work. And even though that technically works at runtime (it's all just dicts under the hood after all), the actual type-checkers that validate your type correctness may not support it (because the intent and documented use exclusively handles strings that constitute valid identifiers). Point is, don't do this. For the simple case you're describing here (consecutive integer integer "keys" starting from zero, where each position has independent meaning, where they may or may not differ by type), you really just want a tuple anyway: Moves = typing.Tuple[int, int] # Could be [int, str] if index 1 should be a string would be used for annotations the same way, and your actual point of use in the code would just be normal tuple syntax (return 1, 2). If you really want to be able to use the name Moves when creating instances, on 3.9+ you could use PEP 585 to do (no import required): Moves = tuple[int, int] allowing you to write: return Moves((1, 2)) when you want to make an "instance" of it. No runtime checking is involved (it's roughly equivalent to running tuple((1, 2)) at runtime), but static type-checkers should understand the intent.
5
5
65,528,568
2021-1-1
https://stackoverflow.com/questions/65528568/how-do-i-load-the-celeba-dataset-on-google-colab-using-torch-vision-without-ru
I am following a tutorial on DCGAN. Whenever I try to load the CelebA dataset, torchvision uses up all my run-time's memory(12GB) and the runtime crashes. Am looking for ways on how I can load and apply transformations to the dataset without hogging my run-time's resources. To Reproduce Here is the part of the code that is causing issues. # Root directory for the dataset data_root = 'data/celeba' # Spatial size of training images, images are resized to this size. image_size = 64 celeba_data = datasets.CelebA(data_root, download=True, transform=transforms.Compose([ transforms.Resize(image_size), transforms.CenterCrop(image_size), transforms.ToTensor(), transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]) ])) The full notebook can be found here Environment PyTorch version: 1.7.1+cu101 Is debug build: False CUDA used to build PyTorch: 10.1 ROCM used to build PyTorch: N/A OS: Ubuntu 18.04.5 LTS (x86_64) GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final) CMake version: version 3.12.0 Python version: 3.6 (64-bit runtime) Is CUDA available: True CUDA runtime version: 10.1.243 GPU models and configuration: GPU 0: Tesla T4 Nvidia driver version: 418.67 cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5 HIP runtime version: N/A MIOpen runtime version: N/A Versions of relevant libraries: [pip3] numpy==1.19.4 [pip3] torch==1.7.1+cu101 [pip3] torchaudio==0.7.2 pip3] torchsummary==1.5.1 [pip3] torchtext==0.3.1 [pip3] torchvision==0.8.2+cu101 [conda] Could not collect Additional Context Some of the things I have tried are: Downloading and loading the dataset on seperate lines. e.g: # Download the dataset only datasets.CelebA(data_root, download=True) # Load the dataset here celeba_data = datasets.CelebA(data_root, download=False, transforms=...) Using the ImageFolder dataset class instead of the CelebA class. e.g: # Download the dataset only datasets.CelebA(data_root, download=True) # Load the dataset using the ImageFolder class celeba_data = datasets.ImageFolder(data_root, transforms=...) The memory problem is still persistent in either of the cases.
I did not manage to find a solution to the memory problem. However, I came up with a workaround, custom dataset. Here is my implementation: import os import zipfile import gdown import torch from natsort import natsorted from PIL import Image from torch.utils.data import Dataset from torchvision import transforms ## Setup # Number of gpus available ngpu = 1 device = torch.device('cuda:0' if ( torch.cuda.is_available() and ngpu > 0) else 'cpu') ## Fetch data from Google Drive # Root directory for the dataset data_root = 'data/celeba' # Path to folder with the dataset dataset_folder = f'{data_root}/img_align_celeba' # URL for the CelebA dataset url = 'https://drive.google.com/uc?id=1cNIac61PSA_LqDFYFUeyaQYekYPc75NH' # Path to download the dataset to download_path = f'{data_root}/img_align_celeba.zip' # Create required directories if not os.path.exists(data_root): os.makedirs(data_root) os.makedirs(dataset_folder) # Download the dataset from google drive gdown.download(url, download_path, quiet=False) # Unzip the downloaded file with zipfile.ZipFile(download_path, 'r') as ziphandler: ziphandler.extractall(dataset_folder) ## Create a custom Dataset class class CelebADataset(Dataset): def __init__(self, root_dir, transform=None): """ Args: root_dir (string): Directory with all the images transform (callable, optional): transform to be applied to each image sample """ # Read names of images in the root directory image_names = os.listdir(root_dir) self.root_dir = root_dir self.transform = transform self.image_names = natsorted(image_names) def __len__(self): return len(self.image_names) def __getitem__(self, idx): # Get the path to the image img_path = os.path.join(self.root_dir, self.image_names[idx]) # Load image and convert it to RGB img = Image.open(img_path).convert('RGB') # Apply transformations to the image if self.transform: img = self.transform(img) return img ## Load the dataset # Path to directory with all the images img_folder = f'{dataset_folder}/img_align_celeba' # Spatial size of training images, images are resized to this size. image_size = 64 # Transformations to be applied to each individual image sample transform=transforms.Compose([ transforms.Resize(image_size), transforms.CenterCrop(image_size), transforms.ToTensor(), transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]) ]) # Load the dataset from file and apply transformations celeba_dataset = CelebADataset(img_folder, transform) ## Create a dataloader # Batch size during training batch_size = 128 # Number of workers for the dataloader num_workers = 0 if device.type == 'cuda' else 2 # Whether to put fetched data tensors to pinned memory pin_memory = True if device.type == 'cuda' else False celeba_dataloader = torch.utils.data.DataLoader(celeba_dataset, batch_size=batch_size, num_workers=num_workers, pin_memory=pin_memory, shuffle=True) This implementation is memory efficient and works for my use case, even during training the memory used averages around(4GB). I would however, appreciate further intuition as to what might be causing the memory problems.
7
9
65,571,729
2021-1-5
https://stackoverflow.com/questions/65571729/hung-cells-running-multiple-jupyter-notebooks-in-parallel-with-papermill
I am trying to run jupyter notebooks in parallel by starting them from another notebook. I'm using papermill to save the output from the notebooks. In my scheduler.ipynb I’m using multiprocessing which is what some people have had success with. I create processes from a base notebook and this seems to always work the 1st time it’s run. I can run 3 notebooks with sleep 10 in 13 seconds. If I have a subsequent cell that attempts to run the exact same thing, the processes that it spawns (multiple notebooks) hang indefinitely. I’ve tried adding code to make sure the spawned processes have exit codes and have completed, even calling terminate on them once they are done- no luck, my 2nd attempt never completes. If I do: sean@server:~$ ps aux | grep ipython root 2129 0.1 0.2 1117652 176904 ? Ssl 19:39 0:05 /opt/conda/anaconda/bin/python /opt/conda/anaconda/bin/ipython kernel -f /root/.local/share/jupyter/runtime/kernel-eee374ff-0760-4490-8ed0-db03fed84f0c.json root 3418 0.1 0.2 1042076 173652 ? Ssl 19:42 0:03 /opt/conda/anaconda/bin/python /opt/conda/anaconda/bin/ipython kernel -f /root/.local/share/jupyter/runtime/kernel-3e2f09e8-969f-41c9-81cc-acd2ec4e3d54.json root 4332 0.1 0.2 1042796 174896 ? Ssl 19:44 0:04 /opt/conda/anaconda/bin/python /opt/conda/anaconda/bin/ipython kernel -f /root/.local/share/jupyter/runtime/kernel-bbd4575c-109a-4fb3-b6ed-372beb27effd.json root 17183 0.2 0.2 995344 145872 ? Ssl 20:26 0:02 /opt/conda/anaconda/bin/python /opt/conda/anaconda/bin/ipython kernel -f /root/.local/share/jupyter/runtime/kernel-27c48eb1-16b4-4442-9574-058283e48536.json I see that there appears to be 4 running kernels (4 processes). When I view the running notebooks, I see there are 6 running notebooks. This seems to be supported in the doc that a few kernels can service multiple notebooks. β€œA kernel process can be connected to more than one frontend simultaneously” But, I suspect because ipython kernels continue to run, something bad is happening where spawned processes aren’t being reaped? Some say it’s not possible using multiprocessing. Others have described the same problem. 
import re import os import multiprocessing from os.path import isfile from datetime import datetime import papermill as pm import nbformat # avoid "RuntimeError: This event loop is already running" # it seems papermill used to support this but it is now undocumented: # papermill.execute_notebook(nest_asyncio=True) import nest_asyncio nest_asyncio.apply() import company.config # # Supporting Functions # In[ ]: def get_papermill_parameters(notebook, notebook_prefix='/mnt/jupyter', notebook_suffix='.ipynb'): if isinstance(notebook, list): notebook_path = notebook[0] parameters = notebook[1] tag = '_' + notebook[2] if notebook[2] is not None else None else: notebook_path = notebook parameters = None tag = '' basename = os.path.basename(notebook_path) dirpath = re.sub(basename + '$', '', notebook_path) this_notebook_suffix = notebook_suffix if not re.search(notebook_suffix + '$', basename) else '' input_notebook = notebook_prefix + notebook_path + this_notebook_suffix scheduler_notebook_dir = notebook_prefix + dirpath + 'scheduler/' if not os.path.exists(scheduler_notebook_dir): os.makedirs(scheduler_notebook_dir) output_notebook = scheduler_notebook_dir + basename return input_notebook, output_notebook, this_notebook_suffix, parameters, tag # In[ ]: def add_additional_imports(input_notebook, output_notebook, current_datetime): notebook_name = os.path.basename(output_notebook) notebook_dir = re.sub(notebook_name, '', output_notebook) temp_dir = notebook_dir + current_datetime + '/temp/' results_dir = notebook_dir + current_datetime + '/' if not os.path.exists(temp_dir): os.makedirs(temp_dir) if not os.path.exists(results_dir): os.makedirs(results_dir) updated_notebook = temp_dir + notebook_name first_cell = nbformat.v4.new_code_cell(""" import import_ipynb import sys sys.path.append('/mnt/jupyter/lib')""") metadata = {"kernelspec": {"display_name": "PySpark", "language": "python", "name": "pyspark"}} existing_nb = nbformat.read(input_notebook, nbformat.current_nbformat) cells = existing_nb.cells cells.insert(0, first_cell) new_nb = nbformat.v4.new_notebook(cells = cells, metadata = metadata) nbformat.write(new_nb, updated_notebook, nbformat.current_nbformat) output_notebook = results_dir + notebook_name return updated_notebook, output_notebook # In[ ]: # define this function so it is easily passed to multiprocessing def run_papermill(input_notebook, output_notebook, parameters): pm.execute_notebook(input_notebook, output_notebook, parameters, log_output=True) # # Run All of the Notebooks # In[ ]: def run(notebooks, run_hour_utc=10, scheduler=True, additional_imports=False, parallel=False, notebook_prefix='/mnt/jupyter'): """ Run provided list of notebooks on a schedule or on demand. Args: notebooks (list): a list of notebooks to run run_hour_utc (int): hour to run notebooks at scheduler (boolean): when set to True (default value) notebooks will run at run_hour_utc. when set to False notebooks will run on demand. additional_imports (boolean): set to True if you need to add additional imports into your notebook parallel (boolean): whether to run the notebooks in parallel notebook_prefix (str): path to jupyter notebooks """ if not scheduler or datetime.now().hour == run_hour_utc: # Only run once a day on an hourly cron job. now = datetime.today().strftime('%Y-%m-%d_%H%M%S') procs = [] notebooks_base_url = company.config.cluster['resources']['daedalus']['notebook'] + '/notebooks' if parallel and len(notebooks) > 10: raise Exception("You are trying to run {len(notebooks)}. 
We recommend a maximum of 10 be run at once.") for notebook in notebooks: input_notebook, output_notebook, this_notebook_suffix, parameters, tag = get_papermill_parameters(notebook, notebook_prefix) if is_interactive_notebook(input_notebook): print(f"Not running Notebook '{input_notebook}' because it's marked interactive-only.") continue if additional_imports: input_notebook, output_notebook = add_additional_imports(input_notebook, output_notebook, now) else: output_notebook = output_notebook + tag + '_' + now + this_notebook_suffix print(f"Running Notebook: '{input_notebook}'") print(" - Parameters: " + str(parameters)) print(f"Saving Results to: '{output_notebook}'") print("Link: " + re.sub(notebook_prefix, notebooks_base_url, output_notebook)) if not os.path.isfile(input_notebook): print(f"ERROR! Notebook file does not exist: '{input_notebook}'") else: try: if parameters is not None: parameters.update({'input_notebook':input_notebook, 'output_notebook':output_notebook}) if parallel: # trailing comma in args is in documentation for multiprocessing- it seems to matter proc = multiprocessing.Process(target=run_papermill, args=(input_notebook, output_notebook, parameters,)) print("starting process") proc.start() procs.append(proc) else: run_papermill(input_notebook, output_notebook, parameters) except Exception as ex: print(ex) print(f"ERROR! See full error in: '{output_notebook}'\n\n") if additional_imports: temp_dir = re.sub(os.path.basename(input_notebook), '', input_notebook) if os.path.exists(temp_dir): os.system(f"rm -rf '{temp_dir}'") if procs: print("joining") for proc in procs: proc.join() if procs: print("terminating") for proc in procs: print(proc.is_alive()) print(proc.exitcode) proc.terminate() print(f"Done: Processed all {len(notebooks)} notebooks.") else: print(f"Waiting until {run_hour_utc}:00:00 UTC to run.") I'm using python==3.6.12, papermill==2.2.2 jupyter core : 4.7.0 jupyter-notebook : 5.5.0 ipython : 7.16.1 ipykernel : 5.3.4 jupyter client : 6.1.7 ipywidgets : 7.2.1
Have you tried using the subprocess module? It seems like a better option for you than multiprocessing. It allows you to asynchronously spawn sub-processes that run in parallel; it can be used to invoke commands and programs as if you were using the shell. I find it really useful for writing Python scripts instead of bash scripts. So you could use your main notebook to run your other notebooks as independent sub-processes in parallel, with subprocess.run or subprocess.Popen invoking papermill for each notebook.
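A rough sketch of that idea using papermill's command-line interface, so every notebook gets its own interpreter and kernel (the paths are placeholders):

import subprocess

notebooks = ["/mnt/jupyter/nb1.ipynb", "/mnt/jupyter/nb2.ipynb"]

procs = [
    subprocess.Popen(["papermill", nb, nb.replace(".ipynb", "_output.ipynb")])
    for nb in notebooks
]
for proc in procs:
    proc.wait()  # block until every notebook has finished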
8
4
65,517,931
2020-12-31
https://stackoverflow.com/questions/65517931/xgboost-not-running-with-callibrated-classifier
I am trying to run XGboost with with calibrated classifier, below is the snippet of code where I am facing the error: from sklearn.calibration import CalibratedClassifierCV from xgboost import XGBClassifier import numpy as np x_train =np.array([1,2,2,3,4,5,6,3,4,10,]).reshape(-1,1) y_train = np.array([1,1,1,1,1,3,3,3,3,3]) x_cfl=XGBClassifier(n_estimators=1) x_cfl.fit(x_train,y_train) sig_clf = CalibratedClassifierCV(x_cfl, method="sigmoid") sig_clf.fit(x_train, y_train) Error: TypeError: predict_proba() got an unexpected keyword argument 'X'" Full Trace: TypeError Traceback (most recent call last) <ipython-input-48-08dd0b4ae8aa> in <module> ----> 1 sig_clf.fit(x_train, y_train) ~/anaconda3/lib/python3.8/site-packages/sklearn/calibration.py in fit(self, X, y, sample_weight) 309 parallel = Parallel(n_jobs=self.n_jobs) 310 --> 311 self.calibrated_classifiers_ = parallel( 312 delayed(_fit_classifier_calibrator_pair)( 313 clone(base_estimator), X, y, train=train, test=test, ~/anaconda3/lib/python3.8/site-packages/joblib/parallel.py in __call__(self, iterable) 1039 # remaining jobs. 1040 self._iterating = False -> 1041 if self.dispatch_one_batch(iterator): 1042 self._iterating = self._original_iterator is not None 1043 ~/anaconda3/lib/python3.8/site-packages/joblib/parallel.py in dispatch_one_batch(self, iterator) 857 return False 858 else: --> 859 self._dispatch(tasks) 860 return True 861 ~/anaconda3/lib/python3.8/site-packages/joblib/parallel.py in _dispatch(self, batch) 775 with self._lock: 776 job_idx = len(self._jobs) --> 777 job = self._backend.apply_async(batch, callback=cb) 778 # A job can complete so quickly than its callback is 779 # called before we get here, causing self._jobs to ~/anaconda3/lib/python3.8/site-packages/joblib/_parallel_backends.py in apply_async(self, func, callback) 206 def apply_async(self, func, callback=None): 207 """Schedule a func to be run""" --> 208 result = ImmediateResult(func) 209 if callback: 210 callback(result) ~/anaconda3/lib/python3.8/site-packages/joblib/_parallel_backends.py in __init__(self, batch) 570 # Don't delay the application, to avoid keeping the input 571 # arguments in memory --> 572 self.results = batch() 573 574 def get(self): ~/anaconda3/lib/python3.8/site-packages/joblib/parallel.py in __call__(self) 260 # change the default number of processes to -1 261 with parallel_backend(self._backend, n_jobs=self._n_jobs): --> 262 return [func(*args, **kwargs) 263 for func, args, kwargs in self.items] 264 ~/anaconda3/lib/python3.8/site-packages/joblib/parallel.py in <listcomp>(.0) 260 # change the default number of processes to -1 261 with parallel_backend(self._backend, n_jobs=self._n_jobs): --> 262 return [func(*args, **kwargs) 263 for func, args, kwargs in self.items] 264 ~/anaconda3/lib/python3.8/site-packages/sklearn/utils/fixes.py in __call__(self, *args, **kwargs) 220 def __call__(self, *args, **kwargs): 221 with config_context(**self.config): --> 222 return self.function(*args, **kwargs) ~/anaconda3/lib/python3.8/site-packages/sklearn/calibration.py in _fit_classifier_calibrator_pair(estimator, X, y, train, test, supports_sw, method, classes, sample_weight) 443 n_classes = len(classes) 444 pred_method = _get_prediction_method(estimator) --> 445 predictions = _compute_predictions(pred_method, X[test], n_classes) 446 447 sw = None if sample_weight is None else sample_weight[test] ~/anaconda3/lib/python3.8/site-packages/sklearn/calibration.py in _compute_predictions(pred_method, X, n_classes) 499 (X.shape[0], 1). 
500 """ --> 501 predictions = pred_method(X=X) 502 if hasattr(pred_method, '__name__'): 503 method_name = pred_method.__name__ TypeError: predict_proba() got an unexpected keyword argument 'X' I am quite surprised by this, as it was running for me till yesterday, same code is running when I use some other Classifier. from sklearn.calibration import CalibratedClassifierCV from xgboost import XGBClassifier import numpy as np x_train = np.array([1,2,2,3,4,5,6,3,4,10,]).reshape(-1,1) y_train = np.array([1,1,1,1,1,3,3,3,3,3]) x_cfl=LGBMClassifier(n_estimators=1) x_cfl.fit(x_train,y_train) sig_clf = CalibratedClassifierCV(x_cfl, method="sigmoid") sig_clf.fit(x_train, y_train) Output: CalibratedClassifierCV(base_estimator=LGBMClassifier(n_estimators=1)) Is there a problem with my Xgboost installation?? I use conda for installation and last I remember I had uninstalled xgboost yesterday and installed it again. my xgboost version: 1.3.0
I believe that the problem comes from XGBoost. It's explained here: https://github.com/dmlc/xgboost/pull/6555 XGBoost defined: predict_proba(self, data, ... instead of: predict_proba(self, X, ... And since scikit-learn 0.24 calls clf.predict_proba(X=X), an exception is thrown. Here is an idea to fix the problem without changing the versions of your packages: create a class that inherits from XGBClassifier, override predict_proba with the right argument name, and call super().
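A minimal sketch of that workaround, keeping the installed package versions as they are (the wrapper class name is arbitrary); the wrapper is then passed to CalibratedClassifierCV exactly as before:

from xgboost import XGBClassifier

class PatchedXGBClassifier(XGBClassifier):
    # newer scikit-learn calls predict_proba(X=...), while this xgboost version
    # named the first parameter `data`, so translate the argument name here
    def predict_proba(self, X, **kwargs):
        return super().predict_proba(X, **kwargs)

x_cfl = PatchedXGBClassifier(n_estimators=1)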
5
5
65,516,325
2020-12-31
https://stackoverflow.com/questions/65516325/ssl-wrong-version-number-on-python-request
Python version: 3.9.1 I am trying to write a bot that sends requests and it works perfectly fine; the only issue I have is when I try to use web debugging programs such as Charles 4.6.1 or Fiddler Everywhere. When I open one of them to see the bot's traffic and the response from the server, it crashes showing me this error: (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1124)'))) I used to have this issue and was able to fix it by simply adding verify=False to my request, but right now it does not work.
I had the same problem. It's a bug in urllib3. You have to specify your proxy in the request, and change the 'https' value to 'http'. My example: proxies = {'https': 'http://127.0.0.1:8888'} request = r.get('https://www.example.net', verify=False, proxies=proxies)
32
42
65,549,053
2021-1-3
https://stackoverflow.com/questions/65549053/typeerror-not-supported-between-instances-of-function-and-str
I have built a sequential model with a customized f1 score metric. I pass this during the compilation of my model and then save it in *.hdf5 format. Whenever I load the model for testing purposes using the custom_objects attribute model = load_model('app/model/test_model.hdf5', custom_objects={'f1':f1}) Keras throws the following error TypeError: '<' not supported between instances of 'function' and 'str' Note: No errors are shown if I don't include the f1 metric during compilation, and the testing process works well. Train method from metrics import f1 ... # GRU with glove embeddings and two dense layers model = Sequential() model.add(Embedding(len(word_index) + 1, 100, weights=[embedding_matrix], input_length=max_len, trainable=False)) model.add(SpatialDropout1D(0.3)) model.add(GRU(100, dropout=0.3, recurrent_dropout=0.3, return_sequences=True)) model.add(GRU(100, dropout=0.3, recurrent_dropout=0.3)) model.add(Dense(1024, activation='relu')) #model.add(Dropout(0.8)) model.add(Dense(1024, activation='relu')) #model.add(Dropout(0.8)) model.add(Dense(2)) model.add(Activation('softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc', f1]) # Fit the model with early stopping callback earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=3, verbose=0, mode='auto') model.fit(xtrain_pad, y=ytrain_enc, batch_size=512, epochs=100, verbose=1, validation_data=(xvalid_pad, yvalid_enc), callbacks=[earlystop]) model.save('app/model/test_model.hdf5') Test method from metrics import f1 ... model = load_model('app/model/test_model.hdf5', custom_objects={'f1':f1}) print(model.summary()) model.evaluate(xtest_pad, ytest_enc) # <-- error happens Custom f1 metric from keras import backend as K def f1(y_true, y_pred): def recall(y_true, y_pred): """Recall metric. Only computes a batch-wise average of recall. Computes the recall, a metric for multi-label classification of how many relevant items are selected. """ true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) possible_positives = K.sum(K.round(K.clip(y_true, 0, 1))) recall = true_positives / (possible_positives + K.epsilon()) return recall def precision(y_true, y_pred): """Precision metric. Only computes a batch-wise average of precision. Computes the precision, a metric for multi-label classification of how many selected items are relevant. 
""" true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1))) precision = true_positives / (predicted_positives + K.epsilon()) return precision precision = precision(y_true, y_pred) recall = recall(y_true, y_pred) return 2*((precision*recall)/(precision+recall+K.epsilon())) EDIT test The preprocessed data used for evaluating the model normalized_dataset = pd.read_pickle(DATA['preprocessed_test_path']) lbl_enc = preprocessing.LabelEncoder() y = lbl_enc.fit_transform(normalized_dataset.label.values) xtest = normalized_dataset.preprocessed_tweets.values ytest_enc = np_utils.to_categorical(y) token = text.Tokenizer(num_words=None) max_len = 70 token.fit_on_texts(list(xtest)) xtest_seq = token.texts_to_sequences(xtest) xtest_pad = sequence.pad_sequences(xtest_seq, maxlen=max_len) EDIT2 This is my full traceback that triggers the stated error Traceback (most recent call last): File "app/main.py", line 67, in <module> main() File "app/main.py", line 64, in main test(embedding_dict) File "/Users/justauser/Desktop/sentiment-analysis/app/test.py", line 50, in test model.evaluate(xtest_pad, ytest_enc) File "/Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1389, in evaluate tmp_logs = self.test_function(iterator) File "/Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 828, in __call__ result = self._call(*args, **kwds) File "/Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 871, in _call self._initialize(args, kwds, add_initializers_to=initializers) File "/Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 725, in _initialize self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access File "/Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2969, in _get_concrete_function_internal_garbage_collected graph_function, _ = self._maybe_define_function(args, kwargs) File "/Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3361, in _maybe_define_function graph_function = self._create_graph_function(args, kwargs) File "/Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3196, in _create_graph_function func_graph_module.func_graph_from_py_func( File "/Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 990, in func_graph_from_py_func func_outputs = python_func(*func_args, **func_kwargs) File "/Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 634, in wrapped_fn out = weak_wrapped_fn().__wrapped__(*args, **kwds) File "/Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 977, in wrapper raise e.ag_error_metadata.to_exception(e) TypeError: in user code: /Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:1233 test_function * return step_function(self, iterator) 
/Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:1224 step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) /Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:1259 run return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica return fn(*args, **kwargs) /Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:1219 run_step ** with ops.control_dependencies(_minimum_control_deps(outputs)): /Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:2793 _minimum_control_deps outputs = nest.flatten(outputs, expand_composites=True) /Users/justauser/Desktop/sentiment-analysis/env/lib/python3.8/site-packages/tensorflow/python/util/nest.py:341 flatten return _pywrap_utils.Flatten(structure, expand_composites) TypeError: '<' not supported between instances of 'function' and 'str'
After loading the model you need to compile it again with the custom metric; then evaluation should work. Therefore, after loading your model from disk using model = load_model('app/model/test_model.hdf5', custom_objects={'f1':f1}) make sure to compile it with the metrics of interest: model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc', f1])
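Putting the two steps together, a minimal sketch of the test script could look like this (the paths, data variables and the f1 import are taken from the question; the load_model import location is an assumption and may come from tensorflow.keras.models depending on your setup):

from keras.models import load_model
from metrics import f1

# load the saved model and register the custom metric by name
model = load_model('app/model/test_model.hdf5', custom_objects={'f1': f1})

# re-compile so the custom metric is wired up before evaluation
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc', f1])

print(model.summary())
model.evaluate(xtest_pad, ytest_enc)  # assumes the question's test preprocessing has run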
12
13
65,580,466
2021-1-5
https://stackoverflow.com/questions/65580466/merging-multiple-videos-in-a-template-layout-with-python-ffmpeg
I'm currently trying to edit videos with the Python library of FFMPEG. I'm working with multiple file formats, precisely .mp4, .png and text inputs (.txt). The goal is to embed the different video files within a "layout" - for demonstration purposes I tried to design an example picture: The output is supposed to be a 1920x1080 .mp4 file with the following Elements: Element 3 is the video itself (due to it being a mobile phone screen recording, it's about the size displayed there) Element 1 and 2 are the "boarders", i.e. static pictures (?) Element 4 represents a regularly changing text - input through the python script (probably be read from a .txt file) Element 5 portrays a .png, .svg or alike; in general a "picture" in the broad sense. What I'm trying to achieve is to create a sort of template file in which I "just" need to input the different .mp4 and .png files, as well as the text and in the end I'll receive a .mp4 file whereas my Python script functions as the navigator sending the data packages to FFMPEG to process the video itself. I dug into the FFMPEG library as well as the python-specific repository and wasn't able to find such an option. There were lots of articles explaining the usage of "channel layouts" (though these don't seem to fit my need). In case anyone wants to try on the same versions: python --version: Python 3.7.3 pip show ffmpeg: Version: 1.4 (it's the most recent; on an off-topic note: It's not obligatory to use FFMPEG, I'd prefer using this library though if it doesn't offer the functionality I'm looking for, I'd highly appreciate if someone suggested something else)
I've done it. The code can be used as a command line program or as a module. To find out more about the command line usage, call it with the --help option. For module usage, import the make_video fucntion in your code (or copy-paste it), and pass the appropriate arguments to it. I have included a screenshot of what my script produced with some sample material, and of course, the code. Code: #!/usr/bin/python3 #-*-coding: utf-8-*- import sys, argparse, ffmpeg, os def make_video(video, left_boarder, right_boarder, picture_file, picture_pos, text_file, text_pos, output): videoprobe = ffmpeg.probe(video) videowidth, videoheight = videoprobe["streams"][0]["width"], videoprobe["streams"][0]["height"] # get width of main video scale = (1080 / videoheight) videowidth *= scale videoheight *= scale videostr = ffmpeg.input(video) # open main video audiostr = videostr.audio videostr = ffmpeg.filter(videostr, "scale", "%dx%d" %(videowidth, videoheight)) videostr = ffmpeg.filter(videostr, "pad", 1920, 1080, "(ow-iw)/2", "(oh-ih)/2") videostr = videostr.split() boarderwidth = (1920 - videowidth) / 2 # calculate width of boarders left_boarderstr = ffmpeg.input(left_boarder) # open left boarder and scale it left_boarderstr = ffmpeg.filter(left_boarderstr, "scale", "%dx%d" % (boarderwidth, 1080)) right_boarderstr = ffmpeg.input(right_boarder) # open right boarder right_boarderstr = ffmpeg.filter(right_boarderstr, "scale", "%dx%d" % (boarderwidth, 1080)) picturewidth = boarderwidth - 100 # calculate width of picture pictureheight = (1080 / 3) - 100 # calculate height of picture picturestr = ffmpeg.input(picture_file) # open picture and scale it (there will be a padding of 100 px around it) picturestr = ffmpeg.filter(picturestr, "scale", "%dx%d" % (picturewidth, pictureheight)) videostr = ffmpeg.overlay(videostr[0], left_boarderstr, x=0, y=0) # add left boarder videostr = ffmpeg.overlay(videostr, right_boarderstr, x=boarderwidth + videowidth, y=0) #add right boarder picture_y = (((1080 / 3) * 2) + 50) # calculate picture y position for bottom alignment if picture_pos == "top": picture_y = 50 elif picture_pos == "center": picture_y = (1080 / 3) + 50 videostr = ffmpeg.overlay(videostr, picturestr, x=50, y=picture_y) text_x = (1920 - boarderwidth) + 50 text_y = ((1080 / 3) * 2) + 50 if text_pos == "center": text_y = (1080 / 3) + 50 elif text_pos == "top": text_y = 50 videostr = ffmpeg.drawtext(videostr, textfile=text_file, reload=1, x=text_x, y=text_y, fontsize=50) videostr = ffmpeg.output(videostr, audiostr, output) ffmpeg.run(videostr) def main(): #create ArgumentParser and add options to it argp = argparse.ArgumentParser(prog="ffmpeg-template") argp.add_argument("--videos", help="paths to main videos (default: video.mp4)", nargs="*", default="video.mp4") argp.add_argument("--left-boarders", help="paths to images for left boarders (default: left_boarder.png)", nargs="*", default="left_boarder.png") argp.add_argument("--right-boarders", help="paths to images for right boarders (default: right_boarder.png)", nargs="*", default="right_boarder.png") argp.add_argument("--picture-files", nargs="*", help="paths to pictures (default: picture.png)", default="picture.png") argp.add_argument("--picture-pos", help="where to put the pictures (default: bottom)", choices=["top", "center", "bottom"], default="bottom") argp.add_argument("--text-files", nargs="*", help="paths to files with text (default: text.txt)", default="text.txt") argp.add_argument("--text-pos", help="where to put the texts (default: bottom)", 
choices=["top", "center", "bottom"], default="bottom") argp.add_argument("--outputs", nargs="*", help="paths to outputfiles (default: out.mp4)", default="out.mp4") args = argp.parse_args() # if only one file was provided, put it into a list (else, later, every letter of the filename will be treated as a filename) if type(args.videos) == str: args.videos = [args.videos] if type(args.left_boarders) == str: args.left_boarders = [args.left_boarders] if type(args.right_boarders) == str: args.right_boarders = [args.right_boarders] if type(args.picture_files) == str: args.picture_files = [args.picture_files] if type(args.text_files) == str: args.text_files = [args.text_files] if type(args.outputs) == str: args.outputs = [args.outputs] for i in (range(0, min(len(args.videos), len(args.left_boarders), len(args.right_boarders), len(args.picture_files), len(args.text_files), len(args.outputs))) or [0]): print("Info : merging video %s, boarders %s %s, picture %s and textfile %s into %s" % (args.videos[i], args.left_boarders[i], args.right_boarders[i], args.picture_files[i], args.text_files[i], args.outputs[i])) # check if all files provided with the options exist if not os.path.isfile(args.videos[i]): print("Error : video %s was not found" % args.videos[i]) continue if not os.path.isfile(args.left_boarders[i]): print("Error : left boarder %s was not found" % args.left_boarders[i]) continue if not os.path.isfile(args.right_boarders[i]): print("Error : rightt boarder %s was not found" % args.right_boarders[i]) continue if not os.path.isfile(args.picture_files[i]): print("Error : picture %s was not found" % args.picture_files[i]) continue if not os.path.isfile(args.text_files[i]): print("Error : textfile %s was not found" % args.text_files[i]) continue try: make_video(args.videos[i], args.left_boarders[i], args.right_boarders[i], args.picture_files[i], args.picture_pos, args.text_files[i], args.text_pos, args.outputs[i]) except Exception as e: print(e) if __name__ == "__main__": main() Example for direct usage as a script: $ ./ffmpeg-template --videos input1.mp4 inout2.mp4 --left-boarders left_boarder1.png left_boarder2.png --right-boarders right_boarder1.png right_boarder2.png --picture-files picture1.png picture2.png --text-files text1.txt text2.png --outputs out1.mp4 out2.mp4 --picture-pos bottom --text-pos bottom As values for the options i took the defaults. If you omit the options, these defaults will be used, and if one of the files is not found, an error message will be displayed. Image:
7
3
65,505,710
2020-12-30
https://stackoverflow.com/questions/65505710/why-is-my-fastapi-or-uvicorn-getting-shutdown
I am trying to run a service that uses simple transformers Roberta model to do classification. the inferencing script/function itself is working as expected when tested. when i include that with fast api its shutting down the server. uvicorn==0.11.8 fastapi==0.61.1 simpletransformers==0.51.6 cmd : uvicorn --host 0.0.0.0 --port 5000 src.main:app @app.get("/article_classify") def classification(text:str): """function to classify article using a deep learning model. Returns: [type]: [description] """ _,_,result = inference(text) return result error : INFO: Started server process [8262] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://0.0.0.0:5000 (Press CTRL+C to quit) INFO: 127.0.0.1:36454 - "GET / HTTP/1.1" 200 OK INFO: 127.0.0.1:36454 - "GET /favicon.ico HTTP/1.1" 404 Not Found INFO: 127.0.0.1:36454 - "GET /docs HTTP/1.1" 200 OK INFO: 127.0.0.1:36454 - "GET /openapi.json HTTP/1.1" 200 OK before 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 17.85it/s] INFO: Shutting down INFO: Finished server process [8262] inferencing script : model_name = "checkpoint-3380-epoch-20" model = MultiLabelClassificationModel("roberta","src/outputs/"+model_name) def inference(input_text,model_name="checkpoint-3380-epoch-20"): """Function to run inverence on one sample text""" #model = MultiLabelClassificationModel("roberta","src/outputs/"+model_name) all_tags =[] if isinstance(input_text,str): print("before") result ,output = model.predict([input_text]) print(result) tags=[] for idx,each in enumerate(result[0]): if each==1: tags.append(classes[idx]) all_tags.append(tags) elif isinstance(input_text,list): result ,output = model.predict(input_text) tags=[] for res in result : for idx,each in enumerate(res): if each==1: tags.append(classes[idx]) all_tags.append(tags) return result,output,all_tags update: tried with flask and the service is working but when adding uvicorn on top of flask its getting stuck in a loop of restart.
I have solved this issue by running the inference in a separate process, started explicitly with multiprocessing and the 'spawn' start method. from multiprocessing import set_start_method from multiprocessing import Process, Manager try: set_start_method('spawn') except RuntimeError: pass @app.get("/article_classify") def classification(text:str): """function to classify article using a deep learning model. Returns: [type]: [description] """ manager = Manager() return_result = manager.dict() # shared dict to get the result back from the child process p = Process(target = inference,args=(text,return_result,)) p.start() p.join() # print(return_result) result = return_result['all_tags'] return result
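Note that this only works if the inference function itself is adapted to write its output into the shared dict instead of returning it, since a plain return value from the child process cannot be picked up this way. A minimal sketch of that adaptation (names follow the question's code; the exact body is an assumption):

def inference(input_text, return_result, model_name="checkpoint-3380-epoch-20"):
    # model and classes are the module-level objects from the question
    result, output = model.predict([input_text])
    tags = [classes[idx] for idx, flag in enumerate(result[0]) if flag == 1]
    return_result['all_tags'] = tags  # hand the result back through the Manager dict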
16
4
65,557,740
2021-1-4
https://stackoverflow.com/questions/65557740/automatically-wrap-decorate-all-pytest-unit-tests
Let's say I have a very simple logging decorator: from functools import wraps def my_decorator(func): @wraps(func) def wrapper(*args, **kwargs): print(f"{func.__name__} ran with args: {args}, and kwargs: {kwargs}") result = func(*args, **kwargs) return result return wrapper I can add this decorator to every pytest unit test individually: @my_decorator def test_one(): assert True @my_decorator def test_two(): assert 1 How can I automatically add this decorator to every single pytest unit test so I don't have to add it manually? What if I want to add it to every unit test in a file? Or in a module? My use case is to wrap every test function with a SQL profiler, so inefficient ORM code raises an error. Using a pytest fixture should work, but I have thousands of tests so it would be nice to apply the wrapper automatically instead of adding the fixture to every single test. Additionally, there may be a module or two I don't want to profile so being able to opt-in or opt-out an entire file or module would be helpful.
Provided you can move the logic into a fixture, as stated in the question, you can just use an auto-use fixture defined in the top-level conftest.py. To add the possibility to opt out for some tests, you can define a marker that will be added to the tests that should not use the fixture, and check that marker in the fixture, e.g. something like this: conftest.py import pytest def pytest_configure(config): config.addinivalue_line( "markers", "no_profiling: mark test to not use sql profiling" ) @pytest.fixture(autouse=True) def sql_profiling(request): if not request.node.get_closest_marker("no_profiling"): # do the profiling yield test.py import pytest def test1(): pass # will use profiling @pytest.mark.no_profiling def test2(): pass # will not use profiling As pointed out by @hoefling, you could also disable the fixture for a whole module by adding: pytestmark = pytest.mark.no_profiling in the module. That will add the marker to all contained tests.
8
8
65,527,354
2021-1-1
https://stackoverflow.com/questions/65527354/cant-scrape-all-the-company-names-from-a-webpage
I'm trying to parse all the company names from this webpage. There are around 2431 companies in there. However, the way I've tried below can fetches me 1000 results. This is what I can see about the number of results in response while going through dev tools: hitsPerPage: 1000 index: "YCCompany_production" nbHits: 2431 <------------------------ nbPages: 1 page: 0 How can I get the rest of the results using requests? I've tried so far: import requests url = 'https://45bwzj1sgc-dsn.algolia.net/1/indexes/*/queries?' params = { 'x-algolia-agent': 'Algolia for JavaScript (3.35.1); Browser; JS Helper (3.1.0)', 'x-algolia-application-id': '45BWZJ1SGC', 'x-algolia-api-key': 'NDYzYmNmMTRjYzU4MDE0ZWY0MTVmMTNiYzcwYzMyODFlMjQxMWI5YmZkMjEwMDAxMzE0OTZhZGZkNDNkYWZjMHJlc3RyaWN0SW5kaWNlcz0lNUIlMjJZQ0NvbXBhbnlfcHJvZHVjdGlvbiUyMiU1RCZ0YWdGaWx0ZXJzPSU1QiUyMiUyMiU1RCZhbmFseXRpY3NUYWdzPSU1QiUyMnljZGMlMjIlNUQ=' } payload = {"requests":[{"indexName":"YCCompany_production","params":"hitsPerPage=1000&query=&page=0&facets=%5B%22top100%22%2C%22isHiring%22%2C%22nonprofit%22%2C%22batch%22%2C%22industries%22%2C%22subindustry%22%2C%22status%22%2C%22regions%22%5D&tagFilters="}]} with requests.Session() as s: s.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36' r = s.post(url,params=params,json=payload) print(len(r.json()['results'][0]['hits']))
As a workaround you can simulate search using alphabet as a search pattern. Using code below you will get all 2431 companies as dictionary with ID as a key and full company data dictionary as a value. import requests import string params = { 'x-algolia-agent': 'Algolia for JavaScript (3.35.1); Browser; JS Helper (3.1.0)', 'x-algolia-application-id': '45BWZJ1SGC', 'x-algolia-api-key': 'NDYzYmNmMTRjYzU4MDE0ZWY0MTVmMTNiYzcwYzMyODFlMjQxMWI5YmZkMjEwMDAxMzE0OTZhZGZkNDNkYWZjMHJl' 'c3RyaWN0SW5kaWNlcz0lNUIlMjJZQ0NvbXBhbnlfcHJvZHVjdGlvbiUyMiU1RCZ0YWdGaWx0ZXJzPSU1QiUyMiUy' 'MiU1RCZhbmFseXRpY3NUYWdzPSU1QiUyMnljZGMlMjIlNUQ=' } url = 'https://45bwzj1sgc-dsn.algolia.net/1/indexes/*/queries' result = dict() for letter in string.ascii_lowercase: print(letter) payload = { "requests": [{ "indexName": "YCCompany_production", "params": "hitsPerPage=1000&query=" + letter + "&page=0&facets=%5B%22top100%22%2C%22isHiring%22%2C%22nonprofit%22%2C%22batch%22%2C%22industries%22%2C%22subindustry%22%2C%22status%22%2C%22regions%22%5D&tagFilters=" }] } r = requests.post(url, params=params, json=payload) result.update({h['id']: h for h in r.json()['results'][0]['hits']}) print(len(result))
10
14
65,553,722
2021-1-3
https://stackoverflow.com/questions/65553722/no-module-named-delta-tables
I am getting the following error for the code below, please help: from delta.tables import * ModuleNotFoundError: No module named 'delta.tables' INFO SparkContext: Invoking stop() from shutdown hook Here is the code: ''' from pyspark.sql import * if __name__ == "__main__": spark = SparkSession \ .builder \ .appName("DeltaLake") \ .config("spark.jars", "delta-core_2.12-0.7.0") \ .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension") \ .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog") \ .getOrCreate() from delta.tables import * data = spark.range(0, 5) data.printSchema() ''' An online search suggesting verifying the scala version to delta core jar version. Here is the scala & Jar versions "delta-core_2.12-0.7.0" "Using Scala version 2.12.10, Java HotSpot(TM) 64-Bit Server VM, 1.8.0_221"
According to the delta package documentation, there is a Python file named tables. You should clone the repository and copy the delta folder under python/delta to your site-packages path (e.g. ..\python37\Lib\site-packages). Then restart Python and your code runs without the error. I am using Python 3.5.3 and pyspark==3.0.1.
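If you would rather not copy files into site-packages, a rough sketch of the same idea is to make the cloned Python bindings importable at runtime and let Spark pull the jar from Maven (the clone path below is an assumption; the Maven coordinate matches delta-core_2.12-0.7.0 from the question):

import sys
sys.path.insert(0, "/path/to/delta/python")  # the folder of the clone that contains the delta/ package

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("DeltaLake")
         .config("spark.jars.packages", "io.delta:delta-core_2.12:0.7.0")
         .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

from delta.tables import *  # resolves once the delta package is on sys.path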
11
5
65,526,500
2021-1-1
https://stackoverflow.com/questions/65526500/intellisense-vscode-not-showing-parameters-nor-documentation-when-hovering-above
I'm trying to migrate my entire workflow from Eclipse and Jupyter Notebook over to VS Code. I installed the Python extension, which should come with IntelliSense, but it only works partly. I get suggestions after typing a period (.), but I don't get any information on parameters or documentation when hovering over with my mouse. Thank you so much for your help and have a wonderful new year! P.S. If anyone has any experience with using Anaconda environments with VS Code, that would be greatly appreciated as well, as I'm running into some problems with it recognizing the libraries. Also, you can see here that when I manually activate IntelliSense, it doesn't recognize that it's in a method. Sorry for the long string of edits, but I discovered that when typing print in a regular Python file, it works, but not in a Jupyter Notebook file. Also, it still doesn't work for numpy. Thanks for the help everyone.
You could use the shortcut key "Ctrl+Space" to open the suggested options: In addition, it is recommended that you use the extension "Pylance", which works better with the extension "Python". Update: Currently in VSCode, the "IntelliSense" document content is provided by the Python language service, which is mainly for Python files (".py" files call this function), while in Jupyter, the "IntelliSense" used by the ".ipynb" file comes from the extension "Jupyter". You could refer to the content of this link to use VS code insiders, and its notebook editor has better intellisense. In VS code insiders:
6
5
65,507,374
2020-12-30
https://stackoverflow.com/questions/65507374/plotting-a-geopandas-dataframe-using-plotly
I have a geopandas dataframe, which consists of the region name(District), the geometry column, and the amount column. My goal is to plot a choropleth map using the method mentioned below https://plotly.com/python/choropleth-maps/#using-geopandas-data-frames Here’s a snippet of my dataframe I also checked that my columns were in the right format/type. And here's the code I used to plot the map fig = px.choropleth(merged, geojson=merged.geometry, locations=merged.index, color="Amount") fig.update_geos(fitbounds="locations", visible=False) fig.show() It produced the below figure which is obviously not the right figure. For some reasons, it doesn't show the map, instead it shows a line and when I zoom in, I am able to see the map but it has lines running through it. Like this Has anyone ran into a similar problem? If so how were you able to resolve it? The Plotly version I am using is 4.7.0. I have tried upgrading to a most recent version but it still didn’t work. Any help is greatly appreciated. Please find my code and the data on my github.
I'll give you the answer to @tgrandje's comment that solved the problem. Thanks to @Poopah and @tgrandje for the opportunity to raise the answer. import pandas as pd import plotly.express as px import geopandas as gpd import pyproj # reading in the shapefile fp = "./data/" map_df = gpd.read_file(fp) map_df.to_crs(pyproj.CRS.from_epsg(4326), inplace=True) df = pd.read_csv("./data/loans_amount.csv") # join the geodataframe with the cleaned up csv dataframe merged = map_df.set_index('District').join(df.set_index('District')) #merged = merged.reset_index() merged.head() fig = px.choropleth(merged, geojson=merged.geometry, locations=merged.index, color="Amount") fig.update_geos(fitbounds="locations", visible=False) fig.show()
20
38
65,563,332
2021-1-4
https://stackoverflow.com/questions/65563332/vscode-doesnt-see-pyenv-python-interpreters
I installed pyenv-win on my windows machine. It works fine in the command line. I can install python versions, set them as global etc. But My VS Code doesn't see them. It only sees one python interpreter I installed a long time ago when I wasn't using pyenv yet. VScode: pyenv: C:\Users\jbron\cmder Ξ» pyenv versions 3.7.0 * 3.8.0 (set by C:\Users\jbron\.pyenv\pyenv-win\version) Why is it not finding my pyenv interpreters? I don't have problems like that on my Linux machines
It is recommended that you try the following: Please check whether the Python environment variable contains your installed Python path: Please reopen VSCode after installation: Update: The environment variable path of "pyenv" I use is: (Under this path, we can find Python 3.6.7 downloaded by pyenv) We can see the storage location where it downloaded Python 3.6.7: C:\Users\...\.pyenv\pyenv-win\install_cache\python-3.6.7-amd64-webinstall.exe Double-click to install:
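If the interpreters still do not show up, you can also point VS Code at a pyenv-win interpreter explicitly in your settings.json. The user name and version folder below follow the question and the default pyenv-win layout, so treat the exact path as an assumption; newer versions of the Python extension use a different setting name:

{
    "python.pythonPath": "C:\\Users\\jbron\\.pyenv\\pyenv-win\\versions\\3.8.0\\python.exe"
}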
6
2
65,551,736
2021-1-3
https://stackoverflow.com/questions/65551736/python-3-9-scheduling-periodic-calls-of-async-function-with-different-paramete
How can I, in Python 3.9, schedule periodic calls of an async function with different parameters? The functionality should work on any OS (Linux, Windows, Mac). I have a function fetchOHLCV which downloads market data from exchanges. The function has two input parameters - pair and timeframe. Depending on the pair and timeframe values, the function downloads data from exchanges and stores it in a DB. The goal is to call this function on different schedules with different parameters: 1) fetchOHLCV(pair ='EUR/USD', timeframe="1m") - each minute 2) fetchOHLCV(pair ='EUR/USD', timeframe="1h") - each new hour 3) fetchOHLCV(pair ='EUR/USD', timeframe="1d") - each new day 4) fetchOHLCV(pair ='EUR/USD', timeframe="1w") - each new week At the moment I don't have experience working with scheduling in Python, I don't know which libraries would be optimal for my task, and I'm interested in best practices for implementing similar tasks.
As a ready-to-use solution, you could use aiocron library. If you are not familiar with cron expression syntax, you can use this online editor to check. Example (this will work on any OS): import asyncio from datetime import datetime import aiocron async def foo(param): print(datetime.now().time(), param) async def main(): cron_min = aiocron.crontab('*/1 * * * *', func=foo, args=("At every minute",), start=True) cron_hour = aiocron.crontab('0 */1 * * *', func=foo, args=("At minute 0 past every hour.",), start=True) cron_day = aiocron.crontab('0 9 */1 * *', func=foo, args=("At 09:00 on every day-of-month",), start=True) cron_week = aiocron.crontab('0 9 * * Mon', func=foo, args=("At 09:00 on every Monday",), start=True) while True: await asyncio.sleep(1) asyncio.run(main()) Output: 15:26:00.003226 At every minute 15:27:00.002642 At every minute ...
6
4
65,579,240
2021-1-5
https://stackoverflow.com/questions/65579240/unittest-mock-pandas-to-csv
mymodule.py def write_df_to_csv(self, df, modified_fn): new_csv = self.path + "/" + modified_fn df.to_csv(new_csv, sep=";", encoding='utf-8', index=False) test_mymodule.py class TestMyModule(unittest.TestCase): def setUp(self): args = parse_args(["-f", "test1"]) self.mm = MyModule(args) self.mm.path = "Random/path" self.test_df = pd.DataFrame( [ ["bob", "a"], ["sue", "b"], ["sue", "c"], ["joe", "c"], ["bill", "d"], ["max", "b"], ], columns=["A", "B"], ) def test_write_df_to_csv(self): to_csv_mock = mock.MagicMock() with mock.patch("project.mymodule.to_csv", to_csv_mock, create=True): self.mm.write_df_to_csv(self.test_df, "Stuff.csv") to_csv_mock.assert_called_with(self.mm.path + "/" + "Stuff.csv") When I run this test, I get: FileNotFoundError: [Errno 2] No such file or directory: 'Random/path/Stuff.csv' I'm trying to mock the to_csv in my method. My other tests run as expected, however I'm not sure where I am going wrong with this test. Is my use of MagicMock correct, or am I overlooking something else?
You didn't provide a minimal, reproducible example, so I had to strip some things out to make this work. I suppose you can fill in the missing bits on your own. One problem was with mock.patch("project.mymodule.to_csv", ...) which tries to mock a class named to_csv in the module at the import path project.mymodule. This only "worked" because you passed create=True, but of course mocking something that didn't exist before has no effect because nobody will call it. You could mock out the entire DataFrame class using mock.patch("pandas.DataFrame", ...). Note: it's not pd regardless of how (or even whether) you imported pandas in the current module. But then your unit test will be asserting that to_csv was called on any DataFrame, not necessarily the one you passed in. By mocking just the to_csv method on the one DataFrame object that we are passing into write_df_to_csv, the test becomes a bit more comprehensive and also easier to understand. We can do this using mock.patch.object. mock.patch.object returns the mock function, on which we can subsequently call assertions. Because it's a method mock, not a free function, we don't need to pass the self argument in the assertion. project/mymodule.py def write_df_to_csv(df, file_name): df.to_csv(file_name, sep=";", encoding='utf-8', index=False) project/test_mymodule.py import unittest.mock as mock import unittest import pandas as pd import project.mymodule as mm class TestMyModule(unittest.TestCase): def test_write_df_to_csv(self): test_df = pd.DataFrame(...) with mock.patch.object(test_df, "to_csv") as to_csv_mock: mm.write_df_to_csv(test_df, "Stuff.csv") to_csv_mock.assert_called_with("Stuff.csv") if __name__ == '__main__': unittest.main() Output The test fails in a proper way now, because the arguments don't actually match! $ python -m project.test_mymodule F ====================================================================== FAIL: test_write_df_to_csv (__main__.TestMyModule) ---------------------------------------------------------------------- Traceback (most recent call last): File "/tmp/project/test_mymodule.py", line 25, in test_write_df_to_csv to_csv_mock.assert_called_with("Stuff.csv") File "/usr/lib/python3.8/unittest/mock.py", line 913, in assert_called_with raise AssertionError(_error_message()) from cause AssertionError: expected call not found. Expected: to_csv('Stuff.csv') Actual: to_csv('Stuff.csv', sep=';', encoding='utf-8', index=False) ---------------------------------------------------------------------- Ran 1 test in 0.003s FAILED (failures=1)
8
3
65,588,130
2021-1-5
https://stackoverflow.com/questions/65588130/running-a-loop-multiple-lines-in-vs-code-python-debug-console
How do I run a simple loop in VS Code's python debug console? When I try to enter the following: for el in dataset: It gives me the error below. I seem to be able to enter variable names, but not multi-line commands like I can in the normal python REPL. Traceback (most recent call last): File "/home/tensorflow/.local/lib/python3.6/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_vars.py", line 416, in evaluate_expression compiled = compile(_expression_to_evaluate(expression), '<string>', 'eval') File "<string>", line 1 for el in dataset: ^ SyntaxError: invalid syntax During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/tensorflow/.local/lib/python3.6/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_comm.py", line 969, in internal_evaluate_expression_json pydevd_vars.evaluate_expression(py_db, frame, expression, is_exec=True) File "/home/tensorflow/.local/lib/python3.6/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_vars.py", line 368, in new_func return _run_with_unblock_threads(original_func, py_db, curr_thread, frame, expression, is_exec) File "/home/tensorflow/.local/lib/python3.6/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_vars.py", line 336, in _run_with_unblock_threads return _run_with_interrupt_thread(original_func, py_db, curr_thread, frame, expression, is_exec) File "/home/tensorflow/.local/lib/python3.6/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_vars.py", line 307, in _run_with_interrupt_thread return original_func(py_db, frame, expression, is_exec) File "/home/tensorflow/.local/lib/python3.6/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_vars.py", line 418, in evaluate_expression Exec(_expression_to_evaluate(expression), updated_globals, frame.f_locals) File "/home/tensorflow/.local/lib/python3.6/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_exec2.py", line 3, in Exec exec(exp, global_vars, local_vars) File "<string>", line 1 for el in dataset: ^ SyntaxError: unexpected EOF while parsing
You have 2 options: 1) Write the command in a new editor window, then simply copy and paste the code into the debug console and press Enter. 2) Write the command directly in the debug console. When you want to enter a new line, press Shift+Enter. When the command is complete, execute it with Enter.
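For example, to run the loop from the question with the second option, type it in the debug console like this, pressing Shift+Enter after the first line and Enter once the block is complete (the print body is just a placeholder):

for el in dataset:
    print(el)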
7
15
65,561,794
2021-1-4
https://stackoverflow.com/questions/65561794/fastai-tabular-model-how-to-get-predictions-for-new-data
I am using kaggle house prices dataset, it is divided into: train and test I built a model with fastai tabular using train set How can I predict values for test data set? I know it sounds easy and most other libs would do it like model.predict(test), but it is not the case here. I have searched fastai forums and SO and docs. There are quite a few topics regarding this issue and most of them either don't have an answer or are outdated workarounds (since fastai2 was released recently and is now called just fastai). a. model.predict only works for a single row and looping through test is not optimal. It is very slow. b. model.get_preds give results for the data you trained on Please suggest can you predict new df using trained learner for tabular data.
I found a problem. For future readers - why can't you get get_preds work for new df? (tested on kaggle's house prices advanced) The root of the problem was in categorical nans. If you train your model with one set of cat features, say color = red, green, blue; and your new df has colors: red, green, blue, black - it will throw an error because it won't know what to do with new class (black). Not to mention you need to have the same columns everywhere, which can be tricky since if you use fillmissing proc, like I did, it's nice, it would create new cols for cat values (was missing or not). So you need to triple check these nans in cats. I really wanted to make it work start to finish with fastai: Columns for train/test are identical, only train has 1 extra - target. At this point there are different classes in some cat cols. I just decided to combine them (jus to make it work), but doesn't it introduce leakage? combined = pd.concat([train, test]) # test will have nans at target, but we don't care cont_cols, cat_cols = cont_cat_split(combined, max_card=50) combined = combined[cat_cols] Some tweaking while we at it. train[cont_cols] = train[cont_cols].astype('float') # if target is not float, there will be an error later test[cont_cols[:-1]] = test[cont_cols[:-1]].astype('float'); # slice target off (I had mine at the end of cont_cols) made it to the Tabular Panda procs = [Categorify, FillMissing] to = TabularPandas(combined, procs = procs, cat_names = cat_cols) train_to_cat = to.items.iloc[:train.shape[0], :] # transformed cat for train test_to_cat = to.items.iloc[train.shape[0]:, :] # transformed cat for test. Need to separate them to.items will gave us transformed cat columns. After that, we need to assemble everything back together train_imp = pd.concat([train_to_cat, train[cont_cols]], 1) # assemble new cat and old cont together test_imp = pd.concat([test_to_cat, test[cont_cols[:-1]]], 1) # exclude SalePrice train_imp['SalePrice'] = np.log(train_imp['SalePrice']) # metric for kaggle After that, we do as per fastai tutorial. dep_var = 'SalePrice' procs = [Categorify, FillMissing, Normalize] splits = RandomSplitter(valid_pct=0.2)(range_of(train_imp)) to = TabularPandas(train_imp, procs = procs, cat_names = cat_cols, cont_names = cont_cols[:-1], # we need to exclude target y_names = 'SalePrice', splits=splits) dls = to.dataloaders(bs=64) learn = tabular_learner(dls, n_out=1, loss_func=F.mse_loss) learn.lr_find() learn.fit_one_cycle(20, slice(1e-2, 1e-1), cbs=[ShowGraphCallback()]) At this point, we have a learner but still can't predict. I thought after we do: dl = learn.dls.test_dl(test_imp, bs=64) preds, _ = learn.get_preds(dl=dl) # get prediction it would just work (preprocessing of cont values and predict), but no. It will not fillna. So just find and fill nans in test: missing = test_imp.isnull().sum().sort_values(ascending=False).head(12).index.tolist() for c in missing: test_imp[c] = test_imp[c].fillna(test_imp[c].median()) after that we can finally predict: dl = learn.dls.test_dl(test_imp, bs=64) preds, _ = learn.get_preds(dl=dl) # get prediction final_preds = np.exp(preds.flatten()).tolist() sub = pd.read_csv('../input/house-prices-advanced-regression-techniques/sample_submission.csv') sub.SalePrice = final_preds filename = 'submission.csv' sub.to_csv(filename, index=False) Apologies for the long narrative but I'm relatively new to coding and this problem was hard to point out. Very little info on how to solve it online. In short, it was a pain. 
Unfortunately, this is still a workaround to a problem. If the number of classes in any feature is different for test, it will freak out. Also strange it didn't fillna while fitting test to dls. Should you have any suggestions you are willing to share, please let me know.
6
3
65,548,452
2021-1-3
https://stackoverflow.com/questions/65548452/how-to-find-the-common-eigenvectors-of-two-matrices-with-distincts-eigenvalues
I am looking to find, or rather build, a common eigenvectors matrix X between 2 matrices A and B such that: AX=aX with "a" the diagonal matrix corresponding to the eigenvalues BX=bX with "b" the diagonal matrix corresponding to the eigenvalues where A and B are square and diagonalizable matrices. I took a look at a similar post but did not manage to conclude, i.e. to get valid results when I build the final wanted endomorphism F defined by: F = P D P^-1 I have also read the wikipedia topic and this interesting paper but couldn't extract methods easy enough to implement. In particular, I am interested in the eig(A,B) Matlab function. I tried to use it like this: % Search for common build eigen vectors between FISH_sp and FISH_xc [V,D] = eig(FISH_sp,FISH_xc); % Diagonalize the matrix (A B^-1) to compute Lambda since we have AX=Lambda B X [eigenv, eigen_final] = eig(inv(FISH_xc)*FISH_sp); % Compute the final endomorphism : F = P D P^-1 FISH_final = V*eye(7).*eigen_final*inv(V) But the matrix FISH_final doesn't give good results: when I do other computations from this matrix FISH_final (this is actually a Fisher matrix), the results are not valid. So surely I must have made an error in my code snippet above. For now I prefer to work it out in Matlab as a prototype, and afterwards, if it works, do this synthesis with MKL or with Python functions. Hence also tagging python. How can I build these common eigenvectors and also find the associated eigenvalues? I am a little lost between all the potential methods that exist to carry it out. The screen capture below shows that the kernel of the commutator has to be different from the null vector : EDIT 1: On maths exchange, one advises to use Singular Value Decomposition (SVD) on the commutator [A,B], that is, in Matlab: "If v is a common eigenvector, then ||(AB-BA)v|| = 0. The SVD approach gives you a unit vector v that minimizes ||(AB-BA)v|| (with the constraint that ||v|| = 1)" So I extract the approximate eigenvectors V from: [U,S,V] = svd(A*B-B*A) Is there a way to increase the accuracy to minimize ||(AB-BA)v|| as much as possible? IMPORTANT REMARK: Maybe some of you didn't fully understand my goal. Concerning the common basis of eigenvectors, I am looking for a combination (vectorial or matricial) of V1 and V2, or directly using the null operator on the 2 input Fisher matrices, to build this new basis "P" in which, with other eigenvalues than the known D1 and D2 (noted D1a and D2a), we could have: F = P (D1a+D2a) P^-1 To compute the new Fisher matrix F, I need to know P, assuming that D1a and D2a are equal respectively to the D1 and D2 diagonal matrices (coming from diagonalization of the A and B matrices). If I know the common basis of eigenvectors P, I could deduce D1a and D2a from D1 and D2, couldn't I? The 2 Fisher matrices are available on these links : Matrix A Matrix B
I don't think there is a built-in facility in Matlab for computing common eigenvalues of two matrices. I'll just outline brute force way and do it in Matlab in order to highlight some of its eigenvector related methods. We will assume the matrices A and B are square and diagonalizable. Outline of steps: Get eigenvectors/values for A and B respectively. Group the resultant eigenvectors by their eigenspaces. Check for intersection of the eigenspaces by checking linear dependency among the eigenvectors of A and B one pair eigenspaces at a time. Matlab does provide methods for (efficiently) completing each step! Except of course step 3 involves checking linear dependency many many times, which in turn means we are likely doing unnecessary computation. Not to mention, finding common eigenvectors may not require finding all eigenvectors. So this is not meant to be a general numerical recipe. How to get eigenvector/values The syntax is [V,D] = eig(A) where D(i), V(:,i) are the corresponding eigenpairs. Just be wary of numerical errors. In other words, if you check tol=sum(abs(A*V(:,i)-D(i)*V(:,i))); tol<n*eps should be true for some small n for a smallish matrix A but it's probably not true for 0 or 1. Example: >> A = gallery('lehmer',4); >> [V,D] = eig(A); >> sum(abs(A*V(:,1)-D(1)*V(:,1)))<eps ans = logical 0 >> sum(abs(A*V(:,1)-D(1)*V(:,1)))<10*eps ans = logical 1 How to group eigenvectors by their eigenspaces In Matlab, eigenvalues are not automatically sorted in the output of [V,D] = eig(A). So you need to do that. Get diagonal entries of matrix: diag(D) Sort and keep track of the required permutation for sorting: [d,I]=sort(diag(D)) Identify repeating elements in d: [~,ia,~]=unique(d,'stable') ia(i) tells you the beginning index of the ith eigenspace. So you can expect d(ia(i):ia(i+1)-1) to be identical eigenvalues and thus the eigenvectors belonging to the ith eigenspace are the columns W(:,ia(i):ia(i+1)-1) where W=V(:,I). Of course, for the last one, the index is ia(end):end The last step happens to be answered here in true generality. Here, unique is sufficient at least for small A. (Feel free to ask a separate question on how to do this whole step of "shuffling columns of one matrix based on another diagonal matrix" efficiently. There are probably other efficient methods using built-in Matlab functions.) For example, >> A=[1,2,0;1,2,2;3,6,1]; >> [V,D] = eig(A), V = 0 0 0.7071 1.0000 -0.7071 0 0 0.7071 -0.7071 D = 3 0 0 0 5 0 0 0 3 >> [d,I]=sort(diag(D)); >> W=V(:,I), W = 0 0.7071 0 1.0000 0 -0.7071 0 -0.7071 0.7071 >> [~,ia,~]=unique(d,'stable'), ia = 1 3 which makes sense because the 1st eigenspace is the one with eigenvalue 3 comprising of span of column 1 and 2 of W, and similarly for the 2nd space. How to get linear intersect of (the span of) two sets To complete the task of finding common eigenvectors, you do the above for both A and B. Next, for each pair of eigenspaces, you check for linear dependency. If there is linear dependency, the linear intersect is an answer. There are a number of ways for checking linear dependency. One is to use other people's tools. Example: https://www.mathworks.com/matlabcentral/fileexchange/32060-intersection-of-linear-subspaces One is to get the RREF of the matrix formed by concatenating the column vectors column-wise. Let's say you did the computation in step 2 and arrived at V1, D1, d1, W1, ia1 for A and V2, D2, d2, W2, ia2 for B. 
You need to do for i=1:numel(ia1) for j=1:numel(ia2) check_linear_dependency(col1,col2); end end where col1 is W1(:,ia1(i):ia1(i+1)-1) as mentioned in step 2 but with the caveat for the last space and similarly for col2 and by check_linear_dependency we mean the followings. First we get RREF: [R,p] = rref([col1,col2]); You are looking for, first, rank([col1,col2])<size([col1,col2],2). If you have computed rref anyway, you already have the rank. You can check the Matlab documentation for details. You will need to profile your code for selecting the more efficient method. I shall refrain from guess-estimating what Matlab does in rank(). Although whether doing rank() implies doing the work in rref can make a good separate question. In cases where rank([col1,col2])<size([col1,col2],2) is true, some rows don't have leading 1s and I believe p will help you trace back to which columns are dependent on which other columns. And you can build the intersect from here. As usual, be alert of numerical errors getting in the way of == statements. We are getting to the point of a different question -- ie. how to get linear intersect from rref() in Matlab, so I am going to leave it here. There is yet another way using fundamental theorem of linear algebra (*sigh at that unfortunate naming): null( [null(col1.').' ; null(col2.').'] ) The formula I got from here. I think ftla is why it should work. If that's not why or if you want to be sure that the formula works (which you probably should), please ask a separate question. Just beware that purely math questions should go on a different stackexchange site. Now I guess we are done! EDIT 1: Let's be extra clear with how ia works with an example. Let's say we named everything with a trailing 1 for A and 2 for B. We need for i=1:numel(ia1) for j=1:numel(ia2) if i==numel(ia1) col1 = W1(:,ia1(end):end); else col1 = W1(:,ia1(i):ia1(i+1)-1); end if j==numel(ia2) col2 = W2(:,ia2(j):ia2(j+1)-1); else col2 = W2(:,ia2(end):end); end check_linear_dependency(col1,col2); end end EDIT 2: I should mention the observation that common eigenvectors should be those in the nullspace of the commutator. Thus, perhaps null(A*B-B*A) yields the same result. But still be alert of numerical errors. With the brute force method, we started with eigenpairs with low tol (see definition in earlier sections) and so we already verified the "eigen" part in the eigenvectors. With null(A*B-B*A), the same should be done as well. Of course, with multiple methods at hand, it's good idea to compare results across methods.
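Since the question is also tagged python, here is a rough NumPy/SciPy sketch of the commutator null-space idea from EDIT 2. It uses a small toy pair of matrices in place of the question's Fisher matrices, and the same warnings about numerical tolerances apply:

import numpy as np
from scipy.linalg import null_space

# toy example: e1 = (1, 0) is an eigenvector of both A and B
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])

C = A @ B - B @ A            # the commutator [A, B]
candidates = null_space(C)   # common eigenvectors must lie in this null space

# verify the "eigen" part explicitly for each candidate column
for v in candidates.T:
    lam_A = (v @ A @ v) / (v @ v)   # Rayleigh-quotient estimate of the eigenvalue
    lam_B = (v @ B @ v) / (v @ v)
    print(np.allclose(A @ v, lam_A * v), np.allclose(B @ v, lam_B * v))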
11
5
65,582,001
2021-1-5
https://stackoverflow.com/questions/65582001/latin-1-codec-cant-encode-characters
My code works fine for English text, but doesn't work for for Russian search_text. How can I fix it? Error text UnicodeEncodeError: 'latin-1' codec can't encode characters in position 41-46: Body ('Москва') is not valid Latin-1. Use body.encode('utf-8') if you want to send it encoded in UTF-8. My code import requests # search_text = "London" # OK: for english text search_text = "Москва" # ERROR: 'latin-1' codec can't encode characters in position 41-46: Body ('Москва') headers = { 'cookie': 'bci=6040686626671285074; _statid=a741e249-8adb-4c9a-8344-6e7e8360700a; viewport=762; _hd=h; tmr_lvid=ea50ffe34e269b16d061756e9a17b263; tmr_lvidTS=1609852383671; AUTHCODE=VCmGBS9d9sIxDnxN-hzApvPxPoLNADWCZLYyW8JOTcolv2dJjwH7ALYd8dNP9ljxZZuLvoKsDXgozEUt-PjSwXYEDt4syizx1I2LS58gb49kCFae-5uIap--mtLsff2ZqGbFqK5r7buboZ0_3; JSESSIONID=adca48748b8f0c58a926f5e4948f42c0c0aa9463798a9240.1f3566ed; LASTSRV=ok.ru; msg_conf=2468555756792551; TZ=6; _flashVersion=0; CDN=; nbp=; tmr_detect=0%7C1609852395541; cudr=0; klos=0; tmr_reqNum=4; TZD=6.200; TD=200', } data = '''{\n "id": 24,\n "parameters": {\n "query": "''' + search_text + '''"\n }\n}''' response = requests.post('https://ok.ru/web-api/v2/search/suggestCities', headers=headers, data=data) json_data = response.json() print(json_data['result'][0]['id']) I tried city_name = city_name.encode('utf-8') but received TypeError: must be str, not bytes
Try adding this after the line where you create the data variable, before you post the request: data = data.encode()  # produces a bytes object encoded with UTF-8 by default
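Applied to the question's code, the data variable stays exactly as it is built there and only the encode call is added before the post:

data = '''{\n "id": 24,\n "parameters": {\n "query": "''' + search_text + '''"\n }\n}'''
data = data.encode()  # bytes, UTF-8 by default, so the Cyrillic search text is accepted
response = requests.post('https://ok.ru/web-api/v2/search/suggestCities', headers=headers, data=data)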
17
10
65,579,151
2021-1-5
https://stackoverflow.com/questions/65579151/how-to-check-if-mypy-type-ignore-comments-are-still-valid-and-required
Imagine we have some giant legacy code base with a lot of files with ignored Mypy warnings: def foobar(): x = some_external_class.some_method()[0] # type: ignore[ignore-some-mypy-warning] Time to go... Some parts of code were changed. Some parts of code is still the same. How to check every "ignore" comment to know: will I get an error if I remove it? Desired output: Checked 100500 files! You do not need "ignore" comments anymore in the following files: - spam.py:534 - eggs.py:31 - eggs.py:250 Are there any existing tools to achieve this? Any ideas about custom scripts? The only idea that I have: Write a script that will find and remember a file and a line of every Mypy comment. Find and remove all Mypy comments. Run Mypy check β†’ store results. Compare the Mypy check errors lines with a stored old lines. Find a difference: if a comment was removed, but Mypy does not complain now about that line, then the comment must be removed.
From mypy documentation: --warn-unused-ignores This flag will make mypy report an error whenever your code uses a # type: ignore comment on a line that is not actually generating an error message.
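In practice that means either running mypy with the flag or turning it on permanently in the config file (mypy.ini shown here; setup.cfg works the same way with a [mypy] section):

# one-off run over the code base
mypy --warn-unused-ignores .

# mypy.ini
[mypy]
warn_unused_ignores = True

Every # type: ignore that is no longer needed is then reported with its file and line number, which is essentially the desired "you do not need this comment anymore" output.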
9
17
65,580,052
2021-1-5
https://stackoverflow.com/questions/65580052/pandas-does-uppercase-lowercase-mean-anything-in-dtypes
Float32 vs float32? What is the purpose of uppercase vs lowercase dtypes in Pandas? Uppercase seems more error prone: TypeError: object cannot be converted to a FloatingDtype. dtype = { 'doom_float64': 'Float64' , 'radiance_float32': 'Float32' , 'temperature_float': 'float' , 'moonday_int64': 'Int64' , 'month_int32': 'Int32' , 'color_uint8': 'UInt8' , 'shape_int': 'int' , 'weekday_object': 'object' , 'hour_object': 'string' , 'kingdom_category': 'category' } >>> df.dtypes doom_float64 Float64 radiance_float32 Float32 temperature_float float64 weekday_object object hour_object string moonday_int64 Int64 month_int32 Int32 color_uint8 UInt8 shape_int int64 kingdom_category category dtype: object Pandas v1.2.0
Yes, see here for example. pandas can represent integer data with possibly missing values using arrays.IntegerArray. This is an extension type implemented within pandas. You can also use the string alias "Int64" (note the capital "I", to differentiate it from NumPy's 'int64' dtype). Capitalized types are pandas nullable extension types, while uncapitalized types are NumPy types. One feature of the pandas types is the ability to hold missing values, which the NumPy integer dtypes cannot do, since IEEE 754 only defines NaN for floats.
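A small illustration of the practical difference (behaviour as of pandas 1.x):

import pandas as pd

s_numpy = pd.Series([1, 2, None])                  # falls back to NumPy float64, the missing value becomes NaN
s_pandas = pd.Series([1, 2, None], dtype="Int64")  # pandas nullable integer, the missing value becomes <NA>

print(s_numpy.dtype, s_pandas.dtype)               # float64 Int64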
5
8
65,531,387
2021-1-1
https://stackoverflow.com/questions/65531387/tortoise-orm-for-python-no-returns-relations-of-entities-pyndantic-fastapi
I was making a sample Fast Api server with Tortoise ORM as an asynchronous orm library, but I just cannot seem to return the relations I have defined. These are my relations: # Category from tortoise.fields.data import DatetimeField from tortoise.models import Model from tortoise.fields import UUIDField, CharField from tortoise.fields.relational import ManyToManyField from tortoise.contrib.pydantic import pydantic_model_creator class Category(Model): id = UUIDField(pk=True) name = CharField(max_length=255) description = CharField(max_length=255) keywords = ManyToManyField( "models.Keyword", related_name="categories", through="category_keywords" ) created_on = DatetimeField(auto_now_add=True) updated_on = DatetimeField(auto_now=True) Category_dto = pydantic_model_creator(Category, name="Category", allow_cycles = True) # Keyword from models.expense import Expense from models.category import Category from tortoise.fields.data import DatetimeField from tortoise.fields.relational import ManyToManyRelation from tortoise.models import Model from tortoise.fields import UUIDField, CharField from tortoise.contrib.pydantic import pydantic_model_creator class Keyword(Model): id = UUIDField(pk=True) name = CharField(max_length=255) description = CharField(max_length=255) categories: ManyToManyRelation[Category] expenses: ManyToManyRelation[Expense] created_on = DatetimeField(auto_now_add=True) updated_on = DatetimeField(auto_now=True) class Meta: table="keyword" Keyword_dto = pydantic_model_creator(Keyword) The tables have been created correctly. When adding keywords to categories the db state is all good. The problem is when i want to query the categories and include the keywords. I have this code for that: class CategoryRepository(): @staticmethod async def get_one(id: str) -> Category: category_orm = await Category.get_or_none(id=id).prefetch_related('keywords') if (category_orm is None): raise NotFoundHTTP('Category') return category_orm Debugging the category_orm here I have the following: category_orm debug at run-time Which kind of tells me that they are loaded. Then when i cant a Pydantic model I have this code class CategoryUseCases(): @staticmethod async def get_one(id: str) -> Category_dto: category_orm = await CategoryRepository.get_one(id) category = await Category_dto.from_tortoise_orm(category_orm) return category and debugging this, there is no keywords field category (pydantic) debug at run-time Looking at the source code of tortoise orm for the function from_tortoise_orm @classmethod async def from_tortoise_orm(cls, obj: "Model") -> "PydanticModel": """ Returns a serializable pydantic model instance built from the provided model instance. .. note:: This will prefetch all the relations automatically. It is probably what you want. But my relation is just not returned. Anyone have a similar experience ?
The issue occurs when one tries to generate pydantic models before Tortoise ORM is initialised. If you look at the basic pydantic example you will see that all pydantic_model_creator calls happen after Tortoise.init. The obvious solution is to create pydantic models after Tortoise initialisation, like so: await Tortoise.init(db_url="sqlite://:memory:", modules={"models": ["__main__"]}) await Tortoise.generate_schemas() Event_Pydantic = pydantic_model_creator(Event) Or, a more convenient way: use early model init by means of Tortoise.init_models(). Like so: from tortoise import Tortoise Tortoise.init_models(["__main__"], "models") Tournament_Pydantic = pydantic_model_creator(Tournament) In that case, the main idea is to split pydantic and db models into different modules, so that importing the first does not lead to the creation of the second ahead of time, and to ensure Tortoise.init_models() is called before creating the pydantic models. A more detailed description with examples can be found here.
9
8
65,568,789
2021-1-4
https://stackoverflow.com/questions/65568789/pycharm-deletes-quotation-marks-in-paramenter-field
I want to set a parameter for a python script by using the parameter field in PyCharm. My config: But the command in the Run console is: python3 path_to_script.py '{app_id: picoballoon_network, dev_id: ferdinand_8c ... and so on and not: python3 path_to_script.py '{"app_id": "picoballoon_network", "dev_id": "ferdinand_8c" ... and so on Basically, it deletes all " in the parameter. Does anyone know how to turn this off? My PyCharm version is: PyCharm 2020.3.1 (Professional Edition) Build #PY-203.6682.86, built on January 4, 2021 Runtime version: 11.0.9.1+11-b1145.37 amd64 VM: OpenJDK 64-Bit Server VM by JetBrains s.r.o. Windows 10 10.0
To avoid the quotation marks being deleted notice the rules to writing parameters that contain quotation marks. Run/Debug Configuration: Python Configuration tab When specifying the script parameters, follow these rules: Use spaces to separate individual script parameters. Script parameters containing spaces should be delimited with double quotes, for example, some" "param or "some param". If script parameter includes double quotes, escape the double quotes with backslashes, for example: -s"main.snap_source_dirs=[\"pcomponents/src/main/python\"]" -s"http.cc_port=8189" -s"backdoor.port=9189" -s"main.metadata={\"location\": \"B\", \"language\": \"python\", \"platform\": \"unix\"}" The case in the question would be a single parameter, lets apply the rules to the example: '{"app_id": "picoballoon_network", "dev_id": "ferdinand_8c"' Because it's a single parameter containing spaces it has to be surounded by quotation marks. Since the content of the parameter also contains quotation marks they must be escaped using a backslash \. So applying the parameter formatting rules gives: "'{\"app_id\": \"picoballoon_network\", \"dev_id\": \"ferdinand_8c\"}'" (Side note): In the example the parameter was surrounded by Apostrophes, this may be unnecessary and will probably have to be stripped later in your Python code (the below example uses the strip method). You can test it with this simple script: import sys import ast your_dictionary = ast.literal_eval(sys.argv[1].strip("'")) (Side note): Your example parameter is a string containing a Python dictionary, there are several ways to convert it, in the example I included the highest voted answer from this question: "Convert a String representation of a Dictionary to a dictionary?" A screenshot showing the parameter and test code in use:
7
5
65,571,890
2021-1-5
https://stackoverflow.com/questions/65571890/unzip-to-temp-in-memory-directory-using-python-mkdtemp
I've looked through the examples out there and don't seem to find one that fits. Looking to unzip a file in-memory to a temporary directory using Python mkdtemp(). Something like this feels intuitive, but I can't find the correct syntax: import zipfile import tempfile zf = zipfile.Zipfile('incoming.zip') with tempfile.mkdtemp() as tempdir: zf.extractall(tempdir) # do stuff on extracted files But this results in: AttributeError Traceback (most recent call last) <ipython-input-5-af39c866a2ba> in <module> 1 zip_file = zipfile.ZipFile('incoming.zip') 2 ----> 3 with tempfile.mkdtemp() as tempdir: 4 zip_file.extractall(tempdir) AttributeError: __enter__
I already mentioned in my comment why the code that you wrote doesn't work. .mkdtemp() returns just a path as a string, but what you really want to have is a context manager. You can easily fix that by using the correct function, .TemporaryDirectory(): This function securely creates a temporary directory using the same rules as mkdtemp(). The resulting object can be used as a context manager (see Examples). On completion of the context or destruction of the temporary directory object the newly created temporary directory and all its contents are removed from the filesystem. zf = zipfile.ZipFile('incoming.zip') with tempfile.TemporaryDirectory() as tempdir: zf.extractall(tempdir) This alone would work
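A slightly fuller sketch that also uses the extracted files while the temporary directory still exists (everything has to happen inside the with block, because the directory is deleted on exit):

import os
import tempfile
import zipfile

with tempfile.TemporaryDirectory() as tempdir:
    with zipfile.ZipFile('incoming.zip') as zf:
        zf.extractall(tempdir)
    for name in os.listdir(tempdir):
        print(os.path.join(tempdir, name))  # do stuff on the extracted files here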
6
12
65,571,812
2021-1-5
https://stackoverflow.com/questions/65571812/keep-indices-in-pandas-dataframe-with-a-certain-number-of-non-nan-entires
Let's say I have the following dataframe: df1 = pd.DataFrame(data = [1,np.nan,np.nan,1,1,np.nan,1,1,1], columns = ['X'], index = ['a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'c']) print(df1) X a 1.0 a NaN a NaN b 1.0 b 1.0 b NaN c 1.0 c 1.0 c 1.0 I want to keep only the indices which have 2 or more non-NaN entries. In this case, the 'a' entries only have one non-NaN value, so I want to drop it and have my result be: X b 1.0 b 1.0 b NaN c 1.0 c 1.0 c 1.0 What is the best way to do this? Ideally I want something that works with Dask too, although usually if it works with Pandas it also works in Dask.
Let us try filter out = df.groupby(level=0).filter(lambda x : x.isna().sum()<=1) X b 1.0 b 1.0 b NaN c 1.0 c 1.0 c 1.0 Or we do isin df[df.index.isin(df.isna().sum(level=0).loc[lambda x : x['X']<=1].index)] X b 1.0 b 1.0 b NaN c 1.0 c 1.0 c 1.0
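A sketch of one more variant that counts the non-NaN entries directly, matching the "2 or more non-NaN" wording; whether it translates one-to-one to Dask is not verified here:

import numpy as np
import pandas as pd

df1 = pd.DataFrame({'X': [1, np.nan, np.nan, 1, 1, np.nan, 1, 1, 1]},
                   index=list('aaabbbccc'))

keep = df1['X'].notna().groupby(level=0).transform('sum') >= 2
print(df1[keep])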
7
10
65,563,922
2021-1-4
https://stackoverflow.com/questions/65563922/how-to-change-subplot-title-after-creation-in-plotly
With matplotlib I could do this: import numpy as np import matplotlib.pyplot as plt fig, axs = plt.subplots(1, 2, figsize=(10, 5)) fig.patch.set_facecolor('white') axs[0].bar(x=[0,1,2], height=[1,2,3]) axs[1].plot(np.random.randint(1, 10, 50)) axs[0].set_title('BARPLOT') axs[1].set_title('WOLOLO') With plotly I know only one way to set subplot titles -- at the creation: fig = make_subplots(rows=1, cols=2, subplot_titles=('title1', 'title2')) Is it possible to set subplots title after creation? Maybe, through getting access to original Axes class of matplotlib (I read that plotly.python based on seaborn which is based on matplotlib)? Or through fig.layout? Please use this code: import numpy as np import plotly.graph_objects as go from plotly.subplots import make_subplots fig = make_subplots(rows=1, cols=2) fig.add_trace(go.Bar(y=[1, 2, 3]), row=1, col=1) fig.add_trace(go.Scatter(y=np.random.randint(1, 10, 50)), row=1, col=2)
The following code does the trick you want. import numpy as np import plotly.graph_objects as go from plotly.subplots import make_subplots fig = make_subplots(rows=1, cols=2, subplot_titles=("Plot 1", "Plot 2")) fig.add_trace(go.Bar(y=[1, 2, 3]), row=1, col=1) fig.add_trace(go.Scatter(y=np.random.randint(1, 10, 50)), row=1, col=2) fig.layout.annotations[1].update(text="Stackoverflow") fig.show() I got this idea from https://community.plotly.com/t/subplot-title-alignment/33210/2 where it is described how to access each plot's annotations.
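Continuing from the fig built above, a sketch for retitling every subplot in one go (the new titles are arbitrary):

new_titles = ["BARPLOT", "WOLOLO"]
for annotation, title in zip(fig.layout.annotations, new_titles):
    annotation.update(text=title)
fig.show()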
12
16
65,561,498
2021-1-4
https://stackoverflow.com/questions/65561498/docker-sdk-for-python-how-to-build-an-image-with-custom-dockerfile-and-custom-c
I'm trying to replicate this command with the Docker SDK for Python: docker build -f path/to/dockerfile/Dockerfile.name -t image:version path/to/context/. path/to/dockerfile and path/to/context are different paths, ie: /opt/project/dockerfile and /opt/project/src/app/. The directory structure is the following: opt β”œβ”€β”€ project β”‚ β”œβ”€β”€ dockerfile β”‚ β”‚ └── Dockerfile.name β”‚ └── src β”‚ └── app β”‚ └── target β”‚ └── app-0.0.1-SNAPSHOT.jar └── script.py The command is working correctly from the CLI, but I'm not able to make it work with the SDK. From the documentation, the images build method has the following parameters: path (str) – Path to the directory containing the Dockerfile (I'm guessing this is the context) custom_context (bool) – Optional if using fileobj fileobj – A file object to use as the Dockerfile. (Or a file-like object) When I use the method like this: client.images.build( path = path_to_context, fileobj=open(path_to_file, 'rb'), custom_context=True, tag='image:version' ) I get this error: Traceback (most recent call last): File "script.py", line 33, in <module> client.images.build( File "/Library/Python/3.8/site-packages/docker/models/images.py", line 298, in build raise BuildError(last_event or 'Unknown', result_stream) docker.errors.BuildError: {'message': 'unexpected EOF'} The content of the Dockerfile is the following: FROM openjdk:16-alpine COPY target/app-0.0.1-SNAPSHOT.jar app.jar CMD ["java", "-jar", "-Dspring.profiles.active=docker", "/app.jar"] but I'm guessing the error is not due to that for the command correctly works with the CLI, it's only breaking with the SDK. Am I doing something wrong? Thanks! Edit: Simply removing custom_context=True does not solve the problem, for the context and build paths are different. In fact it causes another error, relative to the fact that the file does not exist in the current path: Traceback (most recent call last): File "script.py", line 33, in <module> client.images.build( File "/Library/Python/3.8/site-packages/docker/models/images.py", line 287, in build raise BuildError(chunk['error'], result_stream) docker.errors.BuildError: COPY failed: file not found in build context or excluded by .dockerignore: stat target/app-0.0.1-SNAPSHOT.jar: file does not exist
I removed custom_context=True, and the problem went away. EDIT: Using your project tree: import docker client = docker.from_env() client.images.build( path = './project/src/app/target/', dockerfile = '../../../Dockerfile/Dockerfile.name', tag='image:version', )
6
2
65,557,258
2021-1-4
https://stackoverflow.com/questions/65557258/typeerror-cant-pickle-coroutine-objects-when-i-am-using-asyncio-loop-run-in-ex
I am referring to this repo to adapt mmaction2 grad-cam demo from short video offline inference to long video online inference. The script is shown below: Note: to make this script can be easily reproduce, i comment out some codes that needs many dependencies. import cv2 import numpy as np import torchvision.transforms as transforms import sys from PIL import Image #from mmaction.apis import init_recognizer #from utils.gradcam_utils import GradCAM import torch import asyncio from concurrent.futures import ProcessPoolExecutor from functools import partial # sys.path.append('./utils') async def preprocess_img(arr): image = Image.fromarray(np.uint8(arr)) mean = [0.485, 0.456, 0.406] std = [0.229, 0.224, 0.225] transform = transforms.Compose([ transforms.Resize((model_input_height, model_input_width)), transforms.ToTensor(), transforms.Normalize(mean, std, inplace=False), ]) normalized_img = transform(image) img_np = normalized_img.numpy() return img_np async def inference(frame_buffer): print("starting inference") # inputs = {} # input_tensor = torch.from_numpy(frame_buffer).type(torch.FloatTensor) # input_cuda_tensor = input_tensor.cuda() # inputs['imgs'] = input_cuda_tensor # results = gradcam(inputs) # display_buffer = np.squeeze(results[0].cpu().detach().numpy(), axis=0) # return display_buffer async def run_blocking_func(loop_, queue_, frame_buffer): with ProcessPoolExecutor() as pool: blocking_func = partial(inference, frame_buffer) frame = await loop_.run_in_executor(pool, blocking_func) print(frame) await queue_.put(frame) await asyncio.sleep(0.01) async def get_frames(capture): capture.grab() ret, frame = capture.retrieve() if not ret: print("empty frame") return for i in range(32): img = await preprocess_img(frame) expandimg = np.expand_dims(img, axis=(0, 1, 3)) print(f'expandimg.shape{expandimg.shape}') frame_buffer[:, :, :, i, :, :] = expandimg[:, :, :, 0, :, :] return frame_buffer async def show_frame(queue_: asyncio.LifoQueue): display_buffer = await queue_.get() for i in range(32): blended_image = display_buffer[i, :, :, :] cv2.imshow('Grad-CAM VIS', blended_image) if cv2.waitKey(10) & 0xFF == ord('q'): cap.release() cv2.destroyAllWindows() break async def produce(loop_, queue_, cap): while True: frame_buffer = await asyncio.create_task(get_frames(cap)) # Apply Grad-CAM display_buffer = await asyncio.create_task(run_blocking_func(loop_, queue_,frame_buffer)) await queue_.put(display_buffer) async def consume(queue_): while True: if queue_.qsize(): task1 = asyncio.create_task(show_frame(queue_)) await asyncio.wait(task1) if cv2.waitKey(1) == 27: break else: await asyncio.sleep(0.01) async def run(loop_, queue_, cap_): producer_task = asyncio.create_task(produce(loop_, queue_, cap_)) consumer_task = asyncio.create_task(consume(queue_)) await asyncio.gather(producer_task, consumer_task) if __name__ == '__main__': # config = '/home/user/Repo/mmaction2/configs/recognition/i3d/i3d_r50_video_inference_32x2x1_100e_kinetics400_rgb.py' # checkpoint = '/home/user/Repo/mmaction2/checkpoints/i3d_r50_video_32x2x1_100e_kinetics400_rgb_20200826-e31c6f52.pth' # device = torch.device('cuda:0') # model = init_recognizer(config, checkpoint, device=device, use_frames=False) video_path = 'replace_with_your_video.mp4' model_input_height = 256 model_input_width = 340 # target_layer_name = 'backbone/layer4/1/relu' # gradcam = GradCAM(model, target_layer_name) cap = cv2.VideoCapture(video_path) width = cap.get(cv2.CAP_PROP_FRAME_WIDTH) # float height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT) # float 
frame_buffer = np.zeros((1, 1, 3, 32, model_input_height, model_input_width)) display_buffer = np.zeros((32, model_input_height, model_input_width, 3)) # (32, 256, 340, 3) loop = asyncio.get_event_loop() queue = asyncio.LifoQueue(maxsize=2) try: loop.run_until_complete(run(loop_=loop, queue_=queue, cap_=cap)) finally: print("shutdown service") loop.close() But when i run it, it reports following error : concurrent.futures.process._RemoteTraceback: """ Traceback (most recent call last): File "/home/user/miniconda3/lib/python3.7/concurrent/futures/process.py", line 205, in _sendback_result exception=exception)) File "/home/user/miniconda3/lib/python3.7/multiprocessing/queues.py", line 358, in put obj = _ForkingPickler.dumps(obj) File "/home/user/miniconda3/lib/python3.7/multiprocessing/reduction.py", line 51, in dumps cls(buf, protocol).dump(obj) TypeError: can't pickle coroutine objects """ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/user/Repo/Python-AI-Action-Utils/temp2.py", line 120, in <module> loop.run_until_complete(run(loop_=loop, queue_=queue, cap_=cap)) File "/home/user/miniconda3/lib/python3.7/asyncio/base_events.py", line 587, in run_until_complete return future.result() File "/home/user/Repo/Python-AI-Action-Utils/temp2.py", line 94, in run await asyncio.gather(producer_task, consumer_task) File "/home/user/Repo/Python-AI-Action-Utils/temp2.py", line 76, in produce display_buffer = await asyncio.create_task(run_blocking_func(loop_, queue_,frame_buffer)) File "/home/user/Repo/Python-AI-Action-Utils/temp2.py", line 42, in run_blocking_func frame = await loop_.run_in_executor(pool, blocking_func) TypeError: can't pickle coroutine objects Task was destroyed but it is pending! task: <Task pending coro=<consume() running at /home/user/Repo/Python-AI-Action-Utils/temp2.py:88> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f7cf1418cd0>()]> cb=[gather.<locals>._done_callback() at /home/user/miniconda3/lib/python3.7/asyncio/tasks.py:691]> Process finished with exit code 1
If you use run_in_executor, the target function should not be async. You need to remove the async keyword before def inference().
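A minimal, self-contained sketch of that fix; the toy inference body just sums the buffer and stands in for the real GPU work:

import asyncio
from concurrent.futures import ProcessPoolExecutor
from functools import partial

def inference(frame_buffer):  # plain def, not async def, so the result pickles
    return sum(frame_buffer)  # placeholder for the blocking, CPU/GPU-heavy work

async def run_blocking_func(loop, frame_buffer):
    with ProcessPoolExecutor() as pool:
        return await loop.run_in_executor(pool, partial(inference, frame_buffer))

async def main():
    loop = asyncio.get_running_loop()
    print(await run_blocking_func(loop, [1, 2, 3]))  # 6

if __name__ == '__main__':
    asyncio.run(main())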
7
10
65,559,556
2021-1-4
https://stackoverflow.com/questions/65559556/sum-negative-row-values-with-previous-rows-pandas
I'm having trouble finding a good way to find all negative entries in a column and move them up the column, summing them up with the existing entry (i.e. subtracting the negative entry from the present entry) until all values are positive. It is important that there are no negative values for the final dataframe & that all previously negative entries = 0. Also, the table is repeating, which means that I need to aggregate the results both based on the ID as well as on the entries (only do summations of entries of the same ID). Based on an already presented table here: Present: ID Date Entries 1 2013 100 1 2014 0 1 2015 60 1 2016 -30 1 2017 0 1 2018 50 1 2019 0 1 2020 -20 2 2013 100 2 2014 0 2 2015 60 2 2016 -30 2 2017 0 2 2018 50 2 2019 0 2 2020 -20 Desired: ID Date Entries 1 2013 100 1 2014 0 1 2015 30 1 2016 0 1 2017 0 1 2018 30 1 2019 0 1 2020 0 2 2013 100 2 2014 0 2 2015 30 2 2016 0 2 2017 0 2 2018 30 2 2019 0 2 2020 0
You can try reverse cumsum after creating a group, then mask: s = df['Entries'].gt(0).cumsum() u= df['Entries'][::-1].groupby(s).cumsum().mask(df['Entries'].le(0),0) out = df.assign(New_Entries=u) # you can assign to the original column too. print(out) ID Date Entries New_Entries 0 1 2013 100 100 1 1 2014 0 0 2 1 2015 60 30 3 1 2016 -30 0 4 1 2017 0 0 5 1 2018 50 30 6 1 2019 0 0 7 1 2020 -20 0 8 2 2013 100 100 9 2 2014 0 0 10 2 2015 60 30 11 2 2016 -30 0 12 2 2017 0 0 13 2 2018 50 30 14 2 2019 0 0 15 2 2020 -20 0
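A sketch of the intermediate grouping key, assuming df holds the table from the question: every positive entry starts a new group, so the negatives that follow it are summed backwards into the preceding positive value:

s = df['Entries'].gt(0).cumsum()
print(s.tolist())
# [1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5, 5, 6, 6, 6]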
5
8
65,557,061
2021-1-4
https://stackoverflow.com/questions/65557061/why-does-popping-from-the-original-list-make-reversedoriginal-list-empty
I have the following code: s = [1,2,3] t = reversed(s) for i in t: print(i) # output: 3,2,1 If I pop one element from s (original), then the t (reversed) is emptied: s = [1,2,3] t = reversed(s) s.pop() for i in t: print(i) # expected output: 2, 1 # actual output (nothing): Why does this happen?
Taking a look at the cpython code on GitHub, we can get some intuition as to why it no longer works. The iterator that is returned essentially requires knowing the position of the last index and the length of the array. If the size of the array is changed, the iterator will no longer work. Test 1: Increasing the array length This will not produce the correct results either, but the iterator does run: s = [1,2,3] t = reversed(s) s.append(4) for i in t: print(i) # output: [3, 2, 1] Test 2: Decreasing, then increasing the length s = [1,2,3] t = reversed(s) s.pop() s.append(4) for i in t: print(i) # output: [4, 2, 1] It still works! So there's an internal check to see whether or not the last index is still valid, and if it is, it's a simple for loop down to index 0. If it doesn't work, the iterator returns empty.
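One way to watch that internal size check from Python is the iterator's length hint (a small sketch):

from operator import length_hint

s = [1, 2, 3]
t = reversed(s)
print(length_hint(t))  # 3 -- the stored last index still fits the list
s.pop()
print(length_hint(t))  # 0 -- the list shrank below it, so the iterator reports empty
print(list(t))         # []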
32
30
65,525,189
2020-12-31
https://stackoverflow.com/questions/65525189/python-google-cloud-function-missing-log-entries
I'm experimenting with GCP's cloud functions and python for the first time and wanted to get python's logging integrated sufficiently so that they fit well with GCP's logging infrastructure (specifically so that severity levels are recognized, and ideally execution_ids and trace ids also are included. I've been following https://cloud.google.com/logging/docs/setup/python to get this set up. My code: import base64 import logging import google.cloud.logging client = google.cloud.logging.Client() client.get_default_handler() client.setup_logging() logging.getLogger().setLevel(logging.DEBUG) def sample_pubsub(event, context): pubsub_message = base64.b64decode(event['data']).decode('utf-8') print('BEFORE LOG') logging.info(f'Event received: payload data == [{pubsub_message}]') logging.debug('This is debug') logging.warn('This should be a warning') logging.error('This should be an error') print('AFTER LOG') When I run this locally using the function-framework this works perfectly (as far as I can tell) outputting like so to the console: {"message": " * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)", "timestamp": {"seconds": 1609443581, "nanos": 119384527}, "thread": 140519851886400, "severity": "INFO"} {"message": " * Restarting with inotify reloader", "timestamp": {"seconds": 1609443581, "nanos": 149804115}, "thread": 140519851886400, "severity": "INFO"} {"message": " * Debugger is active!", "timestamp": {"seconds": 1609443584, "nanos": 529310703}, "thread": 140233360983872, "severity": "WARNING"} {"message": " * Debugger PIN: 327-539-151", "timestamp": {"seconds": 1609443584, "nanos": 533129930}, "thread": 140233360983872, "severity": "INFO"} BEFORE LOG {"message": "Event received: payload data == []", "timestamp": {"seconds": 1609443585, "nanos": 77324390}, "thread": 140232720623360, "severity": "INFO"} {"message": "This is debug", "timestamp": {"seconds": 1609443585, "nanos": 77804565}, "thread": 140232720623360, "severity": "DEBUG"} {"message": "This should be a warning", "timestamp": {"seconds": 1609443585, "nanos": 78260660}, "thread": 140232720623360, "severity": "WARNING"} {"message": "This should be an error", "timestamp": {"seconds": 1609443585, "nanos": 78758001}, "thread": 140232720623360, "severity": "ERROR"} AFTER LOG {"message": "127.0.0.1 - - [31/Dec/2020 14:39:45] \"\u001b[37mPOST / HTTP/1.1\u001b[0m\" 200 -", "timestamp": {"seconds": 1609443585, "nanos": 82943439}, "thread": 140232720623360, "severity": "INFO"} So... then I deploy it to the cloud and trigger it there thru its associated topic, and I see: So, stdout seems to work fine but the logger output is missing. Final comment: I did create the account key and have put the json file into the function deployment root folder, and created the environment variable GOOGLE_APPLICATION_CREDENTIALS=key.json. On the chance that the problem is that the file isn't being picked up, I also tested this with the value referring to a non-existent file. The deployment fails if I do this so I'm confident the key file is being picked up. Which brings me to my question: what am I doing wrong? EDIT - Adding env details I am deploying the function using the GSDK as follows: gcloud functions deploy sample_pubsub --source=${SOURCE_DIR} --runtime=python38 --trigger-topic=${PUBSUB_TOPIC} --set-env-vars GOOGLE_APPLICATION_CREDENTIALS=key.json,PYTHONUNBUFFERED=1 There is a requirements.txt file in the same folder as the function py file specifying ONLY "google-cloud-logging" without any version constraints. 
** For local debugging I have a venv created with python 3.8.5 and I've pip-installed only google-cloud-logging and functions-framework - again without any version constraints. Having said that, if I do a pip freeze within my activated virtual environment: appdirs==1.4.3 CacheControl==0.12.6 cachetools==4.2.0 certifi==2019.11.28 chardet==3.0.4 click==7.1.2 cloudevents==1.2.0 colorama==0.4.3 contextlib2==0.6.0 deprecation==2.1.0 distlib==0.3.0 distro==1.4.0 Flask==1.1.2 functions-framework==2.1.0 google-api-core==1.24.1 google-auth==1.24.0 google-cloud-core==1.5.0 google-cloud-logging==2.0.2 googleapis-common-protos==1.52.0 grpcio==1.34.0 gunicorn==20.0.4 html5lib==1.0.1 idna==2.8 ipaddr==2.2.0 itsdangerous==1.1.0 Jinja2==2.11.2 lockfile==0.12.2 MarkupSafe==1.1.1 msgpack==0.6.2 packaging==20.3 pep517==0.8.2 progress==1.5 proto-plus==1.13.0 protobuf==3.14.0 pyasn1==0.4.8 pyasn1-modules==0.2.8 pyparsing==2.4.6 pytoml==0.1.21 pytz==2020.5 requests==2.22.0 retrying==1.3.3 rsa==4.6 six==1.14.0 urllib3==1.25.8 watchdog==1.0.2 webencodings==0.5.1 Werkzeug==1.0.1
Looks like it's a known issue with Cloud Functions running Python 3.8. Here's a similar case currently open on issue tracker. I've now attached this thread to the issue tracker but feel free to comment in there as well. As a current workaround, I suggest that you use Python 3.7 until the issue is resolved.
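If switching runtimes is not an option, a workaround sometimes used is to skip the client library and print JSON lines to stdout, which Cloud Logging can parse as structured entries with the given severity (a hedged sketch, not the official client setup):

import json

def log(severity, message, **fields):
    print(json.dumps({"severity": severity, "message": message, **fields}), flush=True)

def sample_pubsub(event, context):
    log("DEBUG", "This is debug")
    log("WARNING", "This should be a warning")
    log("ERROR", "This should be an error")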
6
4
65,551,469
2021-1-3
https://stackoverflow.com/questions/65551469/operator-index-with-custom-class-instance
I have a simple class below: class MyClass(int): def __index__(self): return 1 According to operator.index documentation: operator.index(a) Return a converted to an integer. Equivalent to a.__index__() But when I use operator.index with MyClass instance, I got 100 instead of 1 (I am getting 1 if I use a.__index__()). Why is that? >>> a = MyClass(100) >>> >>> import operator >>> print(operator.index(a)) 100 >>> print(a.__index__()) 1
This is because your type is an int subclass. __index__ will not be used because the instance is already an integer. That much is by design, and unlikely to be considered a bug in CPython. PyPy behaves the same. In _operator.c: static PyObject * _operator_index(PyObject *module, PyObject *a) /*[clinic end generated code: output=d972b0764ac305fc input=6f54d50ea64a579c]*/ { return PyNumber_Index(a); } Note that operator.py Python code is not used generally, this code is only a fallback in the case that compiled _operator module is not available. That explains why the result a.__index__() differs. In abstract.c, cropped after the relevant PyLong_Check part: /* Return an exact Python int from the object item. Raise TypeError if the result is not an int or if the object cannot be interpreted as an index. */ PyObject * PyNumber_Index(PyObject *item) { PyObject *result = _PyNumber_Index(item); if (result != NULL && !PyLong_CheckExact(result)) { Py_SETREF(result, _PyLong_Copy((PyLongObject *)result)); } return result; } ... /* Return a Python int from the object item. Can return an instance of int subclass. Raise TypeError if the result is not an int or if the object cannot be interpreted as an index. */ PyObject * _PyNumber_Index(PyObject *item) { PyObject *result = NULL; if (item == NULL) { return null_error(); } if (PyLong_Check(item)) { Py_INCREF(item); return item; /* <---- short-circuited here */ } ... } The documentation for operator.index is inaccurate, so this may be considered a minor documentation issue: >>> import operator >>> operator.index.__doc__ 'Same as a.__index__()' So, why isn't __index__ considered for integers? The probable answer is found in PEP 357, under the discussion section titled Speed: Implementation should not slow down Python because integers and long integers used as indexes will complete in the same number of instructions. The only change will be that what used to generate an error will now be acceptable. We do not want to slow down the most common case for slicing with integers, having to check for an nb_index slot every time.
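For contrast, a short sketch showing that the slot is consulted as soon as the object is not already an int:

import operator

class MyClass(int):
    def __index__(self):
        return 1

class NotAnInt:
    def __index__(self):
        return 1

print(operator.index(MyClass(100)))  # 100 -- short-circuited by the PyLong_Check branch
print(operator.index(NotAnInt()))    # 1   -- __index__ is actually called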
7
5
65,526,149
2020-12-31
https://stackoverflow.com/questions/65526149/pytest-customize-short-test-summary-info-remove-filepath
I'm trying to get more useful output from pytest -tb=no short output. I have integration tests stored in JSON files, so the output all looks extremely similar. tests/test_dit_cli.py .......F............................. [ 29%] ...F...F.FF........F............................F...FFFFFFF [ 75%] FFF.F..................F.....FF [100%] ===================== short test summary info ===================== FAILED tests/test_dit_cli.py::test_dits[dit_json7] - assert "Lin... FAILED tests/test_dit_cli.py::test_dits[dit_json40] - assert "Li... FAILED tests/test_dit_cli.py::test_dits[dit_json44] - assert "Li... FAILED tests/test_dit_cli.py::test_dits[dit_json46] - assert "Li... FAILED tests/test_dit_cli.py::test_dits[dit_json47] - assert "Li... FAILED tests/test_dit_cli.py::test_dits[dit_json56] - assert "Li... FAILED tests/test_dit_cli.py::test_dits[dit_json85] - assert "Li... FAILED tests/test_dit_cli.py::test_dits[dit_json89] - AssertionE... FAILED tests/test_dit_cli.py::test_dits[dit_json90] - AssertionE... FAILED tests/test_dit_cli.py::test_dits[dit_json91] - AssertionE... FAILED tests/test_dit_cli.py::test_dits[dit_json92] - AssertionE... FAILED tests/test_dit_cli.py::test_dits[dit_json93] - AssertionE... FAILED tests/test_dit_cli.py::test_dits[dit_json94] - AssertionE... FAILED tests/test_dit_cli.py::test_dits[dit_json95] - AssertionE... FAILED tests/test_dit_cli.py::test_dits[dit_json96] - assert 'Li... FAILED tests/test_dit_cli.py::test_dits[dit_json97] - assert 'Li... FAILED tests/test_dit_cli.py::test_dits[dit_json98] - assert "Li... FAILED tests/test_dit_cli.py::test_dits[dit_json100] - Assertion... FAILED tests/test_dit_cli.py::test_dits[dit_json119] - assert "L... FAILED tests/test_dit_cli.py::test_dits[dit_json125] - Assertion... FAILED tests/test_dit_cli.py::test_dits[dit_json126] - Assertion... ================= 21 failed, 106 passed in 2.94s ================== Seeing this same tests/test_dit_cli.py::test_dits[dit_json126] 20 times doesn't help me get a gauge on what's going wrong in the project, so I usually just fix errors one test at a time. Each test entry has extra information about the type of test being run and the expected outcome, but I don't know how to get that information into pytest. I would hope for something like this: ===================== short test summary info ===================== FAILED [func, vanilla Python] - assert "Li... FAILED [Thing, value assignment] - assert "Li... FAILED [TypeMismatch, String var assigned to List] - assert "Lin... I actually got close to this, by providing a value for ids in the parametrize call. def pytest_generate_tests(metafunc: Metafunc): for fixture in metafunc.fixturenames: if fixture == "dit_json": test_dicts = list(load_from_json()) titles = [test_dict["title"] for test_dict in test_dicts] metafunc.parametrize(argnames=fixture, argvalues=test_dicts, ids=titles) FAILED tests/test_dit_cli.py::test_dits[TypeMismatch, List var assigned to String] FAILED tests/test_dit_cli.py::test_dits[import, anon import referenced in list assignment] So, I'm really close, I just want to remove the filepath, so that the line is shorter. Is there a way to change the filepath of where it thinks the tests are located? Or a hook that would let me arbitrarily modify the summary output? I tried modifying pytest_collection_modifyitems and changing item.fspath, but it didn't change anything in the output. I've seen ways to modify lots of other things about the output, but nothing regarding specifically that filepath.
If you just want to shorten the nodeids in the short summary info, you can overwrite the nodeid attribute of the report object. A simple example: def pytest_runtest_logreport(report): report.nodeid = "..." + report.nodeid[-10:] placed in your conftest.py, will truncate each nodeid to its last ten chars: =========================== short test summary info =========================== FAILED ...st_spam[0] - assert False FAILED ...st_spam[1] - assert False FAILED ...st_spam[2] - assert False FAILED ...st_spam[3] - assert False FAILED ...st_spam[4] - assert False FAILED ...:test_eggs - assert False If you want a fully customized short test summary lines, you need to implement a custom TerminalReporter and replace the vanilla one early enough in the test run. Example stub: import pytest from _pytest.terminal import TerminalReporter class MyReporter(TerminalReporter): def short_test_summary(self): # your own impl goes here, for example: self.write_sep("=", "my own short summary info") failed = self.stats.get("failed", []) for rep in failed: self.write_line(f"failed test {rep.nodeid}") @pytest.mark.trylast def pytest_configure(config): vanilla_reporter = config.pluginmanager.getplugin("terminalreporter") my_reporter = MyReporter(config) config.pluginmanager.unregister(vanilla_reporter) config.pluginmanager.register(my_reporter, "terminalreporter") This will produce a summary section like ========================== short test summary info =========================== failed test tests/test_spam.py::test_spam[0] failed test tests/test_spam.py::test_spam[1] failed test tests/test_spam.py::test_spam[2] failed test tests/test_spam.py::test_spam[3] failed test tests/test_spam.py::test_spam[4] failed test tests/test_spam.py::test_eggs Note that the above impl of MyReporter.short_test_summary() is not complete and only put for demonstration purposes! For a reference, check out the pytest impl.
6
8
65,548,855
2021-1-3
https://stackoverflow.com/questions/65548855/when-using-f-read-the-iteration-loops-per-letter
I am iterating through my text file, but when I use the read() function, the loop iterates through the letters instead of the sentences, with the following code: for question in questions: # for every question we need to iterate through all the lines print(f"Question: {question}") f = open("glad.txt", "r") text = f.read() # text = text.replace("\n", ". ") # text = text.replace(". .", "") # text = text.replace(".. ", ". ") # text = text.replace(".", ".\n") #text = text.strip(".. ") # test = text.replace('[bewerken | brontekst bewerken]', "") # output = re.sub(r'\[\d+\]', '', test) for line in text: text = str(line) #the answer has to be a string #encoding with tokenizing of the sentences print(text) The output is: But when I remove the f.read() I receive the expected output: I need to use the read() function, otherwise I cannot use the replace() function. Does anyone know how to solve this issue?
Using text = f.read(), you are getting the whole text file into text. When you iterate over a string in Python, it gives you one character per iteration. Since you want to continue using .read(), use splitlines(): text = f.read().splitlines() Now, text is a list which you can freely iterate the same way you are already doing: for line in text:
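A sketch of how this fits the original goal; the file name and the cleanup string are taken from the question:

with open("glad.txt", "r") as f:
    text = f.read()

text = text.replace('[bewerken | brontekst bewerken]', '')  # replace() still works on the full string
for line in text.splitlines():
    print(line)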
5
3
65,548,460
2021-1-3
https://stackoverflow.com/questions/65548460/python-how-to-create-an-abc-that-inherits-from-others-abc
I am trying to create a simple abstract base class Abstract that along with its own methods provides the methods of two other abstract base classes: Publisher and Subscriber. When I try to initialize the concrete class Concrete, built on Abstract, I get this error: Cannot create a consistent method resolution order (MRO) for bases ABC, Publisher, Subscriber. What is the right way to do it? from abc import ABC, abstractmethod class Publisher(ABC): subscribers = set() def register(self, obj): self.subscribers.add(obj) def unregister(self, obj): self.subscribers.remove(obj) def dispatch(self, event): print("dispatching", event) class Subscriber(ABC): @abstractmethod def handle_event(self, event): raise NotImplementedError class Abstract(ABC, Publisher, Subscriber): @abstractmethod def do_something(self, event): raise NotImplementedError class Concrete(Abstract): def handle_event(self, event): print("handle_event") def do_something(self, event): print("do_something") c = Concrete()
Abstract classes don't have to have abc.ABC in their list of bases. They have to have abc.ABCMeta (or a descendant) as their metaclass, and they have to have at least one abstract method (or something else that counts, like an abstract property), or they'll be considered concrete. (Publisher has no abstract methods, so it's actually concrete.) Inheriting from ABC is just a way to get ABCMeta as your class's metaclass, for people more comfortable with inheritance than metaclasses, but it's not the only way. You can also inherit from another class with ABCMeta as its metaclass, or specify metaclass=ABCMeta explicitly. In your case, inheriting from Publisher and Subscriber will already set Abstract's metaclass to ABCMeta, so inheriting from ABC is redundant. Remove ABC from Abstract's base class list, and everything should work. Alternatively, if you really want ABC in there for some reason, you can move it to the end of the base class list, which will resolve the MRO conflict - putting it first says you want ABC methods to override methods from the other classes, which conflicts with the fact that the other classes are subclasses of ABC.
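A trimmed sketch of the fixed hierarchy (bodies shortened to the essentials):

from abc import ABC, abstractmethod

class Publisher(ABC):  # no abstract methods, so effectively concrete
    subscribers = set()

class Subscriber(ABC):
    @abstractmethod
    def handle_event(self, event): ...

class Abstract(Publisher, Subscriber):  # ABCMeta is inherited, no explicit ABC needed
    @abstractmethod
    def do_something(self, event): ...

class Concrete(Abstract):
    def handle_event(self, event): print("handle_event")
    def do_something(self, event): print("do_something")

Concrete()  # instantiates without the MRO error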
7
7
65,548,403
2021-1-3
https://stackoverflow.com/questions/65548403/filter-elements-from-list-based-on-true-false-from-another-list
Is there an idiomatic way to mask elements of an array in vanilla Python 3? For example: a = [True, False, True, False] b = [2, 3, 5, 7] b[a] I was hoping b[a] would return [2, 5], but I get an error: TypeError: list indices must be integers or slices, not list In R, this works as I expected (using c() instead of [] to create the lists). I know NumPy has MaskedArray that can do this, I'm looking for an idiomatic way to do this in plain vanilla Python. Of course, I could use a loop and iterate through the mask list and the element list, but I'm hoping there's a more efficient way to mask elements using a higher level abstraction.
You can use itertools.compress: >>> from itertools import compress >>> a = [True, False, True, False] >>> b = [2, 3, 5, 7] >>> list(compress(b, a)) [2, 5] Refer to the itertools.compress() documentation for more details
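For comparison, an equivalent one-liner without imports:

a = [True, False, True, False]
b = [2, 3, 5, 7]
print([x for x, keep in zip(b, a) if keep])  # [2, 5]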
5
6
65,547,980
2021-1-3
https://stackoverflow.com/questions/65547980/pandas-how-to-set-hour-of-a-datetime-from-another-column
I have a dataframe including a datetime column for date and a column for hour. like this: min hour date 0 0 2020-12-01 1 5 2020-12-02 2 6 2020-12-01 I need a datetime column including both date and hour. like this : min hour date datetime 0 0 2020-12-01 2020-12-01 00:00:00 0 5 2020-12-02 2020-12-02 05:00:00 0 6 2020-12-01 2020-12-01 06:00:00 How can I do it?
You could also try using apply and np.timedelta64: df['datetime'] = df['date'] + df['hour'].apply(lambda x: np.timedelta64(x, 'h')) print(df) Output: min hour date datetime 0 0 0 2020-12-01 2020-12-01 00:00:00 1 1 5 2020-12-02 2020-12-02 05:00:00 2 2 6 2020-12-01 2020-12-01 06:00:00
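A vectorized alternative sketch using pd.to_timedelta, assuming date is already a datetime64 column:

import pandas as pd

df = pd.DataFrame({'min': [0, 1, 2],
                   'hour': [0, 5, 6],
                   'date': pd.to_datetime(['2020-12-01', '2020-12-02', '2020-12-01'])})

df['datetime'] = df['date'] + pd.to_timedelta(df['hour'], unit='h')
print(df)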
6
5
65,547,821
2021-1-3
https://stackoverflow.com/questions/65547821/how-to-add-attribute-to-class-in-python
I have: class A: a=1 b=2 I want to do something like setattr(A,'c') so that all objects I create from class A have a c attribute. I do not want to use inheritance
There're two ways of setting an attribute to your class; First, by using setattr(class, variable, value) Code Syntax setattr(A,'c', 'c') print(dir(A)) OUTPUT You can see the structure of the class A within attributes ['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'a', 'b', 'c'] [Program finished] Second, you can do it simply by assigning the variable Code Syntax A.d = 'd' print(dir(A)) OUTPUT You can see the structure of the class A within attributes ['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', 'a', 'b', 'c', 'd'] [Program finished]
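A short sketch of the effect on instances; note that setattr takes three arguments, so the two-argument setattr(A, 'c') from the question would raise a TypeError:

class A:
    a = 1
    b = 2

setattr(A, 'c', 3)  # equivalent to A.c = 3
obj = A()
print(obj.a, obj.b, obj.c)  # 1 2 3 -- every instance sees the class attribute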
8
8
65,523,909
2020-12-31
https://stackoverflow.com/questions/65523909/what-features-of-xgboost-are-affected-by-seed-random-state
The Python API doesn't give much more information other than that the seed= parameter is passed to numpy.random.seed: seed (int) – Seed used to generate the folds (passed to numpy.random.seed). But what features of xgboost use numpy.random.seed? Running xgboost with all default settings still produces the same performance even when altering the seed. I have already been able to verify colsample_bytree does so; different seeds yield different performance. I have been told it is also used by subsample and the other colsample_* features, which seems plausible since any form of sampling requires randomness. What other features of xgboost rely on numpy.random.seed?
Boosted trees are grown sequentially, with tree growth within one iteration being distributed among threads. To avoid overfitting, randomness is induced through the following params: colsample_bytree colsample_bylevel colsample_bynode subsample (note the *sample* pattern) shuffle in CV fold creation for cross validation In addition, you may encounter non-determinism, not controlled by random state, in the following places: [GPU] histogram building is not deterministic due to the nonassociative aspect of floating point summation. Using gblinear booster with shotgun updater is nondeterministic as it uses Hogwild algorithm. When using GPU ranking objective, the result is not deterministic due to the non-associative aspect of floating point summation. Comment Re: how you know this? To know this, it helps: To be aware of how trees are grown: Demystify Modern Gradient Boosting Trees (references may also be helpful) Scanning the documentation full text for the terms of interest: random, sample, deterministic, determinism etc. Lastly (firstly?), knowing why you need sampling and similar cases from counterparts like bagged trees (RANDOM FORESTS by Leo Breiman) and neural networks (Deep learning with Python by François Chollet, chapter on overfitting) may also be helpful.
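A hedged sketch of where the seed starts to matter: with the sampling ratios left at their default of 1.0 the fit ignores it, but turning any of them down brings the RNG into play (toy data, illustrative parameter values):

import numpy as np
from xgboost import XGBClassifier

X = np.random.rand(200, 5)
y = np.random.randint(0, 2, 200)

model = XGBClassifier(subsample=0.8, colsample_bytree=0.8, random_state=42)
model.fit(X, y)  # changing random_state now changes which rows/columns are sampled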
11
7
65,544,645
2021-1-2
https://stackoverflow.com/questions/65544645/print-out-n-elements-of-a-list-each-time-a-function-is-run
I have a list of strings and I need to create a function that prints out n elements of the list each time it is run. For instance: book1 = ['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o'] Expected output first time I run the function if n = 5: a b c d e second time: f g h i j I tried this: def print_book(book): printed = book while printed != []: for i in range(0, len(book), 5): new_list = (book[i:i+5]) for el in new_list: print(el) break del(printed[i:i+10]) And I get either the entire list printed out, or I end up printing the first n elements each time I run the function. If this question has already been asked, please point it out to me, I would really appreciate it. Thanks!
I guess you can try the following user-defined function, which is applied to the iterator book def print_book(book): cnt = 0 while cnt < 5: try: print(next(book)) except StopIteration: print("You have reached the end!") break cnt += 1 such that >>> bk1 = iter(book1) >>> print_book(bk1) a b c d e >>> print_book(bk1) f g h i j >>> print_book(bk1) k l m n o >>> print_book(bk1) You have reached the end!
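An equivalent sketch with itertools.islice, which drops the manual counter and handles the end of the iterator for you:

from itertools import islice

def print_book(book, n=5):
    chunk = list(islice(book, n))
    if not chunk:
        print("You have reached the end!")
    for el in chunk:
        print(el)

bk1 = iter(['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o'])
print_book(bk1)  # a b c d e
print_book(bk1)  # f g h i j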
6
4