Dataset columns (type and min/max shown by the viewer):
question_id: int64 (59.5M to 79.4M)
creation_date: string (lengths 8 to 10)
link: string (lengths 60 to 163)
question: string (lengths 53 to 28.9k)
accepted_answer: string (lengths 26 to 29.3k)
question_vote: int64 (1 to 410)
answer_vote: int64 (-9 to 482)
70,083,371
2021-11-23
https://stackoverflow.com/questions/70083371/can-i-use-experimental-types-from-typing-extensions
To be more specific: to solve questions like "How do I type hint a method with the type of the enclosing class?", PEP 673 introduces typing.Self. The PEP is a Draft, but it is currently available as an experimental type in typing_extensions 4.0.0. I tried using this in Python 3.8:

@dataclasses.dataclass
class MenuItem:
    url: str
    title: str
    description: str = ""
    items: typing.List[typing_extensions.Self] = dataclasses.field(default_factory=list)

But it raises TypeError: Plain typing_extensions.Self is not valid as type argument. I could just use the literal string "MenuItem" instead, but I was wondering why this doesn't work.
Yes you can, but be aware of the uses of the package. The typing_extensions module serves two related purposes:

1. Enable use of new type system features on older Python versions. For example, typing.TypeGuard is new in Python 3.10, but typing_extensions allows users on previous Python versions to use it too.
2. Enable experimentation with new type system PEPs before they are accepted and added to the typing module.

This specific case was a bug in typing_extensions; a fix is planned for 4.0.1.
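Until that fixed typing_extensions release lands, a minimal sketch of the string-literal workaround the asker already mentions (a standard forward reference, which works on Python 3.8) would be:

import dataclasses
import typing

@dataclasses.dataclass
class MenuItem:
    url: str
    title: str
    description: str = ""
    # Forward reference as a string instead of typing_extensions.Self:
    items: typing.List["MenuItem"] = dataclasses.field(default_factory=list)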
5
3
70,088,232
2021-11-23
https://stackoverflow.com/questions/70088232/calculate-centroid-of-entire-geodataframe-of-points
I would like to import some waypoints/markers from a geojson file. Then determine the centroid of all of the points. My code calculates the centroid of each point not the centroid of all points in the series. How do I calculate the centroid of all points in the series? import geopandas filepath = r'Shiloh.json' gdf = geopandas.read_file(filepath) xyz = gdf['geometry'].to_crs('epsg:3587') print(type(xyz)) print(xyz) # xyz is a geometry containing POINT Z c = xyz.centroid # instead of calculating the centroid of the collection of points # centroid has calculated the centroid of each point. # i.e. basically the same X and Y data as the POINT Z. The output from print(type(xyz)) and print(xyz) <class 'geopandas.geoseries.GeoSeries'> 0 POINT Z (2756810.617 248051.052 0.000) 1 POINT Z (2757659.756 247778.482 0.000) 2 POINT Z (2756907.786 248422.534 0.000) 3 POINT Z (2756265.710 248808.235 0.000) 4 POINT Z (2757719.694 248230.174 0.000) 5 POINT Z (2756260.291 249014.991 0.000) 6 POINT Z (2756274.410 249064.239 0.000) 7 POINT Z (2757586.742 248437.232 0.000) 8 POINT Z (2756404.511 249247.296 0.000) Name: geometry, dtype: geometry the variable 'c' reports as (centroid of each point, not the centroid of the 9 POINT Z elements) : 0 POINT (2756810.617 248051.052) 1 POINT (2757659.756 247778.482) 2 POINT (2756907.786 248422.534) 3 POINT (2756265.710 248808.235) 4 POINT (2757719.694 248230.174) 5 POINT (2756260.291 249014.991) 6 POINT (2756274.410 249064.239) 7 POINT (2757586.742 248437.232) 8 POINT (2756404.511 249247.296) dtype: geometry
first dissolve the GeoDataFrame to get a single shapely.geometry.MultiPoint object, then find the centroid: In [8]: xyz.dissolve().centroid Out[8]: 0 POINT (2756876.613 248561.582) dtype: geometry From the geopandas docs: dissolve() can be thought of as doing three things: it dissolves all the geometries within a given group together into a single geometric feature (using the unary_union method), and it aggregates all the rows of data in a group using groupby.aggregate, and it combines those two results. Note that if you have rows with duplicate geometries, a centroid calculated with this method will not appropriately weight the duplicates, as dissolve will first de-duplicate the records before calculating the centroid: In [9]: gdf = gpd.GeoDataFrame({}, geometry=[ ...: shapely.geometry.Point(0, 0), ...: shapely.geometry.Point(1, 1), ...: shapely.geometry.Point(1, 1), ...: ]) In [10]: gdf.dissolve().centroid Out[10]: 0 POINT (0.50000 0.50000) dtype: geometry To accurately calculate the centroid of a collection of points including duplicates, create a shapely.geometry.MultiPoint collection directly: In [11]: mp = shapely.geometry.MultiPoint(gdf.geometry) In [12]: mp.centroid.xy Out[12]: (array('d', [0.6666666666666666]), array('d', [0.6666666666666666]))
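As a side note, a one-liner sketch using the GeoSeries union directly should give the same centroid as dissolve() here, since dissolve() itself relies on unary_union (so it shares the same de-duplication caveat); xyz is the projected GeoSeries from the question:

# Union all points into a single shapely geometry, then take its centroid.
# Note: like dissolve(), unary_union drops duplicate points first.
centroid = xyz.unary_union.centroid
print(centroid)  # a shapely Point, not a GeoSeries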
9
16
70,039,190
2021-11-19
https://stackoverflow.com/questions/70039190/how-to-read-the-values-of-iconlayouts-reg-binary-registry-file
I want to make a program that gets the positions of the icons on the screen. And with some research I found out that the values I needed were in a registry binary file called IconLayouts (Located in HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\Shell\Bags\1\Desktop) I used python to get the positions using the winreg module. And succeeded on getting the values. from winreg import * aReg = ConnectRegistry(None, HKEY_CURRENT_USER) aKey = OpenKey(aReg, r"Software\Microsoft\Windows\Shell\Bags\1\Desktop", REG_BINARY) name, value, type_ = EnumValue(aKey, 9) value = value.replace(b'\x00', b'') This is the code I have. But the problem is I don't know what to do with these values. The program returns something like: b'\x03\x01\x01\x01\x04,::{20D04FE0-3AEA-1069-A2D8-08002B30309D}> ,::{645FF040-5081-101B-9F08-00AA002F954E}> \x13Timetables.jpeg> \nfolder>\\ \x01\x02\x01\x01\x02\x01\x0c\x04\x01\x04\x80?\x01@\x020A\x03' I would appreciate if you would help me decipher this output and get the positions from it.
The following code snippet could help. It's very, very simplified code combined from Windows Shellbag Forensics article and shellbags.py script (the latter is written in Python 2.7 hence unserviceable for me). import struct from winreg import * aReg = ConnectRegistry(None, HKEY_CURRENT_USER) aKey = OpenKey(aReg, r"Software\Microsoft\Windows\Shell\Bags\1\Desktop", REG_BINARY) name, value, type_ = EnumValue(aKey, 9) offset = 0x10 head = [struct.unpack_from("<H", value[offset:],0)[0], struct.unpack_from("<H", value[offset:],2)[0], struct.unpack_from("<H", value[offset:],4)[0], struct.unpack_from("<H", value[offset:],6)[0]] number_of_items = struct.unpack_from("<I", value[offset:],8)[0] # 4 bytes (dword) offset += 12 for x in range( number_of_items): uint16_size = struct.unpack_from("<H", value[offset:],0)[0]; uint16_flags = struct.unpack_from("<H", value[offset:],2)[0]; uint32_filesize = struct.unpack_from("<I", value[offset:],4)[0]; dosdate_date = struct.unpack_from("<H", value[offset:],8)[0]; dostime_time = struct.unpack_from("<H", value[offset:],10)[0]; fileattr16_ = struct.unpack_from("<H", value[offset:],12)[0]; offset += 12 entry_name = value[offset:(offset + (2 * uint32_filesize - 8))].decode('utf-16-le') offset += (2 * uint32_filesize - 4 ) if offset % 2: offset += 1 print( x, uint32_filesize, entry_name) print( '\nThere is', (len(value) - offset), 'bytes left' ) Output (truncated): .\SO\70039190.py 0 44 ::{59031A47-3F72-44A7-89C5-5595FE6B30EE} 1 44 ::{20D04FE0-3AEA-1069-A2D8-08002B30309D} 2 44 ::{5399E694-6CE5-4D6C-8FCE-1D8870FDCBA0} 3 31 Software602 Form Filler.lnk 4 8 test 5 32 Virtual Russian Keyboard.lnk … 49 22 WTerminalAdmin.lnk 50 22 AVG Secure VPN.lnk 51 44 ::{645FF040-5081-101B-9F08-00AA002F954E} 52 29 powershell - Shortcut.lnk There is 1176 bytes left Honestly, I don't fully comprehend offset around entry_name… Edit I have found (partial) structure for the rest of value. There are two tables containing row and column along with an index to desktop_items list for each desktop icon. Current row and column assignment is in the second table (see the picture below). The first table supposedly contains default assignments for automatic sort by (from desktop context menu). Unfortunately, I have no clue for interpretation of row and column values (e.g. 16256, 16384) to icon indexes (3rd row, 2nd column). 
import struct import winreg import pprint aReg = winreg.ConnectRegistry(None, winreg.HKEY_CURRENT_USER) aKey = winreg.OpenKey(aReg, r"Software\Microsoft\Windows\Shell\Bags\1\Desktop", winreg.REG_BINARY) name, value, type_ = winreg.EnumValue(aKey, 9) aKey.Close() aReg.Close() offset = 0x10 head = [struct.unpack_from("<H", value[offset:],0)[0], # 2 bytes (word) struct.unpack_from("<H", value[offset:],2)[0], struct.unpack_from("<H", value[offset:],4)[0], struct.unpack_from("<H", value[offset:],6)[0], struct.unpack_from("<I", value[offset:],8)[0] # 4 bytes (dword) ] number_of_items = head[-1] offset += 12 desktop_items = [] for x in range( number_of_items): uint16_size = struct.unpack_from("<H", value[offset:],0)[0]; uint16_flags = struct.unpack_from("<H", value[offset:],2)[0]; uint32_filesize = struct.unpack_from("<I", value[offset:],4)[0]; dosdate_date = struct.unpack_from("<H", value[offset:],8)[0]; dostime_time = struct.unpack_from("<H", value[offset:],10)[0]; fileattr16_ = struct.unpack_from("<H", value[offset:],12)[0]; offset += 12 entry_name = value[offset:(offset + (2 * uint32_filesize - 8))].decode('utf-16-le') offset += (2 * uint32_filesize - 4 ) # uint16_size = location # 0x20 = %PUBLIC%\Desktop # 0x7c = %USERPROFILE%\Desktop desktop_items.append([x, '{:04x}'.format(uint16_size), 0, 0, '{:04x}'.format(fileattr16_), entry_name]) print('{:2}'.format(x), '{:04x}'.format(uint16_size), # '{:04x}'.format(uint16_flags), # always zero # '{:04x}'.format(dosdate_date), # always zero # '{:04x}'.format(dostime_time), # always zero '{:04x}'.format(fileattr16_), entry_name) print( '\nThere is', (len(value) - offset), 'bytes left' ) print('head (12 bytes):', head) offs = offset head2 = [] for x in range( 32): head2.append(struct.unpack_from("<H", value[offs:],2*x)[0]) offs += 64 print( 'head2 (64 bytes):', head2) for x in range( number_of_items): item_list = [ struct.unpack_from("<H", value[offs:],0)[0], # 0 struct.unpack_from("<H", value[offs:],2)[0], # column struct.unpack_from("<H", value[offs:],4)[0], # 0 struct.unpack_from("<H", value[offs:],6)[0], # row struct.unpack_from("<H", value[offs:],8)[0] ] # index to desktop_items # print( x, item_list) desktop_items[item_list[-1]][2] = int( item_list[1]) desktop_items[item_list[-1]][3] = int( item_list[3]) offs += 10 print(len(value), offset, offs, (offs - offset), '1st table, from start:') table_1st = desktop_items table_1st.sort(key=lambda k: (k[2], k[3])) pprint.pprint(table_1st) #pprint.pprint(desktop_items) # 2nd table from behind offs = len(value) for x in range( number_of_items): offs -= 10 item_list = [ struct.unpack_from("<H", value[offs:],0)[0], # 0 struct.unpack_from("<H", value[offs:],2)[0], # column struct.unpack_from("<H", value[offs:],4)[0], # 0 struct.unpack_from("<H", value[offs:],6)[0], # row struct.unpack_from("<H", value[offs:],8)[0] ] # index to desktop_items # print(item_list) desktop_items[item_list[-1]][2] = int( item_list[1]) desktop_items[item_list[-1]][3] = int( item_list[3]) print(len(value), offset, offs, (offs - offset), '2nd table, from behind:') table_2nd = desktop_items table_2nd.sort(key=lambda k: (k[2], k[3])) pprint.pprint(table_2nd) # pprint.pprint(desktop_items) Result (an illustrative picture):
4
4
70,041,819
2021-11-19
https://stackoverflow.com/questions/70041819/plotly-jupyterdash-is-not-removing-irrelevant-columns-when-applying-categoryorde
I'm trying to sort the axis of my chart based on the values in the "sales" column. Once I apply fig.update_layout(yaxis={'categoryorder':'total descending'}) to my figure, the chart no longer removes irrelevant rows when filtering the dataframe by selecting the "099" value for the RadioItems callback option. code to recreate issue: import pandas as pd from plotly import tools import plotly.express as px import plotly.graph_objects as go import dash import dash_core_components as dcc import dash_html_components as html from jupyter_dash import JupyterDash import dash_bootstrap_components as dbc from dash.dependencies import Input, Output data = {'name': {0: 'G P', 1: 'D L', 2: 'T B', 3: 'N A', 4: 'P O', 5: 'Ho A'}, 'team': {0: '099', 1: '099', 2: '073', 3: '073', 4: '073', 5: '099'}, 'sales': {0: 88946, 1: 8123, 2: 6911, 3: 74796, 4: 8532, 5: -31289} } df1 = pd.DataFrame.from_dict(data) app = JupyterDash(external_stylesheets=[dbc.themes.SLATE]) template = 'plotly_dark' controls = dbc.Card( [ dbc.FormGroup( [ dbc.Label("Teams"), dcc.RadioItems( id='slm-radio', options=[{'label': 'All', 'value': 'All'}] + [{'label': k, 'value': k} for k in df1['team'].unique()], value='All', ), ], ) ] ) app.layout = dbc.Container( [ dbc.Row([ dbc.Col([controls],xs = 4), dbc.Col([ dbc.Row([ dbc.Col(dcc.Graph(id="sales_graph")), ]) ]), ]), html.Br(), ], fluid=True, ) @app.callback( Output("sales_graph", "figure"), [ Input("slm-radio", "value") ], ) def history_graph(team): dataset = df1 if not team == 'All': dataset = dataset[dataset['team']==team] else: dataset = dataset v_cat = dataset['name'] x_val = dataset['sales'] fig = go.Figure(go.Bar( x=x_val, y=v_cat, marker_color="#ff0000", orientation='h', width=0.25 )) fig.update_layout(template='plotly_dark') fig.update_layout(yaxis={'categoryorder':'total descending'}) return fig app.run_server(mode='inline', port = 8009) The desired results is to remove the N A, P O, and T B columns when '099' is selected I get the desired results if: the value for 'Ho A' is changed to positive sorting the axis is removed if graph is rendered using plotly locally without dash
I was able to work around this issue by computing the ranking independently of the graphing step and placing the rankings into a series, rankings. I then passed the ranked series to fig.update_layout(yaxis={'categoryorder':'array', 'categoryarray':rankings['team_rank']})
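Since the rankings series itself isn't shown, here is a rough sketch (variable names are illustrative) of how such a ranking could be built from the filtered dataset inside the callback and passed as the category array:

# Hypothetical ranking step: order the y-axis categories (names) by sales,
# then hand that order to Plotly as an explicit category array.
rankings = dataset.sort_values('sales')['name'].tolist()
fig.update_layout(yaxis={'categoryorder': 'array', 'categoryarray': rankings})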
4
3
70,036,378
2021-11-19
https://stackoverflow.com/questions/70036378/pytest-full-cleanup-between-tests
In a module, I have two tests: @pytest.fixture def myfixture(request): prepare_stuff() yield 1 clean_stuff() # time.sleep(10) # in doubt, I tried that, did not help def test_1(myfixture): a = somecode() assert a==1 def test_2(myfixture): b = somecode() assert b==1 case 1 When these two tests are executed individually, all is ok, i.e. both pytest ./test_module.py:test_1 and immediately after: pytest ./test_module.py:test_2 run until completion and pass with success. case 2 But: pytest ./test_module.py -k "test_1 or test_2" reports: collected 2 items test_module.py . and hangs forever (after investigation: test_1 completed successfully, but the second call to prepare_stuff hangs). question In my specific setup prepare_stuff, clean_stuff and somecode are quite evolved, i.e. they create and delete some shared memory segments, which when done wrong can results in some hanging. So some issue here is possible. But my question is: are there things occurring between two calls of pytest (case 1) that do not occur between the call of test_1 and test_2 from the same "pytest process" (case 2), which could explain why "case 1" works ok while "case 2" hangs between test_1 and test_2 ? If so, is there a way to "force" the same "cleanup" to occur between test_1 and test_2 for "case 2" ? Note: I already tried to specify the scope of "myfixture" to "function", and also double checked that "clean_stuff" is called after "test_1", even in "case 2".
Likely something is happening in your prepare_stuff and/or clean_stuff functions. When you run tests as:

pytest ./test_module.py -k "test_1 or test_2"

they are running in the same execution context, same process, etc. So if, for example, clean_stuff doesn't do a proper cleanup, then execution of the next test can fail. When you run tests as:

pytest ./test_module.py:test_1
pytest ./test_module.py:test_2

they are running in different execution contexts, i.e. they start in an absolutely clean environment, and, unless you're modifying some external resources, you could easily remove clean_stuff in this case and they would pass anyway. To rule out a pytest issue, just try to run:

prepare_stuff()
a = somecode()
assert a==1
clean_stuff()

prepare_stuff()
b = somecode()
assert b==1
clean_stuff()

I'm pretty sure you'll have the same problem, which would confirm that the issue is in your code, not in pytest.
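A runnable version of that standalone check, assuming prepare_stuff, clean_stuff and somecode can be imported from the test module (the module name is a placeholder), might look like this:

# Plain-Python sanity check, no pytest involved: two prepare/clean cycles
# in a single process, mimicking test_1 followed by test_2.
from test_module import prepare_stuff, clean_stuff, somecode

for _ in range(2):
    prepare_stuff()
    try:
        assert somecode() == 1
    finally:
        clean_stuff()  # make sure the cleanup runs even if the assertion fails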
6
0
70,020,874
2021-11-18
https://stackoverflow.com/questions/70020874/ffmpegs-xstack-command-results-in-out-of-sync-sound-is-it-possible-to-mix-the
I wrote a python script that generates a xstack complex filter command. The video inputs is a mixture of several formats described here: I have 2 commands generated, one for the xstack filter, and one for the audio mixing. Here is the stack command: (sorry the text doesn't wrap!) 'c:/ydl/ffmpeg.exe', '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-filter_complex', '[0]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2, setsar=1[rsclbf0];[rsclbf0]fps=24[rscl0];[1]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2, setsar=1[rsclbf1];[rsclbf1]fps=24[rscl1];[2]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2, setsar=1[rsclbf2];[rsclbf2]fps=24[rscl2];[3]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2, setsar=1[rsclbf3];[rsclbf3]fps=24[rscl3];[4]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2, setsar=1[rsclbf4];[rsclbf4]fps=24[rscl4];[5]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2, setsar=1[rsclbf5];[rsclbf5]fps=24[rscl5];[6]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2, setsar=1[rsclbf6];[rsclbf6]fps=24[rscl6];[7]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2, setsar=1[rsclbf7];[rsclbf7]fps=24[rscl7];[8]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2, setsar=1[rsclbf8];[rsclbf8]fps=24[rscl8];[9]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2, setsar=1[rsclbf9];[rsclbf9]fps=24[rscl9];[10]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2, setsar=1[rsclbf10];[rsclbf10]fps=24[rscl10];[11]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2, setsar=1[rsclbf11];[rsclbf11]fps=24[rscl11];[12]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2, setsar=1[rsclbf12];[rsclbf12]fps=24[rscl12];[13]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2, setsar=1[rsclbf13];[rsclbf13]fps=24[rscl13];[14]scale=480:270:force_original_aspect_ratio=decrease,pad=480:270:(ow-iw)/2:(oh-ih)/2, setsar=1[rsclbf14];[rsclbf14]fps=24[rscl14];[rscl0][rscl1][rscl2][rscl3][rscl4]concat=n=5[cct0];[rscl5][rscl6][rscl7]concat=n=3[cct1];[rscl8][rscl9][rscl10]concat=n=3[cct2];[rscl11][rscl12][rscl13][rscl14]concat=n=4[cct3];[cct0][cct1][cct2][cct3]xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0', 'output.mp4', Here is the mix_audio command: 'c:/ydl/ffmpeg.exe', '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-i', 'inputX.mp4' '-filter_complex', '[0:a][1:a][2:a][3:a][4:a]concat=n=5:v=0:a=1[cct_a0];[5:a][6:a][7:a]concat=n=3:v=0:a=1[cct_a1];[8:a][9:a][10:a]concat=n=3:v=0:a=1[cct_a2];[11:a][12:a][13:a][14:a]concat=n=4:v=0:a=1[cct_a3];[cct_a0][cct_a1][cct_a2][cct_a3]amix=inputs=4[all_aud]', '-map', '15:v', '-map', '[all_aud]', '-c:v', 'copy', 'output.mp4', Of course those are sample commands, I actually use many 
more videos as input; this sample is shorter for the sake of readability. Here are the videos I use, with relevant ffprobe data (HTML table omitted here). I'm getting this warning:

[swscaler @ 0000020bac5a19c0] Warning: data is not aligned! This can lead to a speed loss

I think this is unrelated to the audio desyncing; this unaligned-data warning is about x264 resolutions being a multiple of 16, but my filter takes this into account already. There is a perceptible audio desyncing, which is the main problem I am having. FFMPEG doesn't seem to report any other errors. Is it because I use 2 commands to mix the audio after? How could I proceed to do the xstack stage and the audio mixing in a single stage? I'm a bit confused as to how FFMPEG handles diverse framerates. I was told to re-encode all the video inputs before performing the xstack stage, but that would create some disk overhead, so I'd rather do it in a single ffmpeg job if possible.
"I'm a bit confused as to how FFMPEG handles diverse framerates"

It doesn't, which would cause a misalignment in your case. The vast majority of filters (essentially, any which deal with multiple sources and make use of frames), including the Concatenate filter, require that the sources have the same framerate:

For the concat filter to work, the inputs have to be of the same frame dimensions (e.g., 1920⨉1080 pixels) and should have the same framerate. (emphasis added)

The documentation also adds:

Therefore, you may at least have to add a scale or scale2ref filter before concatenating videos. A handful of other attributes have to match as well, like the stream aspect ratio. Refer to the documentation of the filter for more info.

You should convert your sources to the same framerate first.
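A rough sketch of that pre-encoding pass, driven from Python like the rest of the asker's tooling (file names and the input list are assumptions; the audio is copied untouched on the assumption that the codecs are compatible):

# Re-encode every input to a common frame rate before building the xstack command.
import subprocess

inputs = ['input0.mp4', 'input1.mp4']  # ...and so on for all sources
for i, src in enumerate(inputs):
    subprocess.run([
        'c:/ydl/ffmpeg.exe', '-i', src,
        '-r', '24',        # force a common output frame rate
        '-c:a', 'copy',    # leave the audio stream untouched
        f'normalized_{i}.mp4',
    ], check=True)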
10
3
70,081,399
2021-11-23
https://stackoverflow.com/questions/70081399/automatically-update-python-source-code-imports
We are refactoring our code base.

Old: from a.b import foo_method
New: from b.d import bar_method

Both methods (foo_method() and bar_method()) are the same; only the name and the package changed. Since the above is just one example of the many ways a method can be imported, I don't think a simple regular expression can help here. How can I refactor the importing of a module with a command line tool? A lot of source code lines need to be changed, so an IDE does not help here.
Behind the scenes, IDEs are no much more than text editors with bunch of windows and attached binaries to make different kind of jobs, like compiling, debugging, tagging code, linting, etc. Eventually one of those libraries can be used to refactor code. One such library is Jedi, but there is one that was specifically made to handle refactoring, which is rope. pip3 install rope A CLI solution You can try using their API, but since you asked for a command line tool and there wasn't one, save the following file anywhere reachable (a known relative folder your user bin, etc) and make it executable chmod +x pyrename.py. #!/usr/bin/env python3 from rope.base.project import Project from rope.refactor.rename import Rename from argparse import ArgumentParser def renamodule(old, new): prj.do(Rename(prj, prj.find_module(old)).get_changes(new)) def renamethod(mod, old, new, instance=None): mod = prj.find_module(mod) modtxt = mod.read() pos, inst = -1, 0 while True: pos = modtxt.find('def '+old+'(', pos+1) if pos < 0: if instance is None and prepos > 0: pos = prepos+4 # instance=None and only one instance found break print('found', inst, 'instances of method', old+',', ('tell which to rename by using an extra integer argument in the range 0..' if (instance is None) else 'could not use instance=')+str(inst-1)) pos = -1 break if (type(instance) is int) and inst == instance: pos += 4 break # found if instance is None: if inst == 0: prepos = pos else: prepos = -1 inst += 1 if pos > 0: prj.do(Rename(prj, mod, pos).get_changes(new)) argparser = ArgumentParser() #argparser.add_argument('moduleormethod', choices=['module', 'method'], help='choose between module or method') subparsers = argparser.add_subparsers() subparsermod = subparsers.add_parser('module', help='moduledottedpath newname') subparsermod.add_argument('moduledottedpath', help='old module full dotted path') subparsermod.add_argument('newname', help='new module name only') subparsermet = subparsers.add_parser('method', help='moduledottedpath oldname newname') subparsermet.add_argument('moduledottedpath', help='module full dotted path') subparsermet.add_argument('oldname', help='old method name') subparsermet.add_argument('newname', help='new method name') subparsermet.add_argument('instance', nargs='?', help='instance count') args = argparser.parse_args() if 'moduledottedpath' in args: prj = Project('.') if 'oldname' not in args: renamodule(args.moduledottedpath, args.newname) else: renamethod(args.moduledottedpath, args.oldname, args.newname) else: argparser.error('nothing to do, please choose module or method') Let's create a test environment with the exact the scenario shown in the question (here assuming a linux user): cd /some/folder/ ls pyrename.py # we are in the same folder of the script # creating your test project equal to the question in prj child folder: mkdir prj; cd prj; cat << EOF >> main.py #!/usr/bin/env python3 from a.b import foo_method foo_method() EOF mkdir a; touch a/__init__.py; cat << EOF >> a/b.py def foo_method(): print('yesterday i was foo, tomorrow i will be bar') EOF chmod +x main.py # testing: ./main.py # yesterday i was foo, tomorrow i will be bar cat main.py cat a/b.py Now using the script for renaming modules and methods: # be sure that you are in the project root folder # rename package (here called module) ../pyrename.py module a b # package folder 'a' renamed to 'b' and also all references # rename module ../pyrename.py module b.b d # 'b.b' (previous 'a.b') renamed to 'd' and also all references also # important 
- oldname is the full dotted path, new name is name only # rename method ../pyrename.py method b.d foo_method bar_method # 'foo_method' in package 'b.d' renamed to 'bar_method' and also all references # important - if there are more than one occurence of 'def foo_method(' in the file, # it is necessary to add an extra argument telling which (zero-indexed) instance to use # you will be warned if multiple instances are found and you don't include this extra argument # testing again: ./main.py # yesterday i was foo, tomorrow i will be bar cat main.py cat b/d.py This example did exact what the question did. Only renaming of modules and methods were implemented because it is the question scope. If you need more, you can increment the script or create a new one from scratch, learning from their documentation and from this script itself. For simplicity we are using current folder as the project folder, but you can add an extra parameter in the script to make it more flexible.
6
2
70,081,767
2021-11-23
https://stackoverflow.com/questions/70081767/how-do-i-cleanly-test-equality-of-objects-in-mypy-without-producing-errors
I have the following function: import pandas as pd def eq(left: pd.Timestamp, right: pd.Timestamp) -> bool: return left == right I get the following error when I run it through Mypy: error: Returning Any from function declared to return "bool" I believe this is because Mypy doesn't know about pd.Timestamp so treats it as Any. (Using the Mypy reveal_type function shows that Mypy treats left and right as Any.) What is the correct way to deal with this to stop Mypy complaining?
You can cast it as a bool:

import pandas as pd

def eq(left: pd.Timestamp, right: pd.Timestamp) -> bool:
    return bool(left == right)

If mypy doesn't like that, you can import cast from typing and use that to cast it to a bool:

import pandas as pd
from typing import cast

def eq(left: pd.Timestamp, right: pd.Timestamp) -> bool:
    result = bool(left == right)
    return cast(bool, result)
9
6
70,069,026
2021-11-22
https://stackoverflow.com/questions/70069026/how-to-use-files-in-the-answer-api-of-openai
As OpenAI has finally opened the GPT-3 related API publicly, I am playing with it to explore and discover its potential. I am trying the Answer API, with the simple example that is in the documentation: https://beta.openai.com/docs/guides/answers

I upload the .jsonl file as indicated, and I can see it successfully uploaded with the openai.File.list() API. When I try to use it, unfortunately, I always get the same error:

>>> openai.File.create(purpose='answers', file=open('example.jsonl'))
<File file id=file-xxx at 0x7fbc9eca5e00> JSON: {
  "bytes": 140,
  "created_at": 1637597242,
  "filename": "example.jsonl",
  "id": "file-xxx",
  "object": "file",
  "purpose": "answers",
  "status": "uploaded",
  "status_details": null
}

# Use the file in the API:
openai.Answer.create(
    search_model="ada",
    model="curie",
    question="which puppy is happy?",
    file="file-xxx",
    examples_context="In 2017, U.S. life expectancy was 78.6 years.",
    examples=[["What is human life expectancy in the United States?", "78 years."]],
    max_rerank=10,
    max_tokens=5,
    stop=["\n", "<|endoftext|>"]
)
<some exception, then>
openai.error.InvalidRequestError: File is still processing. Check back later.

I have waited several hours, and I do not think this content deserves such a long wait... Do you know if this is normal behaviour, or if I am missing something? Thanks
After a few hours (the day after), the file metadata status changed from uploaded to processed, and the file could be used in the Answer API as stated in the documentation. I think this needs to be better documented in the original OpenAI API reference.
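A rough polling sketch with the same openai client as in the question (the file id and sleep interval are illustrative) that waits for the file to finish processing before calling the Answer API:

import time
import openai

file_id = 'file-xxx'  # id returned by openai.File.create
# Poll until the upload has been processed, then call the Answer API.
while openai.File.retrieve(file_id)['status'] != 'processed':
    time.sleep(60)  # "check back later", as the error message says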
4
3
70,079,504
2021-11-23
https://stackoverflow.com/questions/70079504/django-duplicated-queries-in-nested-models-querying-with-manytomanyfield
How do I get rid of the duplicated queries as in the screenshot? I have two models as following, class Genre(MPTTModel): name = models.CharField(max_length=50, unique=True) parent = TreeForeignKey('self', on_delete=models.CASCADE, null=True, blank=True, related_name='children') def __str__(self): return self.name class Game(models.Model): name = models.CharField(max_length=50) genre = models.ManyToManyField(Genre, blank=True, related_name='games') def __str__(self): return self.name and have a serializer and views, class GameSerializer(serializers.ModelSerializer): class Meta: model = Game exclude = ['genre', ] class GenreGameSerializer(serializers.ModelSerializer): children = RecursiveField(many=True) games = GameSerializer(many=True,) class Meta: model = Genre fields = ['id', 'name', 'children', 'games'] class GamesByGenreAPI(APIView): queryset = Genre.objects.root_nodes() serializer_class = GenreGameSerializer def get(self, request, *args, **kwargs): ser = GenreGameSerializer(data=Genre.objects.root_nodes() .prefetch_related('children__children', 'games'), many=True) if ser.is_valid(): pass return Response(ser.data) so basically the model populated when serialized looks like this The result is what I am expecting but there are n duplicated queries for each of the genre. How can I fix it? Thanks.. here is a paste https://pastebin.com/xfRdBaF4 with all code, if you want to reproduce the issue. Also add path('games/', GamesByGenreAPI.as_view()), in urls.py which is omitted in paste. Update tried logging queries to check if its issue with debug toolbar, but it is NOT, the queries are duplicated.. here is the screenshot.
From the debug toolbar output I will assume that you have two levels of nesting in your Genre model (root, Level 1). I do not know if Level 1 has any children, i.e. whether there are Level 2 genres, since I can't view the query results (but this is not relevant for the current problem). The root level Genres are (1, 4, 7), the Level 1 ones are (2, 3, 5, 6, 8, 9). The prefetch worked for the prefetch_related("children__children") lookup, as those queries are grouped into two separate queries, as they should be. The games related to root level genres (prefetch_related("games")) are also prefetched; that is the fourth query in the debug toolbar output. The next queries, as you can see, are getting the games for each Level 1 genre in a separate query, which I presume are triggered from the serializer fields, since there are no lookups specified in the view that could prefetch those records. Adding another prefetch lookup targeted at those records should solve the problem:

ser = GenreGameSerializer(data=Genre.objects.root_nodes()
    .prefetch_related(
        'children__children',
        'games',
        'children__games'),  # prefetching games for Level 1 genres
    many=True)

Note that if there are more nested genres, the same logic should be applied for each nesting level. For example, if there are Level 2 genres, then you should prefetch the related games for those genres with:

ser = GenreGameSerializer(data=Genre.objects.root_nodes()
    .prefetch_related(
        'children__children',
        'games',
        'children__games',
        'children__children__games'),
    many=True)
4
2
70,008,909
2021-11-17
https://stackoverflow.com/questions/70008909/pem-certificate-tls-verification-against-rest-api
I have been provided with a pem certificate to authenticate with a third party. Authenticating using certificates is a new concept for me. Inside are two certificates and a private key. The issuer has advised they do not support SSL verification but use TLS(1.1/1.2). I have run a script as below: import requests as req import json url = 'https://url.com/call' certificate_file = "C:/certs/cert.pem" headers = {"Content-Type": "application/json"} req_body ={ "network":{ "network_id": 12345 }, "branch":{ "branch_id": 12345, }, "export_period":{ "start_date_time": "16-11-2021 00:00:00", "end_date_time": "17-11-2021 00:00:00" } } jsonObject = json.dumps(req_body) response = req.post(url,headers=headers,params=jsonObject,verify=certificate_file) I'm getting the following error: SSLError: HTTPSConnectionPool(host='url.com, port=443): Max retries exceeded with url: /call?%7B%22network%22:%20%7B%22network_id%22:%2012345%7D,%20%22branch%22:%20%7B%22branch_id%22:%2012345%7D,%20%22export_period%22:%20%7B%22start_date_time%22:%20%2216-11-2021%2000:00:00%22,%20%22end_date_time%22:%20%2217-11-2021%2000:00:00%22%7D%7D (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123)'))) Would appreciate guidance, my gut says I should be doing something specific for TLS hence the SSL error.
The issuer is using an up to date version of HTTPS - SSL is the commonly used term but TLS is the more correct one. Sounds like their setup is correct - meaning you need to call it with a trusted client certificate, as part of an HTTPS request. I would recommend doing it with the curl tool first so that you can verify that the API works as expected. curl -s -X GET "https://api.example.com/test" \ --cert ./certs/example.client.pem \ --key ./certs/example.client.key \ --cacert ./certs/ca.pem \ -H "Content-Type: application/json" Split the certs into separate files as above. Sounds like one of them is a root certificate authority that you need to tell the client tech stack to trust - for curl this is done using the cacert parameter as above. Once this is working you can follow the same approach in the Python requests library. I believe this uses cert and verify parameters like this. So it looks like your code is not far off. result = requests.get( 'https://api.example.com', cert=('example.pem', 'example.key'), verify='ca.pem') MORE ABOUT MUTUAL TLS Out of interest, if you ever want to demystify Mutual TLS and understand more, have a look at these advanced Curity resources: Mutual TLS Secured API Code Sample These include an OpenSSL Script you can run, to see what the separated certificate files should look like.
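If the bundle really is a single .pem containing both certificates and the key, a rough splitting sketch (output file names are placeholders; adjust the paths and markers to the actual bundle) could be:

import re

with open('C:/certs/cert.pem') as f:
    pem = f.read()

# Grab every "-----BEGIN ...----- ... -----END ...-----" block in the bundle.
blocks = re.findall(r'-----BEGIN [^-]+-----.*?-----END [^-]+-----', pem, re.S)
certs = [b for b in blocks if 'CERTIFICATE' in b.splitlines()[0]]
keys = [b for b in blocks if 'PRIVATE KEY' in b.splitlines()[0]]

with open('client.pem', 'w') as f:
    f.write('\n'.join(certs) + '\n')
with open('client.key', 'w') as f:
    f.write(keys[0] + '\n')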
5
8
70,017,034
2021-11-18
https://stackoverflow.com/questions/70017034/determine-whether-the-columns-of-a-dataset-are-invariant-under-any-given-scikit
Given an sklearn tranformer t, is there a way to determine whether t changes columns/column order of any given input dataset X, without applying it to the data? For example with t = sklearn.preprocessing.StandardScaler there is a 1-to-1 mapping between the columns of X and t.transform(X), namely X[:, i] -> t.transform(X)[:, i], whereas this is obviously not the case for sklearn.decomposition.PCA. A corollary of that would be: Can we know, how the columns of the input will change by applying t, e.g. which columns an already fitted sklearn.feature_selection.SelectKBest chooses. I am not looking for solutions to specific transformers, but a solution applicable to all or at least a wide selection of transformers. Feel free to implement your own Pipeline class or wrapper if necessary.
Not all your "transformers" would have the .get_feature_names_out method. Its implementation is discussed in the sklearn github. In the same link, you can see there is, to quote @thomasjpfan, a _OneToOneFeatureMixin class "used by transformers with a simple one-to-one correspondence between input and output features". Restricted to sklearn, we can check whether the transformer or estimator is a subclass of _OneToOneFeatureMixin, for example:

from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest
from sklearn.base import _OneToOneFeatureMixin

tf = {'pca': PCA(), 'standardscaler': StandardScaler(), 'kbest': SelectKBest()}
[i + ":" + str(issubclass(type(tf[i]), _OneToOneFeatureMixin)) for i in tf.keys()]
['pca:False', 'standardscaler:True', 'kbest:False']

This is the source code for _OneToOneFeatureMixin.
5
2
70,087,344
2021-11-23
https://stackoverflow.com/questions/70087344/python-in-docker-runtimeerror-cant-start-new-thread
I'm unable to debug one error myself. I'm running Python 3.8.12 inside a Docker image on Fedora release 35 (Thirty Five) and I'm unable to spawn threads from Python. This is required for the boto3 transfer to run in parallel, and it uses concurrent.futures to do so. The simplest example which replicates my issue without any dependencies is (copied from the Python docs):

import concurrent.futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            pass

Sadly, the output of these lines is:

Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "<stdin>", line 2, in <dictcomp>
  File "/usr/lib64/python3.8/concurrent/futures/thread.py", line 188, in submit
    self._adjust_thread_count()
  File "/usr/lib64/python3.8/concurrent/futures/thread.py", line 213, in _adjust_thread_count
    t.start()
  File "/usr/lib64/python3.8/threading.py", line 852, in start
    _start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread

That's all I have. Is there a place where I should look? I've already checked ulimit, which says unlimited. I'm kind of in despair about where to look or what to change to debug this issue.
The solution to this problem was to upgrade Docker from version 18.06.1-ce to 20.10.7. Why?

This is because the default seccomp profile of Docker 20.10.9 is not adjusted to support the clone() syscall wrapper of glibc 2.34 adopted in Ubuntu 21.10 and Fedora 35.

Source: ubuntu:21.10 and fedora:35 do not work on the latest Docker (20.10.9)
36
59
70,089,199
2021-11-23
https://stackoverflow.com/questions/70089199/how-to-set-a-different-linestyle-for-each-hue-group-in-a-kdeplot-displot
How can each hue group of a seaborn.kdeplot, or seaborn.displot with kind='kde' be given a different linestyle? Both axes-level and figure-level options will accept a str for linestyle/ls, which applies to all hue groups. import seaborn as sns import matplotlib.pyplot as plt # load sample data iris = sns.load_dataset("iris") # convert data to long form im = iris.melt(id_vars='species') # axes-level plot works with 1 linestyle fig = plt.figure(figsize=(6, 5)) p1 = sns.kdeplot(data=im, x='value', hue='variable', fill=True, ls='-.') # figure-level plot works with 1 linestyle p2 = sns.displot(kind='kde', data=im, x='value', hue='variable', fill=True, ls='-.') kdeplot displot Reviewed Questions How to set the line style for each kdeplot in a jointgrid doesn't deal with hue groups. How to automatically alternate or cycle linestyles in seaborn regplot? doesn't deal with hue groups and iterates through each unique group. Dotted Seaborn distplot doesn't deal with hue groups and iterates through each unique group. change line style in seaborn facet grid hue_kws isn't a valid option.
With fill=True the object to update is in .collections With fill=False the object to update is in .lines Updating the legend is fairly simple: handles = p.legend_.legendHandles[::-1] extracts and reverses the legend handles. They're reversed to update because they're in the opposite order in which the plot linestyle is updated Note that figure-level plots extract the legend with ._legend, with the axes-level plots use .legend_. Tested in python 3.8.12, matplotlib 3.4.3, seaborn 0.11.2 kdeplot: axes-level Extract and iterate through .collections or .lines from the axes object and use .set_linestyle fill=True fig = plt.figure(figsize=(6, 5)) p = sns.kdeplot(data=im, x='value', hue='variable', fill=True) lss = [':', '--', '-.', '-'] handles = p.legend_.legendHandles[::-1] for line, ls, handle in zip(p.collections, lss, handles): line.set_linestyle(ls) handle.set_ls(ls) fill=False fig = plt.figure(figsize=(6, 5)) p = sns.kdeplot(data=im, x='value', hue='variable') lss = [':', '--', '-.', '-'] handles = p.legend_.legendHandles[::-1] for line, ls, handle in zip(p.lines, lss, handles): line.set_linestyle(ls) handle.set_ls(ls) displot: figure-level Similar to the axes-level plot, but each axes must be iterated through The legend handles could be updated in for line, ls, handle in zip(ax.collections, lss, handles), but that applies the update for each subplot. Therefore, a separate loop is created to update the legend handles only once. fill=True g = sns.displot(kind='kde', data=im, col='variable', x='value', hue='species', fill=True, common_norm=False, facet_kws={'sharey': False}) axes = g.axes.flat lss = [':', '--', '-.'] for ax in axes: for line, ls in zip(ax.collections, lss): line.set_linestyle(ls) handles = g._legend.legendHandles[::-1] for handle, ls in zip(handles, lss): handle.set_ls(ls) fill=False g = sns.displot(kind='kde', data=im, col='variable', x='value', hue='species', common_norm=False, facet_kws={'sharey': False}) axes = g.axes.flat lss = [':', '--', '-.'] for ax in axes: for line, ls in zip(ax.lines, lss): line.set_linestyle(ls) handles = g._legend.legendHandles[::-1] for handle, ls in zip(handles, lss): handle.set_ls(ls)
5
8
70,087,659
2021-11-23
https://stackoverflow.com/questions/70087659/how-to-convert-multiline-json-to-single-line
I am trying to convert a multiline json to a single line json. So the existing json file I have looks like this: { "a": [ "b", "bill", "clown", "circus" ], "vers": 1.0 } When I load the file, it comes in a dictionary and not sure how to strip blank spaces in the dictionary. f = open('test.json') data = json.load(f) What I would like it to come out is the following: {"a":["b","bill","clown","circus"],"vers":1}
Using the standard library json you can get:

import json

with open('test.json') as handle:
    data = json.load(handle)

text = json.dumps(data, separators=(',', ':'))
print(text)

Result:

{"a":["b","bill","clown","circus"],"vers":1.0}

Remark: 1.0 is not simplified to 1, probably because this would change the type from float to int at the Python level.
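If the goal is to write the compact form back to disk rather than print it, a small follow-up sketch (output file name assumed):

# Write the single-line JSON to a new file using the same separators.
with open('test_single_line.json', 'w') as out:
    json.dump(data, out, separators=(',', ':'))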
7
10
70,081,691
2021-11-23
https://stackoverflow.com/questions/70081691/how-to-close-other-windows-when-the-main-window-is-closed-in-pyqt5
I want to close all other windows opened by the main window when the main window is closed. Please find below the min. code that I was testing: from PyQt5.QtWidgets import QApplication, QMainWindow, QPushButton, QLabel, QVBoxLayout, QWidget import sys class AnotherWindow(QWidget): """ This "window" is a QWidget. If it has no parent, it will appear as a free-floating window as we want. """ def __init__(self): super().__init__() layout = QVBoxLayout() self.label = QLabel("Another Window") layout.addWidget(self.label) self.setLayout(layout) class MainWindow(QMainWindow): def __init__(self): super().__init__() self.button = QPushButton("Push for Window") self.button.clicked.connect(self.show_new_window) self.setCentralWidget(self.button) def show_new_window(self, checked): self.w = AnotherWindow() self.w.show() def close_another_window(self): if self.w: self.w.close() app = QApplication(sys.argv) w = MainWindow() app.aboutToQuit.connect(w.close_another_window) w.show() app.exec() As shown above I tried using the aboutToQuit option of the QApplication, but it only gets called when the another window also is closed. I want to close the another window automaticaly when the mainwindow is closed.
Implement the closeEvent: class MainWindow(QMainWindow): w = None # ... def closeEvent(self, event): if self.w: self.w.close() Note that you can also use QApplication.closeAllWindows() to close any top level window, even without having any direct reference, but if any of those windows ignores the closeEvent() the function will stop trying to close the remaining. To avoid that, you can cycle all windows using QApplication.topLevelWidgets(); windows ignoring the closeEvent will still keep themselves open, but all the others will be closed: def closeEvent(self, event): for window in QApplication.topLevelWidgets(): window.close()
9
13
70,085,655
2021-11-23
https://stackoverflow.com/questions/70085655/django-how-to-over-ride-created-date
We have a base model that sets a created and modified field:

class BaseModel(models.Model):
    created = models.DateTimeField(_('created'), auto_now_add=True)
    modified = models.DateTimeField(_('modified'), auto_now=True)
    ... other default properties

    class Meta:
        abstract = True

We use this class to extend our models:

class Event(BaseModel):

Is there a way to override the created date when creating new Events? This is a stripped-down version of our code. We are sending an array of event objects containing a created timestamp in our request payload. After the objects are added to the db, the created property is set to now and not to the value from the payload. I would like to still extend from the BaseModel, as other areas of the code may not explicitly set a created value, in which case it should default to now.

events = []
for e in payload['events']:
    event = Event(
        created=datetime.datetime.fromisoformat(e['created']),
        name='foo'
    )
    events.append(event)
Event.objects.bulk_create(events)
You can override the created field for your Event model with: from django.utils.timezone import now class Event(BaseModel): created = models.DateTimeField(default=now) If you do not want this to show up for ModelForms and ModelAdmins by default, you can make use of editable=False [Django-doc]: from django.utils.timezone import now class Event(BaseModel): created = models.DateTimeField(default=now, editable=False)
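A quick usage sketch of the overridden field (timezone handling aside; values are illustrative): an explicit created value survives, and omitting it still falls back to the current time:

import datetime

# An explicit value is kept:
e1 = Event.objects.create(
    name='foo',
    created=datetime.datetime.fromisoformat('2021-11-01T12:00:00'))
# Omitting it falls back to the current time via default=now:
e2 = Event.objects.create(name='bar')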
4
3
70,080,062
2021-11-23
https://stackoverflow.com/questions/70080062/how-to-correctly-use-imagedatagenerator-in-keras
I am playing with augmentation of data in Keras lately and I am using basic ImageDataGenerator. I learned the hard way it is actually a generator, not iterator (because type(train_aug_ds) gives <class 'keras.preprocessing.image.DirectoryIterator'> I thought it is an iterator). I also checked few blogs about using it, but they don't answer all my questions. So, I loaded my data like this: train_aug = ImageDataGenerator( rescale=1./255, horizontal_flip=True, height_shift_range=0.1, width_shift_range=0.1, brightness_range=(0.5,1.5), zoom_range = [1, 1.5], ) train_aug_ds = train_aug.flow_from_directory( directory='./train', target_size=image_size, batch_size=batch_size, ) And to train my model I did the following: model.fit( train_aug_ds, epochs=150, validation_data=(valid_aug_ds,), ) And it worked. I am a bit confused how it works, because train_aug_ds is generator, so it should give infinitely big dataset. And documentation says: When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument. Which I didn't do, yet, it works. Does it somehow infer number of steps? Also, does it use only augmented data, or it also uses non-augmented images in batch? So basically, my question is how to use this generator correctly with function fit to have all data in my training set, including original, non-augmented images and augmented images, and to cycle through it several times/steps (right now it seems it does only one step per epoch)?
I think the documentation can be quite confusing and I imagine the behavior is different depending on your Tensorflow and Keras version. For example, in this post, the user is describing the exact behavior you are expecting. Generally, the flow_from_directory() method allows you to read the images directly from a directory and augment them while your model is being trained and as already stated here, it iterates for every sample in each folder every epoch. Using the following example, you can check that this is the case (on TF 2.7) by looking at the steps per epoch in the progress bar: import tensorflow as tf BATCH_SIZE = 64 flowers = tf.keras.utils.get_file( 'flower_photos', 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True) img_gen = tf.keras.preprocessing.image.ImageDataGenerator( rescale=1./255, horizontal_flip=True, ) train_ds = img_gen.flow_from_directory(flowers, batch_size=BATCH_SIZE, shuffle=True, class_mode='sparse') num_classes = 5 model = tf.keras.Sequential([ tf.keras.layers.Conv2D(16, 3, padding='same', activation='relu', input_shape=(256, 256, 3)), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Conv2D(32, 3, padding='same', activation='relu'), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Conv2D(64, 3, padding='same', activation='relu'), tf.keras.layers.MaxPooling2D(), tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dense(num_classes) ]) model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)) epochs=10 history = model.fit( train_ds, epochs=epochs ) Found 3670 images belonging to 5 classes. Epoch 1/10 6/58 [==>...........................] - ETA: 3:02 - loss: 2.0608 If you wrap flow_from_directory with tf.data.Dataset.from_generator like this: train_ds = tf.data.Dataset.from_generator( lambda: img_gen.flow_from_directory(flowers, batch_size=BATCH_SIZE, shuffle=True, class_mode='sparse'), output_types=(tf.float32, tf.float32)) You will notice that the progress bar looks like this because steps_per_epoch has not been explicitly defined: Epoch 1/10 Found 3670 images belonging to 5 classes. 29/Unknown - 104s 4s/step - loss: 2.0364 And if you add this parameter, you will see the steps in the progress bar: history = model.fit( train_ds, steps_per_epoch = len(from_directory), epochs=epochs ) Found 3670 images belonging to 5 classes. Epoch 1/10 3/58 [>.............................] - ETA: 3:19 - loss: 4.1357 Finally, to your question: How to use this generator correctly with function fit to have all data in my training set, including original, non-augmented images and augmented images, and to cycle through it several times/step? You can simply increase the steps_per_epoch beyond number of samples // batch_size by multiplying by some factor: history = model.fit( train_ds, steps_per_epoch = len(from_directory)*2, epochs=epochs ) Found 3670 images belonging to 5 classes. Epoch 1/10 1/116 [..............................] - ETA: 12:11 - loss: 1.5885 Now instead of 58 steps per epoch you have 116.
4
5
70,076,213
2021-11-23
https://stackoverflow.com/questions/70076213/how-to-add-95-confidence-interval-for-a-line-chart-in-plotly
I have Benford test results, test_show Expected Counts Found Dif AbsDif Z_score Sec_Dig 0 0.119679 4318 0.080052 -0.039627 0.039627 28.347781 1 0.113890 2323 0.043066 -0.070824 0.070824 51.771489 2 0.108821 1348 0.024991 -0.083831 0.083831 62.513122 3 0.104330 1298 0.024064 -0.080266 0.080266 60.975864 4 0.100308 3060 0.056730 -0.043579 0.043579 33.683738 5 0.096677 6580 0.121987 0.025310 0.025310 19.884178 6 0.093375 10092 0.187097 0.093722 0.093722 74.804141 7 0.090352 9847 0.182555 0.092203 0.092203 74.687841 8 0.087570 8439 0.156452 0.068882 0.068882 56.587749 9 0.084997 6635 0.123007 0.038010 0.038010 31.646817 I'm trying to plot the Benford result using Plotly as below, Here is the code that I tried so far import plotly.graph_objects as go fig = go.Figure() fig.add_trace(go.Bar(x=test_show.index, y=test_show.Found, name='Found', marker_color='rgb(55, 83, 109)', # color="color" )) fig.add_trace(go.Scatter(x=test_show.index, y=test_show.Expected, mode='lines+markers', name='Expected' )) fig.update_layout( title='Benfords Law', xaxis=dict( title='Digits', tickmode='linear', titlefont_size=16, tickfont_size=14), yaxis=dict( title='% Percentage', titlefont_size=16, tickfont_size=14, ), legend=dict( x=0, y=1.0, bgcolor='rgba(255, 255, 255, 0)', bordercolor='rgba(255, 255, 255, 0)' )) fig.show() How to add the confidence interval to the plot for test_show["Expected"]?
As of Python 3.8 you can use NormalDist to calculate a confidence interval as explained in detail here. With a slight adjustment to that approach you can include it in your setup with fig.add_traces() using two go.Scatter() traces, and then set fill='tonexty', fillcolor = 'rgba(255, 0, 0, 0.2)') for the last one like this: CI = confidence_interval(df.Expected, 0.95) fig.add_traces([go.Scatter(x = df.index, y = df['Expected']+CI, mode = 'lines', line_color = 'rgba(0,0,0,0)', showlegend = False), go.Scatter(x = df.index, y = df['Expected']-CI, mode = 'lines', line_color = 'rgba(0,0,0,0)', name = '95% confidence interval', fill='tonexty', fillcolor = 'rgba(255, 0, 0, 0.2)')]) Please not that this approach calculates a confidence interval from the very limited df.Expected series. And that might not be what you're looking to do here. So let me know how this initial suggestion works out for you and then we can take it from there. Plot Complete code: import plotly.graph_objects as go import pandas as pd from statistics import NormalDist def confidence_interval(data, confidence=0.95): dist = NormalDist.from_samples(data) z = NormalDist().inv_cdf((1 + confidence) / 2.) h = dist.stdev * z / ((len(data) - 1) ** .5) return h df = pd.DataFrame({'Expected': {0: 0.119679, 1: 0.11389, 2: 0.108821, 3: 0.10432999999999999, 4: 0.10030800000000001, 5: 0.096677, 6: 0.093375, 7: 0.090352, 8: 0.08757000000000001, 9: 0.084997}, 'Counts': {0: 4318, 1: 2323, 2: 1348, 3: 1298, 4: 3060, 5: 6580, 6: 10092, 7: 9847, 8: 8439, 9: 6635}, 'Found': {0: 0.080052, 1: 0.043066, 2: 0.024991, 3: 0.024064, 4: 0.056729999999999996, 5: 0.12198699999999998, 6: 0.187097, 7: 0.182555, 8: 0.156452, 9: 0.12300699999999999}, 'Dif': {0: -0.039626999999999996, 1: -0.070824, 2: -0.08383099999999999, 3: -0.08026599999999999, 4: -0.043579, 5: 0.02531, 6: 0.093722, 7: 0.092203, 8: 0.068882, 9: 0.03801}, 'AbsDif': {0: 0.039626999999999996, 1: 0.070824, 2: 0.08383099999999999, 3: 0.08026599999999999, 4: 0.043579, 5: 0.02531, 6: 0.093722, 7: 0.092203, 8: 0.068882, 9: 0.03801}, 'Z_scoreSec_Dig': {0: 28.347781, 1: 51.771489, 2: 62.513121999999996, 3: 60.975864, 4: 33.683738, 5: 19.884178, 6: 74.804141, 7: 74.687841, 8: 56.587749, 9: 31.646817}}) test_show = df fig = go.Figure() fig.add_trace(go.Bar(x=test_show.index, y=test_show.Found, name='Found', marker_color='rgb(55, 83, 109)', # color="color" )) fig.add_trace(go.Scatter(x=test_show.index, y=test_show.Expected, mode='lines+markers', name='Expected' )) fig.update_layout( title='Benfords Law', xaxis=dict( title='Digits', tickmode='linear', titlefont_size=16, tickfont_size=14), yaxis=dict( title='% Percentage', titlefont_size=16, tickfont_size=14, ), legend=dict( x=0, y=1.0, bgcolor='rgba(255, 255, 255, 0)', bordercolor='rgba(255, 255, 255, 0)' )) CI = confidence_interval(df.Expected, 0.95) fig.add_traces([go.Scatter(x = df.index, y = df['Expected']+CI, mode = 'lines', line_color = 'rgba(0,0,0,0)', showlegend = False), go.Scatter(x = df.index, y = df['Expected']-CI, mode = 'lines', line_color = 'rgba(0,0,0,0)', name = '95% confidence interval', fill='tonexty', fillcolor = 'rgba(255, 0, 0, 0.2)')]) fig.show()
5
8
70,084,903
2021-11-23
https://stackoverflow.com/questions/70084903/redis-lpop-wrong-number-of-arguments-in-python
I have a simple Redis command that does the following: redis_conn.lpop(queue_name, batch_size) According to the Redis documentation and their Python SDK documentation, this should be a valid request. And, yet, I get the following error: redis.exceptions.ResponseError: wrong number of arguments for 'lpop' command Maybe I'm being obtuse and making a noobie mistake because it's 2:00AM but, this should work. So why doesn't it?
Well, I was being obtuse. The documentation I linked states that the count argument is only available from Redis 6.2. However, since I'm running Redis on Windows I don't get the newest server version, ergo the failure.
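For completeness, a rough sketch of both routes with the redis-py client (queue_name and batch_size are the names from the question):
# On a Redis server >= 6.2 the original call works as intended:
items = redis_conn.lpop(queue_name, batch_size)

# On an older server (e.g. the outdated Windows build), pop the items one at a
# time inside a pipeline so it still takes a single round trip:
pipe = redis_conn.pipeline()
for _ in range(batch_size):
    pipe.lpop(queue_name)
items = [item for item in pipe.execute() if item is not None]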
5
11
70,078,170
2021-11-23
https://stackoverflow.com/questions/70078170/find-the-indices-where-a-sorted-list-of-integer-changes
Assuming a sorted list of integers as below: data = [1] * 3 + [4] * 5 + [5] * 2 + [9] * 3 # [1, 1, 1, 4, 4, 4, 4, 4, 5, 5, 9, 9, 9] I want to find the indices where the values changes, i.e. [3, 8, 10, 13] One approach is to use itertools.groupby: cursor = 0 result = [] for key, group in groupby(data): cursor += sum(1 for _ in group) result.append(cursor) print(result) Output [3, 8, 10, 13] This approach is O(n). Another possible approach is to use bisect.bisect_left: cursor = 0 result = [] while cursor < len(data): cursor = bisect_left(data, data[cursor] + 1, cursor, len(data)) result.append(cursor) print(result) Output [3, 8, 10, 13] This approach is O(k*log n) where k is the number of distinct elements. A variant of this approach is to use an exponential search. Is there any faster or more performant way of doing this?
When it comes to asymptotic complexity I think you can improve slightly on the binary search on average when you apply a more evenly spread divide-and-conquer approach: try to first pinpoint the value-change that occurs closer to the middle of the input list, thereby splitting the range in approximately two halves, which would reduce the next binary search operation path by about one. Yet, because this is Python, the gain might not be noticeable, because of the Python-code overhead (like for yield, yield from, the recursion, ...). It might even perform worse for the list sizes you work with: from bisect import bisect_left def locate(data, start, end): if start >= end or data[start] == data[end - 1]: return mid = (start + end) // 2 val = data[mid] if val == data[start]: start = mid val += 1 i = bisect_left(data, val, start + 1, end) yield from locate(data, start, i) yield i yield from locate(data, i, end) data = [1] * 3 + [4] * 5 + [5] * 2 + [9] * 3 print(*locate(data, 0, len(data))) # 3 8 10 Note that this only outputs valid indices, so 13 is not included for this example input.
6
5
70,046,832
2021-11-20
https://stackoverflow.com/questions/70046832/collapse-a-list-of-dictionary-of-list-to-a-single-dictionary-of-list
I have data arriving as dictionaries of lists. In fact, I read in a list of them... data = [ { 'key1': [101, 102, 103], 'key2': [201, 202, 203], 'key3': [301, 302, 303], }, { 'key2': [204], 'key3': [304, 305], 'key4': [404, 405, 406], }, { 'key1': [107, 108], 'key4': [407], }, ] Each dictionary can have different keys. Each key associates to a list, of variable length. What I'd like to do is to make a single dictionary, by concatenating the lists that share a key... desired_result = { 'key1': [101, 102, 103, 107, 108], 'key2': [201, 202, 203, 204], 'key3': [301, 302, 303, 304, 305], 'key4': [404, 405, 406, 407], } Notes: Order does not matter There are hundreds of dictionaries There are dozens of keys per dictionary Totalling hundreds of keys in the result set Each source list contains dozens of items I can do this, with comprehensions, but it feels very clunky, and it's actually very slow (looping through all the possible keys for every possible dictionary yield more 'misses' than 'hits')... { key: [ item for d in data if key in d for item in d[key] ] for key in set( key for d in data for key in d.keys() ) } # TimeIt gives 3.2, for this small data set A shorter, easier to read/maintain option is just to loop through everything. But performance still sucks (possibly due to the large number of calls to extend(), forcing frequent reallocation of memory as the over-provisioned lists fill-up?)... from collections import defaultdict result = defaultdict(list) for d in data: for key, val in d.items(): result[key].extend(val) # TimeIt gives 1.7, for this small data set Is there a better way? more 'pythonic'? more concise? more performant? Alternatively, is there a more applicable data structure for this type of process? I'm sort of making a hash map Where each entry is guaranteed to have multiple collisions and so always be a list Edit: Timing for small data set added No timings for real world data, as I don't have access to it from here (err, ooops/sorry...)
I have a solution that seems to achieve good results when the lists in your dictionaries are long. Although it is not the case in your situation I still mention it. The idea as mentioned in my comments is to use append instead of extend in a first loop and then concatenate all lists in the resulting dictionary using itertools.chain. This is the function chain defined below. I also added @mrvol's code in my answer for comparison. Here is the code: import timeit import itertools from collections import defaultdict REPEAT = 5 # parameter repeat of timeit NUMBER = 100 # parameter number of timeit DICTS = 100 # number of dictionaries in our data KEYS = 12 # size of the dictionaries in our data LEN = 1_000 # size of the lists in our dictionaries data = [ { f'key{x}': [x * 100 + y for y in range(LEN)] for x in range(KEYS) } for _ in range(DICTS) ] def setdefault(): res = {} for d in data: for k, v in d.items(): res.setdefault(k, []).extend(v) return res def default(): res = defaultdict(list) for d in data: for k, v in d.items(): res[k].extend(v) return res def chain(): res = dict() for d in data: for k, v in d.items(): res.setdefault(k, []).append(v) for key in res: res[key] = list(itertools.chain.from_iterable(res[key])) return res # check that all produce the same result assert chain() == default() assert setdefault() == default() if __name__ == '__main__': for name, fun in [ ('default', default), ('setdefault', setdefault), ('chain', chain) ]: print(name, timeit.repeat( stmt=fun, repeat=REPEAT, number=NUMBER, globals={'data': data} )) All tests below have been made with python 3.10. Here are the results: default [3.0608591459999843, 2.9533347530000356, 3.204700414999934, 2.934139603999938, 2.854463246000023] setdefault [2.7814459759999863, 2.801596405000055, 2.796927817000096, 2.797430740999971, 2.795393482999998] chain [2.336767712999972, 2.33148793700002, 2.3378432869999415, 2.3322470529999464, 2.3312841169999956] If we increase LEN to 10_000 the difference is more impressive: default [33.63351462200012, 33.598145768999984, 33.83524595699987, 33.732721158000004, 33.785992579999856] setdefault [33.658237180000015, 33.51113319399997, 33.25321677000011, 33.23780467200004, 33.467723277999994] chain [23.47564513400016, 23.542697918999693, 23.520614959999875, 23.498439506000068, 23.582990831999723] But with LEN=100, the chain function is sligthly slower: default [0.20926385200004916, 0.23037391399998342, 0.21281876400007604, 0.21195233899993582, 0.21580142600009822] setdefault [0.22843905199988512, 0.2232434430000012, 0.2187928880000527, 0.22453147500004889, 0.21708852799997658] chain [0.24585279899997659, 0.23280389700016713, 0.2262972040000477, 0.24113659099998586, 0.2370573980001609] So, once again, chain should not fit your needs as your lists tend to be small with dozens of items, but I mention this solution for the sake of completeness.
4
1
70,073,711
2021-11-22
https://stackoverflow.com/questions/70073711/is-there-a-way-extension-to-exclude-certain-lines-from-format-on-save-in-vis
I am working on a Python project where the Formatting is set to Autopep8. Is there any extension or possible settings where can define to exclude a certain line(s) from formatting when VS is set to format modified code/file on save or with keyboard shortcuts?
Add the comment # noqa to the end of each line you want VS Code to leave alone. For instance, to prevent VS Code from changing RED = 0 YELLOW = 1 to RED = 0 YELLOW = 1 just do the following: RED = 0 # noqa YELLOW = 1 # noqa
5
5
70,073,310
2021-11-22
https://stackoverflow.com/questions/70073310/return-the-indices-of-false-values-in-a-boolean-array
I feel like this is a really simple question but I can't find the solution. Given a boolean array of true/false values, I need the output of all the indices with the value "false". I have a way to do this for true: test = [ True False True True] test1 = np.where(test)[0] This returns [0,2,3], in other words the corresponding index for each true value. Now I need to just get the same thing for the false, where the output would be [1]. Anyone know how?
Use np.where(~test) instead of np.where(test).
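A quick self-contained check (note the commas in the array literal, which the snippet in the question omits):
import numpy as np

test = np.array([True, False, True, True])
print(np.where(test)[0])   # [0 2 3]
print(np.where(~test)[0])  # [1]
np.flatnonzero(~test) gives the same result as a slightly shorter one-liner.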
4
10
70,072,867
2021-11-22
https://stackoverflow.com/questions/70072867/how-to-remove-spyders-vertical-line-on-right-side-of-editor-pane
Spyder has a text wrap feature which includes a gray line at the end of the line. How do you remove it?
On Spyder 5.0.0, you can change the line-length limit, or remove the vertical line entirely, by going to Settings -> Completion and linting -> Code style and formatting -> Line length -> Show vertical line at that length.
10
16
70,016,169
2021-11-18
https://stackoverflow.com/questions/70016169/how-can-i-copy-a-file-from-colab-to-github-repo-directly-it-is-possible-to-sav
How can I save a file generated by colab notebook directly to github repo? It can be assumed that the notebook was opened from the github repo and can be (the same notebook) saved to the same github repo.
Google Colaboratory's integration with GitHub tends to be lacking; however, you can run bash commands from inside the notebook. These allow you to access and modify any data generated. You'll need to generate a token on GitHub to allow access to the repository you want to save data to. See here for how to create a personal access token. Once you have that token, you run git commands from inside the notebook to clone the repository, add whatever files you need to, and then upload them. This post here provides an overview of how to do it in depth. That being said, this approach is kind of cumbersome, and it might be preferable to configure Colab to work over an SSH connection. Once you do that, you can mount a folder on the Colab instance to a folder on your local machine using sshfs. This will allow you to access the Colab instance as though it were any other folder on your machine, including opening it in your IDE, viewing files in a file browser, and cloning or updating git repositories. This goes more in depth on that. These are the best options I was able to identify, and I hope one of them can be made to work for you.
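A rough sketch of what those notebook cells might look like (<TOKEN>, <USER>, <REPO> and the file name are placeholders you would substitute):
!git clone https://<TOKEN>@github.com/<USER>/<REPO>.git
%cd <REPO>
!git config user.email "you@example.com"
!git config user.name "Your Name"
!cp /content/generated_file.csv .
!git add generated_file.csv
!git commit -m "Add file generated in Colab"
!git push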
8
10
70,068,407
2021-11-22
https://stackoverflow.com/questions/70068407/modulenotfounderror-no-module-named-wtforms-fields-html5
I have a flask app that uses wtforms. I have a file which does: from wtforms.fields.html5 import DateField, EmailField, TelField # rest of the file I just wanted to rebuild my docker container and now I have this error: ModuleNotFoundError: No module named 'wtforms.fields.html5' I have in my requirements.txt: flask flask-login flask_sqlalchemy Flask-Mail pyodbc requests waitress wtforms I tried to add flask_WTF but it did not fix it. Any idea what's going on? I thought of upgrading wtforms but it seems like I have the newest version: pip install wtforms Requirement already satisfied: wtforms in /usr/local/lib/python3.9/site-packages (3.0.0) Requirement already satisfied: MarkupSafe in /usr/local/lib/python3.9/site-packages (from wtforms) (2.0.1)
Downgrading to WTForms==2.3.3 solved the issue for me. Thread referenced here.
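If downgrading is not an option, my understanding is that WTForms 3.0 simply moved the HTML5 fields out of the html5 module into the core fields package, so the import would become (worth verifying against the 3.0 changelog for your exact fields):
from wtforms.fields import DateField, EmailField, TelField
Otherwise, pin the working version explicitly in requirements.txt:
wtforms==2.3.3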
18
8
70,071,231
2021-11-22
https://stackoverflow.com/questions/70071231/how-does-one-check-if-all-rows-in-a-dataframe-match-another-dataframe
Say you have 2 dataframes with the same columns. But say dataframe A has 10 rows, and dataframe B has 100 rows, but the 10 rows in dataframe A are in dataframe B. The 10 rows may not be in the same row numbers as dataframe B. How do we determine that those 10 rows in df A are fully contained in df B? For example. Say we have this for df A (only using 1 row) A | B | C 1 | 2 | 3 and df B is: A | B | C 2 | 5 | 5 3 | 2 | 7 1 | 2 | 3 5 | 1 | 5 How do we check that df A is contained in B? Assume that the rows will always be unique in the sense that there will always be a unique A+B combination
Is a Dataframe a subset of another: You can try solving this using merge and then comparison. The inner-join of the 2 dataframes would be the same as the smaller dataframe if the second one is a superset for the first. import pandas as pd # df1 - smaller dataframe, df2 - larger dataframe df1 = pd.DataFrame({'A ': [1], ' B ': [2], ' C': [3]}) df2 = pd.DataFrame({'A ': [2, 3, 1, 5], ' B ': [5, 2, 2, 1], ' C': [5, 7, 3, 5]}) df1.merge(df2).shape == df1.shape True If you have duplicates, then drop duplicates first - df1.merge(df2).drop_duplicates().shape == df1.drop_duplicates().shape More details here.
4
3
70,069,983
2021-11-22
https://stackoverflow.com/questions/70069983/how-can-i-use-value-counts-only-for-certain-values
I want to extract how many positive reviews by brand are in a dataset which includes reviews from thousands of products. I used this code and I got a table including percentaje of positive and non-positive reviews. How can I get only the percentage of positive reviews by brand? I only want the "True" results in positive_review. Thanks! df_reviews_ok.groupby("brand")["positive_review"].value_counts(normalize=True).mul(100).round(2) brand positive_review Belkin False 70.00 True 30.00 Bowers & Wilkins False 67.65 True 32.35 Corsair False 75.22 True 24.78 Definitive Technology False 68.29 True 31.71 Dell False 60.87 True 39.13 DreamWave False 100.00 House of Marley False 100.00 JBL False 58.43 True 41.57 Kicker True 66.67 False 33.33 Lenovo False 76.92 True 23.08 Logitech False 75.75 True 24.25 MEE audio False 53.80 True 46.20 Microsoft False 67.86 True 32.14 Midland False 72.09 True 27.91 Motorola False 72.92 True 27.08 Netgear False 72.30 True 27.70 Pny False 68.78 True 31.22 Power Acoustik False 100.00 SVS False 100.00 Samsung False 61.94 True 38.06 Sanus False 75.93 True 24.07 Sdi Technologies, Inc. False 55.63 True 44.37 Siriusxm False 73.33 True 26.67 Sling Media False 67.16 True 32.84 Sony False 55.40 True 44.60 Toshiba False 56.52 True 43.48 Ultimate Ears False 70.21 True 29.79 Verizon Wireless False 75.86 True 24.14 WD False 58.33 True 41.67 Yamaha False 61.15 True 38.85 Name: positive_review, dtype: float64
You can unstack the output and then select the True column: (df.groupby('brand') ['positive_review'].value_counts(normalize=True) .mul(100).round(2) .unstack(fill_value=0) [True] )
4
2
70,068,720
2021-11-22
https://stackoverflow.com/questions/70068720/jupyter-shell-commands-in-a-function
I'm attempting to create a function to load Sagemaker models within a jupyter notebook using shell commands. The problem arises when I try to store the function in a utilities.py file and source it for multiple notebooks. Here are the contents of the utilities.py file that I am sourcing in a jupyter lab notebook. def get_aws_sagemaker_model(model_loc): """ TO BE USED IN A JUPYTER NOTEBOOK extracts a sagemaker model that has ran and been completed deletes the copied items and leaves you with the model note that you will need to have the package installed with correct versioning for whatever model you have trained ie. if you are loading an XGBoost model, have XGBoost installed Args: model_loc (str) : s3 location of the model including file name Return: model: unpacked and loaded model """ import re import tarfile import os import pickle as pkl # extract the filename from beyond the last backslash packed_model_name = re.search("(.*\/)(.*)$" , model_loc)[2] # copy and paste model file locally command_string = "!aws s3 cp {model_loc} ." exec(command_string) # use tarfile to extract tar = tarfile.open(packed_model_name) # extract filename from tarfile unpacked_model_name = tar.getnames()[0] tar.extractall() tar.close() model = pkl.load(open(unpacked_model_name, 'rb')) # cleanup copied files and unpacked model os.remove(packed_model_name) os.remove(unpacked_model_name) return model The error occurs when trying to execute the command string: Traceback (most recent call last): File "/home/ec2-user/anaconda3/envs/env/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3444, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "/tmp/ipykernel_10889/996524724.py", line 1, in <module> model = get_aws_sagemaker_model("my-model-loc") File "/home/ec2-user/SageMaker/env/src/utilities/model_helper_functions.py", line 167, in get_aws_sagemaker_model exec(command_string) File "<string>", line 1 !aws s3 cp my-model-loc . ^ SyntaxError: invalid syntax It seems like jupyter isn't receiving the command before exec checks the syntax. Is there a way around this besides copying the function into each jupyter notebook that I use? Thank you!
You can use transform_cell method of IPython's shell to transform the IPython syntax into valid plain-Python: from IPython import get_ipython ipython = get_ipython() code = ipython.transform_cell('!ls') print(code) which will show: get_ipython().system('!ls') You can use that as input for exec: exec(code) Or directly: exec(ipython.transform_cell('!ls'))
4
5
70,064,901
2021-11-22
https://stackoverflow.com/questions/70064901/hydra-access-name-of-config-file-from-code
I have a config tree such as: config.yaml model/ model_a.yaml model_b.yaml model_c.yaml Where config.yaml contains: # @package _global_ defaults: - _self_ - model: model_a.yaml some_var: 42 I would like to access the name of the model config file used (either the default or overridden) from my python code or from the file itself. Something like: @hydra.main(...) def main(config): model_name = config.model.__filename__ or (from e.g. model_a.yaml) dropout: true dense_layers: 128 model_name: ${__filename__} Thanks in advance!
Take a look at the hydra.runtime.choices variable mentioned in the Configuring Hydra - Introduction page of the Hydra docs. This variable stores a mapping that describes each of the choices that Hydra has made in composing the output config. Using your example from above with model: model_a.yaml in the defaults list: # my_app.py import hydra from pprint import pprint from hydra.core.hydra_config import HydraConfig from omegaconf import OmegaConf @hydra.main(config_path=".", config_name="config") def main(config): hydra_cfg = HydraConfig.get() print("choice of model:") pprint(OmegaConf.to_container(hydra_cfg.runtime.choices)) main() At the command line: $ python3 app.py choices used: {'hydra/callbacks': None, 'hydra/env': 'default', 'hydra/help': 'default', 'hydra/hydra_help': 'default', 'hydra/hydra_logging': 'default', 'hydra/job_logging': 'default', 'hydra/launcher': 'basic', 'hydra/output': 'default', 'hydra/sweeper': 'basic', 'model': 'model_a.yaml'} As you can see, in this example the config option model_a.yaml is stored in the Hydra config at hydra_cfg.runtime.choices.model.
6
9
70,067,588
2021-11-22
https://stackoverflow.com/questions/70067588/valueerror-input-0-of-layer-sequential-is-incompatible-with-the-layer-expect
Here is a little cancer-detection project; it already has the dataset and Colab code, but I get an error when I execute model.fit(x_train, y_train, epochs=1000) The error is: ValueError: Input 0 of layer "sequential" is incompatible with the layer: expected shape=(None, 455, 30), found shape=(None, 30) When I look at the comments, other people are having this issue as well.
The Tensorflow model expects the first dimension of the input to be the batch size; in the model declaration, however, they set the input shape to be the same shape as the input data. To fix this you can change the input shape of the model to be the number of features in the dataset:
model.add(tf.keras.layers.Dense(256, input_shape=(x_train.shape[1],), activation='sigmoid'))
The number of rows in the .csv files will be the number of samples in your dataset. Since you're not using batches, the model will evaluate the whole dataset at once every epoch.
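A minimal sketch of the corrected model; the hidden and output layer sizes here are assumptions, since the question does not show the tutorial's full architecture:
import tensorflow as tf

# x_train and y_train come from the notebook; x_train is expected to have
# shape (num_samples, 30): one row per sample, 30 features
model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, input_shape=(x_train.shape[1],), activation='sigmoid'),
    tf.keras.layers.Dense(256, activation='sigmoid'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=1000)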
6
5
70,062,383
2021-11-22
https://stackoverflow.com/questions/70062383/cleaner-dockerfile-for-continuumio-miniconda3-environment
I have a Python3.9 / Quart / Hypercorn microservice which runs in a conda environment configured with an environment.yml file. The base image is continuumio/miniconda3. It took a lot of hacks to get this launching because of conda init issues etc. Is there a cleaner way to get a conda environment installed and activated within Docker without having to resort to conda run commands and override the default SHELL commands? FROM continuumio/miniconda3 COPY . /api/ WORKDIR /api/src # See this tutorial for details https://pythonspeed.com/articles/activate-conda-dockerfile/ RUN conda env create -f /api/conda_environment_production.yml SHELL ["conda", "run", "-n", "ms-amazing-environment", "/bin/bash", "-c"] ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "ms-amazing-environment", "hypercorn", "--bind", "0.0.0.0:5000", "QuartAPI:app"] EXPOSE 5000
An alternative approach is described here. Basically you can activate conda environment within bash script and run you commands there. entrypoint.sh: #!/bin/bash --login # The --login ensures the bash configuration is loaded, # Temporarily disable strict mode and activate conda: set +euo pipefail conda activate ms-amazing-environment # enable strict mode: set -euo pipefail # exec the final command: exec hypercorn --bind 0.0.0.0:5000 QuartAPI:app Dockerfile: FROM continuumio/miniconda3 COPY . /api/ WORKDIR /api/src # See this tutorial for details https://pythonspeed.com/articles/activate-conda-dockerfile/ RUN conda env create -f /api/conda_environment_production.yml # The code to run when container is started: COPY entrypoint.sh ./ ENTRYPOINT ["./entrypoint.sh"] EXPOSE 5000
4
5
70,048,874
2021-11-20
https://stackoverflow.com/questions/70048874/twitter-api-v2-403-forbidden-using-tweepy
I'm not able to authenticate using my twitter developer account even though my account active import tweepy consumer_key= 'XX1' consumer_secret= 'XX2' access_token= 'XX3' access_token_secret= 'XX4' auth = tweepy.OAuthHandler(consumer_key, consumer_secret) auth.set_access_token(access_token, access_token_secret) api = tweepy.API(auth) api.update_status("Hello Tweepy") i'm getting error : Forbidden: 403 Forbidden 453 - You currently have Essential access which includes access to Twitter API v2 endpoints only. If you need access to this endpoint, you’ll need to apply for Elevated access via the Developer Portal. You can learn more here: https://developer.twitter.com/en/docs/twitter-api/getting-started/about-twitter-api#v2-access-leve There is no option to move to Essential to Elevated on the developer portal. any suggestion ?
I found out that Essential access can use only Twitter API v2 The code should be import tweepy consumer_key= 'x' consumer_secret= 'xx' access_token= 'xxx' access_token_secret= 'xxxx' client = tweepy.Client(consumer_key= consumer_key,consumer_secret= consumer_secret,access_token= access_token,access_token_secret= access_token_secret) query = 'news' tweets = client.search_recent_tweets(query=query, max_results=10) for tweet in tweets.data: print(tweet.text) Thanks to https://twittercommunity.com/t/403-forbidden-using-tweepy/162435/2
4
5
70,060,263
2021-11-22
https://stackoverflow.com/questions/70060263/pytube-attributeerror-nonetype-object-has-no-attribute-span
I just downloaded pytube (version 11.0.1) and started with this code snippet from here: from pytube import YouTube YouTube('https://youtu.be/9bZkp7q19f0').streams.first().download() which gives this error: AttributeError Traceback (most recent call last) <ipython-input-29-0bfa08b87614> in <module> ----> 1 YouTube('https://youtu.be/9bZkp7q19f0').streams.first().download() ~/anaconda3/lib/python3.8/site-packages/pytube/__main__.py in streams(self) 290 """ 291 self.check_availability() --> 292 return StreamQuery(self.fmt_streams) 293 294 @property ~/anaconda3/lib/python3.8/site-packages/pytube/__main__.py in fmt_streams(self) 175 # https://github.com/pytube/pytube/issues/1054 176 try: --> 177 extract.apply_signature(stream_manifest, self.vid_info, self.js) 178 except exceptions.ExtractError: 179 # To force an update to the js file, we clear the cache and retry ~/anaconda3/lib/python3.8/site-packages/pytube/extract.py in apply_signature(stream_manifest, vid_info, js) 407 408 """ --> 409 cipher = Cipher(js=js) 410 411 for i, stream in enumerate(stream_manifest): ~/anaconda3/lib/python3.8/site-packages/pytube/cipher.py in __init__(self, js) 42 43 self.throttling_plan = get_throttling_plan(js) ---> 44 self.throttling_array = get_throttling_function_array(js) 45 46 self.calculated_n = None ~/anaconda3/lib/python3.8/site-packages/pytube/cipher.py in get_throttling_function_array(js) 321 322 array_raw = find_object_from_startpoint(raw_code, match.span()[1] - 1) --> 323 str_array = throttling_array_split(array_raw) 324 325 converted_array = [] ~/anaconda3/lib/python3.8/site-packages/pytube/parser.py in throttling_array_split(js_array) 156 # Handle functions separately. These can contain commas 157 match = func_regex.search(curr_substring) --> 158 match_start, match_end = match.span() 159 160 function_text = find_object_from_startpoint(curr_substring, match.span()[1]) AttributeError: 'NoneType' object has no attribute 'span' and I wonder why? Can anyone help me? I am running this snippet in an ipython console (IPython version 7.22.0) with Python 3.8.8 in a conda environment.
Found this issue, pytube v11.0.1. It's a little late for me, but if no one has submitted a fix tomorrow I'll check it out. In C:\Python38\lib\site-packages\pytube\parser.py change this line: 152: func_regex = re.compile(r"function\([^)]+\)") to this: 152: func_regex = re.compile(r"function\([^)]?\)") The issue is that the regex expects a function with an argument, but I guess YouTube added some source that includes non-parameterized functions.
16
17
70,059,794
2021-11-22
https://stackoverflow.com/questions/70059794/get-function-name-when-contextdecorator-is-used-as-a-decorator
I have the following context manager and decorator to time any given function or code block: import time from contextlib import ContextDecorator class timer(ContextDecorator): def __init__(self, label: str): self.label = label def __enter__(self): self.start_time = time.perf_counter() return self def __exit__(self, *exc): net_time = time.perf_counter() - self.start_time print(f"{self.label} took {net_time:.1f} seconds") return False You can use it as a context manager: with timer("my code block"): time.sleep(2) # my code block took 2.0 seconds You can also use it as a decorator: @timer("my_func") def my_func(): time.sleep(3) my_func() # my_func took 3.0 seconds The only thing I don't like is having to manually pass the function name as the label when it's used as a decorator. I would love for the decorator to automatically use the function name if no label is passed: @timer() def my_func(): time.sleep(3) my_func() # my_func took 3.0 seconds Is there any way to do this?
If you also override the __call__() method inherited from the ContextDecorator base class in your timer class, and add a unique default value to the initializer for the label argument, you can check for that and grab the function's __name__ when it's called: import time from contextlib import ContextDecorator class timer(ContextDecorator): def __init__(self, label: str=None): self.label = label def __call__(self, func): if self.label is None: # Label was not provided self.label = func.__name__ # Use function's name. return super().__call__(func) def __enter__(self): self.start_time = time.perf_counter() return self def __exit__(self, *exc): net_time = time.perf_counter() - self.start_time print(f"{self.label} took {net_time:.1f} seconds") return False @timer() def my_func(): time.sleep(3) my_func() # -> my_func took 3.0 seconds
7
10
70,059,478
2021-11-21
https://stackoverflow.com/questions/70059478/no-module-named-web3-even-though-i-installed-web3-py-i-am-using-a-venv
pip freeze output: aiohttp==3.8.1 aiosignal==1.2.0 alembic==1.7.5 aniso8601==9.0.1 async-timeout==4.0.1 attrs==21.2.0 base58==2.1.1 bitarray==1.2.2 certifi==2021.10.8 charset-normalizer==2.0.7 click==8.0.3 cytoolz==0.11.2 eth-abi==2.1.1 eth-account==0.5.6 eth-hash==0.3.2 eth-keyfile==0.5.1 eth-keys==0.3.3 eth-rlp==0.2.1 eth-typing==2.2.2 eth-utils==1.10.0 Flask==2.0.2 flask-marshmallow==0.14.0 Flask-Migrate==3.1.0 Flask-RESTful==0.3.9 Flask-Script==2.0.6 Flask-SQLAlchemy==2.5.1 frozenlist==1.2.0 hexbytes==0.2.2 idna==3.3 ipfshttpclient==0.8.0a2 itsdangerous==2.0.1 Jinja2==3.0.3 jsonschema==3.2.0 lru-dict==1.1.7 Mako==1.1.6 MarkupSafe==2.0.1 marshmallow==3.14.1 marshmallow-sqlalchemy==0.26.1 multiaddr==0.0.9 multidict==5.2.0 netaddr==0.8.0 parsimonious==0.8.1 protobuf==3.19.1 psycopg2==2.9.2 pycryptodome==3.11.0 pyrsistent==0.18.0 pytz==2021.3 requests==2.26.0 rlp==2.0.1 six==1.16.0 SQLAlchemy==1.4.27 toolz==0.11.2 typing_extensions==4.0.0 urllib3==1.26.7 varint==1.0.2 web3==5.25.0 websockets==9.1 Werkzeug==2.0.2 yarl==1.7.2 Python version: 3.10.0 I installed Web3 using the pip install web3 command in my venv. To create my venv, I did virtualenv -p python3 venv, so I don't think there is an issue with the virtual env. However in my test.py when I do the following: from web3 import Web3 I get a traceback error that there is no module named "web3"
Are you sourcing your venv before running test.py? If so, then try this:
source venv/bin/activate
pip uninstall web3==5.25.0
pip install web3==5.25.0
python test.py
Since your pip freeze output looks correct, try this as well:
which python
This should give you the python binary that is currently being used by your shell. (Check that the path you get is the venv one.)
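One more sanity check that avoids any PATH confusion: install through the interpreter itself, so the package is guaranteed to land in the same environment that runs test.py:
source venv/bin/activate
which python            # should point inside venv/
python -m pip install web3
python test.py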
4
4
70,058,288
2021-11-21
https://stackoverflow.com/questions/70058288/how-to-write-a-function-which-has-2-nested-functions-inside-and-let-computes-sum
I have to write a function which has 2 nested functions inside and computes only sum or difference of as many numbers as we want. Every equation end with "=". I wrote sth like this but it still doesn't work. What I am doing wrong? def calculator(x: int): def operation(operator: str): def calculation(y: int): while operator != "=": if operators[i] == "=": result -= digits[j] elif operators[i] == "+": result += digits[j] else: break return result return operator return calculator(result) Function should work like that: calculator(1)('+')(4)('-')(2)('=') The result is 3. I can't use any package import or global variables.
You can also use a dictionary to define your operations, and make a very readable and simple function with space for expansion: def calc(a: int): return lambda op: { '+': lambda b: calc(a+b), '-': lambda b: calc(a-b), '/': lambda b: calc(a/b), '*': lambda b: calc(a*b), }.get(op, a) print(calc(2)('+')(3)('-')(10)('/')(10)('*')(-100)('='))
4
4
70,057,975
2021-11-21
https://stackoverflow.com/questions/70057975/how-to-get-cosine-similarity-of-word-embedding-from-bert-model
I was interested in how to get the similarity of word embeddings in different sentences from a BERT model (in other words, the same word can have different meanings in different contexts). For example: sent1 = 'I like living in New York.' sent2 = 'New York is a prosperous city.' I want to get the value of cos(New York, New York) between sent1 and sent2: even though the phrase 'New York' is the same, it appears in different sentences. I got some intuition from https://discuss.huggingface.co/t/generate-raw-word-embeddings-using-transformer-models-like-bert-for-downstream-process/2958/2 But I still do not know which layer's embedding I need to extract and how to calculate the cosine similarity for my example above. Thanks in advance for any suggestions!
Okay let's do this. First you need to understand that BERT has 13 layers. The first layer is basically just the embedding layer that BERT gets passed during the initial training. You can use it but probably don't want to since that's essentially a static embedding and you're after a dynamic embedding. For simplicity I'm going to only use the last hidden layer of BERT. Here you're using two words: "New" and "York". You could treat this as one during preprocessing and combine it into "New-York" or something if you really wanted. In this case I'm going to treat it as two separate words and average the embedding that BERT produces. This can be described in a few steps: Tokenize the inputs Determine where the tokenizer has word_ids for New and York (suuuuper important) Pass through BERT Average Cosine similarity First, what you need to import: from transformers import AutoTokenizer, AutoModel Now we can create our tokenizer and our model: tokenizer = AutoTokenizer.from_pretrained('bert-base-cased') model = model = AutoModel.from_pretrained('bert-base-cased', output_hidden_states=True).eval() Make sure to use the model in evaluation mode unless you're trying to fine tune! Next we need to tokenize (step 1): tok1 = tokenizer(sent1, return_tensors='pt') tok2 = tokenizer(sent2, return_tensors='pt') Step 2. Need to determine where the index of the words match # This is where the "New" and "York" can be found in sent1 sent1_idxs = [4, 5] sent2_idxs = [0, 1] tok1_ids = [np.where(np.array(tok1.word_ids()) == idx) for idx in sent1_idxs] tok2_ids = [np.where(np.array(tok2.word_ids()) == idx) for idx in sent2_idxs] The above code checks where the word_ids() produced by the tokenizer overlap the word indices from the original sentence. This is necessary because the tokenizer splits rare words. So if you have something like "aardvark", when you tokenize it and look at it you actually get this: In [90]: tokenizer.convert_ids_to_tokens( tokenizer('aardvark').input_ids) Out[90]: ['[CLS]', 'a', '##ard', '##var', '##k', '[SEP]'] In [91]: tokenizer('aardvark').word_ids() Out[91]: [None, 0, 0, 0, 0, None] Step 3. Pass through BERT Now we grab the embeddings that BERT produces across the token ids that we've produced: with torch.no_grad(): out1 = model(**tok1) out2 = model(**tok2) # Only grab the last hidden state states1 = out1.hidden_states[-1].squeeze() states2 = out2.hidden_states[-1].squeeze() # Select the tokens that we're after corresponding to "New" and "York" embs1 = states1[[tup[0][0] for tup in tok1_ids]] embs2 = states2[[tup[0][0] for tup in tok2_ids]] Now you will have two embeddings. Each is shape (2, 768). The first size is because you have two words we're looking at: "New" and "York. The second size is the embedding size of BERT. Step 4. Average Okay, so this isn't necessarily what you want to do but it's going to depend on how you treat these embeddings. What we have is two (2, 768) shaped embeddings. You can either compare New to New and York to York or you can combine New York into an average. I'll just do that but you can easily do the other one if it works better for your task. avg1 = embs1.mean(axis=0) avg2 = embs2.mean(axis=0) Step 5. Cosine sim Cosine similarity is pretty easy using torch: torch.cosine_similarity(avg1.reshape(1,-1), avg2.reshape(1,-1)) # tensor([0.6440]) This is good! They point in the same direction. They're not exactly 1 but that can be improved in several ways. 
You can fine tune on a training set. You can experiment with averaging different layers rather than just the last hidden layer like I did. You can try to be creative in combining New and York; I took the average but maybe there's a better way for your exact needs.
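One small completeness note: besides the transformers imports shown at the top, the snippets above also assume numpy and torch are imported:
import numpy as np
import torch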
6
23
70,055,063
2021-11-21
https://stackoverflow.com/questions/70055063/how-is-memory-handled-once-touched-for-the-first-time-in-numpy-zeros
I recently saw that when creating a numpy array via np.empty or np.zeros, the memory of that numpy array is not actually allocated by the operating system as discussed in this answer (and this question), because numpy utilizes calloc to allocate the array's memory. In fact, the OS isn't even "really" allocating that memory until you try to access it. Therefore, l = np.zeros(2**28) does not increase the utilized memory the system reports, e.g., in htop. Only once I touch the memory, for instance by executing np.add(l, 0, out=l) the utilized memory is increased. Because of that behaviour I have got a couple of questions: 1. Is touched memory copied under the hood? If I touch chunks of the memory only after a while, is the content of the numpy array copied under the hood by the operating system to guarantee that the memory is contiguous? i = 100 f[:i] = 3 while True: ... # Do stuff f[i] = ... # Once the memory "behind" the already allocated chunk of memory is filled # with other stuff, does the operating system reallocate the memory and # copy the already filled part of the array to the new location? i = i + 1 2. Touching the last element As the memory of the numpy array is continguous in memory, I tought f[-1] = 3 might require the enitre block of memory to be allocated (without touching the entire memory). However, it does not, the utilized memory in htop does not increase by the size of the array. Why is that not the case?
OS isn't even "really" allocating that memory until you try to access it This is dependent on the target platform (typically the OS and its configuration). Some platforms directly allocate pages in physical memory (e.g. AFAIK the Xbox does, as well as some embedded platforms). However, mainstream platforms do indeed defer the physical allocation. 1. Is touched memory copied under the hood? If I touch chunks of the memory only after a while, is the content of the numpy array copied under the hood by the operating system to guarantee that the memory is contiguous? Allocations are performed in virtual memory. When a first touch is done on a given memory page (a chunk of fixed size, e.g. 4 KiB), the OS maps the virtual page to a physical one. So only one page will be physically mapped when you set only one item of the array (unless the item crosses two pages, which only happens in pathological cases). The physical pages may not be contiguous for a contiguous set of virtual pages. However, this is not a problem and you should not care about it. This is mainly the job of the OS. That being said, modern processors have a dedicated unit called the TLB to translate virtual addresses (the ones you could see with a debugger) to physical ones (since this translation is relatively expensive and performance critical). The content of the Numpy array is not reallocated nor copied thanks to paging (at least from the user's point of view, i.e. in virtual memory). 2. Touching the last element I thought f[-1] = 3 might require the entire block of memory to be allocated (without touching the entire memory). However, it does not, the utilized memory in htop does not increase by the size of the array. Why is that not the case? Only the last page in virtual memory associated with the Numpy array is mapped, thanks to paging. This is why you do not see a big change in htop. However, you should see a slight change (the size of a page on your platform) if you look carefully. Otherwise, it means the page has already been mapped due to previous recycled allocations. Indeed, the allocation library can preallocate memory areas to speed up allocations (by reducing the number of slow requests to the OS). The library could also keep the memory mapped when it is freed by Numpy in order to speed up the next allocations (since the memory does not have to be unmapped and then mapped again). This is unlikely to occur for huge arrays in practice because the impact on memory consumption would be too high.
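A small demonstration sketch of the lazy mapping described above (it assumes the psutil package is installed; the exact numbers will vary by platform and allocator):
import os
import numpy as np
import psutil

def rss_mib():
    # resident set size of this process, in MiB
    return psutil.Process(os.getpid()).memory_info().rss // 2**20

before = rss_mib()
l = np.zeros(2**28)          # ~2 GiB of float64 in virtual memory
print(rss_mib() - before)    # tiny: almost no physical pages are mapped yet
l[-1] = 3                    # first touch of the last page only
print(rss_mib() - before)    # still tiny (roughly one page)
l += 1                       # touches every page
print(rss_mib() - before)    # now close to 2048 MiB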
4
3
70,045,339
2021-11-20
https://stackoverflow.com/questions/70045339/what-is-analog-of-setwindowflags-in-pyqt6
While trying to migrate pyqt5 code to pyqt6, i have occured a problem with setWindowFlags: self.setWindowFlags(Qt.WindowStaysOnTopHint) returns an error: AttributeError: type object 'Qt' has no attribute 'WindowStaysOnTopHint'. So wat is the similar in PyQt6?
QtCore.Qt.WindowType.WindowStaysOnTopHint
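For example, porting the PyQt5 line from the question (assuming self is a QWidget or QMainWindow as before):
from PyQt6.QtCore import Qt

self.setWindowFlags(Qt.WindowType.WindowStaysOnTopHint)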
5
12
70,044,476
2021-11-20
https://stackoverflow.com/questions/70044476/pandas-unique-values-per-row-variable-number-of-columns-with-data
Consider the below dataframe: import pandas as pd from numpy import nan data = [ (111, nan, nan, 111), (112, 112, nan, 115), (113, nan, nan, nan), (nan, nan, nan, nan), (118, 110, 117, nan), ] df = pd.DataFrame(data, columns=[f'num{i}' for i in range(len(data[0]))]) num0 num1 num2 num3 0 111.0 NaN NaN 111.0 1 112.0 112.0 NaN 115.0 2 113.0 NaN NaN NaN 3 NaN NaN NaN NaN 4 118.0 110.0 117.0 NaN Assuming my index is unique, I'm looking to retrieve the unique values per index row, to an output like the one below. I wish to keep the empty rows. num1 num2 num3 0 111.0 NaN NaN 1 112.0 115.0 NaN 2 113.0 NaN NaN 3 NaN NaN NaN 4 110.0 117.0 118.0 I have a working, albeit slow, solution, see below. The output number order is not relevant, as long all values are presented to the leftmost column and nulls to the right. I'm looking for best practices and potential ideas to speed up the code. Thank you in advance. def arrange_row(row): values = list(set(row.dropna(axis=1).values[0])) values = [nan] if not values else values series = pd.Series(values, index=[f"num{i}" for i in range(1, len(values)+1)]) return series df.groupby(level=-1).apply(arrange_row).unstack(level=-1) pd.version == '1.2.3'
Another option, albeit longer: outcome = (df.melt(ignore_index= False) # keep the index as a tracker .reset_index() # get the unique rows .drop_duplicates(subset=['index','value']) .dropna() # use this to build the new column names .assign(counter = lambda df: df.groupby('index').cumcount() + 1) .pivot('index', 'counter', 'value') .add_prefix('num') .reindex(df.index) .rename_axis(columns=None) ) outcome num1 num2 num3 0 111.0 NaN NaN 1 112.0 115.0 NaN 2 113.0 NaN NaN 3 NaN NaN NaN 4 118.0 110.0 117.0 If you want it to exactly match your output, you can dump it into numpy, sort and return to pandas: pd.DataFrame(np.sort(outcome, axis = 1), columns = outcome.columns) num1 num2 num3 0 111.0 NaN NaN 1 112.0 115.0 NaN 2 113.0 NaN NaN 3 NaN NaN NaN 4 110.0 117.0 118.0 Another option is to do the sorting within numpy before reshaping in Pandas: (pd.DataFrame(np.sort(df, axis = 1)) .apply(pd.unique, axis=1) .apply(pd.Series) .dropna(how='all',axis=1) .set_axis(['num1', 'num2','num3'], axis=1) ) num1 num2 num3 0 111.0 NaN NaN 1 112.0 115.0 NaN 2 113.0 NaN NaN 3 NaN NaN NaN 4 110.0 117.0 118.0
4
2
70,002,709
2021-11-17
https://stackoverflow.com/questions/70002709/how-to-apply-transaction-logic-in-fastapi-realworld-example-app
I am using nsidnev/fastapi-realworld-example-app. I need to apply transaction logic to this project. In one API, I am calling a lot of methods from repositories and doing updating, inserting and deleting operations in many tables. If there is an exception in any of these operations, how can I roll back changes? (Or if everything is correct then commit.)
nsidnev/fastapi-realworld-example-app is using asyncpg. There are two ways to use Transactions. 1. async with statement async with conn.transaction(): await repo_one.update_one(...) await repo_two.insert_two(...) await repo_three.delete_three(...) # This automatically rolls back the transaction: raise Exception 2. start, rollback, commit statements tx = conn.transaction() await tx.start() try: await repo_one.update_one(...) await repo_two.insert_two(...) await repo_three.delete_three(...) except: await tx.rollback() raise else: await tx.commit() Getting the connection conn in routes Inject conn: Connection = Depends(_get_connection_from_pool). from asyncpg.connection import Connection from fastapi import Depends from app.api.dependencies.database import _get_connection_from_pool @router.post( ... ) async def create_new_article( ... conn: Connection = Depends(_get_connection_from_pool), # Add this ) -> ArticleInResponse:
9
4
70,041,386
2021-11-19
https://stackoverflow.com/questions/70041386/make-a-new-column-based-on-group-by-conditionally-in-python
I have a dataframe: id group x1 A x1 B x2 A x2 A x3 B I would like to create a new column new_group with the following conditions: If there are 2 unique group values within in the same id such as group A and B from rows 1 and 2, new_group should have "two" as its value. If there are only 1 unique group values within the same id such as group A from rows 3 and 4, the value for new_group should be that same group A. Otherwise, specify B. This is what I am looking for: id group new_group x1 A two x1 B two x2 A A x2 A A x3 B B I tried something like this but don't know how to capture all the if-else conditions df.groupby("id")["group"].filter(lambda x: x.nunique() == 2)
Almost there. Change filter to transform and use a condition: df['new_group'] = df.groupby("id")["group"] \ .transform(lambda x: 'two' if (x.nunique() == 2) else x) print(df) # Output: id group new_group 0 x1 A two 1 x1 B two 2 x2 A A 3 x2 A A 4 x3 B B
4
4
70,039,745
2021-11-19
https://stackoverflow.com/questions/70039745/turning-a-dataframe-into-a-nested-dictionary
I have a dataframe like below. How can I get it into a nested dictionary like Guest GuestCode ProductName Quantity Invoice No 0 Maria NaN Pro Plus Cream 2 OBFL22511 1 Maria NaN Soothe Stress Cream 1 OBFL22511 2 Sanchez OBFLG3108 Pro Plus Cream 1 OBFL22524 3 Karen OBFLG1600 Soothe Stress Cream 1 OBFL22525 4 Karen OBFLG1600 Pro Plus Cream 1 OBFL22525 I want the dataframe converted into the following dictionary format: {"Guest": {"GuestCode": {"Invoice No": {"ProductName": Quantity}}} For example: {"Karen": {"OBFLG160": {"OBFL22525": {"Soothe Stress Cream": 1, "Pro Plus Cream": 1}}} I tried this: for index, row in df.iterrows(): my_dict[row['Guest']] = {row['GuestCode']: {row['Invoice No']: {row['ProductName']}}} But it does not list all the items if a guest has multiple products. I also tried and played around with this, but don't really understand dictionary comprehension: d = {k: v.groupby('GuestCode')['Invoice No','ProductName' , 'Quantity'].apply(list).to_dict() for k, v in df.groupby('Guest')}
my_dict = {k[0]: {k[1]: {k[2]: {p: q for p, q in row[['ProductName', 'Quantity']].values}}} for k, row in df.fillna('<NA>').groupby(['Guest', 'GuestCode', 'Invoice No'])} Output: >>> my_dict {'Karen': {'OBFLG1600': {'OBFL22525': {'Soothe Stress Cream': 1, 'Pro Plus Cream': 1}}}, 'Maria': {'<NA>': {'OBFL22511': {'Pro Plus Cream': 2, 'Soothe Stress Cream': 1}}}, 'Sanchez': {'OBFLG3108': {'OBFL22524': {'Pro Plus Cream': 1}}}} >>> import json >>> print(json.dumps(my_dict, indent=2)) { "Karen": { "OBFLG1600": { "OBFL22525": { "Soothe Stress Cream": 1, "Pro Plus Cream": 1 } } }, "Maria": { "<NA>": { "OBFL22511": { "Pro Plus Cream": 2, "Soothe Stress Cream": 1 } } }, "Sanchez": { "OBFLG3108": { "OBFL22524": { "Pro Plus Cream": 1 } } } }
4
4
70,031,766
2021-11-19
https://stackoverflow.com/questions/70031766/receiving-stream-encountered-http-error-403-when-using-twitter-api-what-is-c
I am very new to using the Twitter API and was testing some Python code (below) from tweepy import OAuthHandler from tweepy import Stream import twitter_credentials class StdOutListener(Stream): def on_data(self, data): print(data) return True def on_error(self, status): print(status) twitter_stream = StdOutListener( twitter_credentials.CONSUMER_KEY, twitter_credentials.CONSUMER_KEY_SECERET, twitter_credentials.ACCESS_TOKEN, twitter_credentials.ACCESS_TOKEN_SECERET ) twitter_stream.filter(track=['#bitcoin']) but whenever I try to run it, it would give this repeating error "Stream encountered HTTP error: 403". I checked the Twitter API response codes and error 403 was listed as the request is forbidden. Here are some troubleshooting steps I already took: tried new access keys/consumer keys tried to create a new developer account uninstall and reinstall tweepy None of these worked for me. So what is causing this error and how can I fix it? Thank you.
If you have Essential access, you won’t be able to access Twitter API v1.1. See the FAQ section about this in Tweepy's documentation for more information.
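If the goal is still to filter the live stream with Essential access, newer Tweepy releases (4.6 and later, so newer than the version that produced the 403 above) expose the v2 filtered stream through tweepy.StreamingClient. A rough sketch, assuming a recent Tweepy and a bearer token from the developer portal:
import tweepy

class HashtagListener(tweepy.StreamingClient):
    def on_tweet(self, tweet):
        print(tweet.text)

listener = HashtagListener("YOUR_BEARER_TOKEN")
listener.add_rules(tweepy.StreamRule("#bitcoin"))
listener.filter()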
5
1
70,027,573
2021-11-18
https://stackoverflow.com/questions/70027573/error-bars-not-displaying-seaborn-relplot
Using this code I created a seaborn plot to visualize multiple variables in a long format dataset. import pandas as pd import seaborn as sns data = {'Patient ID': [11111, 11111, 11111, 11111, 22222, 22222, 22222, 22222, 33333, 33333, 33333, 33333, 44444, 44444, 44444, 44444, 55555, 55555, 55555, 55555], 'Lab Attribute': ['% Saturation- Iron', 'ALK PHOS', 'ALT(SGPT)', 'AST (SGOT)', '% Saturation- Iron', 'ALK PHOS', 'ALT(SGPT)', 'AST (SGOT)', '% Saturation- Iron', 'ALK PHOS', 'ALT(SGPT)', 'AST (SGOT)', '% Saturation- Iron', 'ALK PHOS', 'ALT(SGPT)', 'AST (SGOT)', '% Saturation- Iron', 'ALK PHOS', 'ALT(SGPT)', 'AST (SGOT)'], 'Baseline': [46.0, 94.0, 21.0, 18.0, 56.0, 104.0, 31.0, 12.0, 50.0, 100.0, 33.0, 18.0, 46.0, 94.0, 21.0, 18.0, 46.0, 94.0, 21.0, 18.0], '3 Month': [33.0, 92.0, 19.0, 25.0, 33.0, 92.0, 21.0, 11.0, 33.0, 102.0, 18.0, 17.0, 23.0, 82.0, 13.0, 17.0, 23.0, 82.0, 13.0, 17.0], '6 Month': [34.0, 65.0, 10.0, 14.0, 34.0, 65.0, 10.0, 14.0, 34.0, 65.0, 10.0, 14.0, 34.0, 65.0, 10.0, 14.0, 34.0, 65.0, 10.0, 14.0]} df = pd.DataFrame(data) # reshape the dataframe dfm = df_labs.melt(id_vars=['Patient_ID', 'Lab_Attribute'], var_name='Months') # change the Months values to numeric dfm.Months = dfm.Months.map({'Baseline': 0, '3 Month': 3, '6 Month': 6}) # plot a figure level line plot with seaborn p = sns.relplot(data=dfm, col='Lab_Attribute', x='Months', y='value', hue='Patient_ID', kind='line', col_wrap=5, marker='o', palette='husl',facet_kws={'sharey': False, 'sharex': True},err_style="bars", ci=95,) plt.savefig('gmb_nw_labs.jpg') The plots work great, though for some reason the error bars are not displaying, even after adding: err_style="bars", ci=95, to sns.replot() p = sns.relplot(data=dfm, col='Lab_Attribute', x='Months', y='value', hue='Patient_ID', kind='line', col_wrap=5, marker='o', palette='husl',facet_kws={'sharey': False, 'sharex': True},err_style="bars", ci=95,) Can anyone tell me why this is, are there maybe just too few data points in my data set?
Each datapoint is separated by hue, so there are no error bars because no data is being combined. Remove hue='Patient ID' to only show the mean line and error bars. Alternatively, seaborn.lineplot can be mapped onto the seaborn.relplot. By not specifying hue the API will create the error bars linestyle='' is specified so the mean line is not drawn Tested in python 3.8.12, pandas 1.3.4, matplotlib 3.4.3, seaborn 0.11.2 data = {'Patient ID': [11111, 11111, 11111, 11111, 22222, 22222, 22222, 22222, 33333, 33333, 33333, 33333, 44444, 44444, 44444, 44444, 55555, 55555, 55555, 55555], 'Lab Attribute': ['% Saturation- Iron', 'ALK PHOS', 'ALT(SGPT)', 'AST (SGOT)', '% Saturation- Iron', 'ALK PHOS', 'ALT(SGPT)', 'AST (SGOT)', '% Saturation- Iron', 'ALK PHOS', 'ALT(SGPT)', 'AST (SGOT)', '% Saturation- Iron', 'ALK PHOS', 'ALT(SGPT)', 'AST (SGOT)', '% Saturation- Iron', 'ALK PHOS', 'ALT(SGPT)', 'AST (SGOT)'], 'Baseline': [46.0, 94.0, 21.0, 18.0, 56.0, 104.0, 31.0, 12.0, 50.0, 100.0, 33.0, 18.0, 46.0, 94.0, 21.0, 18.0, 46.0, 94.0, 21.0, 18.0], '3 Month': [33.0, 92.0, 19.0, 25.0, 33.0, 92.0, 21.0, 11.0, 33.0, 102.0, 18.0, 17.0, 23.0, 82.0, 13.0, 17.0, 23.0, 82.0, 13.0, 17.0], '6 Month': [34.0, 65.0, 10.0, 14.0, 34.0, 65.0, 10.0, 14.0, 34.0, 65.0, 10.0, 14.0, 34.0, 65.0, 10.0, 14.0, 34.0, 65.0, 10.0, 14.0]} df = pd.DataFrame(data) # reshape the dataframe dfm = df.melt(id_vars=['Patient ID', 'Lab Attribute'], var_name='Months') # change the Months values to numeric dfm.Months = dfm.Months.map({'Baseline': 0, '3 Month': 3, '6 Month': 6}) # plot a figure level line plot with seaborn p = sns.relplot(data=dfm, col='Lab Attribute', x='Months', y='value', hue='Patient ID', kind='line', col_wrap=3, marker='o', palette='husl', facet_kws={'sharey': False, 'sharex': True}, err_style="bars", ci=95,) p.map(sns.lineplot, 'Months', 'value', linestyle='', err_style="bars", color='k') Original implementation without hue='Patient ID' p = sns.relplot(data=dfm, col='Lab Attribute', x='Months', y='value', kind='line', col_wrap=3, marker='o', palette='husl', facet_kws={'sharey': False, 'sharex': True}, err_style="bars", ci=95)
4
3
70,024,746
2021-11-18
https://stackoverflow.com/questions/70024746/how-to-draw-custom-error-bars-with-plotly
I have a data frame with one column that describes y-axis values and two more columns that describe the upper and lower bounds of a confidence interval. I would like to use those values to draw error bars using plotly. Now I am aware that plotly offers the possibility to draw confidence intervals (using the error_y and error_y_minus keyword-arguments) but not in the logic that I need, because those keywords are interpreted as additions and subtractions from the y-values. Instead, I would like to directly define the upper and lower positions: For instance, how could I use plotly and this example data frame import pandas as pd import plotly.express as px df = pd.DataFrame({'x':[0, 1, 2], 'y':[6, 10, 2], 'ci_upper':[8,11,2.5], 'ci_lower':[5,9,1.5]}) to produce a plot like this?
used Plotly Express to create bar chart used https://plotly.com/python/error-bars/#asymmetric-error-bars for generation of error bars using appropriate subtractions with your required outcome import pandas as pd import plotly.express as px df = pd.DataFrame( {"x": [0, 1, 2], "y": [6, 10, 2], "ci_upper": [8, 11, 2.5], "ci_lower": [5, 9, 1.5]} ) px.bar(df, x="x", y="y").update_traces( error_y={ "type": "data", "symmetric": False, "array": df["ci_upper"] - df["y"], "arrayminus": df["y"] - df["ci_lower"], } )
4
6
70,021,547
2021-11-18
https://stackoverflow.com/questions/70021547/using-pytorch-tensors-with-scikit-learn
Can I use PyTorch tensors instead of NumPy arrays while working with scikit-learn? I tried some methods from scikit-learn like train_test_split and StandardScalar, and it seems to work just fine, but is there anything I should know when I'm using PyTorch tensors instead of NumPy arrays? According to this question on https://scikit-learn.org/stable/faq.html#how-can-i-load-my-own-datasets-into-a-format-usable-by-scikit-learn : numpy arrays or scipy sparse matrices. Other types that are convertible to numeric arrays such as pandas DataFrame are also acceptable. Does that mean using PyTorch tensors is completely safe?
I don't think PyTorch tensors are directly supported by scikit-learn. But you can always get the underlying numpy array from a PyTorch tensor: my_nparray = my_tensor.numpy() and then use it with scikit-learn functions.
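A short round-trip sketch (the shapes are made up purely for illustration):
import torch
from sklearn.model_selection import train_test_split

X = torch.randn(100, 5)
y = torch.randint(0, 2, (100,))

# the scikit-learn side works on the numpy views
X_train, X_test, y_train, y_test = train_test_split(X.numpy(), y.numpy(), test_size=0.2)

# back to tensors for the PyTorch side
X_train = torch.from_numpy(X_train)
One caveat: a tensor that lives on the GPU or tracks gradients needs my_tensor.detach().cpu().numpy() first.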
6
7
70,019,337
2021-11-18
https://stackoverflow.com/questions/70019337/how-do-i-group-a-pandas-column-to-create-a-new-percentage-column
I've got a pandas dataframe that looks like this: mydict ={ 'person': ['Jenny', 'Jenny', 'David', 'David', 'Max', 'Max'], 'fruit': ['Apple', 'Orange', 'Apple', 'Orange', 'Apple', 'Orange'], 'eaten': [25, 75, 15, 5, 10, 10] } df = pd.DataFrame(mydict) person fruit eaten Jenny Apple 25 Jenny Orange 75 David Apple 15 David Orange 5 Max Apple 10 Max Orange 10 Which I'd like to convert into: person apple_percentage orange_percentage Jenny 0.25 0.75 David 0.75 0.25 Max 0.50 0.50 I'm guessing that I'll have to use groupby in some capacity to do this, but can't figure out a clean Pythonic way of doing so?
Use DataFrame.pivot with division by sums: df = df.pivot('person','fruit','eaten').add_suffix('_percentage') df = df.div(df.sum(axis=1), axis=0) print (df) fruit Apple_percentage Orange_percentage person David 0.75 0.25 Jenny 0.25 0.75 Max 0.50 0.50
4
6
70,003,829
2021-11-17
https://stackoverflow.com/questions/70003829/poetry-installed-but-poetry-command-not-found
I've had a million and one issues with Poetry recently. I got it fully installed and working yesterday, but after a restart of my machine I'm back to having issues with it ;( Is there anyway to have Poetry consistently recognised in my Terminal, even after reboot? System Specs: Windows 10, Visual Studio Code, Bash - WSL Ubuntu CLI, Python 3.8. Terminal: me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/workers-python/workers/data_simulator/src$ poetry run python3 cli.py poetry: command not found me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/workers-python/workers/data_simulator/src$ curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python3 Retrieving Poetry metadata This installer is deprecated. Poetry versions installed using this script will not be able to use 'self update' command to upgrade to 1.2.0a1 or later. Latest version already installed. me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/workers-python/workers/data_simulator/src$ poetry run python3 cli.py poetry: command not found me@PF2DCSXD:/mnt/c/Users/me/Documents/GitHub/workers-python/workers/data_simulator/src$ Please let me know if there is anything else I can add to post to help further clarify.
When I run this after a shutdown of the bash terminal: export PATH="$HOME/.poetry/bin:$PATH" the poetry command is recognised again. However, this isn't enough on its own; every time I shut down the terminal I need to re-run the export, so it needs to be saved in a shell startup file.
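For example, assuming bash is the login shell (as in the WSL Ubuntu setup above):
echo 'export PATH="$HOME/.poetry/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc    # or just open a new terminal
poetry --version    # should now resolve in every new session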
88
48
70,015,750
2021-11-18
https://stackoverflow.com/questions/70015750/python-typing-issue-for-child-classes
This question is to clarify my doubts related to python typing from typing import Union class ParentClass: parent_prop = 1 class ChildA(ParentClass): child_a_prop = 2 class ChildB(ParentClass): child_b_prop = 3 def method_body(val) -> ParentClass: if val: return ChildA() else: return ChildB() def another_method() -> ChildA: return method_body(True) print(another_method().child_a_prop) In the above piece of code, the linting tool I used is printing error as below error: Incompatible return value type (got "ParentClass", expected "ChildA") (where I do method_body(True)) I have also set the method_body return type as Union[ChildA, ChildB]. This will result error: Incompatible return value type (got "Union[ChildA, ChildB]", expected "ChildA") I am looking for a better way to do this. If anyone knows the solution, your help will be very much appreciated.
mypy does not do runtime analysis so it cannot guess that calling method_body with argument True will always result in a ChildA object. So the error it produces does make sense. You have to guide mypy in some way to tell him that you know what you are doing and that another_method indeed produces a ChildA object when called with argument True. One is to use a cast: from typing import cast def another_method() -> ChildA: return cast(ChildA, method_body(True)) Another one is to add an assertion: def another_method() -> ChildA: result = method_body(True) assert isinstance(result, ChildA) return result The difference between the two is that the cast does not have any runtime implication. You can think of it as a comment put here to guide mypy in its checks, but the cast function only returns its second parameter, ie, here is the body of cast: def cast(typ, val): return val Whereas the assert can naturally raise an AssertionError error (not in that case obviously, but in general).
4
6
70,015,993
2021-11-18
https://stackoverflow.com/questions/70015993/what-is-the-difference-between-concatenate-and-stack-in-numpy
I am bit confused between both the methods : concatenate and stack The concatenate and stack provides exactly same output , what is the difference between both of them ? Using : concatenate import numpy as np my_arr_1 = np.array([ [1,4] , [2,7] ]) my_arr_2 = np.array([ [0,5] , [3,8] ]) join_array=np.concatenate((my_arr_1,my_arr_2),axis=0) print(join_array) Using : stack import numpy as np my_arr_1 = np.array([ [1,4] , [2,7] ]) my_arr_2 = np.array([ [0,5] , [3,8] ]) join1_array=np.stack((my_arr_1,my_arr_2),axis=0) print(join1_array) Output for both is same : [[[1 4] [2 7]] [[0 5] [3 8]]]
In [160]: my_arr_1 = np.array([ [1,4] , [2,7] ]) ...: my_arr_2 = np.array([ [0,5] , [3,8] ]) ...: ...: join_array=np.concatenate((my_arr_1,my_arr_2),axis=0) In [161]: join_array Out[161]: array([[1, 4], [2, 7], [0, 5], [3, 8]]) In [162]: _.shape Out[162]: (4, 2) concatenate joined the 2 arrays on an existing axis, so the (2,2) become (4,2). In [163]: join1_array=np.stack((my_arr_1,my_arr_2),axis=0) In [164]: join1_array Out[164]: array([[[1, 4], [2, 7]], [[0, 5], [3, 8]]]) In [165]: _.shape Out[165]: (2, 2, 2) stack joined them on a new axis. It actually made them both (1,2,2) shape, and then used concatenate. The respective docs should make this clear.
5
6
70,015,634
2021-11-18
https://stackoverflow.com/questions/70015634/how-to-test-async-function-using-pytest
@pytest.fixture def d_service(): c = DService() return c # @pytest.mark.asyncio # tried it too async def test_get_file_list(d_service): files = await d_service.get_file_list('') print(files) However, it got the following error? collected 0 items / 1 errors =================================== ERRORS ==================================== ________________ ERROR collecting tests/e2e_tests/test_d.py _________________ ..\..\..\..\..\anaconda3\lib\site-packages\pluggy\__init__.py:617: in __call__ return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs) ..\..\..\..\..\anaconda3\lib\site-packages\pluggy\__init__.py:222: in _hookexec return self._inner_hookexec(hook, methods, kwargs) ..\..\..\..\..\anaconda3\lib\site-packages\pluggy\__init__.py:216: in firstresult=hook.spec_opts.get('firstresult'), ..\..\..\..\..\anaconda3\lib\site-packages\_pytest\python.py:171: in pytest_pycollect_makeitem res = outcome.get_result() ..\..\..\..\..\anaconda3\lib\site-packages\anyio\pytest_plugin.py:98: in pytest_pycollect_makeitem marker = collector.get_closest_marker('anyio') E AttributeError: 'Module' object has no attribute 'get_closest_marker' !!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!! =========================== 1 error in 2.53 seconds =========================== I installed the following package. The error is gone but the test is skipped. pip install pytest-asyncio (base) PS>pytest -s tests\e2e_tests\test_d.py ================================================================================================================== test session starts =================================================================================================================== platform win32 -- Python 3.6.4, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 rootdir: C:\Users\X01324908\source\rds\research_data_science\sftp\file_handler plugins: anyio-3.3.4, asyncio-0.16.0 collected 1 item tests\e2e_tests\test_d.py s ==================================================================================================================== warnings summary ==================================================================================================================== tests/e2e_tests/test_d.py::test_get_file_list c:\users\x01324908\anaconda3\lib\site-packages\_pytest\python.py:172: PytestUnhandledCoroutineWarning: async def functions are not natively supported and have been skipped. You need to install a suitable plugin for your async framework, for example: - anyio - pytest-asyncio - pytest-tornasync - pytest-trio - pytest-twisted warnings.warn(PytestUnhandledCoroutineWarning(msg.format(nodeid))) -- Docs: https://docs.pytest.org/en/stable/warnings.html =============
This works for me, please try: import asyncio import pytest pytest_plugins = ('pytest_asyncio',) @pytest.mark.asyncio async def test_simple(): await asyncio.sleep(0.5) Output of pytest -v confirms it passes: collected 1 item test_async.py::test_simple PASSED And I have installed: pytest 6.2.5 pytest-asyncio 0.16.0 # anyio not installed
49
62
70,013,345
2021-11-18
https://stackoverflow.com/questions/70013345/multiple-column-groupby-with-pandas-to-find-maximum-value-for-each-group
I have a dataframe like below: Feature value frequency label age_45_and_above No 2700 negative age_45_and_above No 1707 positive age_45_and_above No 83 other age_45_and_above Yes 222 negative age_45_and_above Yes 15 positive age_45_and_above Yes 8 other age_45_and_above [Null] 323 negative age_45_and_above [Null] 8 other age_45_and_above [Null] 5 positive talk No 20 negative talk No 170 positive talk No 500 other talk Yes 210 negative talk Yes 1500 positive talk Yes 809 other talk [Null] 234 negative talk [Null] 43 other talk [Null] 85 positive and so on. for each feature group, I want to find the maximum frequency with all its related row data, like if the feature is age_45_and_above then by looking for NO group we have 3 rows with different frequency and label, I want to report the maximum one with it's related data. I've tried groupby in different ways: result.groupby(['Feature', 'Value'])['Frequency', 'Predict'].max() or this one, with this one, I'm getting multi-Index dataframe which is not the desired results: result.groupby(['Feature', 'Value', 'Predict'])['Frequency'].max() and so many failed attempts with idxmax, transfrom and ... . the intended output I'm looking for looks like this: Feature value frequency label age_45_and_above No 2700 negative age_45_and_above Yes 222 negative age_45_and_above [Null] 323 negative talk No 500 other talk Yes 1500 positive talk [Null] 234 negative Also, I wonder how to sum the frequencies for each <<Feature-value>> group except the max row as I don't know how to locate the max row, like in here for the first feature and value, <<age_45_and_above-No>> max is 2700, so the sum would be 1707+83. Thanks for your time.
I would do it by using merge on the grouped data. Based on this data: df = pd.DataFrame({'Feature':['age']*9+['talk']*9, 'value':(['No']*3+['Yes']*3+['[Null]']*3)*2, 'frequency':[2700,1707,83,222,15,8,323,8,5,20,170,500,210,1500,809,234,43,85], 'label':['N','P','O']*6}) Using: df_1 = df.groupby(['Feature','value'],as_index=False)['frequency'].max().merge(df,on=['Feature','value','frequency']) Outputs: Feature value frequency label 0 age No 2700 N 1 age Yes 222 N 2 age [Null] 323 N 3 talk No 500 O 4 talk Yes 1500 P 5 talk [Null] 234 N Adding the extra column can be done via a simple assignment: df_1['sum_no_max'] = df.groupby(['Feature','value'])['frequency'].sum().values - df_1['frequency'].values Finally outputting: Feature value frequency label sum_no_max 0 age No 2700 N 1790 1 age Yes 222 N 23 2 age [Null] 323 N 13 3 talk No 500 O 190 4 talk Yes 1500 P 1019 5 talk [Null] 234 N 128
5
4
70,002,365
2021-11-17
https://stackoverflow.com/questions/70002365/fillna-if-all-the-values-of-a-column-are-null-in-pandas
I have to fill a column only if all the values of that column are null. For example c df = pd.DataFrame(data = {"col1":[3, np.nan, np.nan, 21, np.nan], "col2":[4, np.nan, 12, np.nan, np.nan], "col3":[33, np.nan, 55, np.nan, np.nan], "col4":[np.nan, np.nan, np.nan, np.nan, np.nan]}) >>> df col1 col2 col3 col4 0 3.0 4.0 33.0 NaN 1 NaN NaN NaN NaN 2 NaN 12.0 55.0 NaN 3 21.0 NaN NaN NaN 4 NaN NaN NaN NaN In the above example, I have to replace the values of col4 with 100 since all the values are null/NaN. So for the above example. I have to get the output as below. col1 col2 col3 col4 0 3.0 4.0 33.0 100 1 NaN NaN NaN 100 2 NaN 12.0 55.0 100 3 21.0 NaN NaN 100 4 NaN NaN NaN 100 Tried using the below command. But its replacing values of a column only if it contains atleast 1 non-nan value df.where(df.isnull().all(axis=1), df.fillna(100), inplace=True) Could you please let me know how to do this. Thanks
Use indexing: df.loc[:, df.isna().all()] = 100 print(df) # Output: col1 col2 col3 col4 0 3.0 4.0 33.0 100.0 1 NaN NaN NaN 100.0 2 NaN 12.0 55.0 100.0 3 21.0 NaN NaN 100.0 4 NaN NaN NaN 100.0
4
2
69,997,857
2021-11-17
https://stackoverflow.com/questions/69997857/implementation-of-the-max-function-in-python
Consider: def my_max(*a): n = len(a) max_v = a[0] for i in range (1,n): if a[i] > max_v: max_v = a[i] return max_v def my_min(*a): n = len(a) min_v = a[0] for i in range (1,n): if a[i] < min_v: min_v = a[i] return min_v test = [7, 4, 2, 6, 8] assert max(test) == my_max(test) and min(test) == my_min(test) assert max(7, 4, 2, 5) == my_max(7, 4, 2, 5) and min(7, 4, 2, 5) == my_min(7, 4, 2, 5) print("pass") I am trying to write the max() function of Python in code. If I add the asterisk in front of the input, it won't pass the first assertion. If I don't it wouldn't pass the second assertion. What should I write in the input for it to pass both assertions, like it does in the max() function of Python?
Short answer: Use a star to collect the arguments in a tuple and then add a special case for a tuple of length one to handle a single iterable argument. Source material: The C code that handles the logic can be found at: https://github.com/python/cpython/blob/da20d7401de97b425897d3069f71f77b039eb16f/Python/bltinmodule.c#L1708 Simplified pure python code: If you ignore the default and key keyword arguments, what's left simplifies to: def mymax(*args): if len(args) == 0: raise TypeError('max expected at least 1 argument, got 0') if len(args) == 1: args = tuple(args[0]) largest = args[0] for x in args[1:]: if x > largest: largest = x return largest There are other nuances, but this should get you started. Documentation: The special handling for the length one case versus other cases is documented here: Return the largest item in an iterable or the largest of two or more arguments. If one positional argument is provided, it should be an iterable. The largest item in the iterable is returned. If two or more positional arguments are provided, the largest of the positional arguments is returned. More complete version: This includes some of aforementioned nuances like the key and default keyword arguments and the use of iterators instead of slices: sentinel = object() def mymax(*args, default=sentinel, key=None): """max(iterable, *[, default=obj, key=func]) -> value max(arg1, arg2, *args, *[, key=func]) -> value With a single iterable argument, return its biggest item. The default keyword-only argument specifies an object to return if the provided iterable is empty. With two or more arguments, return the largest argument. """ if not args: raise TypeError('max expected at least 1 argument, got 0') if len(args) == 1: it = iter(args[0]) else: if default is not sentinel: raise TypeError('Cannot specify a default for max() with multiple positional arguments') it = iter(args) largest = next(it, sentinel) if largest is sentinel: if default is not sentinel: return default raise ValueError('max() arg is an empty sequence') if key is None: for x in it: if x > largest: largest = x return largest largest_key = key(largest) for x in it: kx = key(x) if kx > largest_key: largest = x largest_key = kx return largest # This makes the tooltips nicer # but isn't how the C code actually works # and it is only half correct. mymax.__text_signature__ = '($iterable, /, *, default=obj, key=func)'
22
46
69,919,970
2021-11-10
https://stackoverflow.com/questions/69919970/no-module-named-distutils-util-but-distutils-is-installed
I was wanting to upgrade my Python version (to 3.10 in this case), so after installing Python 3.10, I proceeded to try adding some modules I use, e.g., opencv, which ran into: python3.10 -m pip install opencv-python Output: Traceback (most recent call last): File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/usr/lib/python3/dist-packages/pip/__main__.py", line 16, in <module> from pip._internal.cli.main import main as _main # isort:skip # noqa File "/usr/lib/python3/dist-packages/pip/_internal/cli/main.py", line 10, in <module> from pip._internal.cli.autocompletion import autocomplete File "/usr/lib/python3/dist-packages/pip/_internal/cli/autocompletion.py", line 9, in <module> from pip._internal.cli.main_parser import create_main_parser File "/usr/lib/python3/dist-packages/pip/_internal/cli/main_parser.py", line 7, in <module> from pip._internal.cli import cmdoptions File "/usr/lib/python3/dist-packages/pip/_internal/cli/cmdoptions.py", line 19, in <module> from distutils.util import strtobool ModuleNotFoundError: No module named 'distutils.util' And sudo apt-get install python3-distutils Output: [sudo] password for jeremy: Reading package lists... Done Building dependency tree Reading state information... Done python3-distutils is already the newest version (3.8.10-0ubuntu1~20.04). ... 0 upgraded, 0 newly installed, 0 to remove and 4 not upgraded. Since distutils already seems to be installed, I can't grok how to proceed.
It looks like the distutils package is versioned per Python release, so after installing the one matching Python 3.10, pip is able to proceed. sudo apt-get install python3.10-distutils Output: Reading package lists... Done Building dependency tree Reading state information... Done ... Setting up python3.10-lib2to3 (3.10.0-1+focal1) ... Setting up python3.10-distutils (3.10.0-1+focal1) ... And: python3.10 -m pip install opencv-python
180
53
69,918,818
2021-11-10
https://stackoverflow.com/questions/69918818/how-can-i-run-python-code-after-a-dbt-run-or-a-specific-model-is-completed
I would like to be able to run an ad-hoc python script that would access and run analytics on the model calculated by a dbt run, are there any best practices around this?
We recently built a tool that caters very much to this scenario. It leverages the ease of referencing tables from dbt in Python-land. It's called dbt-fal. The idea is that you would define the python scripts you would like to run after your dbt models are run: # schema.yml models: - name: iris meta: owner: "@matteo" fal: scripts: - "notify.py" And then the file notify.py is called if the iris model was run in the last dbt run: # notify.py import os from slack_sdk import WebClient from slack_sdk.errors import SlackApiError CHANNEL_ID = os.getenv("SLACK_BOT_CHANNEL") SLACK_TOKEN = os.getenv("SLACK_BOT_TOKEN") client = WebClient(token=SLACK_TOKEN) message_text = f"""Model: {context.current_model.name} Status: {context.current_model.status} Owner: {context.current_model.meta['owner']}""" try: response = client.chat_postMessage( channel=CHANNEL_ID, text=message_text ) except SlackApiError as e: assert e.response["error"] Each script is run with a reference to the current model for which it is running, available in a context variable. To start using fal, just pip install fal and start writing your python scripts.
7
5
69,933,583
2021-11-11
https://stackoverflow.com/questions/69933583/how-to-customize-bar-annotations-to-not-show-selected-values
I have the following data set: data = [6.92, 1.78, 0.0, 0.0, 3.5, 8.82, 3.06, 0.0, 0.0, 5.54, -10.8, -6.03, 0.0, 0.0, -6.8, 13.69, 8.61, 9.98, 0.0, 9.42, 4.91, 3.54, 2.62, 5.65, 1.95, 8.91, 11.46, 5.31, 6.93, 6.42] Is there a way to remove the 0.0 labels from the bar plot? I tried df = df.replace(0, "") but then I get a list index out of range error code. My code: import pandas as pd import matplotlib.pyplot as plt import numpy as np data = [6.92, 1.78, 0.0, 0.0, 3.5, 8.82, 3.06, 0.0, 0.0, 5.54, -10.8, -6.03, 0.0, 0.0, -6.8, 13.69, 8.61, 9.98, 0.0, 9.42, 4.91, 3.54, 2.62, 5.65, 1.95, 8.91, 11.46, 5.31, 6.93, 6.42] df = pd.DataFrame(np.array(data).reshape(6,5), columns=['Bank1', 'Bank2', 'Bank3', 'Bank4', 'Bank5'], index =['2016', '2017', '2018', '2019', '2020', '2021']) print(df) ax = df.plot(kind='bar', rot=0, xlabel='Year', ylabel='Total Return %', title='Overall Performance', figsize=(15, 10)) ax.bar_label(ax.containers[0], fmt='%.1f', fontsize=8, padding=3) ax.bar_label(ax.containers[1], fmt='%.1f', fontsize=8, padding=3) ax.bar_label(ax.containers[2], fmt='%.1f', fontsize=8, padding=3) ax.bar_label(ax.containers[3], fmt='%.1f', fontsize=8, padding=3) ax.bar_label(ax.containers[4], fmt='%.1f', fontsize=8, padding=3) ax.legend(title='Columns', bbox_to_anchor=(1, 1.02), loc='upper left') plt.show()
labels passed to matplotlib.pyplot.bar_label must be customized. Adjust the comparison (!= 0) value or range as needed. The labels can also be built without the assignment expression (:=): labels = [f'{v.get_height():0.0f}' if v.get_height() != 0 else '' for v in c ]. See this answer for additional details and examples using .bar_label. Tested in python v3.12.0, pandas v2.1.2, matplotlib v3.8.1. import numpy as np import pandas as pd import matplotlib.pyplot as plt data = [6.92, 1.78, 0.0, 0.0, 3.5, 8.82, 3.06, 0.0, 0.0, 5.54, -10.8, -6.03, 0.0, 0.0, -6.8, 13.69, 8.61, 9.98, 0.0, 9.42, 4.91, 3.54, 2.62, 5.65, 1.95, 8.91, 11.46, 5.31, 6.93, 6.42] df = pd.DataFrame(np.array(data).reshape(6,5), columns=['Bank1', 'Bank2', 'Bank3', 'Bank4', 'Bank5'], index =['2016', '2017', '2018', '2019', '2020', '2021']) ax = df.plot(kind='bar', rot=0, xlabel='Year', ylabel='Total Return %', title='Overall Performance', width=0.9, figsize=(15, 10)) for c in ax.containers: # customize the label to account for cases when there might not be a bar section labels = [f'{h:0.1f}' if (h := v.get_height()) != 0 else '' for v in c ] # set the bar label ax.bar_label(c, labels=labels, fontsize=8, padding=3) # from matplotlib 3.7, the following options with fmt also work # ax.bar_label(c, fmt=lambda x: np.where(x != 0, x, ''), fontsize=8, padding=3) # ax.bar_label(c, fmt=lambda x: f'{x:0.2f}' if x != 0 else '', fontsize=8, padding=3) ax.legend(title='Banks', bbox_to_anchor=(1, 0.5), loc='center left', frameon=False) plt.show()
5
4
69,916,682
2021-11-10
https://stackoverflow.com/questions/69916682/python-httpx-how-does-httpx-clients-connection-pooling-work
Consider this function that makes a simple GET request to an API endpoint: import httpx def check_status_without_session(url : str) -> int: response = httpx.get(url) return response.status_code Running this function will open a new TCP connection every time the function check_status_without_session is called. Now, this section of HTTPX documentation recommends using the Client API while making multiple requests to the same URL. The following function does that: import httpx def check_status_with_session(url: str) -> int: with httpx.Client() as client: response = client.get(url) return response.status_code According to the docs using Client will ensure that: ... a Client instance uses HTTP connection pooling. This means that when you make several requests to the same host, the Client will reuse the underlying TCP connection, instead of recreating one for every single request. My question is, in the second case, I have wrapped the Client context manager in a function. If I call check_status_with_session multiple times with the same URL, wouldn't that just create a new pool of connections each time the function is called? This implies it's not actually reusing the connections. As the function stack gets destroyed after the execution of the function, the Client object should be destroyed as well, right? Is there any advantage in doing it like this or is there a better way?
Is there any advantage in doing it like this or is there a better way? No, there is no advantage using httpx.Client in the way you've shown. In fact the httpx.<method> API, e.g. httpx.get, does exactly the same thing! The "pool" is a feature of the transport manager held by Client, which is HTTPTransport by default. The transport is created at Client initialization time and stored as the instance property self._transport. Creating a new Client instance means a new HTTPTransport instance, and transport instances have their own TCP connection pool. By creating a new Client instance each time and using it only once, you get no benefit over using e.g. httpx.get directly. And that might be OK! Connection pooling is an optimization over creating a new TCP connection for each request. Your application may not need that optimization, it may be performant enough already for your needs. If you are making many requests to the same endpoint in a tight loop, iterating within the context of the loop may give you some throughput gains, e.g. with httpx.Client(base_url="https://example.com") as client: results = [client.get(f"/api/resource/{idx}") for idx in range(100)] For such I/O-heavy workloads you may do even better by executing results in parallel, e.g. using httpx.AsyncClient.
6
9
69,989,521
2021-11-16
https://stackoverflow.com/questions/69989521/unexpected-types-int-int-possible-types-supportsindex-none-slice-i
What's wrong with this code: split_list = [3600, 3600, 3600, 3600, 3600, 3600, 3600, 3600, 45] split_list2 = [None, None, None, None, None, None, None, None, None, None, None, None] result = [3600, 3600, 3600, 3600, 3600, 3600, 3600, 3600, 45, None, None, None] for i in range(len(split_list)): split_list2[i] = split_list[i] In PyCharm it issues a warning; Unexpected type(s): (int, int) Possible type(s): (SupportsIndex, None) (slice, Iterable[None]) But the script runs just fine and this code works exactly as I expected. I don't like warnings in my IDE though, any quick fixes?
This warning is solved by updating PyCharm to 2021.2.2. It seems to be a bug in earlier versions of the IDE's static type checker. One user reported in the comments that this bug regressed in the PyCharm 2021.2.3 release. I just tested it again using PyCharm 2022.1 Professional Edition and the bug has again been solved (screenshot omitted).
8
7
69,912,264
2021-11-10
https://stackoverflow.com/questions/69912264/python-3-9-8-fails-using-black-and-importing-typed-ast-ast3
Since updating to Python 3.9.8, we get an error while using Black in our CI pipeline. black....................................................................Failed - hook id: black - exit code: 1 Traceback (most recent call last): File "../.cache/pre-commit/repol9drvp84/py_env-python3/bin/black", line 5, in <module> from black import patched_main File "../.cache/pre-commit/repol9drvp84/py_env-python3/lib/python3.9/site-packages/black/__init__.py", line 52, in <module> from typed_ast import ast3, ast27 File "../.cache/pre-commit/repol9drvp84/py_env-python3/lib/python3.9/site-packages/typed_ast/ast3.py", line 40, in <module> from typed_ast import _ast3 ImportError: ../.cache/pre-commit/repol9drvp84/py_env-python3/lib/python3.9/site-packages/typed_ast/_ast3.cpython-39-x86_64-linux-gnu.so: undefined symbol: _PyUnicode_DecodeUnicodeEscape The error can be easily reproduced with: % pip install typed_ast % python3 -c 'from typed_ast import ast3' Traceback (most recent call last): File "<string>", line 1, in <module> ImportError: /usr/lib/python3/dist-packages/typed_ast/_ast3.cpython-39-x86_64-linux-gnu.so: undefined symbol: _PyUnicode_DecodeUnicodeEscape Currently the only workaround is downgrading to Python 3.9.7. Is another fix available? See also Bug#998854: undefined symbol: _PyUnicode_DecodeUnicodeEscape
The initial error was a failing Python Black pipeline. Black failed because it was pinned to an older version which now fails with Python 3.9.8. Updating Black to the latest version, 21.10b0, fixed the error for me. See also typed_ast issue #169: For others who may find this in a search, I ran into this problem via black because I had black pinned to an older version. The current version of black appears to no longer use typed-ast and thus won't encounter this issue. Using the latest typed-ast version >=1.5.0 seems to work as well. E.g., pip install typed-ast --upgrade
28
37
69,949,591
2021-11-12
https://stackoverflow.com/questions/69949591/modulenotfounderror-no-module-named-apt-pkg-installing-deadsnakes-repository
I want to install Python 3.10 on Ubuntu 18.04 (I'm currently on Python 3.8) from the deadsnakes repository with the following set of commands I found on the internet: sudo add-apt-repository ppa:deadsnakes/ppa sudo apt update sudo apt install python3.10 But I got the error sudo: add-apt-repository: command not found. More net research led me to this set of commands at "ModuleNotFoundError: No module named 'apt_pkg'" appears in various commands - Ask Ubuntu: sudo apt remove python3-apt sudo apt autoremove sudo apt autoclean sudo apt install python3-apt Other web sources said the same thing, so I did that, but I still get the error message when I run sudo add-apt-repository ppa:deadsnakes/ppa. Then I found How to Fix 'add-apt-repository command not found' on Ubuntu & Debian - phoenixNAP, which advised this set of commands: sudo apt update sudo apt install software-properties-common sudo apt update so I did that, but when I run sudo add-apt-repository ppa:deadsnakes/ppa I now get this error message: ~$ sudo add-apt-repository ppa:deadsnakes/ppa Traceback (most recent call last): File "/usr/bin/add-apt-repository", line 12, in <module> from softwareproperties.SoftwareProperties import SoftwareProperties, shortcut_handler File "/usr/lib/python3/dist-packages/softwareproperties/SoftwareProperties.py", line 28, in <module> import apt_pkg ModuleNotFoundError: No module named 'apt_pkg' I have found some web links that show a wide variety of solutions with earlier versions of Python. I'm currently on Python 3.8. Before I do anything more I want to ask what is the best way to solve the ModuleNotFoundError: No module named 'apt_pkg' error message when trying to install the deadsnakes repository to install Python 3.10, given the number of possible solutions I have seen. Thanks very much.
This worked for me: sudo apt-get install python3-apt --reinstall cd /usr/lib/python3/dist-packages sudo cp apt_pkg.cpython-38-x86_64-linux-gnu.so apt_pkg.so The 38 in the filename above can be different for you.
9
33
69,938,570
2021-11-12
https://stackoverflow.com/questions/69938570/md4-hashlib-support-in-python-3-8
I am trying to implement a soap client for a server that uses NTLM authentication. The libraries that I use (requests-ntlm2 which relies on ntlm-auth) implement the MD4 algorithm that lies in the core of the NTLM protocol via the standard library's hashlib. Although hashlib seems to support MD4: >>> import hashlib >>> hashlib.algorithms_available {'md5-sha1', 'md4', 'shake_128', 'md5', 'blake2s', 'sha3_512', 'ripemd160', 'sha512', 'mdc2', 'blake2b', 'sha3_256', 'sha3_224', 'sha512_224', 'sha1', 'sha384', 'sha256', 'sha224', 'whirlpool', 'sha512_256', 'sha3_384', 'shake_256', 'sm3'} >>> and so does the openssl library in my system: (victory) C:\code\python\services>openssl help: [...] Message Digest commands (see the `dgst' command for more details) blake2b512 blake2s256 md4 md5 mdc2 rmd160 sha1 sha224 sha256 sha3-224 sha3-256 sha3-384 sha3-512 sha384 sha512 sha512-224 sha512-256 shake128 shake256 sm3 [...] when the authentication tries to run python produces an ValueError: unsupported hash type md4 error. Here is the relevant part of the traceback: C:\ProgramData\Miniconda3\envs\victory\lib\site-packages\ntlm_auth\compute_hash.py in _ntowfv1(password) 165 return nt_hash 166 --> 167 digest = hashlib.new('md4', password.encode('utf-16-le')).digest() 168 169 return digest C:\ProgramData\Miniconda3\envs\victory\lib\hashlib.py in __hash_new(name, data, **kwargs) 161 # This allows for SHA224/256 and SHA384/512 support even though 162 # the OpenSSL library prior to 0.9.8 doesn't provide them. --> 163 return __get_builtin_constructor(name)(data) 164 165 C:\ProgramData\Miniconda3\envs\victory\lib\hashlib.py in __get_builtin_constructor(name) 118 return constructor 119 --> 120 raise ValueError('unsupported hash type ' + name) 121 122 ValueError: unsupported hash type md4 Even when I try to merely call the MD4 from hashlib, I get the same result: >>> import hashlib >>> hashlib.new('md4') Traceback (most recent call last): File "C:\ProgramData\Miniconda3\envs\victory\lib\hashlib.py", line 157, in __hash_new return _hashlib.new(name, data) ValueError: [digital envelope routines] initialization error During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\ProgramData\Miniconda3\envs\victory\lib\hashlib.py", line 163, in __hash_new return __get_builtin_constructor(name)(data) File "C:\ProgramData\Miniconda3\envs\victory\lib\hashlib.py", line 120, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type md4 Any insights about what's going on and/or any help would be immensely appreciated.
Well, it seems that there was something corrupted in my conda environment. I created a new identical one, and it's been working ever since without having to change anything else.
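Once the new environment is active, a quick check (using the same hashlib calls as the question) confirms whether its OpenSSL build actually exposes MD4:

import hashlib

# 'md4' should appear in the set if the linked OpenSSL provides it
print('md4' in hashlib.algorithms_available)

# this is the call that raised ValueError in the broken environment
print(hashlib.new('md4', 'password'.encode('utf-16-le')).hexdigest())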
11
2
69,913,781
2021-11-10
https://stackoverflow.com/questions/69913781/is-it-true-that-inplace-true-activations-in-pytorch-make-sense-only-for-infere
According to the discussions on the PyTorch forum: What's the difference between nn.ReLU() and nn.ReLU(inplace=True)? Guidelines for when and why one should set inplace = True? The purpose of inplace=True is to modify the input in place, without allocating memory for an additional tensor holding the result of the operation. This allows more efficient memory usage but removes the possibility of making a backward pass, at least if the operation decreases the amount of information, since the backpropagation algorithm requires intermediate activations to be saved in order to update the weights. Can one say that this mode should be turned on in layers only if the model is already trained and one doesn't want to modify it anymore?
nn.ReLU(inplace=True) saves memory during both training and testing. However, there are some problems we may face when we use nn.ReLU(inplace=True) while calculating gradients. Sometimes the original values are needed when calculating gradients, and because inplace destroys some of the original values, some usages may be problematic: def forward(self, x): skip = x x = self.relu(x) x += skip # inplace addition # Error! The above two consecutive inplace operations will produce an error. However, it is fine to do the addition first and then the activation function with inplace=True: def forward(self, x): skip = x x += skip # inplace addition x = self.relu(x) # No error!
10
11
69,955,838
2021-11-13
https://stackoverflow.com/questions/69955838/saving-model-on-tensorflow-2-7-0-with-data-augmentation-layer
I am getting an error when trying to save a model with data augmentation layers with Tensorflow version 2.7.0. Here is the code of data augmentation: input_shape_rgb = (img_height, img_width, 3) data_augmentation_rgb = tf.keras.Sequential( [ layers.RandomFlip("horizontal"), layers.RandomFlip("vertical"), layers.RandomRotation(0.5), layers.RandomZoom(0.5), layers.RandomContrast(0.5), RandomColorDistortion(name='random_contrast_brightness/none'), ] ) Now I build my model like this: # Build the model input_shape = (img_height, img_width, 3) model = Sequential([ layers.Input(input_shape), data_augmentation_rgb, layers.Rescaling((1./255)), layers.Conv2D(16, kernel_size, padding=padding, activation='relu', strides=1, data_format='channels_last'), layers.MaxPooling2D(), layers.BatchNormalization(), layers.Conv2D(32, kernel_size, padding=padding, activation='relu'), # best 4 layers.MaxPooling2D(), layers.BatchNormalization(), layers.Conv2D(64, kernel_size, padding=padding, activation='relu'), # best 3 layers.MaxPooling2D(), layers.BatchNormalization(), layers.Conv2D(128, kernel_size, padding=padding, activation='relu'), # best 3 layers.MaxPooling2D(), layers.BatchNormalization(), layers.Flatten(), layers.Dense(128, activation='relu'), # best 1 layers.Dropout(0.1), layers.Dense(128, activation='relu'), # best 1 layers.Dropout(0.1), layers.Dense(64, activation='relu'), # best 1 layers.Dropout(0.1), layers.Dense(num_classes, activation = 'softmax') ]) model.compile(loss='categorical_crossentropy', optimizer='adam',metrics=metrics) model.summary() Then after the training is done I just make: model.save("./") And I'm getting this error: --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-84-87d3f09f8bee> in <module>() ----> 1 model.save("./") /usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs) 65 except Exception as e: # pylint: disable=broad-except 66 filtered_tb = _process_traceback_frames(e.__traceback__) ---> 67 raise e.with_traceback(filtered_tb) from None 68 finally: 69 del filtered_tb /usr/local/lib/python3.7/dist- packages/tensorflow/python/saved_model/function_serialization.py in serialize_concrete_function(concrete_function, node_ids, coder) 66 except KeyError: 67 raise KeyError( ---> 68 f"Failed to add concrete function '{concrete_function.name}' to object-" 69 f"based SavedModel as it captures tensor {capture!r} which is unsupported" 70 " or not reachable from root. " KeyError: "Failed to add concrete function 'b'__inference_sequential_46_layer_call_fn_662953'' to object-based SavedModel as it captures tensor <tf.Tensor: shape=(), dtype=resource, value=<Resource Tensor>> which is unsupported or not reachable from root. One reason could be that a stateful object or a variable that the function depends on is not assigned to an attribute of the serialized trackable object (see SaveTest.test_captures_unreachable_variable)." I inspected the reason of getting this error by changing the architecture of my model and I just found that reason came from the data_augmentation layer since the RandomFlip and RandomRotation and others are changed from layers.experimental.prepocessing.RandomFlip to layers.RandomFlip, but still the error appears.
This seems to be a bug in Tensorflow 2.7 when using model.save combined with the parameter save_format="tf", which is set by default. The layers RandomFlip, RandomRotation, RandomZoom, and RandomContrast are causing the problems, since they are not serializable. Interestingly, the Rescaling layer can be saved without any problems. A workaround would be to simply save your model with the older Keras H5 format model.save("test", save_format='h5'): import tensorflow as tf import numpy as np class RandomColorDistortion(tf.keras.layers.Layer): def __init__(self, contrast_range=[0.5, 1.5], brightness_delta=[-0.2, 0.2], **kwargs): super(RandomColorDistortion, self).__init__(**kwargs) self.contrast_range = contrast_range self.brightness_delta = brightness_delta def call(self, images, training=None): if not training: return images contrast = np.random.uniform( self.contrast_range[0], self.contrast_range[1]) brightness = np.random.uniform( self.brightness_delta[0], self.brightness_delta[1]) images = tf.image.adjust_contrast(images, contrast) images = tf.image.adjust_brightness(images, brightness) images = tf.clip_by_value(images, 0, 1) return images def get_config(self): config = super(RandomColorDistortion, self).get_config() config.update({"contrast_range": self.contrast_range, "brightness_delta": self.brightness_delta}) return config input_shape_rgb = (256, 256, 3) data_augmentation_rgb = tf.keras.Sequential( [ tf.keras.layers.RandomFlip("horizontal"), tf.keras.layers.RandomFlip("vertical"), tf.keras.layers.RandomRotation(0.5), tf.keras.layers.RandomZoom(0.5), tf.keras.layers.RandomContrast(0.5), RandomColorDistortion(name='random_contrast_brightness/none'), ] ) input_shape = (256, 256, 3) padding = 'same' kernel_size = 3 model = tf.keras.Sequential([ tf.keras.layers.Input(input_shape), data_augmentation_rgb, tf.keras.layers.Rescaling((1./255)), tf.keras.layers.Conv2D(16, kernel_size, padding=padding, activation='relu', strides=1, data_format='channels_last'), tf.keras.layers.MaxPooling2D(), tf.keras.layers.BatchNormalization(), tf.keras.layers.Conv2D(32, kernel_size, padding=padding, activation='relu'), # best 4 tf.keras.layers.MaxPooling2D(), tf.keras.layers.BatchNormalization(), tf.keras.layers.Conv2D(64, kernel_size, padding=padding, activation='relu'), # best 3 tf.keras.layers.MaxPooling2D(), tf.keras.layers.BatchNormalization(), tf.keras.layers.Conv2D(128, kernel_size, padding=padding, activation='relu'), # best 3 tf.keras.layers.MaxPooling2D(), tf.keras.layers.BatchNormalization(), tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, activation='relu'), # best 1 tf.keras.layers.Dropout(0.1), tf.keras.layers.Dense(128, activation='relu'), # best 1 tf.keras.layers.Dropout(0.1), tf.keras.layers.Dense(64, activation='relu'), # best 1 tf.keras.layers.Dropout(0.1), tf.keras.layers.Dense(5, activation = 'softmax') ]) model.compile(loss='categorical_crossentropy', optimizer='adam') model.summary() model.save("test", save_format='h5') Loading your model with your custom layer would look like this then: model = tf.keras.models.load_model('test.h5', custom_objects={'RandomColorDistortion': RandomColorDistortion}) where RandomColorDistortion is the name of your custom layer.
12
11
69,969,482
2021-11-15
https://stackoverflow.com/questions/69969482/why-arent-pandas-operations-in-place
Pandas operations usually create a copy of the original dataframe. As some answers on SO point out, even when using inplace=True, a lot of operations still create a copy to operate on. Now, I think I'd be called a madman if I told my colleagues that everytime I want to, for example, apply +2 to a list, I copy the whole thing before doing it. Yet, it's what Pandas does. Even simple operations such as append always reallocate the whole dataframe. Having to reallocate and copy everything on every operation seems like a very inefficient way to go about operating on any data. It also makes operating on particularly large dataframes impossible, even if they fit in your RAM. Furthermore, this does not seem to be a problem for Pandas developers or users, so much so that there's an open issue #16529 discussing the removal of the inplace parameter entirely, which has received mostly positive responses; some started getting deprecated since 1.0. It seems like I'm missing something. So, what am I missing? What are the advantages of always copying the dataframe on operations, instead of executing them in-place whenever possible? Note: I agree that method chaining is very neat, I use it all the time. However, I feel that "because we can method chain" is not the whole answer, since Pandas sometimes copies even in inplace=True methods, which are not meant to be chained. So, I'm looking some other answers for why this would be a reasonable default.
As evidenced here in the pandas documentation, "... In general we like to favor immutability where sensible." The Pandas project is in the camp of preferring immutable (stateless) objects over mutable ones (objects with state) to guide programmers into creating more scalable / parallelizable data processing code. They are guiding the users by making the 'inplace=False' behavior the default. In this Software Engineering Stack Exchange answer, Peter Torok discusses the pros and cons of mutable versus immutable object programming really nicely: https://softwareengineering.stackexchange.com/a/151735 In summary, some software engineers feel that immutable (unchanging) objects lead to: fewer errors in the code, because object states are easy to lose track of and hard to track down; increased scalability, because it is easier to write multithreaded code, since one thread will not inadvertently modify the value contained by an object in another thread; and more concise code, since code is forced to be written in a functional programming and more mathematical style. I will agree that this does have its inefficiencies, since constantly making copies of the same objects for minor changes does not seem ideal, but it has the other benefits noted above.
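To make the method-chaining point concrete, a small sketch (with made-up data) contrasting the two styles; the copy-returning default is what makes the chained version possible:

import pandas as pd

df = pd.DataFrame({"a": [1, 2, None], "b": [3, 4, 5]})

# mutable style: each step modifies the frame in place, state changes between lines
df_inplace = df.copy()
df_inplace.dropna(inplace=True)
df_inplace.rename(columns={"a": "x"}, inplace=True)

# immutable style: each call returns a new frame, so the steps chain
df_chained = (
    df.dropna()
      .rename(columns={"a": "x"})
      .assign(total=lambda d: d["x"] + d["b"])
)
print(df_chained)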
34
9
69,965,600
2021-11-14
https://stackoverflow.com/questions/69965600/importerror-dll-load-failed-while-importing-defs
I'm trying to install sentence-transformers library. But when I import it, this error pops out: --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-1-c9f0b8c65221> in <module> ----> 1 import h5py ~\Anaconda3\envs\custom_env\lib\site-packages\h5py\__init__.py in <module> 32 raise 33 ---> 34 from . import version 35 36 if version.hdf5_version_tuple != version.hdf5_built_version_tuple: ~\Anaconda3\envs\custom_env\lib\site-packages\h5py\version.py in <module> 15 16 from collections import namedtuple ---> 17 from . import h5 as _h5 18 import sys 19 import numpy h5py\h5.pyx in init h5py.h5() ImportError: DLL load failed while importing defs: The specified procedure could not be found. I have installed h5py library. What am I missing?
I had the same problem after a CUDA/cuDNN update that probably messed up some DLLs. I did: pip uninstall h5py, which indicated a successful uninstall along with some error messages, and immediately after: pip install h5py Sometimes simple solutions work.
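After the reinstall, a quick sanity check (the same import that failed before) confirms the DLLs resolve:

import h5py
print(h5py.__version__)  # importing succeeds and prints the installed h5py version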
8
4
69,921,688
2021-11-11
https://stackoverflow.com/questions/69921688/how-to-handle-formatting-a-regular-string-which-could-be-a-f-string-c0209
For the following line: print("{0: <24}".format("==> core=") + str(my_dict["core"])) I am getting following warning message: [consider-using-f-string] Formatting a regular string which could be a f-string [C0209] Could I reformat it using f-string?
You could change the code to print(f"{'==> core=': <24}{my_dict['core']}"). The cast to string is implicit.
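A small runnable check, with a made-up my_dict, that the f-string produces the same output as the original .format call:

my_dict = {"core": 8}  # hypothetical sample data

old = "{0: <24}".format("==> core=") + str(my_dict["core"])
new = f"{'==> core=': <24}{my_dict['core']}"

assert old == new
print(new)  # '==> core=' left-aligned in a 24-character field, followed by the value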
5
6
69,983,020
2021-11-16
https://stackoverflow.com/questions/69983020/modulenotfounderror-no-module-named-taming
ModuleNotFoundError Traceback (most recent call last) <ipython-input-14-2683ccd40dcb> in <module> 16 from omegaconf import OmegaConf 17 from PIL import Image ---> 18 from taming.models import cond_transformer, vqgan 19 import taming.modules 20 import torch ModuleNotFoundError: No module named 'taming' I've tried everything can you please help me? I've tried putting it in the same folder and stuff so please help!
Try the following command: pip install taming-transformers
17
25
69,992,818
2021-11-16
https://stackoverflow.com/questions/69992818/efficient-way-to-map-3d-function-to-a-meshgrid-with-numpy
I have a set of data values for a scalar 3D function, arranged as inputs x,y,z in an array of shape (n,3) and the function values f(x,y,z) in an array of shape (n,). EDIT: For instance, consider the following simple function data = np.array([np.arange(n)]*3).T F = np.linalg.norm(data,axis=1)**2 I would like to convolve this function with a spherical kernel in order to perform a 3D smoothing. The easiest way I found to perform this is to map the function values in a 3D spatial grid and then apply a 3D convolution with the kernel I want. This works fine, however the part that maps the 3D function to the 3D grid is very slow, as I did not find a way to do it with NumPy only. The code below is my actual implementation, where data is the (n,3) array containing the 3D positions in which the function is evaluated, F is the (n,) array containing the corresponding values of the function and M is the (N,N,N) array that contains the 3D space grid. step = 0.1 # Create meshgrid xmin = data[:,0].min() xmax = data[:,0].max() ymin = data[:,1].min() ymax = data[:,1].max() zmin = data[:,2].min() zmax = data[:,2].max() x = np.linspace(xmin,xmax,int((xmax-xmin)/step)+1) y = np.linspace(ymin,ymax,int((ymax-ymin)/step)+1) z = np.linspace(zmin,zmax,int((zmax-zmin)/step)+1) # Build image M = np.zeros((len(x),len(y),len(z))) for l in range(len(data)): for i in range(len(x)-1): if x[i] < data[l,0] < x[i+1]: for j in range(len(y)-1): if y[j] < data[l,1] < y[j+1]: for k in range(len(z)-1): if z[k] < data[l,2] < z[k+1]: M[i,j,k] = F[l] Is there a more efficient way to fill a 3D spatial grid with the values of a 3D function ?
For each item of data you're scanning the pixels of the cuboid to check whether it falls inside. There is an option to skip this scan: you could calculate the corresponding indices of these pixels yourself, for example: data = np.array([[1, 2, 3], #14 (corner1) [4, 5, 6], #77 (corner2) [2.5, 3.5, 4.5], #38.75 (duplicated pixel) [2.9, 3.9, 4.9], #47.63 (duplicated pixel) [1.5, 2, 3]]) #15.25 (one step up from [1, 2, 3]) step = 0.5 data_idx = ((data - data.min(axis=0))//step).astype(int) M = np.zeros(np.max(data_idx, axis=0) + 1) x, y, z = data_idx.T M[x, y, z] = F Note that only one value of each set of duplicated pixels is mapped to M.
5
3
69,966,739
2021-11-14
https://stackoverflow.com/questions/69966739/is-there-a-way-to-make-python-showtraceback-in-jupyter-notebooks-scrollable
Or if there are scrolling functionalities built in, to edit the scrolling settings? I tried this but it didn't work -- def test_exception(self, etype, value, tb, tb_offset=None): try: announce = Announce(etype, value) if announce.print: announce.title() announce.tips() announce.resources() announce.feedback() announce.scroll(self, etype, value, tb, tb_offset) #self.showtraceback((etype, value, tb), tb_offset=tb_offset) except: self.showtraceback((etype, value, tb), tb_offset=tb_offset) def scroll(self, etype, value, tb, tb_offset=None): b=widgets.HTML( value=self.showtraceback((etype, value, tb), tb_offset=tb_offset), placeholder='Some HTML', description='Some HTML', disabled=True ) a = HBox([b], layout=Layout(height='20px', overflow_y='10px')) display(a)
Try going to Cell > Current Outputs > Toggle Scrolling in the Jupyter UI to enable scrolling output for a cell.
6
2
69,935,815
2021-11-11
https://stackoverflow.com/questions/69935815/how-do-i-revert-an-pip-upgrade
I did the following command just now, pip install --upgrade ipykernel However, i got Requirement already satisfied: ipykernel in ./anaconda3/lib/python3.8/site-packages (5.3.4) Collecting ipykernel Downloading ipykernel-6.5.0-py3-none-any.whl (125 kB) |████████████████████████████████| 125 kB 4.3 MB/s Collecting ipython<8.0,>=7.23.1 Downloading ipython-7.29.0-py3-none-any.whl (790 kB) |████████████████████████████████| 790 kB 9.2 MB/s Collecting debugpy<2.0,>=1.0.0 Downloading debugpy-1.5.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (1.9 MB) |████████████████████████████████| 1.9 MB 126.8 MB/s Collecting matplotlib-inline<0.2.0,>=0.1.0 Downloading matplotlib_inline-0.1.3-py3-none-any.whl (8.2 kB) Collecting traitlets<6.0,>=5.1.0 Downloading traitlets-5.1.1-py3-none-any.whl (102 kB) |████████████████████████████████| 102 kB 20.5 MB/s ... Requirement already satisfied: python-dateutil>=2.1 in ./anaconda3/lib/python3.8/site-packages (from jupyter-client<8.0->ipykernel) (2.8.1) Requirement already satisfied: ptyprocess>=0.5 in ./anaconda3/lib/python3.8/site-packages (from pexpect>4.3->ipython<8.0,>=7.23.1->ipykernel) (0.7.0) Requirement already satisfied: wcwidth in ./anaconda3/lib/python3.8/site-packages (from prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0->ipython<8.0,>=7.23.1->ipykernel) (0.2.5) Requirement already satisfied: six>=1.5 in ./anaconda3/lib/python3.8/site-packages (from python-dateutil>=2.1->jupyter-client<8.0->ipykernel) (1.15.0) Installing collected packages: traitlets, matplotlib-inline, ipython, debugpy, ipykernel Attempting uninstall: traitlets Found existing installation: traitlets 5.0.5 Uninstalling traitlets-5.0.5: Successfully uninstalled traitlets-5.0.5 Attempting uninstall: ipython Found existing installation: ipython 7.22.0 Uninstalling ipython-7.22.0: Successfully uninstalled ipython-7.22.0 Attempting uninstall: ipykernel Found existing installation: ipykernel 5.3.4 Uninstalling ipykernel-5.3.4: Successfully uninstalled ipykernel-5.3.4 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. spyder 4.2.5 requires pyqt5<5.13, which is not installed. spyder 4.2.5 requires pyqtwebengine<5.13, which is not installed. conda-repo-cli 1.0.4 requires pathlib, which is not installed. Successfully installed debugpy-1.5.1 ipykernel-6.5.0 ipython-7.29.0 matplotlib-inline-0.1.3 traitlets-5.1.1 I would like to revert my command, since some of my codes suddenly does not work . Is it possible? Thanks! It seems a lot fo packages are installed. Update So I actually had my environment.yaml from my previous conda env export > environment.yaml If I do conda env update --file environment.yaml --prune It does not help me revert to my previous versions.... Can I force my base environment back to exactly environment.yaml?
I do not believe pip keeps a history of installed packages. If you have the text output from the terminal when you did the upgrade, pip does output a list of packages that will be installed, which version you had and which version you're replacing it with. You can manually revert each package by doing pip uninstall <package_name> && pip install <package_name>==<version_number> Edit: Based on your edits, I suggest you repost this question in terms of your conda environment. That's a totally different system, even though it does utilize pip. As a side note, you might be able to get away with just running pip install traitlets==5.0.5 pip install ipykernel==5.3.4 pip install ipython==7.22.0
7
9
69,986,869
2021-11-16
https://stackoverflow.com/questions/69986869/how-to-enable-and-disable-intel-mkl-in-numpy-python
I want to test and compare Numpy matrix multiplication and Eigen decomposition performance with Intel MKL and without Intel MKL. I have installed MKL using pip install mkl (Windows 10 (64-bit), Python 3.8). I then used examples from here for matmul and eigen decompositions. How do I now enable and disable MKL in order to check numpy performance with MKL and without it? Reference code: import numpy as np from time import time def matrix_mul(size, n=100): # reference: https://markus-beuckelmann.de/blog/boosting-numpy-blas.html np.random.seed(112) a, b = np.random.random((size, size)), np.random.random((size, size)) t = time() for _ in range(n): np.dot(a, b) delta = time() - t print('Dotted two matrices of size %dx%d in %0.4f ms.' % (size, size, delta / n * 1000)) def eigen_decomposition(size, n=10): np.random.seed(112) a = np.random.random((size, size)) t = time() for _ in range(n): np.linalg.eig(a) delta = time() - t print('Eigen decomposition of size %dx%d in %0.4f ms.' % (size, size, delta / n * 1000)) #Obtaining computation times: for i in range(20): eigen_decomposition(500) for i in range(20): matrix_mul(500)
You can use different environments for the comparison of Numpy with and without MKL. In each environment you can install the needed packages (numpy with MKL or without) using the package installer. Then in those environments you can run your program to compare the performance of Numpy with and without MKL. NumPy doesn’t depend on any other Python packages, however, it does depend on an accelerated linear algebra library - typically Intel MKL or OpenBLAS. The NumPy wheels on PyPI, which is what pip installs, are built with OpenBLAS. In the conda defaults channel, NumPy is built against Intel MKL. MKL is a separate package that will be installed in the users' environment when they install NumPy. When a user installs NumPy from conda-forge, that BLAS package then gets installed together with the actual library. But it can also be MKL (from the defaults channel), or even BLIS or reference BLAS. Please refer to this link to know about installing Numpy in detail. You can create two different environments to compare the NumPy performance with MKL and without it. In the first environment install the stand-alone NumPy (that is, the NumPy without MKL) and in the second environment install the one with MKL. To create an environment using NumPy without MKL: conda create -n <env_name_1> python=<version> conda activate <env_name_1> pip install numpy But depending on your OS, it might be possible that there is no distribution available (Windows). On Windows, we have always been linking against MKL. However, with the Anaconda 2.5 release we separated the MKL runtime into its own conda package, in order to do things uniformly on all platforms. In general you can create a new env: conda create -n wheel_based python activate wheel pip install numpy-1.13.3-cp36-none-win_amd64.whl # or whatever the file is named In the other environment, install NumPy with MKL using the below command: conda create -n <env_name_2> python=<version> conda activate <env_name_2> pip install intel-numpy In these environments <env_name_1> and <env_name_2> you can run your program separately, so that you can compare the performance of Numpy without MKL and with MKL respectively.
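A quick way to confirm, inside each environment, which BLAS backend the installed NumPy is actually linked against (look for mkl or openblas in the output):

import numpy as np

# prints the build/link configuration, including the BLAS/LAPACK libraries used
np.show_config()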
6
5
69,986,654
2021-11-16
https://stackoverflow.com/questions/69986654/how-to-send-file-from-nodejs-to-flask-python
Hope you are doing well. I'm trying to send PDF files from Nodejs to Flask using Axios. I read files from a directory (in the form of a buffer array) and add them into formData (an npm package) and send an Axios request. const existingFile = fs.readFileSync(path) console.log(existingFile) const formData = new nodeFormData() formData.append("file", existingFile) formData.append("fileName", documentData.docuName) try { const getFile = await axios.post("http://127.0.0.1:5000/pdf-slicer", formData, { headers: { ...formData.getHeaders() } }) console.log(getFile)} catch (e) {console.log(e, "getFileError")} On the Flask side, I'm trying to get data from the request. print(request.files) if (request.method == "POST"): file=request.form["file"] if file: print(file) In request.files, I'm getting ImmutableMultiDict([]), but in request.form["file"], I'm getting data that looks something like this (screenshot of the raw file contents omitted): how can I handle this type of file format, or how can I convert this file format to a Python file object?
I solved this issue by updating my Nodejs code. We need to append the file to formData as a stream rather than a raw buffer, so I made a minor change in my formData code. Before: formData.append("file", existingFile) After: formData.append("file", fs.createReadStream(path)) Note: fs.createReadStream only accepts a path (a string, or a Uint8Array without null bytes); we cannot pass the buffer holding the file contents.
5
1
69,958,526
2021-11-13
https://stackoverflow.com/questions/69958526/oserror-winerror-127-the-specified-procedure-could-not-be-found
While importing torch (import torch) I'm facing the following error message: OSError: [WinError 127] The specified procedure could not be found. Error loading "C:\Users\myUserName\anaconda3\lib\site-packages\torch\lib\jitbackend_test.dll" or one of its dependencies. I tried the suggestion from this article but without success. Any ideas how to fix it? My environment: NVIDIA GeForce GTX 1650 Windows 11 Cuda 11.5 Conda 4.10.3 Python 3.8.5 Torch 1.10 Microsoft Visual C++ Redistributable installed (https://aka.ms/vs/17/release/vc_redist.x64.exe)
Fortunately, after extensive research, I found a solution. Someone suggested that I create a new conda environment, and that worked for me! Solution: create a new conda env: conda create --name new-env install python: conda install python=3.8.5 run: conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch test cuda: import torch; print(torch.version.cuda); print(torch.cuda.is_available())
5
5
69,982,275
2021-11-15
https://stackoverflow.com/questions/69982275/plot-bar-chart-in-multiple-subplot-rows
I have a simple long-form dataset I would like to generate bar charts from. The dataframe looks like this: data = {'Year':[2019,2019,2019,2020,2020,2020,2021,2021,2021], 'Month_diff':[0,1,2,0,1,2,0,1,2], 'data': [12,10,13,16,12,18,19,45,34]} df = pd.DataFrame(data) I would like to plot a bar chart that has 3 rows, each for 2019, 2020 and 2021. X axis being month_diff and data goes on Y axis. How do I do this? If the data was in different columns, then I could have just used this code: df.plot(x="X", y=["A", "B", "C"], kind="bar") But my data is in a single column and ideally, I'd like to have different row for each year.
1. seaborn.catplot The simplest option for a long-form dataframe is the seaborn.catplot wrapper, as Johan said: import seaborn as sns sns.catplot(data=df, x='Month_diff', y='data', row='Year', kind='bar', height=2, aspect=4) 2. pivot + DataFrame.plot Without seaborn: pivot from long-form to wide-form (1 year per column) use DataFrame.plot with subplots=True to put each year into its own subplot (and optionally sharey=True) (df.pivot(index='Month_diff', columns='Year', values='data') .plot.bar(subplots=True, sharey=True, legend=False)) plt.tight_layout() Note that if you prefer a single grouped bar chart (which you alluded to at the end), you can just leave out the subplots param: df.pivot(index='Month_diff', columns='Year', values='data').plot.bar() 3. DataFrame.groupby + subplots You can also iterate the df.groupby('Year') object: Create a subplots grid of axes based on the number of groups (years) Plot each group (year) onto its own subplot row groups = df.groupby('Year') fig, axs = plt.subplots(nrows=len(groups), ncols=1, sharex=True, sharey=True) for (name, group), ax in zip(groups, axs): group.plot.bar(x='Month_diff', y='data', legend=False, ax=ax) ax.set_title(name) fig.supylabel('data') plt.tight_layout()
5
3
69,993,959
2021-11-16
https://stackoverflow.com/questions/69993959/python-threads-difference-for-3-10-and-others
For some simple thread-related code, i.e.: import threading a = 0 threads = [] def x(): global a for i in range(1_000_000): a += 1 for _ in range(10): thread = threading.Thread(target=x) threads.append(thread) thread.start() for thread in threads: thread.join() print(a) assert a == 10_000_000 We get different behaviour depending on the Python version. For 3.10, the output is: ❯ python3.10 b.py 10000000 For 3.9, the output is: ❯ python3.9 b.py 2440951 Traceback (most recent call last): File "/Users/romka/t/threads-test/b.py", line 24, in <module> assert a == 10_000_000 AssertionError As we do not acquire any lock, the 3.9 result is obvious and expected to me. The question is why and how 3.10 gets the "correct" result when it should not. I have reviewed the changelog for Python 3.10 and there is nothing related to threads or the GIL that could produce such a result.
An answer from a core developer: Unintended consequence of Mark Shannon's change that refactors fast opcode dispatching: https://github.com/python/cpython/commit/4958f5d69dd2bf86866c43491caf72f774ddec97 -- the INPLACE_ADD opcode no longer uses the "slow" dispatch path that checks for interrupts and such.
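Either way, the unsynchronized a += 1 is still a data race; the 3.10 behaviour is an implementation detail, not a guarantee. A sketch of the portable fix with an explicit lock:
import threading

a = 0
lock = threading.Lock()

def x():
    global a
    for _ in range(1_000_000):
        with lock:        # makes the read-modify-write of `a` atomic on every CPython version
            a += 1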
31
28
69,995,292
2021-11-16
https://stackoverflow.com/questions/69995292/algorithm-question-finding-the-cheapest-flight
I recently took an interview at a company (starting with M and ending in A) which asked me this question. I'm still practicing my algorithms, so I was hoping someone could help me understand how to solve this problem, and these types of problems. The problem: You are given 2 arrays. For example: D = [10,7,13,12,4] R = [5,12,7,10,12] D denotes the departure prices for flights from city A to city B. R denotes the return prices for flights from city B to city A. Find the minimum cost of a round trip flight between city A and city B. For example, the minimum in the example is D[1] + R[2]. (It is only possible to take the return flight at the same or a higher index than the departure flight.) The tricky part is that, obviously, you must depart before returning. The naïve approach is just a double for loop combining all the possibilities. However, I know there is a better approach, but I can't wrap my head around it. I believe we want to create some sort of temporary array with the minimum so far or something like that... Thanks for reading.
Create a mono queue/stack out of the return prices array R; then you can solve it in O(n) where n is the length of D. R = [5, 12, 9, 10, 12] => [5, 9, 9, 10, 12] As you can see, at each step we have access to the cheapest return flight that is possible at index i and above. Now, iterate over the elements of D and look at that index in monoQ. Since the value at that index in monoQ is the smallest possible in R for i and above, you know that you can't do better at that point. In code: D = [10,7,15,12,4] R = [5,12,9,10,12] monoQ = [0]*len(R) monoQ[-1] = R[-1] for i in range(len(R)-2, -1, -1): monoQ[i] = min(monoQ[i+1], R[i]) best = R[0]+D[0] for i, el in enumerate(D): best = min(best, D[i]+monoQ[i]) print(best)
5
3
69,984,897
2021-11-16
https://stackoverflow.com/questions/69984897/current-shortcut-to-run-python-in-vs-code
I use VS Code on a Mac laptop. If I'm using Python I can run the code by pressing the little arrow in the top right, However, I can't seem to find a keyboard shortcut for this. There is an old question, How to execute Python code from within Visual Studio Code, but all the answers there seem either to be obsolete or not to work on a Mac. One of them says that the F5 key should work, but my Mac has a useless touchbar instead of function keys so it's no help to me. tl;dr is there a shortcut to run Python code on a modern VS Code installation besides F5, or an easy way to set one up?
I'm using Windows so I can't give you a specific answer, but go to Code > Preferences > Keyboard Shortcuts and search with the keyword run python file; you will get the related shortcuts.
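If you would rather have one fixed shortcut, you can also add your own binding in keybindings.json (reachable from that same Keyboard Shortcuts editor) - a sketch; the command ID assumes the Microsoft Python extension, and the key chord is only an example:
[
    {
        "key": "cmd+shift+r",
        "command": "python.execInTerminal",
        "when": "editorLangId == python"
    }
]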
6
6
69,983,472
2021-11-16
https://stackoverflow.com/questions/69983472/how-to-satisfy-flake8-w605-invalid-escape-sequence-and-string-format-in
I have an issue in Python. My original regex expression is: f"regex(metrics_api_failure\.prod\.[\w_]+\.{method_name}\.\d+\.\d+\.[\w_]+\.[\w_]+\.sum\.60)" (method_name is a local variable) and I got a lint warning: "[FLAKE8 W605] invalid escape sequence '\.'Arc(W605)" which seems to recommend using r as the string prefix. But if I do: r"regex(metrics_api_failure\.prod\.[\w_]+\.{method_name}\.\d+\.\d+\.[\w_]+\.[\w_]+\.sum\.60)" the {method_name} becomes a literal part of the string rather than a passed-in variable. Does anyone have an idea how to solve this dilemma?
Pass in the expression: r"regex(metrics_api_failure\.prod\.[\w_]+\." + method_name + r"\.\d+\.\d+\.[\w_]+\.[\w_]+\.sum\.60)" Essentially, use Python string concatenation to accomplish the same thing that you were doing with the brackets. Then, r"" type string escaping should work. Or use a raw format string: rf"regex(metrics_api_failure\.prod\.[\w_]+\.{method_name}\.\d+\.\d+\.[\w_]+\.[\w_]+\.sum\.60)"
13
27
69,950,010
2021-11-12
https://stackoverflow.com/questions/69950010/why-is-python-list-slower-when-sorted
In the following code, I create two lists with the same values: one list unsorted (s_not), the other sorted (s_yes). The values are created by randint(). I run some loop for each list and time it. import random import time for x in range(1,9): r = 10**x # do different val for the bound in randint() m = int(r/2) print("For rand", r) # s_not is non sorted list s_not = [random.randint(1,r) for i in range(10**7)] # s_yes is sorted s_yes = sorted(s_not) # do some loop over the sorted list start = time.time() for i in s_yes: if i > m: _ = 1 else: _ = 1 end = time.time() print("yes", end-start) # do the same to the unsorted list start = time.time() for i in s_not: if i > m: _ = 1 else: _ = 1 end = time.time() print("not", end-start) print() With output: For rand 10 yes 1.0437555313110352 not 1.1074268817901611 For rand 100 yes 1.0802974700927734 not 1.1524150371551514 For rand 1000 yes 2.5082249641418457 not 1.129960298538208 For rand 10000 yes 3.145440101623535 not 1.1366300582885742 For rand 100000 yes 3.313387393951416 not 1.1393756866455078 For rand 1000000 yes 3.3180911540985107 not 1.1336982250213623 For rand 10000000 yes 3.3231537342071533 not 1.13503098487854 For rand 100000000 yes 3.311596393585205 not 1.1345293521881104 So, when increasing the bound in the randint(), the loop over the sorted list gets slower. Why?
Cache misses. When N int objects are allocated back-to-back, the memory reserved to hold them tends to be in a contiguous chunk. So crawling over the list in allocation order tends to access the memory holding the ints' values in sequential, contiguous, increasing order too. Shuffle it, and the access pattern when crawling over the list is randomized too. Cache misses abound, provided there are enough different int objects that they don't all fit in cache. At r==10 and r==100, CPython happens to treat such small ints as singletons, so, e.g., despite that you have 10 million elements in the list, at r==100 it contains only (at most) 100 distinct int objects. All the data for those fit in cache simultaneously. Beyond that, though, you're likely to get more, and more, and more distinct int objects. Hardware caches then become increasingly useless when the access pattern is random. Illustrating: >>> from random import randint, seed >>> seed(987987987) >>> for x in range(1, 9): ... r = 10 ** x ... js = [randint(1, r) for _ in range(10_000_000)] ... unique = set(map(id, js)) ... print(f"{r:12,} {len(unique):12,}") ... 10 10 100 100 1,000 7,440,909 10,000 9,744,400 100,000 9,974,838 1,000,000 9,997,739 10,000,000 9,999,908 100,000,000 9,999,998
77
94
69,975,474
2021-11-15
https://stackoverflow.com/questions/69975474/python-typing-abstractmethod-with-default-arguments
I'm a little confused about how I'm supposed to type a base class abstract method. In this case my base class only requires that the inheriting class implements a method named 'learn' that returns None, without mandating any arguments. class MyBaseClass(ABC): @abstractmethod def learn(self, *args, **kwargs) -> None: raise NotImplementedError() But if I implement it, mypy raises the error 'Signature of "learn" incompatible with supertype "MyBaseClass"' class MyOtherClass(MyBaseClass): def learn(self, alpha=0.0, beta=1) -> None: # do something return None So how should I declare the learn method in the base class?
First things first, I'd ask why you want a method without known arguments. That sounds like a design problem. One solution It's fine to add new parameters to subclasses if those parameters have default values (and the base class doesn't use **kwargs), like class MyBaseClass: @abstractmethod def learn(self) -> None: raise NotImplementedError() class MyOtherClass(MyBaseClass): def learn(self, alpha=0.0, beta=1) -> None: ... though you won't be able to specify alpha and beta if you only know it's a MyBaseClass: def foo(x: MyOtherClass) -> None: x.learn(alpha=3) # ok def foo(x: MyBaseClass) -> None: x.learn(alpha=3) # not ok Why didn't *args, **kwargs work? If you have class MyBaseClass(ABC): @abstractmethod def learn(self, *args, **kwargs) -> None: raise NotImplementedError() then consider the function def foo(x: MyBaseClass) -> None: ... In that function, you can pass anything to x.learn, for example x.learn(1, fish='three'). This has to be true for any x. Since x can be an instance of a subclass of MyBaseClass (such as MyOtherClass), that subclass must also be able to accept those arguments.
5
2
69,973,873
2021-11-15
https://stackoverflow.com/questions/69973873/symbol-not-found-in-flat-namespace-ft-done-face-from-reportlab-with-python3
I have a django project using easy-thumbnail as a dependency. Installing all packages with pip is working as expected, but when I try to run my app I get this error: Invalid template library specified. ImportError raised when trying to load 'backend.templatetags.get_thumbnail': dlopen(/opt/homebrew/lib/python3.9/site-packages/reportlab/graphics/_renderPM.cpython-39-darwin.so, 0x0002): symbol not found in flat namespace '_FT_Done_Face' The error is raised from reportlab which is a dependency of easy-thumbnail. As far as I understand, reportlab is not able to find freetype. But it is installed correctly imho. I'm using macOS 12.0.1 I installed Python and freetype via Homebrew. pkg-config says that freetype2 is available at the expected paths. What am I doing wrong? How can I fix this? Edit I did otool -l on the failing .so file and this is what I get (here I'm running it again in a venv): /Users/markusgerards/.pyenv/versions/myapp/lib/python3.9/site-packages/reportlab/graphics/_renderPM.cpython-39-darwin.so: /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1292.60.1) I suspect that freetype should be listed there... right?
I reinstalled reportlab with this command: pip install reportlab --force-reinstall --no-cache-dir --global-option=build_ext This forced pip to actually build the package and now everything works as intended!
9
14
69,959,195
2021-11-13
https://stackoverflow.com/questions/69959195/filling-missing-values-with-mean-in-pyspark
I am trying to fill NaN values with mean using PySpark. Below is my code that I am using and following is the error that occurred: from pyspark.sql.functions import avg def fill_with_mean(df_1, exclude=set()): stats = df_1.agg(*(avg(c).alias(c) for c in df_1.columns if c not in exclude)) return df_1.na.fill(stats.first().asDict()) res = fill_with_mean(df_1, ["MinTemp", "MaxTemp", "Evaporation", "Sunshine"]) res.show() Error: Py4JJavaError Traceback (most recent call last) <ipython-input-35-42f4d984f022> in <module>() 3 stats = df_1.agg(*(avg(c).alias(c) for c in df_1.columns if c not in exclude)) 4 return df_1.na.fill(stats.first().asDict()) ----> 5 res = fill_with_mean(df_1, ["MinTemp", "MaxTemp", "Evaporation", "Sunshine"]) 6 res.show() 5 frames /usr/local/lib/python3.7/dist-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name) 326 raise Py4JJavaError( 327 "An error occurred while calling {0}{1}{2}.\n". --> 328 format(target_id, ".", name), value) 329 else: 330 raise Py4JError( Py4JJavaError: An error occurred while calling o376.fill. : java.lang.NullPointerException at org.apache.spark.sql.DataFrameNaFunctions.$anonfun$fillMap$1(DataFrameNaFunctions.scala:418) at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286) at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at scala.collection.TraversableLike.map(TraversableLike.scala:286) at scala.collection.TraversableLike.map$(TraversableLike.scala:279) at scala.collection.AbstractTraversable.map(Traversable.scala:108) at org.apache.spark.sql.DataFrameNaFunctions.fillMap(DataFrameNaFunctions.scala:407) at org.apache.spark.sql.DataFrameNaFunctions.fill(DataFrameNaFunctions.scala:232) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) Can you let me know where am I going wrong? Is there any alternative way to fill missing values using mean? This is how my dataframe looks like: I wish to see mean values filled in place of null. Also, Evaporation and sunshine are not completely null, there are other values in it too. The dataset is a csv file: from pyspark.sql.functions import * import pyspark infer_schema = "true" first_row_is_header = "true" delimiter = "," df_1= spark.read.format("csv").option("header","true").load('/content/weatherAUS.csv') df_1.show() Source: https://www.kaggle.com/jsphyg/weather-dataset-rattle-package
Based on your input data, I create my dataframe : from pyspark.sql import functions as F, Window df = spark.read.csv("./weatherAUS.csv", header=True, inferSchema=True, nullValue="NA") Then, I process the whole dataframe, excluding the columns you mentionned + the columns that cannot be replaced (date and location) exclude = ["date", "location"] + ["mintemp", "maxtemp", "evaporation", "sunshine"] df2 = df.select( *( F.coalesce(F.col(col), F.avg(col).over(Window.orderBy(F.lit(1)))).alias(col) if col.lower() not in exclude else F.col(col) for col in df.columns ) ) df2.show(5) +-------------------+----------+-------+-------+--------+-----------+--------+-----------+-------------+----------+----------+------------+------------+-----------+-----------+-----------+-----------+--------+--------+-------+-------+---------+------------+ | Date| Location|MinTemp|MaxTemp|Rainfall|Evaporation|Sunshine|WindGustDir|WindGustSpeed|WindDir9am|WindDir3pm|WindSpeed9am|WindSpeed3pm|Humidity9am|Humidity3pm|Pressure9am|Pressure3pm|Cloud9am|Cloud3pm|Temp9am|Temp3pm|RainToday|RainTomorrow| +-------------------+----------+-------+-------+--------+-----------+--------+-----------+-------------+----------+----------+------------+------------+-----------+-----------+-----------+-----------+--------+--------+-------+-------+---------+------------+ |2012-07-02 22:00:00|Townsville| 12.4| 23.3| 0.0| 6.0| 10.8| SSW| 33.0| SE| S| 7.0| 20.0| 34.0| 28.0| 1019.5| 1015.5| 1.0| 2.0| 17.5| 23.0| No| No| |2012-07-03 22:00:00|Townsville| 9.1| 21.7| 0.0| 5.0| 10.9| SE| 39.0| SSW| SSE| 17.0| 20.0| 26.0| 14.0| 1021.7| 1018.4| 1.0| 0.0| 16.4| 21.2| No| No| |2012-07-04 22:00:00|Townsville| 8.2| 23.4| 0.0| 5.2| 10.6| SSW| 30.0| SSW| NE| 22.0| 13.0| 34.0| 40.0| 1021.7| 1018.5| 2.0| 2.0| 17.1| 22.3| No| No| |2012-07-05 22:00:00|Townsville| 10.5| 24.5| 0.0| 6.0| 10.2| E| 39.0| SSW| SE| 11.0| 17.0| 48.0| 31.0| 1021.2| 1017.2| 1.0| 2.0| 17.9| 23.8| No| No| |2012-07-06 22:00:00|Townsville| 17.7| 24.1| 0.0| 6.8| 0.5| SE| 54.0| SE| ESE| 19.0| 31.0| 69.0| 58.0| 1019.2| 1017.0| 8.0| 7.0| 20.1| 23.2| No| No| +-------------------+----------+-------+-------+--------+-----------+--------+-----------+-------------+----------+----------+------------+------------+-----------+-----------+-----------+-----------+--------+--------+-------+-------+---------+------------+ only showing top 5 rows
6
1
69,970,569
2021-11-15
https://stackoverflow.com/questions/69970569/valueerror-unexpected-result-of-predict-function-empty-batch-outputs-pleas
I have this model: # Set random seed tf.random.set_seed(42) # Create some regression data X_regression = np.expand_dims(np.arange(0, 1000, 5), axis=0) y_regression = np.expand_dims(np.arange(100, 1100, 5), axis=0) # Split it into training and test sets X_reg_train = X_regression[:150] X_reg_test = X_regression[150:] y_reg_train = y_regression[:150] y_reg_test = y_regression[150:] # Setup random seed tf.random.set_seed(42) # Recreate the model model_3 = tf.keras.Sequential([ tf.keras.layers.Dense(100), tf.keras.layers.Dense(10), tf.keras.layers.Dense(1) ]) # Change the loss and metrics of our compiled model model_3.compile(loss=tf.keras.losses.mae, # change the loss function to be regression-specific optimizer=tf.keras.optimizers.Adam(learning_rate=0.01), metrics=['mae']) # change the metric to be regression-specific # Fit the recompiled model model_3.fit(X_reg_train, y_reg_train, epochs=100) To begin with, the model does not train well. On top of that, when I try to predict using that model, I get the following error: Why am I getting the above error and how can I fix it?
Change the axis dimension in expand_dims to 1 and slice your data like this, since it is 2D: import tensorflow as tf import numpy as np tf.random.set_seed(42) # Create some regression data X_regression = np.expand_dims(np.arange(0, 1000, 5), axis=1) y_regression = np.expand_dims(np.arange(100, 1100, 5), axis=1) # Split it into training and test sets X_reg_train = X_regression[:150, :] X_reg_test = X_regression[150:, :] y_reg_train = y_regression[:150, :] y_reg_test = y_regression[150:, :] tf.random.set_seed(42) # Recreate the model model_3 = tf.keras.Sequential([ tf.keras.layers.Dense(100), tf.keras.layers.Dense(10), tf.keras.layers.Dense(1) ]) # Change the loss and metrics of our compiled model model_3.compile(loss=tf.keras.losses.mae, # change the loss function to be regression-specific optimizer=tf.keras.optimizers.Adam(learning_rate=0.01), metrics=['mae']) # change the metric to be regression-specific # Fit the recompiled model model_3.fit(X_reg_train, y_reg_train, epochs=100) model_3.predict(X_reg_test)
6
4
69,950,418
2021-11-13
https://stackoverflow.com/questions/69950418/get-min-and-max-values-of-categorical-variable-in-a-dataframe
I have a dataframe that looks like this: D X Y Z A 22 16 23 A 21 16 22 A 20 17 21 B 33 50 11 B 34 53 12 B 34 55 13 C 44 34 11 C 45 33 11 C 45 33 10 D 55 35 60 D 57 34 61 E 66 36 13 E 67 38 14 E 67 37 13 I want to get the minimum and maximum values of the categorical variable D across all the column values and so the output dataframe should look something like this: D Xmin Xmax Ymin Ymax Zmin Zmax A 20 22 16 17 21 23 B 33 34 50 55 11 13 C 44 45 33 34 10 11 D 55 57 34 35 60 61 E 66 67 36 38 13 14 I have tried this, but no luck: min_max_df = dfObj.groupby('D').agg({'X': [dfObj.min(axis=0), dfObj.max(axis=0)]})
I believe this is a nice way of doing it, and in a single line of code. It makes use of join to do the operation by index, with lsuffix and rsuffix to differentiate min and max. output = df.groupby('D').min().join(df.groupby('D').max(), lsuffix='min', rsuffix='max') Outputs: Xmin Xmax Ymin Ymax Zmin Zmax D A 20 22 16 17 21 23 B 33 34 50 55 11 13 C 44 45 33 34 10 11 D 55 57 34 35 60 61 E 66 67 36 38 13 14
17
9
69,962,789
2021-11-14
https://stackoverflow.com/questions/69962789/how-to-find-the-number-of-neighbours-pixels-in-binary-array
I am looking for an easy way to count the number of the green pixels in the image below, where the original image is the same but the green pixels are black. I tried it with numpy.diff(), but then I am counting some pixels twice. I thought about numpy.gradient() – but here I am not sure if it is the right tool. I know there have to be many solutions to this problem, but I don't know how to google for it. I am looking for a solution in python. To make it clearer, I have only one image (only black and white pixels). The image with the green pixel is just for illustration.
You can use the edge detection kernel for this problem. import numpy as np from scipy.ndimage import convolve a = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 1, 1, 1]]) kernel = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]]) Then, we will convolve the original array with the kernel. Notice that the edges are all negatives. >>> convolve(a, kernel) [[-1 -2 -3 -3] [-2 5 3 3] [-3 3 0 0]] We will count the number of negative values and get the result. >>> np.where(convolve(a, kernel) < 0, 1, 0) [[1 1 1 1] [1 0 0 0] [1 0 0 0]] >>> np.sum(np.where(convolve(a, kernel) < 0, 1, 0)) 6 Edges-only kernel There are a lot of things you can do with the kernel. For example, you can modify the kernel if you don't want to include diagonal neighbors. kernel = np.array([[ 0, -1, 0], [-1, 4, -1], [ 0, -1, 0]]) This gives the following output. >>> np.where(convolve(a, kernel) < 0, 1, 0) [[0 1 1 1] [1 0 0 0] [1 0 0 0]] >>> np.sum(np.where(convolve(a, kernel) < 0, 1, 0)) 5
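If your data is an image file rather than an array, a sketch of bringing it into the same 0/1 form first (assuming a grayscale file named mask.png where white pixels are the shape):
import numpy as np
from PIL import Image
from scipy.ndimage import convolve

img = np.array(Image.open("mask.png").convert("L"))   # grayscale values 0-255
a = (img > 127).astype(int)                           # threshold to a binary 0/1 array
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]])
print(int(np.sum(convolve(a, kernel) < 0)))           # number of background pixels touching the shape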
6
6
69,957,212
2021-11-13
https://stackoverflow.com/questions/69957212/converting-timestamp-to-epoch-milliseconds-in-pyspark
I have a dataset like the below: epoch_seconds eq_time 1636663343887 2021-11-12 02:12:23 Now, I am trying to convert the eq_time to epoch seconds which should match the value of the first column but am unable to do so. Below is my code: df = spark.sql("select '1636663343887' as epoch_seconds") df1 = df.withColumn("eq_time", from_unixtime(col("epoch_seconds") / 1000)) df2 = df1.withColumn("epoch_sec", unix_timestamp(df1.eq_time)) df2.show(truncate=False) I am getting output like below: epoch_seconds eq_time epoch_sec 1636663343887 2021-11-12 02:12:23 1636663343 I tried this link as well but didn't help. My expected output is that the first and third columns should match each other. P.S: I am using the Spark 3.1.1 version on local whereas it is Spark 2.4.3 in production, and my end goal would be to run it in production.
Use to_timestamp instead of from_unixtime to preserve the milliseconds part when you convert epoch to spark timestamp type. Then, to go back to timestamp in milliseconds, you can use unix_timestamp function or by casting to long type, and concatenate the result with the fraction of seconds part of the timestamp that you get with date_format using pattern S: import pyspark.sql.functions as F df = spark.sql("select '1636663343887' as epoch_ms") df2 = df.withColumn( "eq_time", F.to_timestamp(F.col("epoch_ms") / 1000) ).withColumn( "epoch_milli", F.concat(F.unix_timestamp("eq_time"), F.date_format("eq_time", "S")) ) df2.show(truncate=False) #+-------------+-----------------------+-------------+ #|epoch_ms |eq_time |epoch_milli | #+-------------+-----------------------+-------------+ #|1636663343887|2021-11-11 21:42:23.887|1636663343887| #+-------------+-----------------------+-------------+
10
8
69,960,699
2021-11-14
https://stackoverflow.com/questions/69960699/any-simpler-way-to-assign-multiple-columns-in-python-like-r-data-table
I'm wondering if there's any simpler way to assign multiple columns in Python, just like the := in R data.table. For example, in Python I would have to write like this: df['Col_A'] = df.A/df.B df['Col_B'] = df.C/df.D df['Col_C'] = df.E/df.F * 1000000 df['Col_D'] = df.G/df.H * 1000000 However, it's just one line of code in R data.table: df[, ':='(Col_A = A/B, Col_B = C/D, Col_C = E/F*1000000, Col_B = G/H*1000000)]
You can use DataFrame.assign to assign multiple columns: df = df.assign(Col_A=df.A/df.B, Col_B=df.C/df.D, Col_C=df.E/df.F*1000000, Col_D=df.G/df.H*1000000) Example: df = pd.DataFrame(np.random.random((4, 8)), columns=list('ABCDEFGH')) # A B ... H # 0 0.771211 0.238201 ... 0.311904 # 1 0.197548 0.635218 ... 0.626639 # 2 0.332333 0.838589 ... 0.477978 # 3 0.929690 0.327412 ... 0.046179 df = df.assign(Col_A=df.A/df.B, Col_B=df.C/df.D, Col_C=df.E/df.F*1000000, Col_D=df.G/df.H*1000000) # A B ... H Col_A Col_B Col_C Col_D # 0 0.771211 0.238201 ... 0.311904 3.237647 1.547285 1.463586e+06 2.845234e+06 # 1 0.197548 0.635218 ... 0.626639 0.310993 1.385892 1.394466e+07 2.685293e+05 # 2 0.332333 0.838589 ... 0.477978 0.396300 0.078238 8.494174e+06 6.001031e+05 # 3 0.929690 0.327412 ... 0.046179 2.839514 0.852443 1.962892e+06 8.791233e+06 If you want column names with spaces, you can use a dict: df = df.assign(**{'Col A': df.A/df.B, 'Col B': df.C/df.D, 'Col C': df.E/df.F*1000000, 'Col D': df.G/df.H*1000000}) # A B ... H Col A Col B Col C Col D # 0 0.868320 0.086743 ... 0.505330 10.010311 6.680195 1.147554e+06 2.620416e+05 # 1 0.244341 0.908793 ... 0.389684 0.268863 2.388179 2.196769e+06 2.235063e+06 # 2 0.917949 0.248149 ... 0.710027 3.699188 0.453094 1.311617e+06 1.004200e+06 # 3 0.616655 0.498817 ... 0.703579 1.236235 2.186589 1.272981e+06 8.602272e+05
5
3
69,958,768
2021-11-13
https://stackoverflow.com/questions/69958768/what-flags-to-use-for-configure-when-building-python-from-source
I am building Python 3.10 from source on Ubuntu 18.04, following instructions from several web links, primarily the Python website (https://devguide.python.org/setup) and RealPython (https://realpython.com/installing-python/#how-to-build-python-from-source-code). I extracted Python-3.10.0.tgz into /opt/Python3.10. I have three questions. First, the Python website says to use ./configure --with-pydebug and RealPython says to use ./configure --enable-optimizations --with-ensurepip=install. Another source says to include --enable-shared and --enable-unicode=ucs4. Which of these is best? Should I use all of those flags? Second, I currently have Python 3.6 and Python 3.8 installed. They are installed in several directories under /usr. Following the directions I have seen on the web I am building in /opt/Python3.10. I assume that make altinstall (the final build step) will take care of installing the build in the usual folders under /usr, but that's not clear. Should I use ./configure --prefix=directory although none of the web sources mention doing that? Finally, how much does --enable-optimizations slow down the install process? This is my first time building Python from source, and it will help to clear these things up. Thanks for any help.
Welcome to the world of Python build configuration! I'll go through the command line options to ./configure one by one. --with-pydebug is for core Python developers, not developers (like you and me) just using Python. It creates debugging symbols and slows down execution. You don't need it. --enable-optimizations is good for performance in the long run, at the expense of lengthening the compiling process, possibly by 3-fold (or more), depending on your system. However, it results in faster execution, so I would use it in your situation. --with-ensurepip=install is good. You want the most up-to-date version of pip. --enable-shared is maybe not a good idea in your case, so I'd recommend not using it here. Read Difference between static and shared libraries? to understand the difference. Basically, since you'll possibly be installing to a non-system path (/opt/local, see below) that almost certainly isn't on your system's search path for shared libraries, you'll very likely run into problems down the road. A static build has all the pieces in one place, so you can install and run it from wherever. This is at the expense of size - the python binary will be rather large - but is great for non-sys admins. Even if you end up installing to /usr/local, I would argue that static is better/easier than shared. --enable-unicode=ucs4 is optional, and may not be compatible with your system. You don't need it. ./configure is smart enough to figure out what Unicode settings are best. This option is left over from build instructions that are quite a few versions out of date. --prefix I would suggest you use --prefix=/opt/local if that directory already exists and is in your $PATH, or if you know how to edit your $PATH in ~/.bashrc. Otherwise, use /usr/local or $HOME. /usr/local is the designated system-wide location for local software installs (i.e., stuff that doesn't come with Ubuntu), and is likely already on your $PATH. $HOME is always an option that doesn't require the use of sudo, which is great from a security perspective. You'll need to add /home/your_username/bin to your $PATH if it isn't already present.
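Putting that together, a sketch of the whole build - the source path and the --prefix are examples, adjust them to your setup:
cd /opt/Python3.10/Python-3.10.0
./configure --prefix="$HOME/.local" --enable-optimizations --with-ensurepip=install
make -j "$(nproc)"
make altinstall                      # installs python3.10 without shadowing the system python3
"$HOME/.local/bin/python3.10" --version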
8
11
69,954,697
2021-11-13
https://stackoverflow.com/questions/69954697/why-does-loc-assignment-with-two-sets-of-brackets-result-in-nan-in-a-pandas-dat
I have a DataFrame: name age 0 Paul 25 1 John 27 2 Bill 23 I know that if I enter: df[['name']] = df[['age']] I'll get the following: name age 0 25 25 1 27 27 2 23 23 But I expect the same result from the command: df.loc[:, ['name']] = df.loc[:, ['age']] But instead, I get this: name age 0 NaN 25 1 NaN 27 2 NaN 23 For some reason, if I omit those square brackets [] around column names, I'll get exactly what I expected. That is the command: df.loc[:, 'name'] = df.loc[:, 'age'] gives the right result: name age 0 25 25 1 27 27 2 23 23 Why does two pairs of brackets with .loc result in NaN? Is it some sort of a bug or is it intended behaviour? I can't figure out the reason for such a behaviour.
That's because for the loc assignment all index axes are aligned, including the columns. Since age and name do not match, there is no data to assign, hence the NaNs. You can make it work by renaming the columns: df.loc[:, ["name"]] = df.loc[:, ["age"]].rename(columns={"age": "name"}) or by accessing the numpy array: df.loc[:, ["name"]] = df.loc[:, ["age"]].values
13
3
69,953,800
2021-11-13
https://stackoverflow.com/questions/69953800/pip-could-not-find-a-version-that-satisfies-the-requirement
I'm having problems with installing a package using pip. I tried: pip install jurigged Causing these errors: ERROR: Could not find a version that satisfies the requirement jurigged (from versions: none) ERROR: No matching distribution found for jurigged I checked if pip was up to date which was the case. I'm on Python 3.7.4. Does anyone know a solution to this problem?
From PyPI, jurigged is only supported as of Python >= 3.8 (see also here) pip doesn't find anything to install because you do not meet the requirements. Upgrade to Python >= 3.8 and do the same: pip install jurigged
35
24
69,921,629
2021-11-10
https://stackoverflow.com/questions/69921629/transformers-autotokenizer-tokenize-introducing-extra-characters
I am using HuggingFace transformers AutoTokenizer to tokenize small segments of text. However this tokenization is splitting incorrectly in the middle of words and introducing # characters to the tokens. I have tried several different models with the same results. Here is an example of a piece of text and the tokens that were created from it. CTO at TLR Communications Pty Ltd ['[CLS]', 'CT', '##O', 'at', 'T', '##LR', 'Communications', 'P', '##ty', 'Ltd', '[SEP]'] And here is the code I am using to generate the tokens tokenizer = AutoTokenizer.from_pretrained("tokenizer_bert.json") tokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(sequence)))
This is not an error but a feature. BERT and other transformers use the WordPiece tokenization algorithm, which tokenizes strings into either: (1) known words; or (2) "word pieces" for words not in the tokenizer vocabulary. In your example, the words "CTO", "TLR", and "Pty" are not in the tokenizer vocabulary, and thus WordPiece splits them into subwords. E.g. the first subword is "CT" and another part is "##O", where "##" denotes that the subword is connected to its predecessor. This is a great feature that allows any string to be represented.
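A small sketch showing that the pieces reassemble losslessly (the checkpoint name is just an example):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokens = tokenizer.tokenize("CTO at TLR Communications Pty Ltd")
print(tokens)                                       # e.g. ['CT', '##O', 'at', 'T', '##LR', ...]
print(tokenizer.convert_tokens_to_string(tokens))   # 'CTO at TLR Communications Pty Ltd'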
6
4
69,948,897
2021-11-12
https://stackoverflow.com/questions/69948897/pandas-valueerror-worksheet-index-0-is-invalid-0-worksheets-found
Simple problem that has me completely dumbfounded. I am trying to read an Excel document with pandas but I am stuck with this error: ValueError: Worksheet index 0 is invalid, 0 worksheets found My code snippet works well for all but one Excel document linked below. Is this an issue with my Excel document (which definitely has sheets when I open it in Excel) or am I missing something completely obvious? Excel Document EDIT - Forgot the code. It is quite simple: import pandas as pd df = pd.read_excel(FOLDER + 'omx30.xlsx') FOLDER is the absolute path to the folder in which the file is located.
It seems there is indeed a problem with my Excel file. We have not been able to figure out what, though. For now the path of least resistance is simply saving it as a .csv in Excel and using pd.read_csv to read that instead.
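A minimal sketch of that workaround, assuming the sheet was exported next to the original file as omx30.csv:
import pandas as pd

df = pd.read_csv(FOLDER + 'omx30.csv')   # you may need sep=';' depending on Excel's locale when exporting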
14
0
69,933,345
2021-11-11
https://stackoverflow.com/questions/69933345/expected-min-ndim-2-found-ndim-1-full-shape-received-none
In my model, I have a normalizing layer for a 1-column feature array. I assume this gives a 1-ndim output: single_feature_model = keras.models.Sequential([ single_feature_normalizer, layers.Dense(1) ]) Normalization step: single_feature_normalizer = preprocessing.Normalization(axis=None) single_feature_normalizer.adapt(single_feature) The error I'm getting is: ValueError Traceback (most recent call last) <ipython-input-98-22191285d676> in <module>() 2 single_feature_model = keras.models.Sequential([ 3 single_feature_normalizer, ----> 4 layers.Dense(1) # Linear Model 5 ]) /usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name) 225 ndim = x.shape.rank 226 if ndim is not None and ndim < spec.min_ndim: --> 227 raise ValueError(f'Input {input_index} of layer "{layer_name}" ' 228 'is incompatible with the layer: ' 229 f'expected min_ndim={spec.min_ndim}, ' ValueError: Input 0 of layer "dense_27" is incompatible with the layer: expected min_ndim=2, found ndim=1. Full shape received: (None,) It seems that the dense layer is looking for a 2-ndim array while the normalization layer outputs a 1-ndim array. Is there any way to solve this and get the model working?
I think you need to explicitly define an input layer with your input shape, since your output layer cannot infer the shape of the tensor coming from the normalization layer: import tensorflow as tf single_feature_normalizer = tf.keras.layers.Normalization(axis=None) feature = tf.random.normal((314, 1)) single_feature_normalizer.adapt(feature) single_feature_model = tf.keras.models.Sequential([ tf.keras.layers.Input(shape=(1,)), single_feature_normalizer, tf.keras.layers.Dense(1) ]) Or define the input shape directly in the normalization layer without using an input layer: single_feature_normalizer = tf.keras.layers.Normalization(input_shape=[1,], axis=None)
6
7
69,935,149
2021-11-11
https://stackoverflow.com/questions/69935149/how-to-extract-part-of-a-string-in-pandas-column-and-make-a-new-column
I have the below pandas dataframe. d = {'col1': [1, 2,3,4,5,60,0,0,6,3,2,4],'col3': [1, 22,33,44,55,60,1,5,6,3,2,4],'Name': ['2a df a1asd_V1', 'xcd a2asd_V3','23vg aabsd_V1','dfgdf_aabsd_V0','a3as d_V1','aa bsd_V3','aasd_V4','aabsd_V4','aa_adn sd_V15',np.nan,'aasd_V12','aasd120Abs'],'Date': ['2021-06-13', '2021-06-13','2021-06-13','2021-06-14','2021-06-15','2021-06-15','2021-06-13','2021-06-16','2021-06-13','2021-06-13','2021-06-13','2021-06-16']} dff = pd.DataFrame(data=d) dff col1 col3 Name Date 0 1 1 2a df a1asd_V1 2021-06-13 1 2 22 xcd a2asd_V3 2021-06-13 2 3 33 23vg aabsd_V1 2021-06-13 3 4 44 dfgdf_aabsd_V0 2021-06-14 4 5 55 a3as d_V1 2021-06-15 5 60 60 aa bsd_V3 2021-06-15 6 0 1 aasd_V4 2021-06-13 7 0 5 aabsd_V4 2021-06-16 8 6 6 aa_adn sd_V10 2021-06-13 9 3 3 NaN 2021-06-13 10 2 2 aasd_V12 2021-06-13 11 4 4 aasd120Abs 2021-06-16 I want to make two new columns based on the Name column. I want to extract the part of the string in the Name column like V1, V2, V3, V4...V20 like that. Also if there isn't anything like that at end of the Name string or if the Name row is empty, just want to make an empty cell. So I want something like below pandas dataframe. col1 col3 Name Date Version Version 0 1 1 2a df a1asd_V1 2021-06-13 V1 Version 1 1 2 22 xcd a2asd_V3 2021-06-13 V3 Version 3 2 3 33 23vg aabsd_V1 2021-06-13 V1 Version 1 3 4 44 dfgdf_aabsd_V0 2021-06-14 V0 Version 0 4 5 55 a3as d_V1 2021-06-15 V1 Version 1 5 60 60 aa bsd_V3 2021-06-15 V3 Version 3 6 0 1 aasd_V4 2021-06-13 V4 Version 4 7 0 5 aabsd_V4 2021-06-16 V4 Version 4 8 6 6 aa_adn sd_V10 2021-06-13 V10 Version 10 9 3 3 NaN 2021-06-13 10 2 2 aasd_V12 2021-06-13 V12 Version 12 11 4 4 aasd120Abs 2021-06-16 Is it possible to do that? I know in SQL we can do that using "LIKE" WHEN `Name` LIKE '%V10%' THEN 'Verison 10'. Is there a similar command or any other way to do that in python? Thanks in advance! Any help is appreciated!
Use str.extract with a regex and str.replace to rename values: dff['Version_short'] = dff['Name'].str.extract('_(V\d+)$').fillna('') dff['Version_long'] = dff['Version_short'].str.replace('V', 'Version ') Output: >>> dff col1 col3 Name Date Version_short Version_long 0 1 1 2a df a1asd_V1 2021-06-13 V1 Version 1 1 2 22 xcd a2asd_V3 2021-06-13 V3 Version 3 2 3 33 23vg aabsd_V1 2021-06-13 V1 Version 1 3 4 44 dfgdf_aabsd_V0 2021-06-14 V0 Version 0 4 5 55 a3as d_V1 2021-06-15 V1 Version 1 5 60 60 aa bsd_V3 2021-06-15 V3 Version 3 6 0 1 aasd_V4 2021-06-13 V4 Version 4 7 0 5 aabsd_V4 2021-06-16 V4 Version 4 8 6 6 aa_adn sd_V15 2021-06-13 V15 Version 15 9 3 3 NaN 2021-06-13 10 2 2 aasd_V12 2021-06-13 V12 Version 12 11 4 4 aasd120Abs 2021-06-16
7
13
69,930,036
2021-11-11
https://stackoverflow.com/questions/69930036/deprecationwarning-desired-capabilities-has-been-deprecated-please-pass-in-a-s
Maybe someone has met this issue... I use a custom 'chrome driver' for PyTest with a 'performance' log: cap = webdriver.DesiredCapabilities.CHROME.copy() cap['goog:loggingPrefs'] = {'performance': 'ALL'} services = Service(executable_path='/usr/local/bin/chromedriver') chrome_driver = webdriver.Chrome(desired_capabilities=cap, options=options, service=services) My environment: Selenium 4.0.0 ChromeDriver 95.0.4638.17 And now I get a warning: =============================== warnings summary =============================== tests/test_name.py::test_main[/] /Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/selenium/webdriver/chrome/webdriver.py:69: DeprecationWarning: desired_capabilities has been deprecated, please pass in a Service object super(WebDriver, self).__init__(DesiredCapabilities.CHROME['browserName'], "goog", -- Docs: https://docs.pytest.org/en/stable/warnings.html ======================== 1 passed, 1 warning in 10.49s ========================= Maybe someone knows how to handle this?
See below: from selenium import webdriver from selenium.webdriver.chrome.service import Service as ChromeService options = webdriver.ChromeOptions() options.set_capability("loggingPrefs", {'performance': 'ALL'}) service = ChromeService(executable_path='/usr/local/bin/chromedriver') driver = webdriver.Chrome(service=service, options=options) If the above doesn't work, try: options.set_capability("goog:loggingPrefs", {'performance': 'ALL'})
8
11
69,928,211
2021-11-11
https://stackoverflow.com/questions/69928211/plotly-px-scatter-3d-marker-size
I have a dataframe df: x y z ... colours marker_size marker_opacity test1 0.118709 0.219099 -0.024387 ... red 100 0.5 test2 -0.344873 -0.401508 0.169995 ... blue 100 0.5 test3 -0.226923 0.021078 0.400358 ... red 100 0.5 test4 0.085421 0.098442 -0.588749 ... purple 100 0.5 test5 0.367666 0.062889 0.042783 ... green 100 0.5 I am trying to plot this with plotly like so: fig = px.scatter_3d(df, x='x', y='y', z = 'z', color='labels', hover_name = df.index, opacity = 0.5, size = 'marker_size') fig.write_html(file_name) When I open file_name, everything is fine, but my points are too big. When I alter the 'marker_size' column of my df, nothing changes (I have tried 0.1, 1, 10, 100...). Why is this? I have also tried: Param: size = 1: Result: ValueError: Value of 'size' is not the name of a column in 'data_frame'. Expected one of ['x', 'y', 'z', 'labels', 'colours', 'marker_size', 'marker_opacity'] but received: 1 Param: size = [1]*len(df): Result: No difference to using the 'marker_size' df column
If you're looking to increase the marker size of all traces, just use: fig.update_traces(marker_size = 12) Details: The size attribute of px.scatter_3d isn't there to let you specify the size of your markers directly, but rather to let you add a fourth dimension to your scatter plot representing the varying size of another variable. size: str or int or Series or array-like Either a name of a column in `data_frame`, or a pandas Series or array_like object. Values from this column or array_like are used to assign mark sizes. The reason why changing the value from 1 to 10 or 100 makes no difference is that you seem to have been changing all the values in df['marker_size'] at the same time. In order for such an assignment to have an effect, you would need a variable and not a constant in df['marker_size']. You can take a closer look at how these things work through this snippet: import plotly.express as px df = px.data.iris() fig = px.scatter_3d(df, x='sepal_length', y='sepal_width', z='petal_width', color='species', size = 'petal_length' ) fig.show() Here you can see that the size attribute works as intended, since the markers have varying sizes as defined by df['petal_length'].
8
16