Dataset columns (value ranges as reported by the dataset viewer): question_id: int64, 59.5M to 79.4M; creation_date: string, 8 to 10 characters; link: string, 60 to 163 characters; question: string, 53 to 28.9k characters; accepted_answer: string, 26 to 29.3k characters; question_vote: int64, 1 to 410; answer_vote: int64, -9 to 482. Each record below lists these fields in that order.
60,289,195
2020-2-18
https://stackoverflow.com/questions/60289195/open-cv-trivial-circle-detection-how-to-get-least-squares-instead-of-a-contou
My goal is to accurately measure the diameter of a hole from a microscope. Workflow is: take an image, process for fitting, fit, convert radius in pixels to mm, write to a csv This is an output of my image processing script used to measure the diameter of a hole. I'm having an issue where it seems like my circle fitting is prioritizing matching the contour rather than something like a least squares approach. I've alternatively averaged many fits in something like this: My issue here is I like to quickly scan to make sure the circle fit is appropriate. The trade off is the more fits I have, the more realistic the fit, the fewer I have the easier is to make sure the number is correct. My circles aren't always as pretty and circular as this one so it's important to me. Here's the piece of my script fitting circles if you could take a look and tell me how to do more of a least squares approach on the order of 5 circles. I don't want to use minimum circle detection because a fluid is flowing through this hole so I'd like it to be more like a hydraulic diameter-- thanks! (thresh, blackAndWhiteImage0) = cv2.threshold(img0, 100, 255, cv2.THRESH_BINARY) #make black + white median0 = cv2.medianBlur(blackAndWhiteImage0, 151) #get rid of noise circles0 = cv2.HoughCircles(median0,cv2.HOUGH_GRADIENT,1,minDist=5,param1= 25, param2=10, minRadius=min_radius_small,maxRadius=max_radius_small) #fit circles to image
Here is another way to fit a circle by getting the equivalent circle center and radius from the binary image using connected components and drawing a circle from that using Python/OpenCV/Skimage. Input: import cv2 import numpy as np from skimage import measure # load image and set the bounds img = cv2.imread("dark_circle.png") # convert to grayscale gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # blur blur = cv2.GaussianBlur(gray, (3,3), 0) # threshold thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1] # apply morphology open with a circular shaped kernel kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5,5)) binary = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=2) # find contour and draw on input (for comparison with circle) cnts = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] c = cnts[0] result = img.copy() cv2.drawContours(result, [c], -1, (0, 255, 0), 1) # find radius and center of equivalent circle from binary image and draw circle # see https://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.regionprops # Note: this should be the same as getting the centroid and area=cv2.CC_STAT_AREA from cv2.connectedComponentsWithStats and computing radius = 0.5*sqrt(4*area/pi) or approximately from the area of the contour and computed centroid via image moments. regions = measure.regionprops(binary) circle = regions[0] yc, xc = circle.centroid radius = circle.equivalent_diameter / 2.0 print("radius =",radius, " center =",xc,",",yc) xx = int(round(xc)) yy = int(round(yc)) rr = int(round(radius)) cv2.circle(result, (xx,yy), rr, (0, 0, 255), 1) # write result to disk cv2.imwrite("dark_circle_fit.png", result) # display it cv2.imshow("image", img) cv2.imshow("thresh", thresh) cv2.imshow("binary", binary) cv2.imshow("result", result) cv2.waitKey(0) Result showing contour (green) compared to circle fit (red): Circle Radius and Center: radius = 117.6142467296168 center = 220.2169911178609 , 150.26823599797507 A least squares fit method (between the contour points and a circle) can be obtained using Scipy. For example, see: https://gist.github.com/lorenzoriano/6799568 https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html
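For the Scipy least-squares option mentioned at the end of the answer above, a minimal sketch (assuming the contour variable c from the snippet above, and that scipy is installed) could look like this:

    import numpy as np
    from scipy import optimize

    pts = c.reshape(-1, 2).astype(float)   # contour points as (x, y) pairs
    x, y = pts[:, 0], pts[:, 1]

    def residuals(params):
        xc, yc, r = params
        return np.hypot(x - xc, y - yc) - r   # distance of each point from the candidate circle

    # initial guess: centroid of the points and mean distance to it
    x0 = [x.mean(), y.mean(), np.hypot(x - x.mean(), y - y.mean()).mean()]
    fit = optimize.least_squares(residuals, x0)
    xc, yc, r = fit.x
    print("least-squares circle: center =", (xc, yc), "radius =", r)

Unlike the minimum enclosing circle, this minimizes the summed squared radial error over all contour points, which is closer to the "average" circle the question asks for.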
7
5
60,289,768
2020-2-18
https://stackoverflow.com/questions/60289768/error-unhashable-type-dict-with-dataclass
I have a class Table: @dataclass(frozen=True, eq=True) class Table: name: str signature: Dict[str, Type[DBType]] primary_key: str foreign_keys: Dict[str, Type[ForeignKey]] indexed: List[str] And I need to create a dictionary like this: table = Table(*args) {table: 'id'} TypeError: unhashable type: 'dict' I don't understand what the problem is.
The autogenerated hash method isn't safe, since it tries to hash the unhashable attributes signature, foreign_keys, and indexed. You need to define your own __hash__ method that ignores those attributes. One possibility is def __hash__(self): return hash((self.name, self.primary_key)) Both self.name and self.primary_key are immutable, so a tuple containing those values is also immutable and thus hashable. An alternative to defining this method explicitly would be to use the field function to turn off the mutable fields for hashing purposes. @dataclass(frozen=True, eq=True) class Table: name: str signature: Dict[str, Type[DBType]] = field(compare=False) primary_key: str foreign_keys: Dict[str, Type[ForeignKey]] = field(compare=False) indexed: List[str] = field(compare=False) field has a hash parameter whose default value is the value of compare, and the documentation discourages using a different value for hash. (Probably to ensure that equal items hash identically.) It's unlikely that you really want to use these three fields for the purposes of comparing two tables, so this should be OK. I would consult the documentation rather than relying on my relatively uninformed summary of it.
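A self-contained sketch of the field(compare=False) approach (field types are simplified and foreign_keys omitted here purely so the example runs on its own):

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass(frozen=True, eq=True)
    class Table:
        name: str
        primary_key: str
        signature: Dict[str, str] = field(default_factory=dict, compare=False)
        indexed: List[str] = field(default_factory=list, compare=False)

    table = Table("users", "id", {"id": "integer"}, ["id"])
    print({table: "id"})   # works: only name and primary_key feed the generated __hash__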
8
15
60,285,688
2020-2-18
https://stackoverflow.com/questions/60285688/python-typehint-for-os-getenv-causes-downstream-incompatible-type-errors
When using os.getenv to retrieve environment variables, the default behavior returns a type of Optional[str]. This is problematic as any downstream methods/functions that utilize these variables will likely be defined to accept a str type explicitly. Is there an accepted usage to get around this issue or enforce the str return type? In the stub file definition for getenv in typeshed you can find that getenv can have a return type of Optional[str] or Union[str, T_] depending on the usage of the default kwarg. The four options I can see as yet are: Define any downstream operations to accept Optional[str] types as arguments. This doesn't feel particularly right as a function/method may not be structured in a way that the Optional type makes sense. i.e. the operation has no reason for a particular argument to be None. Use the default kwarg for getenv and provide a str default value. This seems more correct, but requires that one set a default value for every usage of getenv. The only problem I can see with this is that doing so may be confounding for testing or usage in different environments. Define some kind of variable checking function. This could be a function that accepts the name of an environment variable to load, explicitly returns a string, and raises an error if the environment variable doesn't exist. Explicitly set the type of the value returned by getenv to be a str. I really don't like this as it expects the environment to always be properly configured which, in my experience, is not a good assumption. Find below an example that raises a mypy error. import os SOME_VAR = os.getenv("SOME_VAR") def some_func(val: str) -> None: print(f"Loaded env var: {val}") some_func(SOME_VAR) The above raises the mypy error: error: Argument 1 to "some_func" has incompatible type "Optional[str]"; expected "str"
tl;dr Use os.environ['SOME_VAR'] if you're sure it's always there os.getenv can and does return None -- mypy is being helpful in showing you have a bug there: >>> repr(os.getenv('DOES_NOT_EXIST')) 'None' >>> repr(os.getenv('USER')) "'asottile'" Alternatively, you can convince mypy that it is of the type you expect in two different ways: utilizing assertions: x = os.getenv('SOME_VAR') assert x is not None, x # mypy will believe that it is non-None after this point utilizing a cast: from typing import cast x = cast(str, os.getenv('SOME_VAR')) # mypy will believe that it is a `str` after this point (the cast has some downsides in that it is never checked, whereas the assertion will hopefully lead to a test failure) I would suggest not ignoring this error / working around it and instead use os.environ['SOME_VAR'] for things you expect to always be there, or write a condition to check for the error case when it is missing
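A sketch of the checker-function idea from the question (option 3), in line with the answer's advice to handle the missing case explicitly; require_env is a hypothetical helper name:

    import os

    def require_env(name: str) -> str:
        """Return the environment variable as a plain str, or fail loudly if it is missing."""
        value = os.getenv(name)
        if value is None:
            raise RuntimeError(f"required environment variable {name!r} is not set")
        return value

    SOME_VAR = require_env("SOME_VAR")   # mypy now sees str, not Optional[str]

    def some_func(val: str) -> None:
        print(f"Loaded env var: {val}")

    some_func(SOME_VAR)   # no incompatible-type error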
10
13
60,241,138
2020-2-15
https://stackoverflow.com/questions/60241138/animation-of-tangent-line-of-a-3d-curve
I am writing a Python program to animate a tangent line along a 3D curve. However, my tangent line is not moving. I think the problem is line.set_data(np.array(Tangent[:,0]).T,np.array(Tangent[:,1]).T) in animate(i) but I can't figure out. Any help will be appreciated. The following is the code. from mpl_toolkits import mplot3d import numpy as np import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation import matplotlib matplotlib.use( 'tkagg' ) plt.style.use('seaborn-pastel') fig = plt.figure() ax = plt.axes(projection='3d') ax = plt.axes(projection='3d') # Data for a three-dimensional line zline = np.linspace(0, 15, 1000) xline = np.sin(zline) yline = np.cos(zline) ax.plot3D(xline, yline, zline, 'red') def curve(t): return [np.sin(t),np.cos(t),t] def vector_T(t): T = [np.cos(t),-np.sin(t),1] return T/np.linalg.norm(T) len = 2 def tangent_line(t): P = np.add(curve(t),len*vector_T(t)) Q = np.subtract(curve(t),len*vector_T(t)) return np.array([P, Q]).T t0 = 0 Tangent=tangent_line(t0) line, = ax.plot3D(Tangent[0], Tangent[1], Tangent[2], 'green') def init(): line.set_data([], []) return line, def animate(i): t0 = 15* (i/200) Tangent=tangent_line(t0) #print(Tangent) line.set_data(np.array(Tangent[:,0]).T,np.array(Tangent[:,1]).T) return line, anim = FuncAnimation(fig, animate, init_func=init, frames=200, interval=20, blit=True) plt.show()
you've called the wrong function in animate: Replace line.set_data(...) with line.set_data_3d(Tangent[0], Tangent[1], Tangent[2]) and it will work. There are still some minor issues in the code (e.g., don't use len as a variable name). I'd recommend using the following: #!/usr/bin/env python3 from mpl_toolkits import mplot3d import numpy as np import matplotlib.pyplot as plt from matplotlib.animation import FuncAnimation import matplotlib matplotlib.use('tkagg') plt.style.use('seaborn-pastel') fig = plt.figure() ax = plt.axes(projection='3d') # Data for a three-dimensional line zline = np.linspace(0, 15, 1000) xline = np.sin(zline) yline = np.cos(zline) ax.plot3D(xline, yline, zline, 'red') def curve(t): return [ np.sin(t), np.cos(t), t ] def tangent(t): t = [ np.cos(t), -np.sin(t), 1.0 ] return t/np.linalg.norm(t) def tangent_line(t): length = 2.0 offset = length * tangent(t) pos = curve(t) return np.array([ pos-offset, pos+offset ]).T line = ax.plot3D(*tangent_line(0), 'green')[0] def animate(i): line.set_data_3d(*tangent_line(15* (i/200))) return [ line ] anim = FuncAnimation(fig, animate, frames=200, interval=20, blit=True) plt.show()
7
4
60,281,354
2020-2-18
https://stackoverflow.com/questions/60281354/apply-minmaxscaler-on-multiple-columns-in-pyspark
I want to apply the MinMaxScaler of PySpark to multiple columns of a PySpark data frame df. So far, I only know how to apply it to a single column, e.g. x. from pyspark.ml.feature import MinMaxScaler pdf = pd.DataFrame({'x':range(3), 'y':[1,2,5], 'z':[100,200,1000]}) df = spark.createDataFrame(pdf) scaler = MinMaxScaler(inputCol="x", outputCol="x") scalerModel = scaler.fit(df) scaledData = scalerModel.transform(df) What if I have 100 columns? Is there any way to do min-max scaling for many columns in PySpark? Update: Also, how do I apply MinMaxScaler to integer or double values? It throws the following error: java.lang.IllegalArgumentException: requirement failed: Column length must be of type struct<type:tinyint,size:int,indices:array<int>,values:array<double>> but was actually int.
Question 1: How to change your example to run properly. You need to prepare the data as a vector for the transformers to work. from pyspark.ml.feature import MinMaxScaler from pyspark.ml import Pipeline from pyspark.ml.feature import VectorAssembler pdf = pd.DataFrame({'x':range(3), 'y':[1,2,5], 'z':[100,200,1000]}) df = spark.createDataFrame(pdf) assembler = VectorAssembler(inputCols=["x"], outputCol="x_vec") scaler = MinMaxScaler(inputCol="x_vec", outputCol="x_scaled") pipeline = Pipeline(stages=[assembler, scaler]) scalerModel = pipeline.fit(df) scaledData = scalerModel.transform(df) Question 2: To run MinMaxScaler on multiple columns you can use a pipeline that receives a list of transformations prepared with a list comprehension: from pyspark.ml import Pipeline from pyspark.ml.feature import MinMaxScaler columns_to_scale = ["x", "y", "z"] assemblers = [VectorAssembler(inputCols=[col], outputCol=col + "_vec") for col in columns_to_scale] scalers = [MinMaxScaler(inputCol=col + "_vec", outputCol=col + "_scaled") for col in columns_to_scale] pipeline = Pipeline(stages=assemblers + scalers) scalerModel = pipeline.fit(df) scaledData = scalerModel.transform(df) Check this example pipeline in the official documentation. Eventually, you will end up with results in this format: >>> scaledData.printSchema() root |-- x: long (nullable = true) |-- y: long (nullable = true) |-- z: long (nullable = true) |-- x_vec: vector (nullable = true) |-- y_vec: vector (nullable = true) |-- z_vec: vector (nullable = true) |-- x_scaled: vector (nullable = true) |-- y_scaled: vector (nullable = true) |-- z_scaled: vector (nullable = true) >>> scaledData.show() +---+---+----+-----+-----+--------+--------+--------+--------------------+ | x| y| z|x_vec|y_vec| z_vec|x_scaled|y_scaled| z_scaled| +---+---+----+-----+-----+--------+--------+--------+--------------------+ | 0| 1| 100|[0.0]|[1.0]| [100.0]| [0.0]| [0.0]| [0.0]| | 1| 2| 200|[1.0]|[2.0]| [200.0]| [0.5]| [0.25]|[0.1111111111111111]| | 2| 5|1000|[2.0]|[5.0]|[1000.0]| [1.0]| [1.0]| [1.0]| +---+---+----+-----+-----+--------+--------+--------+--------------------+ Extra Post-processing: You can restore the columns to their original names with some post-processing. For example: from pyspark.sql import functions as f names = {x + "_scaled": x for x in columns_to_scale} scaledData = scaledData.select([f.col(c).alias(names[c]) for c in names.keys()]) The output will be: scaledData.show() +------+-----+--------------------+ | y| x| z| +------+-----+--------------------+ | [0.0]|[0.0]| [0.0]| |[0.25]|[0.5]|[0.1111111111111111]| | [1.0]|[1.0]| [1.0]| +------+-----+--------------------+
19
24
60,280,466
2020-2-18
https://stackoverflow.com/questions/60280466/merging-two-dataframes-with-pd-na-in-merge-column-yields-typeerror-boolean-val
With Pandas 1.0.1, I'm unable to merge if the df = df.merge(df2, on=some_column) yields File /home/torstein/code/fintechdb/Sheets/sheets/gild.py, line 42, in gild df = df.merge(df2, on=some_column) File /home/torstein/anaconda3/lib/python3.7/site-packages/pandas/core/frame.py, line 7297, in merge validate=validate, File /home/torstein/anaconda3/lib/python3.7/site-packages/pandas/core/reshape/merge.py, line 88, in merge return op.get_result() File /home/torstein/anaconda3/lib/python3.7/site-packages/pandas/core/reshape/merge.py, line 643, in get_result join_index, left_indexer, right_indexer = self._get_join_info() File /home/torstein/anaconda3/lib/python3.7/site-packages/pandas/core/reshape/merge.py, line 862, in _get_join_info (left_indexer, right_indexer) = self._get_join_indexers() File /home/torstein/anaconda3/lib/python3.7/site-packages/pandas/core/reshape/merge.py, line 841, in _get_join_indexers self.left_join_keys, self.right_join_keys, sort=self.sort, how=self.how File /home/torstein/anaconda3/lib/python3.7/site-packages/pandas/core/reshape/merge.py, line 1311, in _get_join_indexers zipped = zip(*mapped) File /home/torstein/anaconda3/lib/python3.7/site-packages/pandas/core/reshape/merge.py, line 1309, in <genexpr> for n in range(len(left_keys)) File /home/torstein/anaconda3/lib/python3.7/site-packages/pandas/core/reshape/merge.py, line 1918, in _factorize_keys rlab = rizer.factorize(rk) File pandas/_libs/hashtable.pyx, line 77, in pandas._libs.hashtable.Factorizer.factorize File pandas/_libs/hashtable_class_helper.pxi, line 1817, in pandas._libs.hashtable.PyObjectHashTable.get_labels File pandas/_libs/hashtable_class_helper.pxi, line 1732, in pandas._libs.hashtable.PyObjectHashTable._unique File pandas/_libs/missing.pyx, line 360, in pandas._libs.missing.NAType.__bool__ TypeError: boolean value of NA is ambiguous while this works: df[some_column].fillna(np.nan, inplace=True) df2[some_column].fillna(np.nan, inplace=True) df = df.merge(df2, on=some_column) # Works If instead, I do df[some_column].fillna(pd.NA, inplace=True) then the error returns.
This has to do with pd.NA being implemented in pandas 1.0.0 and how the pandas team decided it should work in a boolean context. Also, take into account that it is an experimental feature, so it shouldn't be used for anything but experimenting: Warning Experimental: the behaviour of pd.NA can still change without warning. Another part of the pandas documentation, which covers working with missing values, is where I believe the reason and the answer you are looking for can be found: NA in a boolean context: Since the actual value of an NA is unknown, it is ambiguous to convert NA to a boolean value. The following raises an error: TypeError: boolean value of NA is ambiguous Furthermore, it provides a valuable piece of advice: "This also means that pd.NA cannot be used in a context where it is evaluated to a boolean, such as if condition: ... where condition can potentially be pd.NA. In such cases, isna() can be used to check for pd.NA or condition being pd.NA can be avoided, for example by filling missing values beforehand."
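A minimal sketch of the workaround the question itself reports, on a small object-dtype example (assumes pandas 1.0.x, where merging on pd.NA keys raises the error):

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({"key": ["a", pd.NA], "left": [1, 2]})
    df2 = pd.DataFrame({"key": ["a", pd.NA], "right": [3, 4]})

    # df.merge(df2, on="key")   # on pandas 1.0.x this raises: boolean value of NA is ambiguous

    df["key"] = df["key"].fillna(np.nan)    # replace pd.NA with np.nan in the merge column
    df2["key"] = df2["key"].fillna(np.nan)
    print(df.merge(df2, on="key"))          # merges without the TypeError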
11
13
60,273,813
2020-2-18
https://stackoverflow.com/questions/60273813/what-is-runtime-in-context-of-python-what-does-it-consist-of
In the context of this question What is “runtime”? (https://stackoverflow.com/questions/3900549/what-is-runtime/3900561) I am trying to understand what a Python runtime would be made of. My guess is: The Python process that contains all runtime variables. The GIL. The underlying interpreter code (CPython etc.). Now if this is right, can we say that multiprocessing in Python creates multiple runtimes, and that a Python process is something we can directly relate to the runtime? (I think this is the right option) Or, can every Python thread with its own stack, which works on the same GIL and memory space as the parent process, be said to have a separate runtime? Or, no matter how many threads or processes are running, does it all come under a single runtime? Simply put, what is the definition of runtime in the context of Python? PS: I understand the difference between threads and processes. GIL: I understand the impacts but I do not grok it.
You are talking about two different (yet similar) concepts in computer science; multiprocess, and multithreading. Here is some compilation of questions/answers that might be useful: Multiprocessing -- Wikipedia Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system.The term also refers to the ability of a system to support more than one processor or the ability to allocate tasks between them. Multithreading -- Wikipedia In computer architecture, multithreading is the ability of a central processing unit (CPU) (or a single core in a multi-core processor) to provide multiple threads of execution concurrently, supported by the operating system. This approach differs from multiprocessing. In a multithreaded application, the threads share the resources of a single or multiple cores, which include the computing units, the CPU caches, and the translation lookaside buffer (TLB). What is the difference between a process and a thread? -- StackOverflow Process Each process provides the resources needed to execute a program. A process has a virtual address space, executable code, open handles to system objects, a security context, a unique process identifier, environment variables, a priority class, minimum and maximum working set sizes, and at least one thread of execution. Each process is started with a single thread, often called the primary thread, but can create additional threads from any of its threads. Thread A thread is an entity within a process that can be scheduled for execution. All threads of a process share its virtual address space and system resources. In addition, each thread maintains exception handlers, a scheduling priority, thread local storage, a unique thread identifier, and a set of structures the system will use to save the thread context until it is scheduled. The thread context includes the thread's set of machine registers, the kernel stack, a thread environment block, and a user stack in the address space of the thread's process. Threads can also have their own security context, which can be used for impersonating clients. Meaning of “Runtime Environment” and of “Software framework”? -- StackOverflow A runtime environment basically is a virtual machine that runs on top of a machine - provides machine abstraction. It is generally lower level than a library. A framework can contain a runtime environment, but is generally tied to a library. Runtime System -- Wikipedia In computer programming, a runtime system, also called runtime environment, primarily implements portions of an execution model. Most languages have some form of runtime system that provides an environment in which programs run. This environment may address a number of issues including the layout of application memory, how the program accesses variables, mechanisms for passing parameters between procedures, interfacing with the operating system, and otherwise. Typically the runtime system will have some responsibility for setting up and managing the stack and heap, and may include features such as garbage collection, threads or other dynamic features built into the language. global interpreter lock -- Python Docs The mechanism used by the CPython interpreter to assure that only one thread executes Python bytecode at a time. This simplifies the CPython implementation by making the object model (including critical built-in types such as dict) implicitly safe against concurrent access. 
Locking the entire interpreter makes it easier for the interpreter to be multi-threaded, at the expense of much of the parallelism afforded by multi-processor machines. However, some extension modules, either standard or third-party, are designed so as to release the GIL when doing computationally-intensive tasks such as compression or hashing. Also, the GIL is always released when doing I/O. Past efforts to create a “free-threaded” interpreter (one which locks shared data at a much finer granularity) have not been successful because performance suffered in the common single-processor case. It is believed that overcoming this performance issue would make the implementation much more complicated and therefore costlier to maintain. What is the Python Global Interpreter Lock (GIL)? -- Real Python Useful source for more info on GIL. Does python os.fork uses the same python interpreter? -- StackOverflow Whenever you fork, the entire Python process is duplicated in memory (including the Python interpreter, your code and any libraries, current stack etc.) to create a second process - one reason why forking a process is much more expensive than creating a thread. This creates a new copy of the python interpreter. One advantage of having two python interpreters running is that you now have two GIL's (Global Interpreter Locks), and therefore can have true multi-processing on a multi-core system. Threads in one process share the same GIL, meaning only one runs at a given moment, giving only the illusion of parallelism. Memory Management -- Python Docs Memory management in Python involves a private heap containing all Python objects and data structures. The management of this private heap is ensured internally by the Python memory manager. The Python memory manager has different components which deal with various dynamic storage management aspects, like sharing, segmentation, preallocation or caching. When you spawn a thread via the threading library, you are effectively spawning jobs inside a single Python runtime. This runtime ensures the threads have a shared memory and manages the running sequence of these threads via the global interpreter lock: Understanding the Python GIL -- dabeaz When you spawn a process via the multiprocessing library, you are spawning a new process that contains a new Python interpreter (a new runtime) that runs the designated code. If you want to share memory you have to use multiprocessing.shared_memory: multiprocessing.shared_memory -- Python Docs This module provides a class, SharedMemory, for the allocation and management of shared memory to be accessed by one or more processes on a multicore or symmetric multiprocessor (SMP) machine. To assist with the life-cycle management of shared memory especially across distinct processes, a BaseManager subclass, SharedMemoryManager, is also provided in the multiprocessing.managers module. Can we say that multiprocessing in python creates multiple runtimes and a python process is something we can directly relate to the runtime? Yes. Different GIL, different memory space, different runtime. Every python thread with its own stack which works on the same GIL and memory space as the parent process can be called as having a separate runtime? Depends what you mean by "stack". Same GIL, shared memory space, same runtime. Doesn't matter how many threads and processes are running, it will all come under a single runtime? Depends if multithreading/multiprocess. Simply put, what is the definition of runtime in the context of Python? 
The runtime environment is literally python.exe or /usr/bin/python. It's the Python executable that runs your Python code by compiling it to bytecode and executing that bytecode in its virtual machine (the bytecode is read by the interpreter, not directly by the CPU). When you multithread, you only have one python running. When you multiprocess you have multiple pythons running. I hope that a core dev can come in and speak more to this in greater detail. For now, the above is simply a compilation of sources for you to start understanding/seeing the bigger picture.
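A small sketch of the shared-memory point above (standard library only): a thread mutates the parent interpreter's state, while a child process only changes its own copy in a separate runtime.

    import threading
    import multiprocessing

    counter = 0   # module-level state

    def bump():
        global counter
        counter += 1

    if __name__ == "__main__":
        t = threading.Thread(target=bump)
        t.start(); t.join()
        print("after thread:", counter)    # 1 -- same interpreter, shared memory

        p = multiprocessing.Process(target=bump)
        p.start(); p.join()
        print("after process:", counter)   # still 1 -- the child incremented its own copy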
7
18
60,270,639
2020-2-17
https://stackoverflow.com/questions/60270639/python-mocking-sqlalchemy-connection
I have a simple function that connects to a DB and fetches some data. db.py from sqlalchemy import create_engine from sqlalchemy.pool import NullPool def _create_engine(app): impac_engine = create_engine( app['DB'], poolclass=NullPool # this setting enables NOT to use Pooling, preventing from timeout issues. ) return impac_engine def get_all_pos(app): engine = _create_engine(app) qry = """SELECT DISTINCT id, name FROM p_t ORDER BY name ASC""" try: cursor = engine.execute(qry) rows = cursor.fetchall() return rows except Exception as re: raise re I'm trying to write some test cases by mocking this connection - tests.py import unittest from db import get_all_pos from unittest.mock import patch from unittest.mock import Mock class TestPosition(unittest.TestCase): @patch('db.sqlalchemy') def test_get_all_pos(self, mock_sqlalchemy): mock_sqlalchemy.create_engine = Mock() get_all_pos({'DB': 'test'}) if __name__ == '__main__': unittest.main() When I run the above file python tests.py, I get the following error - "Could not parse rfc1738 URL from string '%s'" % name sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string 'test' Shouldn't mock_sqlalchemy.create_engine = Mock() give me a mock object and bypass the URL check.
Another option would be to mock your _create_engine function. Since this is a unit test and we want to test get_all_pos we shouldn't need to rely on the behavior of _create_engine, so we can just patch that like so. import unittest import db from unittest.mock import patch class TestPosition(unittest.TestCase): @patch.object(db, '_create_engine') def test_get_all_pos(self, mock_sqlalchemy): args = {'DB': 'test'} db.get_all_pos(args) mock_sqlalchemy.assert_called_once() mock_sqlalchemy.assert_called_with({'DB': 'test'}) if __name__ == '__main__': unittest.main() If you want to test certain results you will need to properly set all the corresponding attributes. I would recommend not chaining it into one call so that it is more readable as shown below. import unittest import db from unittest.mock import patch from unittest.mock import Mock class Cursor: def __init__(self, vals): self.vals = vals def fetchall(self): return self.vals class TestPosition(unittest.TestCase): @patch.object(db, '_create_engine') def test_get_all_pos(self, mock_sqlalchemy): to_test = [1, 2, 3] mock_cursor = Mock() cursor_attrs = {'fetchall.return_value': to_test} mock_cursor.configure_mock(**cursor_attrs) mock_execute = Mock() engine_attrs = {'execute.return_value': mock_cursor} mock_execute.configure_mock(**engine_attrs) mock_sqlalchemy.return_value = mock_execute args = {'DB': 'test'} rows = db.get_all_pos(args) mock_sqlalchemy.assert_called_once() mock_sqlalchemy.assert_called_with({'DB': 'test'}) self.assertEqual(to_test, rows)
9
5
60,266,554
2020-2-17
https://stackoverflow.com/questions/60266554/type-object-datetime-datetime-has-no-attribute-fromisoformat
I have a script with the following import: from datetime import datetime and a piece of code where I call: datetime.fromisoformat(duedate) Sadly, when I run the script with an instance of Python 3.6, the console returns the following error: AttributeError: type object 'datetime.datetime' has no attribute 'fromisoformat' I tried to run it from two instances of anaconda (3.7 and 3.8) and it works nice and smooth. I supposed there was an import problem so I tried to copy datetime.py from anaconda/Lib to the script directory, with no success. The datetime.py clearly contains the class datetime and the method fromisoformat but still it seems unlinked. I even tried to explicitly link the datetime.py file, with the same error: parent_dir = os.path.abspath(os.path.dirname(__file__)) vendor_dir = os.path.join(parent_dir, 'libs') sys.path.append(vendor_dir+os.path.sep+"datetime.py") Can you help me? My ideas are over...
The issue here is actually that fromisoformat is not available in Python versions older than 3.7; you can see that clearly stated in the documentation here. Return a date corresponding to a date_string given in the format YYYY-MM-DD: >>> from datetime import date >>> date.fromisoformat('2019-12-04') datetime.date(2019, 12, 4) This is the inverse of date.isoformat(). It only supports the format YYYY-MM-DD. New in version 3.7.
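If upgrading Python is not an option, one possible fallback on 3.6 is to parse the string with strptime; this is only a sketch and assumes the input matches one of the two formats listed (parse_iso is a hypothetical helper, not a stdlib function):

    from datetime import datetime

    def parse_iso(value):
        try:
            return datetime.fromisoformat(value)               # available on Python 3.7+
        except AttributeError:                                 # Python 3.6 and older
            for fmt in ("%Y-%m-%dT%H:%M:%S", "%Y-%m-%d"):
                try:
                    return datetime.strptime(value, fmt)
                except ValueError:
                    continue
            raise ValueError("unsupported ISO 8601 string: " + value)

    print(parse_iso("2019-12-04"))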
46
59
60,264,419
2020-2-17
https://stackoverflow.com/questions/60264419/does-it-make-sense-to-use-sklearn-gridsearchcv-together-with-calibratedclassifie
What I want to do is to derive a classifier which is optimal in its parameters with respect to a given metric (for example the recall score) but also calibrated (in the sense that the output of the predict_proba method can be directly interpreted as a confidence level, see https://scikit-learn.org/stable/modules/calibration.html). Does it make sense to use sklearn GridSearchCV together with CalibratedClassifierCV, that is, to fit a classifier via GridSearchCV, and then pass the GridSearchCV output to the CalibratedClassifierCV object? If I'm correct, the CalibratedClassifierCV object would fit a given estimator cv times, and the probabilities for each of the folds are then averaged for prediction. However, the results of the GridSearchCV could be different for each of the folds.
Yes you can do this and it would work. I don't know if it makes sense to do this, but the least I can do is explain what I believe would happen. We can compare doing this to the alternative which is getting the best estimator from the grid search and feeding that to the calibration. Simply getting the best estimator and feeding it to calibrationcv from sklearn.model_selection import GridSearchCV from sklearn import svm, datasets from sklearn.calibration import CalibratedClassifierCV iris = datasets.load_iris() parameters = {'kernel':('linear', 'rbf'), 'C':[1, 10]} svc = svm.SVC() clf = GridSearchCV(svc, parameters) clf.fit(iris.data, iris.target) calibration_clf = CalibratedClassifierCV(clf.best_estimator_) calibration_clf.fit(iris.data, iris.target) calibration_clf.predict_proba(iris.data[0:10]) array([[0.91887427, 0.07441489, 0.00671085], [0.91907451, 0.07417992, 0.00674558], [0.91914982, 0.07412815, 0.00672202], [0.91939591, 0.0738401 , 0.00676399], [0.91894279, 0.07434967, 0.00670754], [0.91910347, 0.07414268, 0.00675385], [0.91944594, 0.07381277, 0.0067413 ], [0.91903299, 0.0742324 , 0.00673461], [0.91951618, 0.07371877, 0.00676505], [0.91899007, 0.07426733, 0.00674259]]) Feeding grid search in the Calibration cv from sklearn.model_selection import GridSearchCV from sklearn import svm, datasets from sklearn.calibration import CalibratedClassifierCV iris = datasets.load_iris() parameters = {'kernel':('linear', 'rbf'), 'C':[1, 10]} svc = svm.SVC() clf = GridSearchCV(svc, parameters) cal_clf = CalibratedClassifierCV(clf) cal_clf.fit(iris.data, iris.target) cal_clf.predict_proba(iris.data[0:10]) array([[0.900434 , 0.0906832 , 0.0088828 ], [0.90021418, 0.09086583, 0.00891999], [0.90206035, 0.08900572, 0.00893393], [0.9009212 , 0.09012478, 0.00895402], [0.90101953, 0.0900889 , 0.00889158], [0.89868497, 0.09242412, 0.00889091], [0.90214948, 0.08889812, 0.0089524 ], [0.8999936 , 0.09110965, 0.00889675], [0.90204193, 0.08896843, 0.00898964], [0.89985101, 0.09124147, 0.00890752]]) Notice that the output of the probabilities are slightly different between the two. The difference between each method is: Using the best estimator is only doing the calibration across 5 splits (the default cv). It uses the same estimator in all 5 splits. Using grid search, is doing going to fit a grid search on each of the 5 CV splits from calibration 5 times. You are essentially doing cross validation on 4/5 of the data each time choosing the best estimator for the 4/5 of the data and then doing the calibration with that best estimator on the last 5th. You could have slightly different models running on each set of test data depending on what the grid search chooses. I think the grid search and calibration are different goals so in my opinion I would probably work on each separately and go with the first way specified above get a model that works the best and then feed that in the calibration curve. However, I don't know your specific goals so I can't say that the 2nd way described here is the WRONG way. You can always try both ways and see what gives you better performance and go with the one that works best.
8
7
60,264,393
2020-2-17
https://stackoverflow.com/questions/60264393/pandas-copy-value-from-one-column-to-another-if-condition-is-met
I have a dataframe: df = col1 col2 col3 1 2 3 1 4 6 3 7 2 I want to edit df, such that when the value of col1 is smaller than 2 , take the value from col3. So I will get: new_df = col1 col2 col3 3 2 3 6 4 6 3 7 2 I tried to use assign and df.loc but it didn't work. What is the best way to do so?
df['col1'] = df.apply(lambda x: x['col3'] if x['col1'] < 2 else x['col1'], axis=1) (the condition x['col1'] < 2 matches the "smaller than 2" rule and the expected output in the question)
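For larger frames, a vectorized version avoids the row-wise apply; this is a sketch using the example frame from the question:

    import numpy as np

    df['col1'] = np.where(df['col1'] < 2, df['col3'], df['col1'])

    # or with .loc, which the question mentions having tried:
    # mask = df['col1'] < 2
    # df.loc[mask, 'col1'] = df.loc[mask, 'col3']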
17
16
60,257,856
2020-2-17
https://stackoverflow.com/questions/60257856/is-data-safety-guaranteed-while-using-threadpoolexecutor-from-pythons-future
I'm looking for a conceptual answer to this question. I'm wondering whether using ThreadPoolExecutor in Python to perform concurrent tasks guarantees that data is not corrupted; that is, that multiple threads don't access critical data at the same time. If so, how does ThreadPoolExecutor work internally to ensure that critical data is accessed by only one thread at a time?
Thread pools do not guarantee that shared data is not corrupted. Threads can swap at any bytecode execution boundary and corruption is always a risk. Shared data should be protected by synchronization resources such as locks, condition variables and events. See the threading module docs. concurrent.futures.ThreadPoolExecutor is a thread pool specialized to the concurrent.futures async task model. But all of the risks of traditional threading are still there. If you are using the Python async model, things that fiddle with shared data should be dispatched on the main thread. The thread pool should be used for autonomous events, especially those that wait on blocking I/O.
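A minimal sketch of the kind of locking the answer recommends when pool workers touch shared state:

    import threading
    from concurrent.futures import ThreadPoolExecutor

    counter = 0
    lock = threading.Lock()

    def work(_):
        global counter
        with lock:              # serialize access to the shared counter
            counter += 1

    with ThreadPoolExecutor(max_workers=8) as pool:
        list(pool.map(work, range(1000)))

    print(counter)   # 1000 every time; without the lock, updates could be lost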
7
8
60,248,740
2020-2-16
https://stackoverflow.com/questions/60248740/how-to-set-navigator-webdriver-to-undefined-with-selenium-for-firefox-geckodriv
I am trying to set the navigator.webdriver variable in the Firefox browser to undefined using Selenium in Python. I have been able to successfully do this when using Chrome, but now I need to do the same thing in Firefox. When using the Firefox webdriver, execute_cdp_cmd(...) does not exist. Does anyone know how to do the same thing using the Firefox webdriver instead of the Chrome webdriver? Please see the relevant code below. driver.execute_cdp_cmd("Page.addScriptToEvaluateOnNewDocument", { "source": """ Object.defineProperty(navigator, 'webdriver', { get: () => undefined }) """ })
I have since found a solution to my problem. The code below will set "navigator.webdriver" to undefined in a Firefox browser being run by Selenium. profile.set_preference("dom.webdriver.enabled", False)
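For context, the preference is set on the profile that the driver is started with; a sketch assuming the Selenium 3 FirefoxProfile API that was current at the time of the question (Selenium 4 moves this onto Options):

    from selenium import webdriver

    profile = webdriver.FirefoxProfile()
    profile.set_preference("dom.webdriver.enabled", False)
    profile.update_preferences()
    driver = webdriver.Firefox(firefox_profile=profile)

    driver.get("https://example.org")
    print(driver.execute_script("return navigator.webdriver"))   # expected: None (undefined)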
8
5
60,220,751
2020-2-14
https://stackoverflow.com/questions/60220751/is-there-a-way-to-take-screen-shots-of-desktop-that-not-current-active-one-using
I'm trying to record the screen while playing a website using mss and opencv, but I don't want the program to capture the current screen. I want the playback to run on a second desktop, like Desktop 2 in the following picture. macOS has a four-desktop setup, so I can work on Desktop 1 without any interruption.
Currently, MSS does not support capturing inactive workspaces. This Ask Ubuntu answer indicates (referencing a post on an "archived" (i.e. apparently deleted) forum) that this is not normally possible in the X Window System.1 The answer discusses using Xvfb as a workaround, but this does not appear to be useful for screen-capture software, as it is essentially a way to run the application on a virtual display, which can then be captured normally. If taking a screenshot of an inactive space is possible on macOS (which I consider unlikely for the same reasons as on X), you would likely need to use non-API functions from CoreGraphics (as there was no public API for spaces as of 2016). This GitHub repository documents those functions, though the repository was last updated in 2016, so it may not be as helpful as you would like. Another option, which may be the least OS-dependent, is to run a headless virtual machine and take screenshots of that. How well this works will depend on the virtual machine manager and the virtual machine itself, as well as how you are taking the screenshots. 1 Basically because the inactive desktops are not rendered. For anyone who sees this question while looking for a way to take screenshots of other monitors, look at the edit history of this answer.
7
6
60,246,570
2020-2-16
https://stackoverflow.com/questions/60246570/gensim-lda-coherence-score-nan
I created a Gensim LDA Model as shown in this tutorial: https://www.machinelearningplus.com/nlp/topic-modeling-gensim-python/ lda_model = gensim.models.LdaMulticore(data_df['bow_corpus'], num_topics=10, id2word=dictionary, random_state=100, chunksize=100, passes=10, per_word_topics=True) And it generates 10 topics with a log_perplexity of: lda_model.log_perplexity(data_df['bow_corpus']) = -5.325966117835991 But when I run the coherence model on it to calculate coherence score, like so: coherence_model_lda = CoherenceModel(model=lda_model, texts=data_df['bow_corpus'].tolist(), dictionary=dictionary, coherence='c_v') with np.errstate(invalid='ignore'): lda_score = coherence_model_lda.get_coherence() My LDA-Score is nan. What am I doing wrong here?
Solved! CoherenceModel requires the original texts, instead of the bag-of-words training corpus fed to the LDA model - so when I ran this: coherence_model_lda = CoherenceModel(model=lda_model, texts=data_df['corpus'].tolist(), dictionary=dictionary, coherence='c_v') with np.errstate(invalid='ignore'): lda_score = coherence_model_lda.get_coherence() I got a coherence score of: 0.462 Hope this helps someone else making the same mistake. Thanks!
8
12
60,243,099
2020-2-15
https://stackoverflow.com/questions/60243099/what-is-the-meaning-of-the-second-output-of-huggingfaces-bert
Using the vanilla configuration of base BERT model in the huggingface implementation, I get a tuple of length 2. import torch import transformers from transformers import AutoModel,AutoTokenizer bert_name="bert-base-uncased" tokenizer = AutoTokenizer.from_pretrained(bert_name) BERT = AutoModel.from_pretrained(bert_name) e=tokenizer.encode('I am hoping for the best', add_special_tokens=True) q=BERT(torch.tensor([e])) print (len(q)) #Output: 2 The first element is what I expect to receive - the 768 dimension embedding of each input token. print (e) #Output : [101, 1045, 2572, 5327, 2005, 1996, 2190, 102] print (q[0].shape) #Output : torch.Size([1, 8, 768]) But what is the second element in the tuple? print (q[1].shape) # torch.Size([1, 768]) It has the same size as the encoding of each token. But what is it? Maybe a copy of the [CLS] token, a representation for the classification of the entire encoded text? Let's check. a= q[0][:,0,:] b=q[1] print (torch.eq(a,b)) #Output : Tensor([[False, False, False, .... False]]) Nope! What about a copy the embedding of the last token (for whatever reason)? c= q[0][:,-1,:] b=q[1] print (torch.eq(a,c)) #Output : Tensor([[False, False, False, .... False]]) So, also not that. The documentation talks about how changing the config can result in more tuple elements (like hidden states), but I did not find any description of this "mysterious" tuple element outputted by the default configuration. Any ideas as to what is it and what is its usage?
The output in this case is a tuple of (last_hidden_state, pooler_output). The pooler_output is not a plain copy of the [CLS] hidden state: it is the last hidden state of the [CLS] token passed through an additional dense layer and a tanh activation (the "pooler"), which is why the element-wise comparisons in the question come out False. You can find documentation about the possible return values here.
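A sketch that makes the relationship explicit, reusing q and BERT from the question and assuming the BertPooler attribute layout in transformers (a dense layer followed by tanh, applied to the [CLS] hidden state):

    import torch

    cls_hidden = q[0][:, 0]                              # raw hidden state of the [CLS] token
    pooled = torch.tanh(BERT.pooler.dense(cls_hidden))   # what the pooler computes
    print(torch.allclose(pooled, q[1], atol=1e-6))       # True: q[1] is the pooler_output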
7
8
60,238,433
2020-2-15
https://stackoverflow.com/questions/60238433/what-is-the-meaning-of-pydantic-modelsschemas-in-python-while-building-an-api
I am new at python, and I am trying to build an API with FastAPI.It's working so far, I connected with postgres db, I made post/get/ request and everything is working, but I don't have good understanding why we define the schemas like this, why do we have to create an class UserBase(BaseModel) class UserCreate(UserBase) class User(UserBase) I will post the source code, for all the files, and if you guys could help me to get a good understanding over this,it would really help me so much, because I've got an assignement for tomorrow. schemas.py from typing import List from pydantic import BaseModel ##BOOKING class BookingBase(BaseModel): name:str description:str = None class BookingCreate(BookingBase): pass class Booking(BookingBase): id:int user_id:int class Config: orm_mode = True ##USER class UserBase(BaseModel): email: str class UserCreate(UserBase): password: str class User(UserBase): id: int is_active: bool bookings: List[Booking] = [] class Config: orm_mode = True models.py from .database import Base from sqlalchemy import Boolean, Column, ForeignKey, Integer, String,DateTime from sqlalchemy.sql import func from sqlalchemy.orm import relationship class User(Base): __tablename__ = "users" id = Column(Integer, primary_key=True,index=True) email = Column(String, unique=True, index= True) hashed_password = Column(String) is_active = Column(Boolean,default=True) bookings = relationship("Booking", back_populates="owner") class Booking(Base): __tablename__ = "bookings" id=Column(Integer,primary_key=True,index=True) name = Column(String,index=True) description = Column(String, index=True) created_date = Column(DateTime, server_default=func.now()) user_id = Column(Integer,ForeignKey("users.id")) owner = relationship("User",back_populates="bookings") crud.py from . import models,schemas from sqlalchemy.orm import Session def get_user(db:Session,user_id:int): return db.query(models.User).filter(models.User.id == user_id).first() def fetch_user_by_email(db:Session,email:str): return db.query(models.User).filter(models.User.email == email).first() def get_all_users(db: Session, skip: int = 0, limit: int = 100): return db.query(models.User).offset(skip).limit(limit).all() def get_bookings(db:Session,skip:int=0,limit:int=100): return db.query(models.Booking).offset(skip).limit(limit).all() def create_new_user(db:Session,user:schemas.UserCreate): testing_hashed = user.password + "test" db_user = models.User(email=user.email,hashed_password=testing_hashed) db.add(db_user) db.commit() db.refresh(db_user) return db_user def create_user_booking(db: Session, booking: schemas.BookingCreate, user_id: int): db_item = models.Booking(**booking.dict(), user_id=user_id) db.add(db_item) db.commit() db.refresh(db_item) return db_item database.py from sqlalchemy import create_engine from sqlalchemy.ext.declarative import declarative_base from sqlalchemy.orm import sessionmaker # SQLALCHEMY_DATABASE_URL = "sqlite:///./test.db" SQLALCHEMY_DATABASE_URL = "postgresql://postgres:root@localhost/meetingbookerdb" ##Creating the SQLAlchemy ORM engine..>> above we have imported create_engine method from sqlalchemy ##Since we are using Postgres we dont need anything else create_engine engine = create_engine( SQLALCHEMY_DATABASE_URL ) #Creating SessionLocal class which will be database session on the request.. SessionLocal = sessionmaker(autocommit=False,autoflush=False,bind=engine) ## Creating the base clase, using the declerative_base() method which returns a class. 
## Later we will need this Base Class to create each of the database models Base = declarative_base() and main.py from typing import List from fastapi import Depends, FastAPI, HTTPException from sqlalchemy.orm import Session from .app import crud, models, schemas from .app.database import SessionLocal, engine models.Base.metadata.create_all(bind=engine) app = FastAPI() # Dependency def get_db(): try: db = SessionLocal() yield db finally: db.close() @app.post("/users/", response_model=schemas.User) def create_user(user: schemas.UserCreate, db: Session = Depends(get_db)): db_user = crud.fetch_user_by_email(db, email=user.email) if db_user: raise HTTPException(status_code=400, detail="Email already registered") return crud.create_new_user(db=db, user=user) @app.get("/users/", response_model=List[schemas.User]) def read_users(skip: int = 0, limit: int = 100, db: Session = Depends(get_db)): users = crud.get_all_users(db, skip=skip, limit=limit) return users @app.get("/users/{user_id}", response_model=schemas.User) def read_user(user_id: int, db: Session = Depends(get_db)): db_user = crud.get_user(db, user_id=user_id) if db_user is None: raise HTTPException(status_code=404, detail="User not found") return db_user @app.post("/users/{user_id}/bookings/", response_model=schemas.Booking) def create_booking_for_user( user_id: int,booking: schemas.BookingCreate, db: Session = Depends(get_db) ): return crud.create_user_booking(db=db, booking=booking, user_id=user_id) @app.get("/bookings/", response_model=List[schemas.Booking]) def read_bookings(skip: int = 0, limit: int = 100, db: Session = Depends(get_db)): bookings = crud.get_bookings(db, skip=skip, limit=limit) return bookings The question is, why do we have to create these schemas like that, Okay I get it the first one UserBase has to be for validation with pydantic, but what about the other two, can someone give me a good explaination.. Thank you.
Pydantic schemas define the properties and types to validate some payload. They act like a guard before you actually allow a service to fulfil a certain action (e.g. create a database object). I'm not sure if you're used to serializers, but it's pretty much the same thing, except Pydantic and FastAPI integrate with newer Python 3 features (type hints), which makes it somewhat easier to achieve the things you used to do with framework builtins/libraries. In your example, UserBase holds the fields shared by every variant; UserCreate adds the plaintext password, which should only be accepted on input; and User is the response model, so it omits the password, adds the database-generated fields (id, is_active, bookings) and enables orm_mode so it can be populated from SQLAlchemy objects. Splitting them this way keeps the input and output contracts separate, and inheritance keeps the code DRY.
13
11
60,226,848
2020-2-14
https://stackoverflow.com/questions/60226848/array-does-not-have-indent-or-space-in-pyyaml
In the code below I created the net_plan_dict variable dictionary and converted it into a YAML format file. Inside the dictionary I have a field called addresses which is an array of three elements. After creating the YAML file, the three array elements were not placed under the addresses field : import yaml net_plan_dict = { 'networking': { 'addresses': ['192.168.1.1', '192.168.1.2', "192.168.1.3"], 'gateway4': '192.168.121.1' } } with open("new.yaml", "w") as f: yaml.dump(net_plan_dict, f) Output of the above code is as follows (in the file below, the IPs are not below the address and have no space or indent). new.yaml: networking: addresses: - 192.168.1.1 <-------- does not have indent - 192.168.1.2 - 192.168.1.3 gateway4: 192.168.121.1 but my goal is to get this output file (how to create this file, when ips are under addresses field): networking: addresses: - 192.168.1.1 - 192.168.1.2 - 192.168.1.3 gateway4: 192.168.121.1
PyYAML's dump() doesn't have the fine control to have a different indent for the mappings (2 positions) and sequences (4 positions), nor can it offset the sequence indicator (-) within the space of the (sequence) indent). If you want that kind of control over your output you should use ruamel.yaml (disclaimer: I am the author of that package): import sys import ruamel.yaml net_plan_dict = { 'networking': { 'addresses': ['192.168.1.1', '192.168.1.2', "192.168.1.3"], 'gateway4': '192.168.121.1' } } yaml = ruamel.yaml.YAML() yaml.indent(mapping=2, sequence=4, offset=2) yaml.dump(net_plan_dict, sys.stdout) which gives: networking: addresses: - 192.168.1.1 - 192.168.1.2 - 192.168.1.3 gateway4: 192.168.121.1
8
4
60,226,557
2020-2-14
https://stackoverflow.com/questions/60226557/how-to-forcefully-close-an-async-generator
Let's say I have an async generator like this: async def event_publisher(connection, queue): while True: if not await connection.is_disconnected(): event = await queue.get() yield event else: return I consume it like this: published_events = event_publisher(connection, queue) async for event in published_events: # do event processing here It works just fine, however when the connection is disconnected and there is no new event published the async for will just wait forever, so ideally I would like to close the generator forcefully like this: if connection.is_disconnected(): await published_events.aclose() But I get the following error: RuntimeError: aclose(): asynchronous generator is already running Is there a way to stop processing of an already running generator?
It seems to be related to this issue. Notably: As shown in https://gist.github.com/1st1/d9860cbf6fe2e5d243e695809aea674c, it's an error to close a synchronous generator while it is being iterated. ... In 3.8, calling "aclose()" can crash with a RuntimeError. It's no longer possible to reliably cancel a running asynchronous generator. Well, since we can't cancel a running asynchronous generator, let's try to cancel the step it is currently running instead. import asyncio from contextlib import suppress async def cancel_gen(agen): task = asyncio.create_task(agen.__anext__()) task.cancel() with suppress(asyncio.CancelledError): await task await agen.aclose() # probably a good idea, # but if you'll be getting errors, try to comment this line ... if connection.is_disconnected(): await cancel_gen(published_events) I can't test whether it'll work since you didn't provide a reproducible example.
13
7
60,227,582
2020-2-14
https://stackoverflow.com/questions/60227582/making-a-python-test-think-an-installed-package-is-not-available
I have a test that makes sure a specific (helpful) error message is raised, when a required package is not available. def foo(caller): try: import pkg except ImportError: raise ImportError(f'Install "pkg" to use {caller}') pkg.bar() with pytest.raises(ImportError, match='Install "pkg" to use test_function'): foo('test_function') However, pkg is generally available, as other tests rely on it. Currently, I set up an additional virtual env without pkg just for this test. This seems like overkill. Is it possible to "hide" an installed package within a module or function?
I ended up with the following pytest-only solution, which appears to be more robust in the setting of a larger project. import builtins import pytest @pytest.fixture def hide_available_pkg(monkeypatch): import_orig = builtins.__import__ def mocked_import(name, *args, **kwargs): if name == 'pkg': raise ImportError() return import_orig(name, *args, **kwargs) monkeypatch.setattr(builtins, '__import__', mocked_import) @pytest.mark.usefixtures('hide_available_pkg') def test_message(): with pytest.raises(ImportError, match='Install "pkg" to use test_function'): foo('test_function')
8
6
60,211,248
2020-2-13
https://stackoverflow.com/questions/60211248/sort-a-list-by-presence-of-items-in-another-list
Suppose I have two lists: a = ['30', '10', '90', '1111', '17'] b = ['60', '1201', '30', '17', '900'] How would you sort this most efficiently, such that: list b is sorted with respect to a. Unique elements in b should be placed at the end of the sorted list. Unique elements in a can be ignored. example output: c = ['30', '17', '60', '1201', '900'] Sorry, it's a simple question. My attempt is stuck at the point of taking the intersection. intersection = sorted(set(a) & set(b), key = a.index)
There is no need to actually sort here. You want the elements in a which are in b, in the same order as they were in a; followed by the elements in b which are not in a, in the same order as they were in b. We can just do this with two filters, using the sets for fast membership tests: >>> a = ['30', '10', '90', '1111', '17'] >>> b = ['60', '1201', '30', '17', '900'] >>> a_set = set(a) >>> b_set = set(b) >>> [*filter(lambda x: x in b_set, a), *filter(lambda x: x not in a_set, b)] ['30', '17', '60', '1201', '900'] Or if you prefer comprehensions: >>> [*(x for x in a if x in b_set), *(x for x in b if x not in a_set)] ['30', '17', '60', '1201', '900'] Both take linear time, which is better than sorting.
7
7
60,226,735
2020-2-14
https://stackoverflow.com/questions/60226735/how-to-count-overlapping-datetime-intervals-in-pandas
I have a following DataFrame with two datetime columns: start end 0 01.01.2018 00:47 01.01.2018 00:54 1 01.01.2018 00:52 01.01.2018 01:03 2 01.01.2018 00:55 01.01.2018 00:59 3 01.01.2018 00:57 01.01.2018 01:16 4 01.01.2018 01:00 01.01.2018 01:12 5 01.01.2018 01:07 01.01.2018 01:24 6 01.01.2018 01:33 01.01.2018 01:38 7 01.01.2018 01:34 01.01.2018 01:47 8 01.01.2018 01:37 01.01.2018 01:41 9 01.01.2018 01:38 01.01.2018 01:41 10 01.01.2018 01:39 01.01.2018 01:55 I would like to count how many starts (intervals) are active at the same time before they end at given time (in other words: how many times each row overlaps with the rest of the rows). E.g. from 00:47 to 00:52 only one is active, from 00:52 to 00:54 two, from 00:54 to 00:55 only one again, and so on. I tried to stack columns onto each other, sort by date and by iterrating through whole dataframe give each "start" +1 to counter and -1 to each "end". It works but on my original data frame, where I have few millions of rows, iteration takes forever - I need to find a quicker way. My original basic-and-not-very-good code: import pandas as pd import numpy as np df = pd.read_csv('something.csv', sep=';') df = df.stack().to_frame() df = df.reset_index(level=1) df.columns = ['status', 'time'] df = df.sort_values('time') df['counter'] = np.nan df = df.reset_index().drop('index', axis=1) print(df.head(10)) gives: status time counter 0 start 01.01.2018 00:47 NaN 1 start 01.01.2018 00:52 NaN 2 stop 01.01.2018 00:54 NaN 3 start 01.01.2018 00:55 NaN 4 start 01.01.2018 00:57 NaN 5 stop 01.01.2018 00:59 NaN 6 start 01.01.2018 01:00 NaN 7 stop 01.01.2018 01:03 NaN 8 start 01.01.2018 01:07 NaN 9 stop 01.01.2018 01:12 NaN and: counter = 0 for index, row in df.iterrows(): if row['status'] == 'start': counter += 1 else: counter -= 1 df.loc[index, 'counter'] = counter final output: status time counter 0 start 01.01.2018 00:47 1.0 1 start 01.01.2018 00:52 2.0 2 stop 01.01.2018 00:54 1.0 3 start 01.01.2018 00:55 2.0 4 start 01.01.2018 00:57 3.0 5 stop 01.01.2018 00:59 2.0 6 start 01.01.2018 01:00 3.0 7 stop 01.01.2018 01:03 2.0 8 start 01.01.2018 01:07 3.0 9 stop 01.01.2018 01:12 2.0 Is there any way i can do this by NOT using iterrows()? Thanks in advance!
Use Series.cumsum with Series.map (or Series.replace): new_df = df.melt(var_name = 'status',value_name = 'time').sort_values('time') new_df['counter'] = new_df['status'].map({'start':1,'end':-1}).cumsum() print(new_df) status time counter 0 start 2018-01-01 00:47:00 1 1 start 2018-01-01 00:52:00 2 11 end 2018-01-01 00:54:00 1 2 start 2018-01-01 00:55:00 2 3 start 2018-01-01 00:57:00 3 13 end 2018-01-01 00:59:00 2 4 start 2018-01-01 01:00:00 3 12 end 2018-01-01 01:03:00 2 5 start 2018-01-01 01:07:00 3 15 end 2018-01-01 01:12:00 2 14 end 2018-01-01 01:16:00 1 16 end 2018-01-01 01:24:00 0 6 start 2018-01-01 01:33:00 1 7 start 2018-01-01 01:34:00 2 8 start 2018-01-01 01:37:00 3 9 start 2018-01-01 01:38:00 4 17 end 2018-01-01 01:38:00 3 10 start 2018-01-01 01:39:00 4 19 end 2018-01-01 01:41:00 3 20 end 2018-01-01 01:41:00 2 18 end 2018-01-01 01:47:00 1 21 end 2018-01-01 01:55:00 0 We could also use numpy.cumsum: new_df['counter'] = np.where(new_df['status'].eq('start'),1,-1).cumsum()
8
10
60,214,658
2020-2-13
https://stackoverflow.com/questions/60214658/patching-an-object-by-reference-rather-than-by-name-string
The most common way to patch something in a module seems to be to use something like from unittest.mock import patch from mypackage.my_module.my_submodule import function_to_test @patch('mypackage.my_module.my_submodule.fits.open') def test_something(self, mock_fits_open) # ... mock_fits_open.return_value = some_return_value function_to_test() # ... However, with the value passed to the patch decorator being a string, I don't get lots of the nice benefits from IDE. I can't use parts of the string to jump to definitions. I don't get autocomplete (and an implicit spelling check). Nor full refactoring capabilities. And so on. Using patch.object I can get much closer to what I'm looking for. from unittest.mock import patch import mypackage.my_module.my_submodule from mypackage.my_module.my_submodule import function_to_test @patch.object(mypackage.my_module.my_submodule.fits, 'open') def test_something(self, mock_fits_open) # ... mock_fits_open.return_value = some_return_value function_to_test() # ... However, this still requires the final part of the name of the referenced object is just a string. Is there a (nice) way to patch an object purely on the reference to that object? That is, I would like to be able to do something like from unittest.mock import patch import mypackage.my_module.my_submodule from mypackage.my_module.my_submodule import function_to_test @patch.reference(mypackage.my_module.my_submodule.fits.open) def test_something(self, mock_fits_open) # ... mock_fits_open.return_value = some_return_value function_to_test() # ...
Patching works by replacing the object in the namespace where the name is looked up. The underlying logic of mock.patch is essentially context-managed name shadowing. You could do the same thing manually (a minimal sketch follows below): save the original value associated with the name (if any), try overwriting the name, execute the code under test, and finally reset the name back to the original value. Therefore, you fundamentally need to patch on a name; there is no patching a reference directly.
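A minimal sketch of that manual approach (reusing the names from the question's example, so some_return_value and the fits module are assumed to exist; this is not a drop-in replacement for mock.patch):
from unittest import mock

import mypackage.my_module.my_submodule as submodule
from mypackage.my_module.my_submodule import function_to_test

def test_something():
    original = submodule.fits.open  # save the original value bound to the name
    submodule.fits.open = mock.MagicMock(return_value=some_return_value)
    try:
        function_to_test()  # the code under test now looks up the mock
    finally:
        submodule.fits.open = original  # always restore the original binding
Even in this hand-rolled version, the swap and the restore both happen by assigning to the attribute name open on the module, which is why patch ultimately needs a name (or an object plus an attribute string, as with patch.object) rather than a bare reference.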
7
3
60,206,006
2020-2-13
https://stackoverflow.com/questions/60206006/where-does-zappa-upload-environment-variables-to
tl;dr Environment variables set in a zappa_settings.json don't upload as environment variables to AWS Lambda. Where do they go? ts;wm I have a Lambda function configured, deployed and managed using the Zappa framework. In the zappa_settings.json I have set a number of environment variables. These variables are definitely present as my application successfully runs however, when trying to inspect the Lambda function environment variables in the console or AWS CLI I see no environment variables have been uploaded to the Lambda function itself. Extract from zappa_settings.json: { "stage-dev": { "app_function": "project.app", "project_name": "my-project", "runtime": "python3.7", "s3_bucket": "my-project-zappa", "slim_handler": true, "environment_variables": { "SECRET": "mysecretvalue" } } } Output of aws lambda get-function-configuration --function-name my-project-stage-dev: { "Configuration": { "FunctionName": "my-project-stage-dev", "FunctionArn": "arn:aws:lambda:eu-west-1:000000000000:function:my-project-stage-dev", "Runtime": "python3.7", "Role": "arn:aws:iam::000000000000:role/lambda-execution-role", "Handler": "handler.lambda_handler", "CodeSize": 12333025, "Description": "Zappa Deployment", "Timeout": 30, "MemorySize": 512, "LastModified": "...", "CodeSha256": "...", "Version": "$LATEST", "TracingConfig": { "Mode": "PassThrough" }, "RevisionId": "..." }, "Code": { "RepositoryType": "S3", "Location": "..." } } Environment is absent from the output despite being included in the zappa_settings and the AWS documentation indicating it should be included if present, this is confirmed by checking in the console. I want to know where zappa is uploading the environment variables to, and if possible why it is doing so over using Lambda's in-built environment? AWS CLI docs: https://docs.aws.amazon.com/cli/latest/reference/lambda/get-function-configuration.html
environment_variables are saved into a zappa_settings.py module when Zappa creates the deployment package (run zappa package STAGE and explore the archive) and are then set dynamically at runtime by modifying os.environ in handler.py; that is why they never appear as native Lambda environment variables. To set native AWS variables, you need to use aws_environment_variables instead.
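As an illustration, a minimal sketch (assuming the SECRET value from the zappa_settings.json in the question): inside the deployed function the value is still reachable through os.environ, even though the Lambda console shows no environment variables, because Zappa's handler injects it before your code runs.
import os

# Available at runtime in the deployed Lambda: Zappa's handler has already
# copied the packaged "environment_variables" into os.environ.
secret = os.environ.get("SECRET")  # "mysecretvalue"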
9
10
60,202,691
2020-2-13
https://stackoverflow.com/questions/60202691/python-typing-declare-return-value-type-based-on-function-argument
Suppose I have a function that takes a type as an argument and returns an instance of that type: def fun(t): return t(42) Then I can call it and get objects of the provided types: fun(int) # 42 fun(float) # 42.0 fun(complex) # (42+0j) fun(str) # "42" fun(MyCustomType) # something That list is not exhaustive; I'd like to be able to use any type with an appropriate constructor. Then, I'd like to add type hints for that function. What should be the type hint for the return value of that function? I've tried using simply t, as t is a type: def fun(t: type) -> t: return t(42) but that doesn't work: main.py:1: error: Name 't' is not defined This answer suggests using a TypeVar: from typing import TypeVar T = TypeVar("T") def fun(t: T) -> T: return t(42) But that doesn't seem to be right, as T denotes a type, so it suggests that the type itself is returned, not its instance. Mypy rejects it: main.py:6: error: "object" not callable Using Any obviously works, but I feel it's too vague; it doesn't convey the intent: from typing import Any def fun(t: type) -> Any: return t(42)
TLDR: You need a TypeVar for the return type of calling t: def fun(t: Callable[[int], R]) -> R: ... Constraining on a type is too restrictive here. The function accepts any Callable that takes an integer, and the return type of the function is that of the Callable. This can be specified using a TypeVar for the return type: from typing import Callable, TypeVar R = TypeVar('R') # the variable return type def fun(t: Callable[[int], R]) -> R: return t(42) fun(int) # Revealed type is 'builtins.int*' fun(float) # Revealed type is 'builtins.float*' reveal_type(fun(lambda x: str(x))) # Revealed type is 'builtins.str*' This works for types as well, because type instantiation is a call. If a more complex signature, e.g. with keyword arguments, is needed, use Protocol (from typing or typing_extensions). Note that if one explicitly wants to pass only 42 to the Callable, Literal (from typing or typing_extensions) can be used to specify that. R = TypeVar('R') def fun(t: Callable[[Literal[42]], R]) -> R: return t(42) Note that any function of the type Callable[[int], R] also satisfies Callable[[Literal[42]], R].
23
16
60,197,665
2020-2-12
https://stackoverflow.com/questions/60197665/opencv-how-to-use-floodfill-with-rgb-image
I am trying to use floodFill on an image like below to extract the sky: But even when I set the loDiff=Scalar(0,0,0) and upDiff=Scalar(255,255,255) the result is just showing the seed point and does not grow larger (the green dot): code: Mat flood; Point seed = Point(180, 80); flood = imread("D:/Project/data/1.jpeg"); cv::floodFill(flood, seed, Scalar(0, 0, 255), NULL, Scalar(0, 0, 0), Scalar(255, 255, 255)); circle(flood, seed, 2, Scalar(0, 255, 0), CV_FILLED, CV_AA); This is the result (red dot is the seed): How can I set the function to get a larger area (like the whole sky)?
You need to set loDiff and upDiff arguments correctly. See floodFill documentation: loDiff – Maximal lower brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component. upDiff – Maximal upper brightness/color difference between the currently observed pixel and one of its neighbors belonging to the component, or a seed pixel being added to the component. Here is a Python code sample: import cv2 flood = cv2.imread("1.jpeg"); seed = (180, 80) cv2.floodFill(flood, None, seedPoint=seed, newVal=(0, 0, 255), loDiff=(5, 5, 5, 5), upDiff=(5, 5, 5, 5)) cv2.circle(flood, seed, 2, (0, 255, 0), cv2.FILLED, cv2.LINE_AA); cv2.imshow('flood', flood) cv2.waitKey(0) cv2.destroyAllWindows() Result:
7
4
60,189,415
2020-2-12
https://stackoverflow.com/questions/60189415/black-python-ignore-rule
I feel Black is doing something not compliant with my organisation's coding standards, so I am trying to ignore certain rules. Example below and a related link: PEP 8: whitespace before ':' My organisation does not give priority to what Black feels is right, but wants a way to customise Black's configuration. I don't see any mention of ignoring a rule in the Black documentation https://github.com/psf/black#command-line-options. They have given examples of ignoring Flake8 rules, but don't seem to have any documentation for their own product.
You can't customize black. From the readme: Black reformats entire files in place. It is not configurable.
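That said, if the goal is only to keep Black away from specific lines rather than to change its rules, it does, as far as I know, honor # fmt: off / # fmt: on comments. A small, hypothetical illustration:
# fmt: off
custom_layout = [
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
]
# fmt: on
Everything between the two comments is left untouched; the rest of the file is still reformatted.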
15
12
60,180,101
2020-2-12
https://stackoverflow.com/questions/60180101/validate-list-in-marshmallow
Currently I am using a marshmallow schema to validate the request, and I have a list whose contents I need to validate. class PostValidationSchema(Schema): checks = fields.List( fields.String(required=True) ) The checks field is a list; it should only contain these specific values: ["booking", "reservation", "flight"]
If you mean to check the list only has those three elements in that order, then use Equal validator. from marshmallow import Schema, fields, validate class PostValidationSchema(Schema): checks = fields.List( fields.String(required=True), validate=validate.Equal(["booking", "reservation", "flight"]) ) schema = PostValidationSchema() schema.load({"checks": ["booking", "reservation", "flight"]}) # OK schema.load({"checks": ["booking", "reservation"]}) # ValidationError If the list can have any number of elements and those can only be one of those three specific values, then use OneOf validator. from marshmallow import Schema, fields, validate class PostValidationSchema(Schema): checks = fields.List( fields.String( required=True, validate=validate.OneOf(["booking", "reservation", "flight"]) ), ) schema = PostValidationSchema() schema.load({"checks": ["booking", "reservation", "flight"]}) # OK schema.load({"checks": ["booking", "reservation"]}) # OK schema.load({"checks": ["booking", "dummy"]}) # ValidationError
7
12
60,182,065
2020-2-12
https://stackoverflow.com/questions/60182065/django-structlog-is-not-printing-or-writing-log-message-to-console-or-file
I have installed django-structlog 1.4.1 for my Django project. I have followed all the steps which has been described in that link. In my settings.py file: import structlog MIDDLEWARE = [ 'django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', 'django_structlog.middlewares.RequestMiddleware', ] LOGGING = { "version": 1, "disable_existing_loggers": False, "formatters": { "json_formatter": { "()": structlog.stdlib.ProcessorFormatter, "processor": structlog.processors.JSONRenderer(), }, "plain_console": { "()": structlog.stdlib.ProcessorFormatter, "processor": structlog.dev.ConsoleRenderer(), }, "key_value": { "()": structlog.stdlib.ProcessorFormatter, "processor": structlog.processors.KeyValueRenderer(key_order=['timestamp', 'level', 'event', 'logger']), }, }, "handlers": { "console": { "class": "logging.StreamHandler", "formatter": "plain_console", }, "json_file": { "class": "logging.handlers.WatchedFileHandler", "filename": "log/json.log", "formatter": "json_formatter", }, "flat_line_file": { "class": "logging.handlers.WatchedFileHandler", "filename": "log/flat_line.log", "formatter": "key_value", }, }, "loggers": { "django_structlog": { "handlers": ["console", "flat_line_file", "json_file"], "level": "DEBUG", }, "django_structlog_demo_project": { "handlers": ["console", "flat_line_file", "json_file"], "level": "DEBUG", }, } } structlog.configure( processors=[ structlog.stdlib.filter_by_level, structlog.processors.TimeStamper(fmt="iso"), structlog.stdlib.add_logger_name, structlog.stdlib.add_log_level, structlog.stdlib.PositionalArgumentsFormatter(), structlog.processors.StackInfoRenderer(), structlog.processors.format_exc_info, structlog.processors.UnicodeDecoder(), structlog.processors.ExceptionPrettyPrinter(), structlog.stdlib.ProcessorFormatter.wrap_for_formatter, ], context_class=structlog.threadlocal.wrap_dict(dict), logger_factory=structlog.stdlib.LoggerFactory(), wrapper_class=structlog.stdlib.BoundLogger, cache_logger_on_first_use=True, ) In my views.py: from django.http.response import HttpResponse import structlog logger = structlog.get_logger(__name__) def func(request): logger.debug("debug message", bar="Buz") logger.info("info message", bar="Buz") logger.warning("warning message", bar="Buz") logger.error("error message", bar="Buz") logger.critical("critical message", bar="Buz") return HttpResponse('success') Output in json.log: {"request_id": "7903fdfb-e99a-4360-a8f0-769696520cc9", "user_id": null, "ip": "127.0.0.1", "request": "<WSGIRequest: GET '/test'>", "user_agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36", "event": "request_started", "timestamp": "2020-02-12T05:11:23.877111Z", "logger": "django_structlog.middlewares.request", "level": "info"} {"request_id": "7903fdfb-e99a-4360-a8f0-769696520cc9", "user_id": null, "ip": "127.0.0.1", "code": 200, "request": "<WSGIRequest: GET '/test'>", "event": "request_finished", "timestamp": "2020-02-12T05:11:23.879736Z", "logger": "django_structlog.middlewares.request", "level": "info"} Output in flat_line.log: timestamp='2020-02-12T05:11:23.877111Z' level='info' event='request_started' logger='django_structlog.middlewares.request' 
request_id='7903fdfb-e99a-4360-a8f0-769696520cc9' user_id=None ip='127.0.0.1' request=<WSGIRequest: GET '/test'> user_agent='Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36' timestamp='2020-02-12T05:11:23.879736Z' level='info' event='request_finished' logger='django_structlog.middlewares.request' request_id='7903fdfb-e99a-4360-a8f0-769696520cc9' user_id=None ip='127.0.0.1' code=200 request=<WSGIRequest: GET '/test'> Output in console: 2020-02-12T05:11:23.877111Z [info ] request_started [django_structlog.middlewares.request] ip=127.0.0.1 request=<WSGIRequest: GET '/test'> request_id=7903fdfb-e99a-4360-a8f0-769696520cc9 user_agent=Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36 user_id=None {'request_id': '7903fdfb-e99a-4360-a8f0-769696520cc9', 'user_id': None, 'ip': '127.0.0.1', 'bar': 'Buz', 'event': 'warning message', 'timestamp': '2020-02-12T05:11:23.879035Z', 'logger': 'operational.views.core_view', 'level': 'warning'} {'request_id': '7903fdfb-e99a-4360-a8f0-769696520cc9', 'user_id': None, 'ip': '127.0.0.1', 'bar': 'Buz', 'event': 'error message', 'timestamp': '2020-02-12T05:11:23.879292Z', 'logger': 'operational.views.core_view', 'level': 'error'} {'request_id': '7903fdfb-e99a-4360-a8f0-769696520cc9', 'user_id': None, 'ip': '127.0.0.1', 'bar': 'Buz', 'event': 'critical message', 'timestamp': '2020-02-12T05:11:23.879468Z', 'logger': 'operational.views.core_view', 'level': 'critical'} 2020-02-12T05:11:23.879736Z [info ] request_finished [django_structlog.middlewares.request] code=200 ip=127.0.0.1 request=<WSGIRequest: GET '/test'> request_id=7903fdfb-e99a-4360-a8f0-769696520cc9 user_id=None [12/Feb/2020 05:11:23] "GET /test HTTP/1.1" 200 7 My issues are: 'info' and 'debug' level log message is not showing at the console. Any type of log message is not writing at the log files except "event='request_started'" and "event='request_finished'" I want same message in all of my log files and console. How can i achieve this?
I haven't used django-structlog (but I wrote structlog 🤓), and it looks like django_structlog_demo_project is not the name of your application's logger, hence the settings don't apply (the default log level is INFO). You can either fix the name or, since your configurations are identical, delete the example logger and rename the first one to root, which should have a global effect (a sketch of one way to do that follows below). This is sadly a very common gotcha with the standard library's logging.
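A minimal sketch of the change in settings.py (placed after the LOGGING dict from the question; it uses dictConfig's top-level "root" key, which is the documented way to configure the root logger):
# replace both named loggers with a single root logger so the handlers
# and DEBUG level apply to every logger in the process
LOGGING["loggers"] = {}
LOGGING["root"] = {
    "handlers": ["console", "flat_line_file", "json_file"],
    "level": "DEBUG",
}
With the root logger configured, DEBUG and INFO records from modules such as operational.views should reach the console and both log files.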
8
19
60,179,799
2020-2-12
https://stackoverflow.com/questions/60179799/python-dataclass-whats-a-pythonic-way-to-validate-initialization-arguments
What's a pythonic way to validate the init arguments before instantiation w/o overriding dataclasses built-in init? I thought perhaps leveraging the __new__ dunder-method would be appropriate? from dataclasses import dataclass @dataclass class MyClass: is_good: bool = False is_bad: bool = False def __new__(cls, *args, **kwargs): instance: cls = super(MyClass, cls).__new__(cls, *args, **kwargs) if instance.is_good: assert not instance.is_bad return instance
Define a __post_init__ method on the class; the generated __init__ will call it if defined: from dataclasses import dataclass @dataclass class MyClass: is_good: bool = False is_bad: bool = False def __post_init__(self): if self.is_good: assert not self.is_bad This will even work when the replace function is used to make a new instance.
35
62
60,175,608
2020-2-11
https://stackoverflow.com/questions/60175608/visual-studio-code-and-jinja-templates
I have been using VS Code for a while with some extensions. Everything is perfect except when I use Flask. Prettier glues all the Flask template code together, and IntelliSense is not working with Flask code: {% extends "layout.html" %} {% block style %} body {color: red; } {% endblock %} {% block body %} you must provide a name {% endblock %} What can I do to make it work with Flask (I tried flask-snippets)? I run it in a virtual env (activated before launching VS Code). Thanks in advance,
Batteries included solution Here is a solution that gives you code highlighting, tag-autocompletion and customizable auto-formatting: Install Better Jinja or Django (better syntax highlighting within double quotes) plugin Install djLint plugin Press CTRL + Shift + P and type open settings json + Enter This is my config and it works great for my jinja templates. djLint has more filetype-specific options to offer (see extension-description in VS Code). // settings.json // My jinja-templates use the extension `.jinja2`. To make it work with the "Django" plugin I add this to my settings: "files.associations": { "*.jinja2": "django-html" }, // Add emmet tag autocomplete for jinja and django templates "emmet.includeLanguages": { "jinja2": "html", "jinja-html": "html", "django-html": "html", }, // Set djLint as formatter for html, jinja, jinja-html, django-html "[html][jinja][jinja-html][django-html]": { "editor.formatOnSave": true, "editor.defaultFormatter": "monosans.djlint", "editor.detectIndentation": false, "editor.linkedEditing": true, "editor.tabSize": 2 }, // Add specific formatting rules as needed, e.g.: "djlint.enableLinting": true, "djlint.closeVoidTags": true, "djlint.formatAttributeTemplateTags": true, "djlint.formatCss": true, "djlint.formatJs": true, "djlint.lineBreakAfterMultilineTag": false, "djlint.maxBlankLines": 2, "djlint.maxLineLength": 100, "djlint.maxAttributeLength": 100
39
21
60,131,839
2020-2-8
https://stackoverflow.com/questions/60131839/violin-plot-troubles-in-python-on-log-scale
My violin plots are showing weird formats when using a log scale on my plots. I've tried using matplotlib and seaborn and I get very similar results. import matplotlib.pyplot as plt import seaborn as sns data = [[1e-05, 0.00102, 0.00498, 0.09154, 0.02009, 1e-05, 0.06649, 0.42253, 0.02062, 0.10812, 0.07128, 0.03903, 0.00506, 0.13391, 0.08668, 0.04127, 0.00927, 0.00118, 0.063, 0.18392, 0.05948, 0.07774, 0.14018, 0.0133, 0.00339, 0.00271, 0.05233, 0.00054, 0.0593, 1e-05, 0.00076, 0.03409, 0.71491, 0.02311, 0.10246, 0.12491, 0.05164, 0.1553, 0.01079, 0.01734, 0.02239, 0.1347, 0.02877, 0.04752, 0.00333, 0.04553, 0.03189, 0.00947, 0.00158, 0.00888, 0.12663, 0.07531, 0.12367, 0.11346, 0.06638, 0.06154, 1e-05, 0.1838, 0.08659, 0.05654, 0.07658, 0.0348, 0.02954, 0.0123, 0.01529, 0.05559, 0.00416, 0.00038, 0.14142, 0.00164, 0.03671, 0.10609, 0.01209, 0.0024, 0.11718, 0.11224, 0.06032, 0.09632, 0.12216, 0.00087, 0.06746, 0.00433, 0.06836, 0.09928, 2e-05, 0.14116, 0.05718, 0.01196, 0.04297, 0.00709, 0.10535, 0.04772, 0.05691, 0.06277, 1e-05, 0.03917, 0.0026, 0.06763, 0.02083, 0.32244, 0.00561, 0.03399, 0.08146, 0.10606, 0.01482, 0.00339, 0.02275, 0.00685, 0.1536, 0.0592, 0.08869, 1e-05, 0.20489, 0.00094, 0.00714, 0.06355, 0.03414, 0.03002, 0.02365, 0.04376, 0.0246, 0.02745, 0.07604, 0.12069, 1e-05, 0.02974, 0.10681, 0.00987, 0.02543, 0.01416, 0.00098, 3e-05, 0.00967, 0.11958, 0.02882, 0.03634, 0.19232, 0.12058, 0.36535, 0.07428, 0.02829, 0.09189, 0.03677, 0.00036, 0.0463, 0.57029, 0.0105, 0.00015, 0.06212, 0.0329, 0.06102, 0.12267], [0.01219, 0.14638, 0.03822, 0.05784, 0.03615, 0.03288, 0.00986, 0.05331, 0.01434, 0.00999, 0.05272, 0.03269, 0.0682, 0.15455, 0.09675, 0.02272, 0.0027, 0.01955, 0.06194, 0.00115, 0.07799, 0.03987, 0.11152, 0.07229, 0.007, 0.00075, 0.04499, 0.01534, 0.04301, 0.01247, 0.09511, 0.02297, 0.05538, 0.04614, 0.07359, 0.06909, 1e-05, 0.04247, 0.05485, 0.00071, 0.082, 0.07614, 0.03751, 0.01625, 0.03309, 0.03228, 0.08109, 0.02171, 0.07246, 0.00353, 0.02434, 0.01394, 0.037, 0.02429, 0.15162, 0.0527, 0.0201, 0.07954, 0.07626, 0.09285, 0.05071, 0.01224, 0.06331, 0.07556, 0.04952, 0.00052, 0.00588, 0.132, 0.00067, 0.00012, 0.00084, 0.03865, 0.02362, 0.08976, 0.18545, 0.04882, 0.03789, 0.05006, 0.02979, 0.003, 0.09262, 0.05668, 0.02486, 0.05855, 0.11588, 0.07713, 0.10428, 0.00706, 0.02467, 0.13257, 0.11547, 0.06143, 0.09478, 0.06099, 0.02483, 0.09312, 0.16867, 0.07236, 0.10962, 0.04149, 0.05005, 0.09087, 0.0313, 0.03697, 0.07201, 2e-05, 0.00259, 0.00115, 0.03907, 0.02931, 0.14907, 0.05598, 0.07087, 0.09709, 0.10653, 0.11936, 0.08196, 0.1213, 0.00627, 0.08496, 0.00038, 0.03537, 0.20043, 0.05159, 0.05872, 0.07754, 0.07621, 0.05924, 0.09587, 0.02653, 0.07135, 1e-05, 0.01377, 0.0062, 0.01965, 0.00115, 0.07529, 0.04709, 0.05458, 0.10895, 0.02195, 0.04534, 0.015, 0.00577, 0.05784, 0.01691, 0.08103, 0.04178, 0.04328, 0.01204, 0.03463, 0.03805, 0.01231, 0.03646, 0.01162, 0.16536, 0.03471, 0.00541, 0.09088, 0.06447, 0.07263, 0.05924, 0.0952, 0.09938, 0.04464, 0.05543, 0.03827, 0.11514, 0.02803, 0.09589, 0.0254, 0.05351, 0.00171, 0.00856, 0.05828, 0.11975, 7e-05, 0.07093, 0.06077, 0.0384, 0.00163, 0.05992, 0.00463, 0.00975, 0.00429, 0.12965, 0.03388, 0.02372, 0.07622, 0.04341, 0.06637, 0.00578, 0.06946, 0.00469, 0.11668, 0.07033, 0.06806, 0.05505, 0.02195, 0.05089, 0.03404, 0.00552, 0.05331, 0.03695, 0.41581, 0.01553, 0.02045, 0.09779, 0.03842, 0.01115, 0.05392, 0.01147, 0.05855, 0.05588, 0.20745, 0.01536, 0.03993, 0.07677, 0.01388, 0.0029, 0.00235, 0.05823, 0.05237, 0.00425, 0.09225, 0.00703, 
0.24038, 0.06733, 0.00064, 0.08959, 0.04365, 0.02308, 0.04566, 0.08395, 0.0038, 0.05322, 0.0145, 0.02012, 0.07084, 0.08202, 0.01091, 0.03738, 0.03798, 0.03473, 0.08534, 0.00133, 0.04046, 0.10119, 0.0317, 0.00312, 0.03614, 0.10442, 0.13286, 0.0042, 0.04229, 0.01735, 0.09879, 0.07516, 0.00303, 0.08062, 0.09347, 0.03473, 0.05099, 0.16373, 0.08988, 0.04696, 0.07488, 0.12159, 0.11098, 0.00549, 0.00122, 0.05276, 0.09883, 0.01346, 0.02059, 0.07394, 0.0413, 0.08766, 0.0124, 0.09913, 0.00754, 0.15671, 0.02699, 0.09978, 1e-05, 0.00243, 0.02819, 0.00027, 0.05793, 0.03165, 0.10168, 0.00042, 0.00044, 0.01332, 0.00542, 0.05946, 0.009, 0.10857, 0.01699, 1e-05, 0.00073, 0.10842, 0.17143, 0.00036, 0.00014, 0.10508, 0.01333, 0.34202, 0.12201, 0.04618, 0.02507, 0.02939, 0.03497, 0.01905, 0.00136, 0.02354, 0.00061, 0.08514, 0.14529, 0.04097, 0.12821, 0.18862], [0.04683, 0.02943, 0.07885, 0.07846, 0.06855, 0.02815, 0.00792, 0.0826, 0.00554, 0.01041, 0.03957, 0.0126, 0.08399, 0.15046, 0.15594, 0.03941, 0.0428, 0.11343, 0.15665, 0.07381, 0.04386, 0.12008, 0.04816, 0.04844, 0.08248, 0.08023, 0.03011, 0.00464, 0.07204, 0.08376, 0.05777, 0.06164, 0.00697, 0.02023, 0.04844, 0.0592, 0.00954, 0.06357, 0.0122, 0.05905, 0.00705, 0.0054, 0.08822, 0.06056, 0.02598, 0.02136, 0.05638, 0.03768, 0.05101, 0.08908, 0.0384, 0.01579, 0.04023, 0.03746, 0.17236, 0.08293, 0.12469, 0.14018, 0.04301, 0.07258, 0.02678, 0.08078, 0.07698, 0.06346, 0.06984, 0.04832, 0.07512, 0.0342, 0.05339, 0.026, 0.11585, 0.02744, 0.00979, 0.01312, 0.05915, 0.01326, 0.00107, 0.00737, 0.05971, 0.0451, 0.05788, 0.0007, 0.0043, 0.00142, 0.0019, 0.00055, 0.00223, 0.02441, 0.04555, 0.03869, 0.05791, 0.05517, 0.15743, 0.04517, 0.47114, 0.05639, 0.00152, 0.00371, 1e-05, 1e-05, 0.04192, 0.02758, 0.01945, 0.02763, 0.04021, 0.02844, 0.01823, 0.10665, 0.02067, 0.05433, 0.05591, 0.00733, 0.00858, 0.01949, 0.06519, 0.07793, 0.00199, 0.09916, 0.08717, 0.06273, 0.09408, 0.00638, 0.00248, 0.08922, 0.09157, 0.03525, 0.01791, 0.06016, 0.01939, 0.12194, 0.08303, 0.0831, 0.02714, 0.06312, 0.11584, 0.11334, 0.04314, 0.02575, 0.00629, 0.02408, 0.02274, 0.03037, 0.06737, 0.0175, 0.00888, 0.06568, 0.0839, 0.0085, 0.00831, 0.00154, 0.01072, 0.01289, 0.09074, 0.02131, 0.02997, 0.02343, 0.02355, 0.05324, 0.09564, 0.17995, 0.00828, 0.0148, 0.01858, 0.02106, 0.00288, 0.00344, 0.001, 0.02143, 0.00732, 0.01458, 0.01547, 0.01742, 0.00032, 0.24005, 0.00028, 0.00302, 0.07275, 0.04579, 0.06316, 0.02572, 0.09316, 0.03062, 0.10521, 0.07123, 0.03069, 0.07958, 0.04484, 0.01948, 0.01951, 0.01282, 0.00868, 0.07931, 0.01105, 0.01235, 0.09297, 0.06959, 0.00716, 0.0271, 0.00592, 0.09362, 0.00319, 0.00859, 0.08486, 0.02001, 0.00194, 0.04189, 0.09024, 0.07705, 0.07365, 0.01123, 0.03202, 0.01361, 0.00098, 0.00397, 0.00139, 0.00397, 0.00445, 1e-05, 0.00267, 0.06564, 0.06567, 0.06566, 0.06566, 0.09249, 0.03475, 0.0338, 0.0664, 0.02986, 0.04024, 0.00835, 0.04304, 0.04081, 0.04534, 0.06636, 0.03312, 0.06175, 0.03117, 0.02243, 0.03454, 0.11135, 0.07016, 0.0681, 0.09716, 0.02589, 0.4367, 0.08293, 0.11834, 0.00191, 0.10913, 0.00159, 0.0638, 0.01808, 0.00116, 0.00911, 0.01408, 0.09179, 0.02122, 0.05026, 0.05144, 0.03169, 0.06674]] fig, ax = plt.subplots(1,3, sharey=True) sns.violinplot(data=data, ax=ax[0]) sns.swarmplot(data=data, ax=ax[1]) sns.stripplot(data=data, ax=ax[2]) When using the data on a linear scale, everything looks fine. However, a lot of my data is between 0.1 and 0.00001 so I wanted to use a log scale for better visualization. 
When switching to a log scale: plt.yscale('log') plt.ylim(0.000001, 1) My swarmplot and stripplot plots look fine, however, the violin plots do not condense towards the bottom. Notice that I also don't have any negative values, but the violin plots always suggest that I do. Overall, I would have expected my violin plots to look something more like this (which was done in R). Any suggestions on how to get the violin plots to act more like the plots in the last picture (i.e. condensing when there are fewer data points) using seaborn or matplotlib, or another python based visualization?
New way, seaborn 0.13, parameter log_scale Seaborn version 0.13 introduces a new parameter log_scale. This enables the kde curve can be calculated directly in log space. Here is how it looks with the given data: import matplotlib.pyplot as plt import seaborn as sns import numpy as np data = [[1e-05, 0.00102, 0.00498, 0.09154, 0.02009, 1e-05, 0.06649, 0.42253, 0.02062, 0.10812, 0.07128, 0.03903, 0.00506, 0.13391, 0.08668, 0.04127, 0.00927, 0.00118, 0.063, 0.18392, 0.05948, 0.07774, 0.14018, 0.0133, 0.00339, 0.00271, 0.05233, 0.00054, 0.0593, 1e-05, 0.00076, 0.03409, 0.71491, 0.02311, 0.10246, 0.12491, 0.05164, 0.1553, 0.01079, 0.01734, 0.02239, 0.1347, 0.02877, 0.04752, 0.00333, 0.04553, 0.03189, 0.00947, 0.00158, 0.00888, 0.12663, 0.07531, 0.12367, 0.11346, 0.06638, 0.06154, 1e-05, 0.1838, 0.08659, 0.05654, 0.07658, 0.0348, 0.02954, 0.0123, 0.01529, 0.05559, 0.00416, 0.00038, 0.14142, 0.00164, 0.03671, 0.10609, 0.01209, 0.0024, 0.11718, 0.11224, 0.06032, 0.09632, 0.12216, 0.00087, 0.06746, 0.00433, 0.06836, 0.09928, 2e-05, 0.14116, 0.05718, 0.01196, 0.04297, 0.00709, 0.10535, 0.04772, 0.05691, 0.06277, 1e-05, 0.03917, 0.0026, 0.06763, 0.02083, 0.32244, 0.00561, 0.03399, 0.08146, 0.10606, 0.01482, 0.00339, 0.02275, 0.00685, 0.1536, 0.0592, 0.08869, 1e-05, 0.20489, 0.00094, 0.00714, 0.06355, 0.03414, 0.03002, 0.02365, 0.04376, 0.0246, 0.02745, 0.07604, 0.12069, 1e-05, 0.02974, 0.10681, 0.00987, 0.02543, 0.01416, 0.00098, 3e-05, 0.00967, 0.11958, 0.02882, 0.03634, 0.19232, 0.12058, 0.36535, 0.07428, 0.02829, 0.09189, 0.03677, 0.00036, 0.0463, 0.57029, 0.0105, 0.00015, 0.06212, 0.0329, 0.06102, 0.12267], [0.01219, 0.14638, 0.03822, 0.05784, 0.03615, 0.03288, 0.00986, 0.05331, 0.01434, 0.00999, 0.05272, 0.03269, 0.0682, 0.15455, 0.09675, 0.02272, 0.0027, 0.01955, 0.06194, 0.00115, 0.07799, 0.03987, 0.11152, 0.07229, 0.007, 0.00075, 0.04499, 0.01534, 0.04301, 0.01247, 0.09511, 0.02297, 0.05538, 0.04614, 0.07359, 0.06909, 1e-05, 0.04247, 0.05485, 0.00071, 0.082, 0.07614, 0.03751, 0.01625, 0.03309, 0.03228, 0.08109, 0.02171, 0.07246, 0.00353, 0.02434, 0.01394, 0.037, 0.02429, 0.15162, 0.0527, 0.0201, 0.07954, 0.07626, 0.09285, 0.05071, 0.01224, 0.06331, 0.07556, 0.04952, 0.00052, 0.00588, 0.132, 0.00067, 0.00012, 0.00084, 0.03865, 0.02362, 0.08976, 0.18545, 0.04882, 0.03789, 0.05006, 0.02979, 0.003, 0.09262, 0.05668, 0.02486, 0.05855, 0.11588, 0.07713, 0.10428, 0.00706, 0.02467, 0.13257, 0.11547, 0.06143, 0.09478, 0.06099, 0.02483, 0.09312, 0.16867, 0.07236, 0.10962, 0.04149, 0.05005, 0.09087, 0.0313, 0.03697, 0.07201, 2e-05, 0.00259, 0.00115, 0.03907, 0.02931, 0.14907, 0.05598, 0.07087, 0.09709, 0.10653, 0.11936, 0.08196, 0.1213, 0.00627, 0.08496, 0.00038, 0.03537, 0.20043, 0.05159, 0.05872, 0.07754, 0.07621, 0.05924, 0.09587, 0.02653, 0.07135, 1e-05, 0.01377, 0.0062, 0.01965, 0.00115, 0.07529, 0.04709, 0.05458, 0.10895, 0.02195, 0.04534, 0.015, 0.00577, 0.05784, 0.01691, 0.08103, 0.04178, 0.04328, 0.01204, 0.03463, 0.03805, 0.01231, 0.03646, 0.01162, 0.16536, 0.03471, 0.00541, 0.09088, 0.06447, 0.07263, 0.05924, 0.0952, 0.09938, 0.04464, 0.05543, 0.03827, 0.11514, 0.02803, 0.09589, 0.0254, 0.05351, 0.00171, 0.00856, 0.05828, 0.11975, 7e-05, 0.07093, 0.06077, 0.0384, 0.00163, 0.05992, 0.00463, 0.00975, 0.00429, 0.12965, 0.03388, 0.02372, 0.07622, 0.04341, 0.06637, 0.00578, 0.06946, 0.00469, 0.11668, 0.07033, 0.06806, 0.05505, 0.02195, 0.05089, 0.03404, 0.00552, 0.05331, 0.03695, 0.41581, 0.01553, 0.02045, 0.09779, 0.03842, 0.01115, 0.05392, 0.01147, 0.05855, 0.05588, 0.20745, 0.01536, 0.03993, 
0.07677, 0.01388, 0.0029, 0.00235, 0.05823, 0.05237, 0.00425, 0.09225, 0.00703, 0.24038, 0.06733, 0.00064, 0.08959, 0.04365, 0.02308, 0.04566, 0.08395, 0.0038, 0.05322, 0.0145, 0.02012, 0.07084, 0.08202, 0.01091, 0.03738, 0.03798, 0.03473, 0.08534, 0.00133, 0.04046, 0.10119, 0.0317, 0.00312, 0.03614, 0.10442, 0.13286, 0.0042, 0.04229, 0.01735, 0.09879, 0.07516, 0.00303, 0.08062, 0.09347, 0.03473, 0.05099, 0.16373, 0.08988, 0.04696, 0.07488, 0.12159, 0.11098, 0.00549, 0.00122, 0.05276, 0.09883, 0.01346, 0.02059, 0.07394, 0.0413, 0.08766, 0.0124, 0.09913, 0.00754, 0.15671, 0.02699, 0.09978, 1e-05, 0.00243, 0.02819, 0.00027, 0.05793, 0.03165, 0.10168, 0.00042, 0.00044, 0.01332, 0.00542, 0.05946, 0.009, 0.10857, 0.01699, 1e-05, 0.00073, 0.10842, 0.17143, 0.00036, 0.00014, 0.10508, 0.01333, 0.34202, 0.12201, 0.04618, 0.02507, 0.02939, 0.03497, 0.01905, 0.00136, 0.02354, 0.00061, 0.08514, 0.14529, 0.04097, 0.12821, 0.18862], [0.04683, 0.02943, 0.07885, 0.07846, 0.06855, 0.02815, 0.00792, 0.0826, 0.00554, 0.01041, 0.03957, 0.0126, 0.08399, 0.15046, 0.15594, 0.03941, 0.0428, 0.11343, 0.15665, 0.07381, 0.04386, 0.12008, 0.04816, 0.04844, 0.08248, 0.08023, 0.03011, 0.00464, 0.07204, 0.08376, 0.05777, 0.06164, 0.00697, 0.02023, 0.04844, 0.0592, 0.00954, 0.06357, 0.0122, 0.05905, 0.00705, 0.0054, 0.08822, 0.06056, 0.02598, 0.02136, 0.05638, 0.03768, 0.05101, 0.08908, 0.0384, 0.01579, 0.04023, 0.03746, 0.17236, 0.08293, 0.12469, 0.14018, 0.04301, 0.07258, 0.02678, 0.08078, 0.07698, 0.06346, 0.06984, 0.04832, 0.07512, 0.0342, 0.05339, 0.026, 0.11585, 0.02744, 0.00979, 0.01312, 0.05915, 0.01326, 0.00107, 0.00737, 0.05971, 0.0451, 0.05788, 0.0007, 0.0043, 0.00142, 0.0019, 0.00055, 0.00223, 0.02441, 0.04555, 0.03869, 0.05791, 0.05517, 0.15743, 0.04517, 0.47114, 0.05639, 0.00152, 0.00371, 1e-05, 1e-05, 0.04192, 0.02758, 0.01945, 0.02763, 0.04021, 0.02844, 0.01823, 0.10665, 0.02067, 0.05433, 0.05591, 0.00733, 0.00858, 0.01949, 0.06519, 0.07793, 0.00199, 0.09916, 0.08717, 0.06273, 0.09408, 0.00638, 0.00248, 0.08922, 0.09157, 0.03525, 0.01791, 0.06016, 0.01939, 0.12194, 0.08303, 0.0831, 0.02714, 0.06312, 0.11584, 0.11334, 0.04314, 0.02575, 0.00629, 0.02408, 0.02274, 0.03037, 0.06737, 0.0175, 0.00888, 0.06568, 0.0839, 0.0085, 0.00831, 0.00154, 0.01072, 0.01289, 0.09074, 0.02131, 0.02997, 0.02343, 0.02355, 0.05324, 0.09564, 0.17995, 0.00828, 0.0148, 0.01858, 0.02106, 0.00288, 0.00344, 0.001, 0.02143, 0.00732, 0.01458, 0.01547, 0.01742, 0.00032, 0.24005, 0.00028, 0.00302, 0.07275, 0.04579, 0.06316, 0.02572, 0.09316, 0.03062, 0.10521, 0.07123, 0.03069, 0.07958, 0.04484, 0.01948, 0.01951, 0.01282, 0.00868, 0.07931, 0.01105, 0.01235, 0.09297, 0.06959, 0.00716, 0.0271, 0.00592, 0.09362, 0.00319, 0.00859, 0.08486, 0.02001, 0.00194, 0.04189, 0.09024, 0.07705, 0.07365, 0.01123, 0.03202, 0.01361, 0.00098, 0.00397, 0.00139, 0.00397, 0.00445, 1e-05, 0.00267, 0.06564, 0.06567, 0.06566, 0.06566, 0.09249, 0.03475, 0.0338, 0.0664, 0.02986, 0.04024, 0.00835, 0.04304, 0.04081, 0.04534, 0.06636, 0.03312, 0.06175, 0.03117, 0.02243, 0.03454, 0.11135, 0.07016, 0.0681, 0.09716, 0.02589, 0.4367, 0.08293, 0.11834, 0.00191, 0.10913, 0.00159, 0.0638, 0.01808, 0.00116, 0.00911, 0.01408, 0.09179, 0.02122, 0.05026, 0.05144, 0.03169, 0.06674]] fig, ax = plt.subplots(ncols=3, figsize=(16, 5), sharey=True) sns.violinplot(data=data, ax=ax[0], log_scale=True) sns.swarmplot(data=data, s=3, ax=ax[1]) sns.stripplot(data=data, ax=ax[2]) plt.tight_layout() plt.show() Old way, transforming the data The tick labels for the y-axis can be rewritten using 
a custom formatter. And minor ticks similar to a log plot can be generated. import matplotlib.pyplot as plt from matplotlib import ticker as mticker import seaborn as sns import numpy as np data = [[1e-05, 0.00102, 0.00498, 0.09154, 0.02009, 1e-05, 0.06649, 0.42253, 0.02062, 0.10812, 0.07128, 0.03903, 0.00506, 0.13391, 0.08668, 0.04127, 0.00927, 0.00118, 0.063, 0.18392, 0.05948, 0.07774, 0.14018, 0.0133, 0.00339, 0.00271, 0.05233, 0.00054, 0.0593, 1e-05, 0.00076, 0.03409, 0.71491, 0.02311, 0.10246, 0.12491, 0.05164, 0.1553, 0.01079, 0.01734, 0.02239, 0.1347, 0.02877, 0.04752, 0.00333, 0.04553, 0.03189, 0.00947, 0.00158, 0.00888, 0.12663, 0.07531, 0.12367, 0.11346, 0.06638, 0.06154, 1e-05, 0.1838, 0.08659, 0.05654, 0.07658, 0.0348, 0.02954, 0.0123, 0.01529, 0.05559, 0.00416, 0.00038, 0.14142, 0.00164, 0.03671, 0.10609, 0.01209, 0.0024, 0.11718, 0.11224, 0.06032, 0.09632, 0.12216, 0.00087, 0.06746, 0.00433, 0.06836, 0.09928, 2e-05, 0.14116, 0.05718, 0.01196, 0.04297, 0.00709, 0.10535, 0.04772, 0.05691, 0.06277, 1e-05, 0.03917, 0.0026, 0.06763, 0.02083, 0.32244, 0.00561, 0.03399, 0.08146, 0.10606, 0.01482, 0.00339, 0.02275, 0.00685, 0.1536, 0.0592, 0.08869, 1e-05, 0.20489, 0.00094, 0.00714, 0.06355, 0.03414, 0.03002, 0.02365, 0.04376, 0.0246, 0.02745, 0.07604, 0.12069, 1e-05, 0.02974, 0.10681, 0.00987, 0.02543, 0.01416, 0.00098, 3e-05, 0.00967, 0.11958, 0.02882, 0.03634, 0.19232, 0.12058, 0.36535, 0.07428, 0.02829, 0.09189, 0.03677, 0.00036, 0.0463, 0.57029, 0.0105, 0.00015, 0.06212, 0.0329, 0.06102, 0.12267], [0.01219, 0.14638, 0.03822, 0.05784, 0.03615, 0.03288, 0.00986, 0.05331, 0.01434, 0.00999, 0.05272, 0.03269, 0.0682, 0.15455, 0.09675, 0.02272, 0.0027, 0.01955, 0.06194, 0.00115, 0.07799, 0.03987, 0.11152, 0.07229, 0.007, 0.00075, 0.04499, 0.01534, 0.04301, 0.01247, 0.09511, 0.02297, 0.05538, 0.04614, 0.07359, 0.06909, 1e-05, 0.04247, 0.05485, 0.00071, 0.082, 0.07614, 0.03751, 0.01625, 0.03309, 0.03228, 0.08109, 0.02171, 0.07246, 0.00353, 0.02434, 0.01394, 0.037, 0.02429, 0.15162, 0.0527, 0.0201, 0.07954, 0.07626, 0.09285, 0.05071, 0.01224, 0.06331, 0.07556, 0.04952, 0.00052, 0.00588, 0.132, 0.00067, 0.00012, 0.00084, 0.03865, 0.02362, 0.08976, 0.18545, 0.04882, 0.03789, 0.05006, 0.02979, 0.003, 0.09262, 0.05668, 0.02486, 0.05855, 0.11588, 0.07713, 0.10428, 0.00706, 0.02467, 0.13257, 0.11547, 0.06143, 0.09478, 0.06099, 0.02483, 0.09312, 0.16867, 0.07236, 0.10962, 0.04149, 0.05005, 0.09087, 0.0313, 0.03697, 0.07201, 2e-05, 0.00259, 0.00115, 0.03907, 0.02931, 0.14907, 0.05598, 0.07087, 0.09709, 0.10653, 0.11936, 0.08196, 0.1213, 0.00627, 0.08496, 0.00038, 0.03537, 0.20043, 0.05159, 0.05872, 0.07754, 0.07621, 0.05924, 0.09587, 0.02653, 0.07135, 1e-05, 0.01377, 0.0062, 0.01965, 0.00115, 0.07529, 0.04709, 0.05458, 0.10895, 0.02195, 0.04534, 0.015, 0.00577, 0.05784, 0.01691, 0.08103, 0.04178, 0.04328, 0.01204, 0.03463, 0.03805, 0.01231, 0.03646, 0.01162, 0.16536, 0.03471, 0.00541, 0.09088, 0.06447, 0.07263, 0.05924, 0.0952, 0.09938, 0.04464, 0.05543, 0.03827, 0.11514, 0.02803, 0.09589, 0.0254, 0.05351, 0.00171, 0.00856, 0.05828, 0.11975, 7e-05, 0.07093, 0.06077, 0.0384, 0.00163, 0.05992, 0.00463, 0.00975, 0.00429, 0.12965, 0.03388, 0.02372, 0.07622, 0.04341, 0.06637, 0.00578, 0.06946, 0.00469, 0.11668, 0.07033, 0.06806, 0.05505, 0.02195, 0.05089, 0.03404, 0.00552, 0.05331, 0.03695, 0.41581, 0.01553, 0.02045, 0.09779, 0.03842, 0.01115, 0.05392, 0.01147, 0.05855, 0.05588, 0.20745, 0.01536, 0.03993, 0.07677, 0.01388, 0.0029, 0.00235, 0.05823, 0.05237, 0.00425, 0.09225, 0.00703, 0.24038, 0.06733, 
0.00064, 0.08959, 0.04365, 0.02308, 0.04566, 0.08395, 0.0038, 0.05322, 0.0145, 0.02012, 0.07084, 0.08202, 0.01091, 0.03738, 0.03798, 0.03473, 0.08534, 0.00133, 0.04046, 0.10119, 0.0317, 0.00312, 0.03614, 0.10442, 0.13286, 0.0042, 0.04229, 0.01735, 0.09879, 0.07516, 0.00303, 0.08062, 0.09347, 0.03473, 0.05099, 0.16373, 0.08988, 0.04696, 0.07488, 0.12159, 0.11098, 0.00549, 0.00122, 0.05276, 0.09883, 0.01346, 0.02059, 0.07394, 0.0413, 0.08766, 0.0124, 0.09913, 0.00754, 0.15671, 0.02699, 0.09978, 1e-05, 0.00243, 0.02819, 0.00027, 0.05793, 0.03165, 0.10168, 0.00042, 0.00044, 0.01332, 0.00542, 0.05946, 0.009, 0.10857, 0.01699, 1e-05, 0.00073, 0.10842, 0.17143, 0.00036, 0.00014, 0.10508, 0.01333, 0.34202, 0.12201, 0.04618, 0.02507, 0.02939, 0.03497, 0.01905, 0.00136, 0.02354, 0.00061, 0.08514, 0.14529, 0.04097, 0.12821, 0.18862], [0.04683, 0.02943, 0.07885, 0.07846, 0.06855, 0.02815, 0.00792, 0.0826, 0.00554, 0.01041, 0.03957, 0.0126, 0.08399, 0.15046, 0.15594, 0.03941, 0.0428, 0.11343, 0.15665, 0.07381, 0.04386, 0.12008, 0.04816, 0.04844, 0.08248, 0.08023, 0.03011, 0.00464, 0.07204, 0.08376, 0.05777, 0.06164, 0.00697, 0.02023, 0.04844, 0.0592, 0.00954, 0.06357, 0.0122, 0.05905, 0.00705, 0.0054, 0.08822, 0.06056, 0.02598, 0.02136, 0.05638, 0.03768, 0.05101, 0.08908, 0.0384, 0.01579, 0.04023, 0.03746, 0.17236, 0.08293, 0.12469, 0.14018, 0.04301, 0.07258, 0.02678, 0.08078, 0.07698, 0.06346, 0.06984, 0.04832, 0.07512, 0.0342, 0.05339, 0.026, 0.11585, 0.02744, 0.00979, 0.01312, 0.05915, 0.01326, 0.00107, 0.00737, 0.05971, 0.0451, 0.05788, 0.0007, 0.0043, 0.00142, 0.0019, 0.00055, 0.00223, 0.02441, 0.04555, 0.03869, 0.05791, 0.05517, 0.15743, 0.04517, 0.47114, 0.05639, 0.00152, 0.00371, 1e-05, 1e-05, 0.04192, 0.02758, 0.01945, 0.02763, 0.04021, 0.02844, 0.01823, 0.10665, 0.02067, 0.05433, 0.05591, 0.00733, 0.00858, 0.01949, 0.06519, 0.07793, 0.00199, 0.09916, 0.08717, 0.06273, 0.09408, 0.00638, 0.00248, 0.08922, 0.09157, 0.03525, 0.01791, 0.06016, 0.01939, 0.12194, 0.08303, 0.0831, 0.02714, 0.06312, 0.11584, 0.11334, 0.04314, 0.02575, 0.00629, 0.02408, 0.02274, 0.03037, 0.06737, 0.0175, 0.00888, 0.06568, 0.0839, 0.0085, 0.00831, 0.00154, 0.01072, 0.01289, 0.09074, 0.02131, 0.02997, 0.02343, 0.02355, 0.05324, 0.09564, 0.17995, 0.00828, 0.0148, 0.01858, 0.02106, 0.00288, 0.00344, 0.001, 0.02143, 0.00732, 0.01458, 0.01547, 0.01742, 0.00032, 0.24005, 0.00028, 0.00302, 0.07275, 0.04579, 0.06316, 0.02572, 0.09316, 0.03062, 0.10521, 0.07123, 0.03069, 0.07958, 0.04484, 0.01948, 0.01951, 0.01282, 0.00868, 0.07931, 0.01105, 0.01235, 0.09297, 0.06959, 0.00716, 0.0271, 0.00592, 0.09362, 0.00319, 0.00859, 0.08486, 0.02001, 0.00194, 0.04189, 0.09024, 0.07705, 0.07365, 0.01123, 0.03202, 0.01361, 0.00098, 0.00397, 0.00139, 0.00397, 0.00445, 1e-05, 0.00267, 0.06564, 0.06567, 0.06566, 0.06566, 0.09249, 0.03475, 0.0338, 0.0664, 0.02986, 0.04024, 0.00835, 0.04304, 0.04081, 0.04534, 0.06636, 0.03312, 0.06175, 0.03117, 0.02243, 0.03454, 0.11135, 0.07016, 0.0681, 0.09716, 0.02589, 0.4367, 0.08293, 0.11834, 0.00191, 0.10913, 0.00159, 0.0638, 0.01808, 0.00116, 0.00911, 0.01408, 0.09179, 0.02122, 0.05026, 0.05144, 0.03169, 0.06674]] log_data = [[np.log10(d) for d in row] for row in data] fig, ax = plt.subplots(ncols=3, figsize=(16, 5), sharey=True) sns.violinplot(data=log_data, ax=ax[0]) sns.swarmplot(data=log_data, s=3, ax=ax[1]) sns.stripplot(data=log_data, ax=ax[2]) ax[0].yaxis.set_major_formatter(mticker.StrMethodFormatter("$10^{{{x:.0f}}}$")) ymin, ymax = ax[0].get_ylim() tick_range = np.arange(np.floor(ymin), ymax) 
ax[0].yaxis.set_ticks(tick_range) ax[0].yaxis.set_ticks([np.log10(x) for p in tick_range for x in np.linspace(10 ** p, 10 ** (p + 1), 10)], minor=True) plt.tight_layout() plt.show() This should show the expected plot.
12
15
60,169,996
2020-2-11
https://stackoverflow.com/questions/60169996/does-a-with-statement-support-type-hinting
Can you define the type hint for a variable defined with the with syntax? with example() as x: print(x) I would like to type hint the above to say that x is a str (as an example). The only work around that I've found is to use an intermediate variable, but this feels hacky. with example() as x: y: str = x print(y) I can't find an example in the typing documentation.
PEP 526, which has been implemented in Python 3.6, allows you to annotate variables. The variable used in a with statement can be annotated like this: x: str with example() as x: [...]
56
73
60,092,641
2020-2-6
https://stackoverflow.com/questions/60092641/mad-results-differ-in-pandas-scipy-and-numpy
I want to compute the MAD (median absolute deviation) which is defined by MAD = median(|x_i - mean(x)|) for a list of numbers x x = list(range(0, 10)) + [1000] However, the results differ significantly using numpy, pandas, and an hand-made implementation: from scipy import stats import pandas as pd import numpy as np print(stats.median_absolute_deviation(x, scale=1)) # prints 3.0 print(pd.Series(x).mad()) # prints 164.54 print(np.median(np.absolute(x - np.mean(x)))) # prints 91.0 What is wrong?
The median absolute deviation is defined as: MAD = median(|x_i - median(x)|) The method mad in Pandas returns the mean absolute deviation instead. You can calculate the MAD using the following methods: x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 1000] stats.median_absolute_deviation(x, scale=1) # 3.0 np.median(np.absolute(x - np.median(x))) # 3.0 x = pd.Series(x) (x - x.median()).abs().median() # 3.0
10
23
60,137,572
2020-2-9
https://stackoverflow.com/questions/60137572/issues-installing-pytorch-1-4-no-matching-distribution-found-for-torch-1-4
I used the install guide on pytorch.org and the command I'm using is pip install torch===1.4.0 torchvision===0.5.0 -f https://download.pytorch.org/whl/torch_stable.html But it's coming up with this error: ERROR: Could not find a version that satisfies the requirement torch===1.4.0 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2) ERROR: No matching distribution found for torch===1.4.0 Is this even an issue on my end? Can other people use this command? Pip is installed and works for other modules; Python 3.8, CUDA version 10.1, Windows 10 Home 2004.
It looks like this issue is related to the virtual environment. Did you try the recommended installation line in another (new) virtual environment? If that doesn't help, a possible solution is installing the package using a direct link to the PyTorch and TorchVision builds for your system: Windows: pip install https://download.pytorch.org/whl/cu101/torch-1.4.0-cp38-cp38-win_amd64.whl pip install https://download.pytorch.org/whl/cu101/torchvision-0.5.0-cp38-cp38-win_amd64.whl Ubuntu (Linux): pip install https://download.pytorch.org/whl/cu101/torch-1.4.0-cp38-cp38-linux_x86_64.whl pip install https://download.pytorch.org/whl/cu101/torchvision-0.5.0-cp38-cp38-linux_x86_64.whl
46
31
60,127,234
2020-2-8
https://stackoverflow.com/questions/60127234/how-to-use-a-pydantic-model-with-form-data-in-fastapi
I am trying to submit data from HTML forms and validate it with a Pydantic model. Using this code from fastapi import FastAPI, Form from pydantic import BaseModel from starlette.responses import HTMLResponse app = FastAPI() @app.get("/form", response_class=HTMLResponse) def form_get(): return '''<form method="post"> <input type="text" name="no" value="1"/> <input type="text" name="nm" value="abcd"/> <input type="submit"/> </form>''' class SimpleModel(BaseModel): no: int nm: str = "" @app.post("/form", response_model=SimpleModel) def form_post(form_data: SimpleModel = Form(...)): return form_data However, I get the HTTP error: "422 Unprocessable Entity" { "detail": [ { "loc": [ "body", "form_data" ], "msg": "field required", "type": "value_error.missing" } ] } The equivalent curl command (generated by Firefox) is curl 'http://localhost:8001/form' -H 'Content-Type: application/x-www-form-urlencoded' --data 'no=1&nm=abcd' Here the request body contains no=1&nm=abcd. What am I doing wrong?
I found a solution that can help us to use Pydantic with FastAPI forms :) My code: class AnyForm(BaseModel): any_param: str any_other_param: int = 1 @classmethod def as_form( cls, any_param: str = Form(...), any_other_param: int = Form(1) ) -> AnyForm: return cls(any_param=any_param, any_other_param=any_other_param) @router.post('') async def any_view(form_data: AnyForm = Depends(AnyForm.as_form)): ... It's shown in the Swagger as a usual form. It can be more generic as a decorator: import inspect from typing import Type from fastapi import Form from pydantic import BaseModel from pydantic.fields import ModelField def as_form(cls: Type[BaseModel]): new_parameters = [] for field_name, model_field in cls.__fields__.items(): model_field: ModelField # type: ignore new_parameters.append( inspect.Parameter( model_field.alias, inspect.Parameter.POSITIONAL_ONLY, default=Form(...) if model_field.required else Form(model_field.default), annotation=model_field.outer_type_, ) ) async def as_form_func(**data): return cls(**data) sig = inspect.signature(as_form_func) sig = sig.replace(parameters=new_parameters) as_form_func.__signature__ = sig # type: ignore setattr(cls, 'as_form', as_form_func) return cls And the usage looks like @as_form class Test(BaseModel): param: str a: int = 1 b: str = '2342' c: bool = False d: Optional[float] = None @router.post('/me', response_model=Test) async def me(request: Request, form: Test = Depends(Test.as_form)): return form
56
73
60,158,087
2020-2-10
https://stackoverflow.com/questions/60158087/youtubedl-certificate-verify-failed
I ran this code in Python: from __future__ import unicode_literals import youtube_dl ydl_opts = { 'format': 'bestaudio/best', 'postprocessors': [{ 'key': 'FFmpegExtractAudio', 'preferredcodec': 'mp3', 'preferredquality': '192', }], } with youtube_dl.YoutubeDL(ydl_opts) as ydl: ydl.download(['YOUTUBE URL']) I was hoping it would download the YouTube video and convert it to an audio file. I got a really long error which basically repeated this: ERROR: Unable to download webpage: (caused by URLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1108)'))) I have searched online but am unsure how to solve this problem.
Add the no-check-certificate parameter to the command: youtube-dl --no-check-certificate This option was renamed to --no-check-certificates starting with version 2021.10.09 (inclusive).
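If you are driving youtube-dl through the Python API as in the question, the equivalent option is, as far as I know, the nocheckcertificate key in ydl_opts. A quick sketch:
import youtube_dl

ydl_opts = {
    'format': 'bestaudio/best',
    # same effect as the --no-check-certificate CLI flag
    'nocheckcertificate': True,
}

with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['YOUTUBE URL'])
Keep in mind this only skips certificate verification; it does not repair the underlying certificate store.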
30
99
60,154,404
2020-2-10
https://stackoverflow.com/questions/60154404/is-there-the-equivalent-of-to-markdown-to-read-data
With pandas 1.0.0 the use of .to_markdown() to show the content of a dataframe in this forum in markdown is going to proliferate. Is there a convenient way to load the data back into a dataframe? Maybe an option to .from_clipboard(markdown=True)?
None of the answers so far read the data from the clipboard. They all require the data to be in a string. This led me to look into the source code of pandas.read_clipboard(), and lo and behold, the method internally uses pandas.read_csv(), and passes all arguments to it. This automatically leads to the following solution: t = pd.DataFrame({"a": [0, 1], "b":[2, 3]}).to_markdown() print(t) | | a | b | |---:|----:|----:| | 0 | 0 | 2 | | 1 | 1 | 3 | Mark the table and copy it to the clipboard (Ctrl-C on Windows). Building on the above answers: pd.read_clipboard(sep="|", header=0, index_col=1, skipinitialspace=True).dropna( axis=1, how="all" ).iloc[1:] The next step would be to integrate this into https://pyjanitor-devs.github.io/pyjanitor/ so it can be easily called with a markdown=True parameter.
22
4
60,095,520
2020-2-6
https://stackoverflow.com/questions/60095520/understanding-contour-hierarchies-how-to-distinguish-filled-circle-contour-and
I am unable to differentiate between the below two contours. cv2.contourArea() is giving the same value for both. Is there any function to distinguish them in Python? How do I use contour hierarchies to determine the difference?
To distinguish between a filled contour and unfilled contour, you can use contour hierarchy when finding contours with cv2.findContours(). Specifically, you can select the contour retrieval mode to optionally return an output vector containing information about the image topology. There are the four possible modes: cv2.RETR_EXTERNAL - retrieves only the extreme outer contours (no hierarchy) cv2.RETR_LIST - retrieves all of the contours without establishing any hierarchical relationships cv2.RETR_CCOMP - retrieves all of the contours and organizes them into a two-level hierarchy. At the top level, there are external boundaries of the components. At the second level, there are boundaries of the holes. If there is another contour inside a hole of a connected component, it is still put at the top level cv2.RETR_TREE - retrieves all of the contours and reconstructs a full hierarchy of nested contours Understanding contour hierarchies So with this information, we can use cv2.RETR_CCOMP or cv2.RETR_TREE to return a hierarchy list. Take for example this image: When we use the cv2.RETR_TREE parameter, the contours are arranged in a hierarchy, with the outermost contours for each object at the top. Moving down the hierarchy, each new level of contours represents the next innermost contour for each object. In the image above, the contours in the image are colored to represent the hierarchical structure of the returned contours data. The outermost contours are red, and they are at the top of the hierarchy. The next innermost contours -- the dice pips, in this case -- are green. We can get that information about the contour hierarchies via the hierarchy array from the cv2.findContours function call. Suppose we call the function like this: (_, contours, hierarchy) = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) The third return value, saved in the hierarchy variable in this code, is a three-dimensional NumPy array, with one row, X columns, and a "depth" of 4. The X columns correspond to the number of contours found by the function. The cv2.RETR_TREE parameter causes the function to find the internal contours as well as the outermost contours for each object. Column zero corresponds to the first contour, column one the second, and so on. Each of the columns has a four-element array of integers, representing indices of other contours, according to this scheme: [next, previous, first child, parent] The next index refers to the next contour in this contour's hierarchy level, while the previous index refers to the previous contour in this contour's hierarchy level. The first child index refers to the first contour that is contained inside this contour. The parent index refers to the contour containing this contour. In all cases, an value of -1 indicates that there is no next, previous, first child, or parent contour, as appropriate. For a more concrete example, here are some example hierarchy values. The values are in square brackets, and the indices of the contours precede each entry. 
If you printed out the hierarchy array you will get something like this 0: [ 6 -1 1 -1] 18: [19 -1 -1 17] 1: [ 2 -1 -1 0] 19: [20 18 -1 17] 2: [ 3 1 -1 0] 20: [21 19 -1 17] 3: [ 4 2 -1 0] 21: [22 20 -1 17] 4: [ 5 3 -1 0] 22: [-1 21 -1 17] 5: [-1 4 -1 0] 23: [27 17 24 -1] 6: [11 0 7 -1] 24: [25 -1 -1 23] 7: [ 8 -1 -1 6] 25: [26 24 -1 23] 8: [ 9 7 -1 6] 26: [-1 25 -1 23] 9: [10 8 -1 6] 27: [32 23 28 -1] 10: [-1 9 -1 6] 28: [29 -1 -1 27] 11: [17 6 12 -1] 29: [30 28 -1 27] 12: [15 -1 13 11] 30: [31 29 -1 27] 13: [14 -1 -1 12] 31: [-1 30 -1 27] 14: [-1 13 -1 12] 32: [-1 27 33 -1] 15: [16 12 -1 11] 33: [34 -1 -1 32] 16: [-1 15 -1 11] 34: [35 33 -1 32] 17: [23 11 18 -1] 35: [-1 34 -1 32] The entry for the first contour is [6, -1, 1, -1]. This represents the first of the outermost contours; note that there is no particular order for the contours, e.g., they are not stored left to right by default. The entry tells us that the next dice outline is the contour with index six, that there is no previous contour in the list, that the first contour inside this one has index one, and that there is no parent for this contour (no contour containing this one). We can visualize the information in the hierarchy array as seven trees, one for each of the dice in the image. The seven outermost contours are all those that have no parent, i.e., those with an value of -1 in the fourth field of their hierarchy entry. Each of the child nodes beneath one of the "roots" represents a contour inside the outermost contour. Note how contours 13 and 14 are beneath contour 12 in the diagram. These two contours represent the innermost contours, perhaps noise or some lost paint in one of the pips. Once we understand how contours are arranged into a hierarchy, we can perform more sophisticated tasks, such as counting the number of contours within a shape in addition to the number of objects in an image. Going back to your question, we can use hierarchy to distinguish between inner and outer contours to determine if a contour is filled or unfilled. We can define a filled contour as a contour with no child whereas a unfilled contour as at least one child. So with this screenshot of your input image (removed the box): Result Code import cv2 # Load image, grayscale, Otsu's threshold image = cv2.imread('1.png') gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1] # Filter using contour hierarchy cnts, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2:] hierarchy = hierarchy[0] for component in zip(cnts, hierarchy): currentContour = component[0] currentHierarchy = component[1] x,y,w,h = cv2.boundingRect(currentContour) # Has inner contours which means it is unfilled if currentHierarchy[3] > 0: cv2.putText(image, 'Unfilled', (x,y-10), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (36,255,12), 2) # No child which means it is filled elif currentHierarchy[2] == -1: cv2.putText(image, 'Filled', (x,y-5), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (36,255,12), 2) cv2.imshow('image', image) cv2.waitKey()
8
23
60,150,956
2020-2-10
https://stackoverflow.com/questions/60150956/attaching-python-script-while-building-r-package
I have not found an R package for my tasks (there is none, trust me), but there is one in Python. So I wrote a Python script and used reticulate::py_run_file('my_script.py') in some functions. But after building and installing the package, it can't find that script. Where should I put this script so that it can be used directly from the installed package? And another thing: I need to install miniconda via reticulate::install_miniconda(). Does anyone know a way to install it automatically after the install.packages command?
Typically non-R code goes in ./inst/python/your_script.py (likewise for JS, etc). Anything in the inst folder will be installed into your package's root directory unchanged. To call these files in your package functions, use something like: reticulate::py_run_file(system.file("python", "your_script.py", package = "yourpkg")) See: http://r-pkgs.had.co.nz/inst.html For your second question, you should prompt the user before installing anything, but you would usually call any external installers in a special function called .onLoad with arguments libname and pkgname. This is a function that is automatically executed when you call library(yourpkg). .onLoad <- function(libname, pkgname) { user_permission <- utils::askYesNo("Install miniconda? downloads 50MB and takes time") if (isTRUE(user_permission)) { reticulate::install_miniconda() } else { message("You should run `reticulate::install_miniconda()` before using this package") } } You can put this function in any of your package R files.
12
18
60,145,306
2020-2-10
https://stackoverflow.com/questions/60145306/remove-background-text-and-noise-from-an-image-using-image-processing-with-openc
I have these images For which I want to remove the text in the background. Only the captcha characters should remain(i.e K6PwKA, YabVzu). The task is to identify these characters later using tesseract. This is what I have tried, but it isn't giving much good accuracy. import cv2 import pytesseract pytesseract.pytesseract.tesseract_cmd = r"C:\Users\HPO2KOR\AppData\Local\Tesseract-OCR\tesseract.exe" img = cv2.imread("untitled.png") gray_image = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) gray_filtered = cv2.inRange(gray_image, 0, 75) cv2.imwrite("cleaned.png", gray_filtered) How can I improve the same? Note : I tried all the suggestion that I was getting for this question and none of them worked for me. EDIT : According to Elias, I tried finding the color of the captcha text using photoshop by converting it to grayscale which came out to be somewhere in between [100, 105]. I then threshold the image based on this range. But the result which I got did not give satisfactory result from tesseract. gray_filtered = cv2.inRange(gray_image, 100, 105) cv2.imwrite("cleaned.png", gray_filtered) gray_inv = ~gray_filtered cv2.imwrite("cleaned.png", gray_inv) data = pytesseract.image_to_string(gray_inv, lang='eng') Output : 'KEP wKA' Result : EDIT 2 : def get_text(img_name): lower = (100, 100, 100) upper = (104, 104, 104) img = cv2.imread(img_name) img_rgb_inrange = cv2.inRange(img, lower, upper) neg_rgb_image = ~img_rgb_inrange cv2.imwrite('neg_img_rgb_inrange.png', neg_rgb_image) data = pytesseract.image_to_string(neg_rgb_image, lang='eng') return data gives : and the text as GXuMuUZ Is there any way to soften it a little
Here are two potential approaches and a method to correct distorted text: Method #1: Morphological operations + contour filtering Obtain binary image. Load image, grayscale, then Otsu's threshold. Remove text contours. Create a rectangular kernel with cv2.getStructuringElement() and then perform morphological operations to remove noise. Filter and remove small noise. Find contours and filter using contour area to remove small particles. We effectively remove the noise by filling in the contour with cv2.drawContours() Perform OCR. We invert the image then apply a slight Gaussian blur. We then OCR using Pytesseract with the --psm 6 configuration option to treat the image as a single block of text. Look at Tesseract improve quality for other methods to improve detection and Pytesseract configuration options for additional settings. Input image -> Binary -> Morph opening Contour area filtering -> Invert -> Apply blur to get result Result from OCR YabVzu Code import cv2 import pytesseract import numpy as np pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe" # Load image, grayscale, Otsu's threshold image = cv2.imread('2.png') gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1] # Morph open to remove noise kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2,2)) opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=1) # Find contours and remove small noise cnts = cv2.findContours(opening, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] for c in cnts: area = cv2.contourArea(c) if area < 50: cv2.drawContours(opening, [c], -1, 0, -1) # Invert and apply slight Gaussian blur result = 255 - opening result = cv2.GaussianBlur(result, (3,3), 0) # Perform OCR data = pytesseract.image_to_string(result, lang='eng', config='--psm 6') print(data) cv2.imshow('thresh', thresh) cv2.imshow('opening', opening) cv2.imshow('result', result) cv2.waitKey() Method #2: Color segmentation With the observation that the desired text to extract has a distinguishable contrast from the noise in the image, we can use color thresholding to isolate the text. The idea is to convert to HSV format then color threshold to obtain a mask using a lower/upper color range. From were we use the same process to OCR with Pytesseract. Input image -> Mask -> Result Code import cv2 import pytesseract import numpy as np pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe" # Load image, convert to HSV, color threshold to get mask image = cv2.imread('2.png') hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV) lower = np.array([0, 0, 0]) upper = np.array([100, 175, 110]) mask = cv2.inRange(hsv, lower, upper) # Invert image and OCR invert = 255 - mask data = pytesseract.image_to_string(invert, lang='eng', config='--psm 6') print(data) cv2.imshow('mask', mask) cv2.imshow('invert', invert) cv2.waitKey() Correcting distorted text OCR works best when the image is horizontal. To ensure that the text is in an ideal format for OCR, we can perform a perspective transform. After removing all the noise to isolate the text, we can perform a morph close to combine individual text contours into a single contour. From here we can find the rotated bounding box using cv2.minAreaRect and then perform a four point perspective transform using imutils.perspective.four_point_transform. 
Continuing from the cleaned mask, here's the results: Mask -> Morph close -> Detected rotated bounding box -> Result Output with the other image Updated code to include perspective transform import cv2 import pytesseract import numpy as np from imutils.perspective import four_point_transform pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe" # Load image, convert to HSV, color threshold to get mask image = cv2.imread('1.png') hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV) lower = np.array([0, 0, 0]) upper = np.array([100, 175, 110]) mask = cv2.inRange(hsv, lower, upper) # Morph close to connect individual text into a single contour kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5)) close = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel, iterations=3) # Find rotated bounding box then perspective transform cnts = cv2.findContours(close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) cnts = cnts[0] if len(cnts) == 2 else cnts[1] rect = cv2.minAreaRect(cnts[0]) box = cv2.boxPoints(rect) box = np.int0(box) cv2.drawContours(image,[box],0,(36,255,12),2) warped = four_point_transform(255 - mask, box.reshape(4, 2)) # OCR data = pytesseract.image_to_string(warped, lang='eng', config='--psm 6') print(data) cv2.imshow('mask', mask) cv2.imshow('close', close) cv2.imshow('warped', warped) cv2.imshow('image', image) cv2.waitKey() Note: The color threshold range was determined using this HSV threshold script import cv2 import numpy as np def nothing(x): pass # Load image image = cv2.imread('2.png') # Create a window cv2.namedWindow('image') # Create trackbars for color change # Hue is from 0-179 for Opencv cv2.createTrackbar('HMin', 'image', 0, 179, nothing) cv2.createTrackbar('SMin', 'image', 0, 255, nothing) cv2.createTrackbar('VMin', 'image', 0, 255, nothing) cv2.createTrackbar('HMax', 'image', 0, 179, nothing) cv2.createTrackbar('SMax', 'image', 0, 255, nothing) cv2.createTrackbar('VMax', 'image', 0, 255, nothing) # Set default value for Max HSV trackbars cv2.setTrackbarPos('HMax', 'image', 179) cv2.setTrackbarPos('SMax', 'image', 255) cv2.setTrackbarPos('VMax', 'image', 255) # Initialize HSV min/max values hMin = sMin = vMin = hMax = sMax = vMax = 0 phMin = psMin = pvMin = phMax = psMax = pvMax = 0 while(1): # Get current positions of all trackbars hMin = cv2.getTrackbarPos('HMin', 'image') sMin = cv2.getTrackbarPos('SMin', 'image') vMin = cv2.getTrackbarPos('VMin', 'image') hMax = cv2.getTrackbarPos('HMax', 'image') sMax = cv2.getTrackbarPos('SMax', 'image') vMax = cv2.getTrackbarPos('VMax', 'image') # Set minimum and maximum HSV values to display lower = np.array([hMin, sMin, vMin]) upper = np.array([hMax, sMax, vMax]) # Convert to HSV format and color threshold hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV) mask = cv2.inRange(hsv, lower, upper) result = cv2.bitwise_and(image, image, mask=mask) # Print if there is a change in HSV value if((phMin != hMin) | (psMin != sMin) | (pvMin != vMin) | (phMax != hMax) | (psMax != sMax) | (pvMax != vMax) ): print("(hMin = %d , sMin = %d, vMin = %d), (hMax = %d , sMax = %d, vMax = %d)" % (hMin , sMin , vMin, hMax, sMax , vMax)) phMin = hMin psMin = sMin pvMin = vMin phMax = hMax psMax = sMax pvMax = vMax # Display result image cv2.imshow('image', result) if cv2.waitKey(10) & 0xFF == ord('q'): break cv2.destroyAllWindows()
13
22
60,100,344
2020-2-6
https://stackoverflow.com/questions/60100344/vscode-not-picking-up-ipykernel
I'm trying to use vscode with jupyter via the python extension. My pipfile looks like this: [[source]] name = "pypi" url = "https://pypi.org/simple" verify_ssl = true [packages] opencv-python = "*" [requires] python_version = "3.6" [dev-packages] ipykernel = "*" ipython = "*" jupyter = "*" To start the ipython interpreter i follow these steps: $ pipenv install $ pipenv shell $ code . using the Python: Select interpreter, i select the pipenv environment run code when i got to the code block by pressing shift + enter, i see the errors: Code block: #%% import cv2 I have also tried using all dependencies in the [packages] section, reinstalling my pipenv from scratch, and repeating the above process. Always the same error, what am i missing? $ code -v 1.41.1 26076a4de974ead31f97692a0d32f90d735645c0 x64
I had the same issue; the following fixed it for me: 1- Remove Python and do a fresh install of the latest version (it ships with pip). 2- Open a terminal as administrator and run the following command: pip install ipykernel --trusted-host=pypi.python.org --trusted-host=pypi.org --trusted-host=files.pythonhosted.org
9
13
60,168,579
2020-2-11
https://stackoverflow.com/questions/60168579/how-to-make-conda-build-work-correctly-and-find-the-setup-py
I am trying to create an anaconda python package. My meta.yaml looks like this: package: name: liveprint-lib version: "0.1.0" build: number: 0 requirements: build: - pip - python=3.7 - setuptools run: - python=3.7 - numpy - opencv about: home: https://github.com/monomonedula/liveprint license: Apache License 2.0 license_file: LICENSE.txt summary: Python utility library for dynamic animations projections build.sh: $PYTHON setup.py install The folder structure: . ├── bld.bat ├── build.sh ├── LICENSE.txt ├── liveprint ├── meta.yaml ├── README.md ├── resources ├── setup.py └── test The error I get when running conda build . is the following: /home/vhhl/programs/anaconda3/conda-bld/liveprint-lib_1581422598848/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_/bin/python: can't open file 'setup.py': [Errno 2] No such file or directory What am I doing wrong?
Your meta.yaml file is missing a source section. Also, it's usually best to keep your recipe files in a directory of their own, rather than in the top repo. I recommend the following: mkdir conda-recipe mv meta.yaml build.sh bld.bat conda-recipe Then, edit meta.yaml to add a source section, which points to the top-level directory of your repo. package: name: liveprint-lib version: "0.1.0" source: # Relative path to the parent directory. path: .. build: number: 0 requirements: build: - pip - python=3.7 - setuptools run: - python=3.7 - numpy - opencv Then try: conda build conda-recipe
9
4
60,107,347
2020-2-7
https://stackoverflow.com/questions/60107347/createprocessw-failed-error2-ssh-askpass-posix-spawn-no-such-file-or-director
So I was following a tutorial to connect to my jupyter notebook which is running on my remote server so that I can access it on my local windows machine. These were the steps that I followed. On my remote server : jupyter notebook --no-browser --port=8889 Then on my local machine ssh -N -f -L localhost:8888:localhost:8889 *******@**********.de.gyan.com But I am getting an error CreateProcessW failed error:2 ssh_askpass: posix_spawn: No such file or directory Host key verification failed. How do I resolve this? Or is there is any other way to achieve the same?
If you need the DISPLAY variable set because you want to use VcXsrv or another X server on Windows 10, the workaround is to add the host you want to connect to to your known_hosts file. This can be done by calling ssh-keyscan -t rsa host.example.com | Out-File ~/.ssh/known_hosts -Append -Encoding ASCII;
11
6
60,089,947
2020-2-6
https://stackoverflow.com/questions/60089947/creating-pydantic-model-schema-with-dynamic-key
I'm trying to implement Pydantic Schema Models for the following JSON. { "description": "Best Authors And Their Books", "authorInfo": { "KISHAN": { "numberOfBooks": 10, "bestBookIds": [0, 2, 3, 7] }, "BALARAM": { "numberOfBooks": 15, "bestBookIds": [10, 12, 14] }, "RAM": { "numberOfBooks": 6, "bestBookIds": [3,5] } } } Here are the schema objects in Pydantic from typing import List, Type, Dict from pydantic import BaseModel class AuthorBookDetails(BaseModel): numberOfBooks: int bestBookIds: List[int] class AuthorInfoCreate(BaseModel): __root__: Dict[str, Type[AuthorBookDetails]] #pass class ScreenCreate(BaseModel): description: str authorInfo: Type[AuthorInfoCreate] I'm parsing the AuthorInfoCreate as follows: y = AuthorBookDetails( numberOfBooks = 10, bestBookIds = [3,5]) print(y) print(type(y)) x = AuthorInfoCreate.parse_obj({"RAM" : y}) print(x) I see the following error. numberOfBooks=10 bestBookIds=[3, 5] <class '__main__.AuthorBookDetails'> Traceback (most recent call last): File "test.py", line 44, in <module> x = AuthorInfoCreate.parse_obj({"RAM": y}) File "C:\sources\rep-funds\env\lib\site-packages\pydantic\main.py", line 402, in parse_obj return cls(**obj) File "C:\sources\rep-funds\env\lib\site-packages\pydantic\main.py", line 283, in __init__ raise validation_error pydantic.error_wrappers.ValidationError: 1 validation error for AuthorInfoCreate __root__ -> RAM subclass of AuthorBookDetails expected (type=type_error.subclass; expected_class=AuthorBookDetails) I want to understand how can I change AuthorInfoCreate so that I have the json schema mentioned.
Actually you should remove Type from type annotations. You need an instance of a class, not an actual class. Try the solution below: from typing import List,Dict from pydantic import BaseModel class AuthorBookDetails(BaseModel): numberOfBooks: int bestBookIds: List[int] class AuthorInfoCreate(BaseModel): __root__: Dict[str, AuthorBookDetails] class ScreenCreate(BaseModel): description: str authorInfo: AuthorInfoCreate
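A quick usage sketch (my own addition, not part of the original answer; the payload simply reuses the JSON from the question) showing that the fixed models parse the nested structure: payload = { "description": "Best Authors And Their Books", "authorInfo": { "KISHAN": {"numberOfBooks": 10, "bestBookIds": [0, 2, 3, 7]}, "RAM": {"numberOfBooks": 6, "bestBookIds": [3, 5]}, }, } screen = ScreenCreate.parse_obj(payload) print(screen.description) # Best Authors And Their Books print(screen.authorInfo.__root__["RAM"].bestBookIds) # [3, 5] The dynamic keys (author names) end up as the keys of the __root__ dict, so no field name has to be known in advance.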
16
9
60,068,313
2020-2-5
https://stackoverflow.com/questions/60068313/include-minimum-pip-version-in-setup-py
I've created a setup.py for my application. Some of the dependencies I set in install_requires require pip version 19.3.1 or greater. Is there a way to check the pip version as part of setup.py, and to upgrade pip prior to the build?
This is not your responsibility to build workarounds in your project for the issues in the packaging of other projects. This is kind of a bad practice. There is also not much point in doing this as part of a setup.py anyway since in many cases this file is not executed during install time. The best thing you can do is try and fix the faulty packaging of these dependency projects directly: contact the maintainers, file an issue, propose a fix, etc. The second best thing is to inform the users of your project. Clearly state this problem in the documentation of your own project and how to prevent it (i.e. "install pip version 19.3.1 or greater"). Update: If you decide to enforce a check in setup.py anyway, here are some techniques that might help... But I would still recommend against those, since your setup.py is not actually at fault here, but the issue seems to lie in the packaging of the dependencies. 1. __requires__ = ['pip >= 19.3.1'] # make sure it's before 'import setuptools' import setuptools setuptools.setup( # ... ) This would trigger an exception: pkg_resources.DistributionNotFound: The 'pip>=19.3.1' distribution was not found and is required by the application The drawback of this technique is that it doesn't trigger when called from pip (for example: pip install .), since in that case the __main__ module is not setup.py but a module of pip. Reference: https://setuptools.readthedocs.io/en/stable/pkg_resources.html?highlight=requires#workingset-objects 2. import pkg_resources import setuptools pkg_resources.require(['pip >= 19.3.1']) setuptools.setup( # ... ) This would trigger a pkg_resources.VersionConflict exception. This should work even if called from pip, but... This doesn't seem to work with build isolation (PEP 517, pyproject.toml), because in such a case there is usually no pip at all in the build environment. Reference: https://setuptools.readthedocs.io/en/stable/pkg_resources.html?highlight=require#basic-workingset-methods
10
5
60,074,344
2020-2-5
https://stackoverflow.com/questions/60074344/reserved-word-as-an-attribute-name-in-a-dataclass-when-parsing-a-json-object
I stumbled upon a problem when I was working on my ETL pipeline. I am using dataclasses' dataclass to parse JSON objects. One of the keys of the JSON object is a reserved keyword. Is there a way around this: from dataclasses import dataclass import jsons out = {"yield": 0.21} @dataclass class PriceObj: asOfDate: str price: float yield: float jsons.load(out, PriceObj) This will obviously fail because yield is reserved. Looking at the dataclasses field definition, there doesn't seem to be anything in there that can help. Go allows one to define the name of the JSON field; I wonder if there is such a feature for dataclasses?
You can decode / encode using a different name with the dataclasses_json lib, from their docs: from dataclasses import dataclass, field from dataclasses_json import config, dataclass_json @dataclass_json @dataclass class Person: given_name: str = field(metadata=config(field_name="overriddenGivenName")) Person(given_name="Alice") # Person('Alice') Person.from_json('{"overriddenGivenName": "Alice"}') # Person('Alice') Person('Alice').to_json() # {"overriddenGivenName": "Alice"}
10
8
60,111,305
2020-2-7
https://stackoverflow.com/questions/60111305/getting-error-while-installing-apache-airflow
I am getting the error below when I try airflow -version and airflow initdb File "/home/ravi/sandbox/bin/airflow", line 26, in <module> from airflow.bin.cli import CLIFactory File "/home/ravi/sandbox/lib/python3.6/site-packages/airflow/bin/cli.py", line 70, in <module> from airflow.www.app import (cached_app, create_app) File "/home/ravi/sandbox/lib/python3.6/site-packages/airflow/www/app.py", line 37, in <module> from airflow.www.blueprints import routes File "/home/ravi/sandbox/lib/python3.6/site-packages/airflow/www/blueprints.py", line 25, in <module> from airflow.www import utils as wwwutils File "/home/ravi/sandbox/lib/python3.6/site-packages/airflow/www/utils.py", line 39, in <module> from flask_admin.model import filters File "/home/ravi/sandbox/lib/python3.6/site-packages/flask_admin/model/__init__.py", line 2, in <module> from .base import BaseModelView File "/home/ravi/sandbox/lib/python3.6/site-packages/flask_admin/model/base.py", line 8, in <module> from werkzeug import secure_filename ImportError: cannot import name 'secure_filename'
Pinning werkzeug worked for me, thanks: pip install werkzeug==0.16.0 After that, airflow initdb ran successfully: DB: sqlite:////home/centos/airflow/airflow.db [2020-02-07 12:02:02,523] {db.py:368} INFO - Creating tables
16
45
60,077,137
2020-2-5
https://stackoverflow.com/questions/60077137/conda-how-to-ignore-if-a-conda-channel-is-not-reachable-ignore-unavailableinva
The situation At work we have a private conda channel in our network that is used for some internal packages. Since I do not want to type the channel location every time I install something via conda install, I added it to condas default channels in .condarc. The problem Obviously the channel is only available inside my company's network. When I am outside the network and want to install for example numpy (so a normal package available on the conda default channel) I get the following error because the private channel is not available: conda.exceptions.UnavailableInvalidChannel: The channel is not accessible or is invalid. channel name: privateChannel channel url: file://address/in/companys/network error code: 404 independent from what package I want to install! What I am looking for An option to tell conda to ignore the UnavailableInvalidChannel error or something similar that solves my problem. Because I do not want to edit my .condarc every time I switch to another network... Usually I am aware of, if I am going to install an internal package that I need the company's channel for so I would not mind if conda skips the internal channel silently or with a warning for everything else if it is not available. I just do not want conda to abort everything if it is not available. Another small related question: Is there a way to define channel aliases? I am aware of channel-alias but that just changes the default channel prefix.
Solution More or less by accident I found the answer to my own question recently and do not want to keep it to myself. To prevent conda from failing when a channel is not available during install/update of packages from other available channels, you have to set the following parameter in your .condarc file: allow_non_channel_urls = True Or, instead of editing your .condarc directly, you can type in your terminal: conda config --set allow_non_channel_urls True The parameter with the not very intuitive name allow_non_channel_urls is not explained in the conda docs about using the .condarc conda configuration file. But you can find it in their full .condarc example here and nowhere else. What does it do? The official explanation is "Warn, but do not fail, when conda detects a channel url is not a valid channel". This means, for example, that if the channel URL is simply not reachable from the network you are currently using (maybe you are outside of your company's network), conda will just print a warning instead of aborting the install. This is exactly what I wanted! The warning can look quite excessive because it is printed for every architecture (linux-32, win-64, osx-64, noarch, etc.) Note: Conda will still fail with an error if your package is not found on the available channels. But in that case you want conda to fail.
9
8
60,067,953
2020-2-5
https://stackoverflow.com/questions/60067953/is-it-possible-to-specify-the-pickle-protocol-when-writing-pandas-to-hdf5
Is there a way to tell Pandas to use a specific pickle protocol (e.g. 4) when writing an HDF5 file? Here is the situation (much simplified): Client A is using python=3.8.1 (as well as pandas=1.0.0 and pytables=3.6.1). A writes some DataFrame using df.to_hdf(file, key). Client B is using python=3.7.1 (and, as it happened, pandas=0.25.1 and pytables=3.5.2 --but that's irrelevant). B tries to read the data written by A using pd.read_hdf(file, key), and fails with ValueError: unsupported pickle protocol: 5. Mind you, this doesn't happen with a purely numerical DataFrame (e.g. pd.DataFrame(np.random.normal(size=(10,10))). So here is a reproducible example: (base) $ conda activate py38 (py38) $ python Python 3.8.1 (default, Jan 8 2020, 22:29:32) [GCC 7.3.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> import pandas as pd >>> df = pd.DataFrame(['hello', 'world'])) >>> df.to_hdf('foo', 'x') >>> exit() (py38) $ conda deactivate (base) $ python Python 3.7.4 (default, Aug 13 2019, 20:35:49) [GCC 7.3.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> import pandas as pd >>> df = pd.read_hdf('foo', 'x') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/anaconda3/lib/python3.7/site-packages/pandas/io/pytables.py", line 407, in read_hdf return store.select(key, auto_close=auto_close, **kwargs) File "/opt/anaconda3/lib/python3.7/site-packages/pandas/io/pytables.py", line 782, in select return it.get_result() File "/opt/anaconda3/lib/python3.7/site-packages/pandas/io/pytables.py", line 1639, in get_result results = self.func(self.start, self.stop, where) File "/opt/anaconda3/lib/python3.7/site-packages/pandas/io/pytables.py", line 766, in func return s.read(start=_start, stop=_stop, where=_where, columns=columns) File "/opt/anaconda3/lib/python3.7/site-packages/pandas/io/pytables.py", line 3206, in read "block{idx}_values".format(idx=i), start=_start, stop=_stop File "/opt/anaconda3/lib/python3.7/site-packages/pandas/io/pytables.py", line 2737, in read_array ret = node[0][start:stop] File "/opt/anaconda3/lib/python3.7/site-packages/tables/vlarray.py", line 681, in __getitem__ return self.read(start, stop, step)[0] File "/opt/anaconda3/lib/python3.7/site-packages/tables/vlarray.py", line 825, in read outlistarr = [atom.fromarray(arr) for arr in listarr] File "/opt/anaconda3/lib/python3.7/site-packages/tables/vlarray.py", line 825, in <listcomp> outlistarr = [atom.fromarray(arr) for arr in listarr] File "/opt/anaconda3/lib/python3.7/site-packages/tables/atom.py", line 1227, in fromarray return six.moves.cPickle.loads(array.tostring()) ValueError: unsupported pickle protocol: 5 >>> Note: I tried also reading using pandas=1.0.0 (and pytables=3.6.1) in python=3.7.4. That fails too, so I believe it is simply the Python version (3.8 writer vs 3.7 reader) that causes the problem. This makes sense since pickle protocol 5 was introduced as PEP-574 for Python 3.8.
Update: I was wrong to assume this was not possible. In fact, based on the excellent "monkey-patch" suggestion of @PiotrJurkiewicz, here is a simple context manager that lets us temporarily change the highest pickle protocol. It: Hides the monkey-patching, and Has no side-effect outside of the context; it can be used at any time, whether pickle was previously imported or not, before or after pandas, no matter. Here is the code (e.g. in a file pickle_prot.py): import importlib import pickle class PickleProtocol: def __init__(self, level): self.previous = pickle.HIGHEST_PROTOCOL self.level = level def __enter__(self): importlib.reload(pickle) pickle.HIGHEST_PROTOCOL = self.level def __exit__(self, *exc): importlib.reload(pickle) pickle.HIGHEST_PROTOCOL = self.previous def pickle_protocol(level): return PickleProtocol(level) Usage example in a writer: import pandas as pd from pickle_prot import pickle_protocol pd.DataFrame(['hello', 'world']).to_hdf('foo_0.h5', 'x') with pickle_protocol(4): pd.DataFrame(['hello', 'world']).to_hdf('foo_1.h5', 'x') pd.DataFrame(['hello', 'world']).to_hdf('foo_2.h5', 'x') And, using a simple test reader: import pandas as pd from glob import glob for filename in sorted(glob('foo_*.h5')): try: df = pd.read_hdf(filename, 'x') print(f'could read {filename}') except Exception as e: print(f'failed on {filename}: {e}') Now, trying to read in py37 after having written in py38, we get: failed on foo_0.h5: unsupported pickle protocol: 5 could read foo_1.h5 failed on foo_2.h5: unsupported pickle protocol: 5 But, using the same version (37 or 38) to read and write, we of course get no exception. Note: the issue 33087 is still on Pandas issue tracker.
8
7
60,088,889
2020-2-6
https://stackoverflow.com/questions/60088889/how-do-you-permanently-delete-an-experiment-in-mlflow
Permanent deletion of an experiment isn't documented anywhere. I'm using MLflow with a Postgres backend DB. Here's what I've run: client = MlflowClient(tracking_uri=server) client.delete_experiment(1) This deletes the experiment, but when I run a new experiment with the same name as the one I just deleted, it will return this error: mlflow.exceptions.MlflowException: Cannot set a deleted experiment 'cross-sell' as the active experiment. You can restore the experiment, or permanently delete the experiment to create a new one. I cannot find anything in the documentation that shows how to permanently delete everything.
Unfortunately it seems there is no way to do this via the UI or CLI at the moment :-/ The way to do it depends on the type of backend file store that you are using. Filestore: If you are using the filesystem as a storage mechanism (the default) then it is easy. The 'deleted' experiments are moved to a .trash folder. You just need to clear that out: rm -rf mlruns/.trash/* As of the current version of the documentation (1.7.2), they remark: It is recommended to use a cron job or an alternate workflow mechanism to clear .trash folder. SQL Database: This is more tricky, as there are dependencies that need to be deleted. I am using MySQL, and these commands work for me: USE mlflow_db; # the name of your database DELETE FROM experiment_tags WHERE experiment_id=ANY( SELECT experiment_id FROM experiments where lifecycle_stage="deleted" ); DELETE FROM latest_metrics WHERE run_uuid=ANY( SELECT run_uuid FROM runs WHERE experiment_id=ANY( SELECT experiment_id FROM experiments where lifecycle_stage="deleted" ) ); DELETE FROM metrics WHERE run_uuid=ANY( SELECT run_uuid FROM runs WHERE experiment_id=ANY( SELECT experiment_id FROM experiments where lifecycle_stage="deleted" ) ); DELETE FROM tags WHERE run_uuid=ANY( SELECT run_uuid FROM runs WHERE experiment_id=ANY( SELECT experiment_id FROM experiments where lifecycle_stage="deleted" ) ); DELETE FROM runs WHERE experiment_id=ANY( SELECT experiment_id FROM experiments where lifecycle_stage="deleted" ); DELETE FROM experiments where lifecycle_stage="deleted";
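Not part of the original answer, but a hedged sketch (API names as of MLflow 1.x, so verify against your installed version): before running the SQL cleanup you can list from Python which experiments are actually in the "deleted" lifecycle stage, so you know exactly what the purge will touch. from mlflow.tracking import MlflowClient from mlflow.entities import ViewType client = MlflowClient(tracking_uri=server) # same tracking URI as in the question # Show only soft-deleted experiments for exp in client.list_experiments(view_type=ViewType.DELETED_ONLY): print(exp.experiment_id, exp.name, exp.lifecycle_stage)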
30
30
60,134,947
2020-2-9
https://stackoverflow.com/questions/60134947/why-couldnt-i-download-images-from-google-with-python
The code helped me download bunch of images from google. It used to work a few days back and now all of the sudden the code breaks. Code : # importing google_images_download module from google_images_download import google_images_download # creating object response = google_images_download.googleimagesdownload() search_queries = ['Apple', 'Orange', 'Grapes', 'water melon'] def downloadimages(query): # keywords is the search query # format is the image file format # limit is the number of images to be downloaded # print urs is to print the image file url # size is the image size which can # be specified manually ("large, medium, icon") # aspect ratio denotes the height width ratio # of images to download. ("tall, square, wide, panoramic") arguments = {"keywords": query, "format": "jpg", "limit":4, "print_urls":True, "size": "medium", "aspect_ratio": "panoramic"} try: response.download(arguments) # Handling File NotFound Error except FileNotFoundError: arguments = {"keywords": query, "format": "jpg", "limit":4, "print_urls":True, "size": "medium"} # Providing arguments for the searched query try: # Downloading the photos based # on the given arguments response.download(arguments) except: pass # Driver Code for query in search_queries: downloadimages(query) print() Output log: Item no.: 1 --> Item name = Apple Evaluating... Starting Download... Unfortunately all 4 could not be downloaded because some images were not downloadable. 0 is all we got for this search filter! Errors: 0 Item no.: 1 --> Item name = Orange Evaluating... Starting Download... Unfortunately all 4 could not be downloaded because some images were not downloadable. 0 is all we got for this search filter! Errors: 0 Item no.: 1 --> Item name = Grapes Evaluating... Starting Download... Unfortunately all 4 could not be downloaded because some images were not downloadable. 0 is all we got for this search filter! Errors: 0 Item no.: 1 --> Item name = water melon Evaluating... Starting Download... Unfortunately all 4 could not be downloaded because some images were not downloadable. 0 is all we got for this search filter! Errors: 0 This actually create a folder but no images in it.
Indeed, the issue appeared not long ago, and there are already a bunch of similar GitHub issues: https://github.com/hardikvasa/google-images-download/pull/298 https://github.com/hardikvasa/google-images-download/issues/301 https://github.com/hardikvasa/google-images-download/issues/302 Unfortunately, there is no official solution for now; you could use the temporary workaround that was provided in those discussions.
18
2
60,157,335
2020-2-10
https://stackoverflow.com/questions/60157335/cant-pip-install-tensorflow-msvcp140-1-dll-missing
I am currently trying to pip install tensorflow, which works but after I install it, and then import it into my python module via import tensorflow as tf I get following error message: ImportError: Could not find the DLL(s) 'msvcp140_1.dll'. TensorFlow requires that these DLLs be installed in a directory that is named in your %PATH% environment variable. You may install these DLLs by downloading "Microsoft C++ Redistributable for Visual Studio 2015, 2017 and 2019" for your platform from this URL: https://support.microsoft.com/help/2977003/the-latest-supported-visual-c-downloads I installed the msvcp140_1.dll and put it into C:\Users\User\AppData\Local\Programs\Python\Python37 which is contained in my path environment variable. As you can see I am using Python 3.7 as 3.8 is not supported by tensorflow. Any ideas how to fix this?
You can find msvcp140.dll in your %windir%\System32 folder once you have installed the VC++ redistributable for VS 2015. For msvcp140_1.dll you need to go to this page https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads and, in the section "Visual Studio 2015, 2017 and 2019", pick the package that matches the architecture of your PC.
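As a side check (my own addition, not from the original answer): once the redistributable is installed, you can verify from Python that the runtime DLLs are actually resolvable before importing tensorflow: import ctypes # If either call raises OSError, the VC++ 2015-2019 redistributable is still # missing or not visible on PATH for this interpreter. for dll in ("msvcp140.dll", "msvcp140_1.dll"): try: ctypes.WinDLL(dll) print(dll, "loaded OK") except OSError as exc: print(dll, "NOT found:", exc)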
14
9
60,119,934
2020-2-7
https://stackoverflow.com/questions/60119934/how-to-read-from-a-high-io-dataset-in-pytorch-which-grows-from-epoch-to-epoch
I use Tensorflow, but I'm writing documentation for users that will typically vary across deep learning frameworks. When working with datasets that don't fit on the local filesystem (TB+) I sample data from a remote data store and write samples locally to a Tensorflow standardtfrecords format. During the first epoch of training I will have only sampled a few values, therefore an epoch of local data is very small, I train on it. On epoch 2 I re-examine what data files have been produced by my sampling subprocesses (now more) and train on the expanded set of local data files for the next epoch. Repeat the process each epoch. In this way I build up a local cache of samples and can evict older samples as I fill up the local storage. The local samples cache grows at about the time the model needs the variance the most (towards the latter part of training). In Python/Tensorflow it's crucial that I not deserialize the data in the Python training loop process because the Python GIL can't support the data transfer rates (300-600 MB/sec, the data is raw scientific uncompressible), and thus GPU performance suffers when the Python GIL can't service the training loop fast. Writing the samples to a tfrecords file from subprocesses (python multiprocessing) allows tensorflow's native TFRecordsDataset to do deserialization outside of Python and thus we sidestep the Python GIL issues, and I can saturate a GPU with high IO data rates. I would like to know how I would address this issue in Pytorch. I'm writing about the sampling strategy that's being used, and want to provide specific recommendations to users of both Tensorflow and PyTorch, but I don't know the PyTorch preprocessing ecosystem well enough to write with sufficient detail. Side note: the only purely Python based solution to support these data transfer rates may come in Python 3.8 with System V shared memory and multiprocessing, but I haven't tried that yet as support for it isn't quite sufficient (soon it will be). Existing multiprocessing solutions aren't sufficient because they require deserialization in the training loop process and thus lock the GIL during deserialization at high IO rates.
Actually, you can easily deserialize data in a subprocess by using torch.utils.data.DataLoader. By setting num_workers argument to 1 or a bigger value, you can spawn subprocesses with their own python interpreters and GILs. loader = torch.utils.data.DataLoader(your_dataset, num_workers=n, **kwargs) for epoch in range(epochs): for batch_idx, data in enumerate(loader): # loader in the main process does not claim GIL at this point A Dataloader requires a torch.utils.data.Dataset to get data from. It may not be a trivial job to implement a proper subclass in your case. In case you need to recreate a Dataset instance for every epoch, you can do something like this. for epcoh in range(epochs): dset = get_new_dataset() loader = torch.utils.data.DataLoader(dset, num_workers=n, **kwargs) for batch_idx, data in enumerate(loader): # Do training or even better dset = get_new_dataset() loader = torch.utils.data.DataLoader(dset, num_workers=n, **kwargs) for epcoh in range(epochs): last_batch_idx = (len(dset)-1) // loader.batch_size for batch_idx, data in enumerate(loader): # Prepare next loader in advance to avoid blocking if batch_idx == last_batch_idx: dset = get_new_dataset() loader = torch.utils.data.DataLoader(dset, num_workers=n, **kwargs) # Do training As a side note, please note that it's CPU bound operation that is affected by GIL in most cases, not I/O bound operation, i.e., threading will do for any purely I/O heavy operation and you don't even need subprocess. For more information please refer to this question and this wikipedia article.
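To tie this back to the growing-cache scenario in the question, here is an additional hedged sketch (my own, not from the answer above; the file layout, glob pattern, and fixed-size float32 record format are assumptions) using torch.utils.data.IterableDataset, which lets each worker process stream and deserialize whatever sample files exist at the start of the epoch: import glob import numpy as np import torch from torch.utils.data import DataLoader, IterableDataset, get_worker_info class GrowingFileDataset(IterableDataset): """Streams samples from whatever cache files exist when the epoch starts.""" def __init__(self, pattern, record_len=128): self.pattern = pattern # e.g. "/local_cache/*.bin" (assumed layout) self.record_len = record_len # assumed fixed-size float32 records def __iter__(self): files = sorted(glob.glob(self.pattern)) # snapshot of the growing cache info = get_worker_info() if info is not None: # shard files across worker processes files = files[info.id::info.num_workers] for path in files: data = np.fromfile(path, dtype=np.float32) for row in data.reshape(-1, self.record_len): yield torch.from_numpy(row.copy()) # deserialization runs in the worker, not the training loop # A fresh loader each epoch picks up any files written since the last epoch for epoch in range(10): loader = DataLoader(GrowingFileDataset("/local_cache/*.bin"), batch_size=32, num_workers=4) for batch in loader: pass # training / inference step here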
9
10
60,145,006
2020-2-10
https://stackoverflow.com/questions/60145006/cannot-import-name-easter-from-holidays
I am trying to import fbprophet on Python Anaconda, however, I get this error: ImportError: cannot import name 'easter' from 'holidays' Can anyone suggest what might have gone wrong? Code: from fbprophet import fbprophet
I'm using anaconda, and the only solution that worked for me was: Replace line 16 in fbprophet/hdays.py (\AppData\Local\Continuum\anaconda3\Lib\site-packages\fbprophet\hdays.py): from holidays import WEEKEND, HolidayBase, easter, rd to from holidays import WEEKEND, HolidayBase from dateutil.easter import easter from dateutil.relativedelta import relativedelta as rd
9
13
60,178,119
2020-2-11
https://stackoverflow.com/questions/60178119/signaturedoesnotmatch-boto3-django-storages
I have the following config: Django/DRF Boto3 Django-storages I am using an IAM user credentials with one set of keys. I have removed all other sets of keys including root keys from my account, to eliminate keys mismatch. I created a new bucket my-prod-bucket. Updated the bucket name settings in my env file. I ran python3 manage.py collectstatic and it created the new bucket without a problem. my .env: AWS_ACCESS_KEY_ID=something AWS_SECRET_ACCESS_KEY=something AWS_STORAGE_BUCKET_NAME=my-prod-bucket my settings.py (using python-decouple to grab from .env): AWS_ACCESS_KEY_ID = config('AWS_ACCESS_KEY_ID') AWS_SECRET_ACCESS_KEY = config('AWS_SECRET_ACCESS_KEY') AWS_STORAGE_BUCKET_NAME = config('AWS_STORAGE_BUCKET_NAME') AWS_S3_CUSTOM_DOMAIN = '%s.s3.ca-central-1.amazonaws.com' % AWS_STORAGE_BUCKET_NAME AWS_S3_REGION_NAME = 'ca-central-1' AWS_HEADERS = { 'CacheControl': 'max-age=86400', } AWS_STATIC_LOCATION = 'static' STATIC_URL = 'https://%s/%s/' % (AWS_S3_CUSTOM_DOMAIN, AWS_STATIC_LOCATION) STATICFILES_STORAGE = 'portal.storage_backends.StaticStorage' # ======= AWS_DEFAULT_ACL = None AWS_AUTO_CREATE_BUCKET = True S3_USE_SIGV4 = True I can upload and delete however when I try to download a file I get: <Error> <Code>SignatureDoesNotMatch</Code> <Message> The request signature we calculated does not match the signature you provided. Check your key and signing method. </Message> <AWSAccessKeyId>AKIA6FUWELHP36HW6QOT</AWSAccessKeyId> <StringToSign> AWS4-HMAC-SHA256 20200211T215631Z 20200211/ca-central-1/s3/aws4_request 703b799a80d9efd9f9e06a01ab30a8a721f2a9bafe6a3d5c92b045ea769b0d87 </StringToSign> <SignatureProvided> 46bd882624f966d9cb8914d279f7c8f91a2b3e5e577525c13069e29f8891c1ee </SignatureProvided> <StringToSignBytes> 41 57 53 34 2d 48 4d 41 43 2d 53 48 41 32 35 36 0a 32 30 32 30 30 32 31 31 54 32 31 35 36 33 31 5a 0a 32 30 32 30 30 32 31 31 2f 63 61 2d 63 65 6e 74 </StringToSignBytes> <CanonicalRequest> GET /media/private/cities/20/2017/london_2016.csv X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA6FUWELHP36%2F20200211%2Fca-central-1%2Fs3%2Faws4_request&X-Amz-Date=20200211T215631Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host host:my-prod-bucket.s3.ca-central-1.amazonaws.com host UNSIGNED-PAYLOAD </CanonicalRequest> <CanonicalRequestBytes> 47 45 54 0a 2f 6d 65 64 69 61 2f 70 72 69 76 61 74 65 2f 63 69 74 69 65 73 2f 32 30 2f 32 30 31 37 2f 45 43 35 2e 31 2f 6c 6f 6e 64 6f 6e 5f 32 30 31 36 2e 63 73 76 0a 58 2d 41 6d 7a 2d 41 6c 67 6f 72 69 74 68 6d 3d 41 57 53 </CanonicalRequestBytes> <RequestId>6A85C2780914C0F5</RequestId> <HostId> WtPC4cEV60ybq2pEdfghdfg23tg123lVV6l/iHiaSAjL4DS0= </HostId> </Error> I'm not sure what I'm doing wrong. I searched every post on this error but couldn't find anything recent that fits my scenario. Any ideas on how to troubleshoot will be greatly appreciated.
Ok, so after spending nearly 2 days trying to make sense of this, this is what I came up with: The problem in my case was that the bucket was created in the ca-central-1 region. Once I changed the request to a bucket in us-east-1, everything immediately worked fine without that error. Everything on my end was set up correctly. Then, the next day, I tried to connect to that same ca-central-1 bucket again and this time it worked. No signature mismatch error. At this point I'm thinking maybe there's a 'time delay' on AWS S3 when creating buckets in some regions until they function properly. To test my theory, I created a new bucket in ca-central-1 and tried to connect to it. Again, same error as above for the new bucket. I waited till the next day, tried again - and everything was working fine. Keep this 'time delay' (for lack of a better explanation) in mind if you ever encounter the same issue.
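Not part of the accepted answer, but a small hedged debugging sketch I would add for this class of error (the bucket and key names below are placeholders): generating a presigned URL with the region and signature version pinned explicitly lets you rule out client-side signing configuration before blaming the bucket. import boto3 import requests from botocore.client import Config s3 = boto3.client( "s3", region_name="ca-central-1", # must match the bucket's region config=Config(signature_version="s3v4"), # SigV4 is required in newer regions ) url = s3.generate_presigned_url( "get_object", Params={"Bucket": "my-prod-bucket", "Key": "media/private/example.csv"}, # placeholder key ExpiresIn=3600, ) print(url) print(requests.get(url).status_code) # 403 with SignatureDoesNotMatch means the signing config is still off If this direct boto3 test works while django-storages still fails, the corresponding setting to check is, to the best of my knowledge, AWS_S3_SIGNATURE_VERSION = 's3v4'.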
9
14
60,111,361
2020-2-7
https://stackoverflow.com/questions/60111361/how-to-download-a-file-from-google-drive-using-python-and-the-drive-api-v3
I have tried downloading file from Google Drive to my local system using python script but facing a "forbidden" issue while running a Python script. The script is as follows: import requests url = "https://www.googleapis.com/drive/v3/files/1wPxpQwvEEOu9whmVVJA9PzGPM2XvZvhj?alt=media&export=download" querystring = {"alt":"media","export":"download"} headers = { 'Authorization': "Bearer TOKEN", 'Host': "www.googleapis.com", 'Accept-Encoding': "gzip, deflate", 'Connection': "keep-alive", } response = requests.request("GET", url, headers=headers, params=querystring) print(response.url) # import wget import os from os.path import expanduser myhome = expanduser("/home/sunarcgautam/Music") ### set working dir os.chdir(myhome) url = "https://www.googleapis.com/drive/v3/files/1wPxpQwvEEOu9whmVVJA9PzGPM2XvZvhj?alt=media&export=download" print('downloading ...') wget.download(response.url) In this script, I have got forbidden issue. Am I doing anything wrong in the script? I have also tried another script that I found on a Google Developer page, which is as follows: import auth import httplib2 SCOPES = "https://www.googleapis.com/auth/drive.scripts" CLIENT_SECRET_FILE = "client_secret.json" APPLICATION_NAME = "test_Download" authInst = auth.auth(SCOPES, CLIENT_SECRET_FILE, APPLICATION_NAME) credentials = authInst.getCredentials() http = credentials.authorize(httplib2.Http()) drive_serivce = discovery.build('drive', 'v3', http=http) file_id = '1Af6vN0uXj8_qgqac6f23QSAiKYCTu9cA' request = drive_serivce.files().export_media(fileId=file_id, mimeType='application/pdf') fh = io.BytesIO() downloader = MediaIoBaseDownload(fh, request) done = False while done is False: status, done = downloader.next_chunk() print ("Download %d%%." % int(status.progress() * 100)) This script gives me a URL mismatch error. So what should be given for redirect URL in Google console credentials? or any other solution for the issue? Do I have to authorise my Google console app from Google in both the script? If so, what will the process of authorising the app because I haven't found any document regarding that.
To make requests to Google APIs the work flow is in essence the following: Go to developer console, log in if you haven't. Create a Cloud Platform project. Enable for your project, the APIs you are interested in using with you projects' apps (for example: Google Drive API). Create and download OAuth 2.0 Client IDs credentials that will allow your app to gain authorization for using your enabled APIs. Head over to OAuth consent screen, click on and add your scope using the button. (scope: https://www.googleapis.com/auth/drive.readonly for you). Choose Internal/External according to your needs, and for now ignore the warnings if any. To get the valid token for making API request the app will go through the OAuth flow to receive the authorization token. (Since it needs consent) During the OAuth flow the user will be redirected to your the OAuth consent screen, where it will be asked to approve or deny access to your app's requested scopes. If consent is given, your app will receive an authorization token. Pass the token in your request to your authorized API endpoints.[2] Build a Drive Service to make API requests (You will need the valid token)[1] NOTE: The available methods for the Files resource for Drive API v3 are here. When using the Python Google APIs Client, then you can use export_media() or get_media() as per Google APIs Client for Python documentation IMPORTANT: Also, check that the scope you are using, actually allows you to do what you want (Downloading Files from user's Drive) and set it accordingly. ATM you have an incorrect scope for your goal. See OAuth 2.0 API Scopes Sample Code References: Building a Drive Service: import google_auth_oauthlib.flow from google.auth.transport.requests import Request from google_auth_oauthlib.flow import InstalledAppFlow from googleapiclient.discovery import build class Auth: def __init__(self, client_secret_filename, scopes): self.client_secret = client_secret_filename self.scopes = scopes self.flow = google_auth_oauthlib.flow.Flow.from_client_secrets_file(self.client_secret, self.scopes) self.flow.redirect_uri = 'http://localhost:8080/' self.creds = None def get_credentials(self): flow = InstalledAppFlow.from_client_secrets_file(self.client_secret, self.scopes) self.creds = flow.run_local_server(port=8080) return self.creds # The scope you app will use. # (NEEDS to be among the enabled in your OAuth consent screen) SCOPES = "https://www.googleapis.com/auth/drive.readonly" CLIENT_SECRET_FILE = "credentials.json" credentials = Auth(client_secret_filename=CLIENT_SECRET_FILE, scopes=SCOPES).get_credentials() drive_service = build('drive', 'v3', credentials=credentials) Making the request to export or get a file request = drive_service.files().export(fileId=file_id, mimeType='application/pdf') fh = io.BytesIO() downloader = MediaIoBaseDownload(fh, request) done = False while done is False: status, done = downloader.next_chunk() print("Download %d%%" % int(status.progress() * 100)) # The file has been downloaded into RAM, now save it in a file fh.seek(0) with open('your_filename.pdf', 'wb') as f: shutil.copyfileobj(fh, f, length=131072)
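The answer mentions get_media() as an alternative to export_media(); here is a short hedged sketch (my own addition, reusing the drive_service built above, with a placeholder file id) for downloading a regular binary file, i.e. one that is not in a Google Docs format and therefore cannot be exported: import io import shutil from googleapiclient.http import MediaIoBaseDownload file_id = "YOUR_FILE_ID" # placeholder request = drive_service.files().get_media(fileId=file_id) fh = io.BytesIO() downloader = MediaIoBaseDownload(fh, request) done = False while not done: status, done = downloader.next_chunk() print("Download %d%%" % int(status.progress() * 100)) fh.seek(0) with open("downloaded_file", "wb") as f: shutil.copyfileobj(fh, f)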
9
26
60,172,458
2020-2-11
https://stackoverflow.com/questions/60172458/sklearn-cross-val-score-returns-nan-values
I'm trying to predict the next customer purchase for my job. I followed a guide, but when I tried to use the cross_val_score() function, it returns NaN values. Google Colab notebook screenshot Variables: X_train is a dataframe X_test is a dataframe y_train is a list y_test is a list Code: X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=50) X_train = X_train.reset_index(drop=True) X_train X_test = X_test.reset_index(drop=True) y_train = y_train.astype('float') y_test = y_test.astype('float') models = [] models.append(("LR",LogisticRegression())) models.append(("NB",GaussianNB())) models.append(("RF",RandomForestClassifier())) models.append(("SVC",SVC())) models.append(("Dtree",DecisionTreeClassifier())) models.append(("XGB",xgb.XGBClassifier())) models.append(("KNN",KNeighborsClassifier())) for name,model in models: kfold = KFold(n_splits=2, random_state=22) cv_result = cross_val_score(model,X_train,y_train, cv = kfold,scoring = "accuracy") print(name, cv_result) >> LR [nan nan] NB [nan nan] RF [nan nan] SVC [nan nan] Dtree [nan nan] XGB [nan nan] KNN [nan nan] Help me please!
Well, thanks everyone for your answers. Anna's answer helped me a lot! But I didn't use X_train.values; instead I assigned a unique ID to each customer, then dropped the Customers column, and it works! Now the models have this output :) LR [0.73958333 0.74736842] NB [0.60416667 0.71578947] RF [0.80208333 0.82105263] SVC [0.79166667 0.77894737] Dtree [0.82291667 0.83157895] XGB [0.85416667 0.85263158] KNN [0.79166667 0.75789474]
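As a general debugging aid (my addition, not from the thread, reusing X_train/y_train from the question): cross_val_score swallows per-fold exceptions and reports them as NaN by default, so passing error_score='raise' makes the real failure — e.g. a non-numeric Customers column — visible immediately: from sklearn.model_selection import KFold, cross_val_score from sklearn.linear_model import LogisticRegression kfold = KFold(n_splits=2, shuffle=True, random_state=22) cv_result = cross_val_score( LogisticRegression(), X_train, y_train, cv=kfold, scoring="accuracy", error_score="raise", # re-raise the underlying error instead of returning NaN )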
11
1
60,147,431
2020-2-10
https://stackoverflow.com/questions/60147431/how-to-put-a-label-on-a-country-with-python-cartopy
Using python3 and cartopy, having this code: import matplotlib.pyplot as plt import cartopy import cartopy.io.shapereader as shpreader import cartopy.crs as ccrs ax = plt.axes(projection=ccrs.PlateCarree()) ax.add_feature(cartopy.feature.LAND) ax.add_feature(cartopy.feature.OCEAN) ax.add_feature(cartopy.feature.COASTLINE) ax.add_feature(cartopy.feature.BORDERS, linestyle='-', alpha=.5) ax.add_feature(cartopy.feature.LAKES, alpha=0.95) ax.add_feature(cartopy.feature.RIVERS) ax.set_extent([-150, 60, -25, 60]) shpfilename = shpreader.natural_earth(resolution='110m', category='cultural', name='admin_0_countries') reader = shpreader.Reader(shpfilename) countries = reader.records() for country in countries: if country.attributes['SOVEREIGNT'] == "Bulgaria": ax.add_geometries(country.geometry, ccrs.PlateCarree(), facecolor=(0, 1, 0), label = "A") else: ax.add_geometries(country.geometry, ccrs.PlateCarree(), facecolor=(1, 1, 1), label = country.attributes['SOVEREIGNT']) plt.rcParams["figure.figsize"] = (50,50) plt.show() I get this: Question: What should I write, in order to get a red "A" over Bulgaria (or any other country, which I refer to in country.attributes['SOVEREIGNT'])? Currently the label is not shown at all and I am not sure how to change the font of the label. Thus, it seems that the following only changes the color, without adding the label: ax.add_geometries(country.geometry, ccrs.PlateCarree(), facecolor=(0, 1, 0), label = "A")
You can retrieve the centroid of the geometry and plot the text at that location: import matplotlib.patheffects as PathEffects for country in countries: if country.attributes['SOVEREIGNT'] == "Bulgaria": g = ax.add_geometries(country.geometry, ccrs.PlateCarree(), facecolor=(0, 1, 0), label="A") x = country.geometry.centroid.x y = country.geometry.centroid.y ax.text(x, y, 'A', color='red', size=15, ha='center', va='center', transform=ccrs.PlateCarree(), path_effects=[PathEffects.withStroke(linewidth=5, foreground="k", alpha=.8)]) else: ax.add_geometries(country.geometry, ccrs.PlateCarree(), facecolor=(1, 1, 1), label = country.attributes['SOVEREIGNT']) With the extent focused on "Bulgaria" it looks like: edit: To get "dependencies" separate, consider using the admin_0_map_units instead of admin_0_map_countries, see the Natural Earth documentation . To highlight small countries/regions you could add a buffer to the geometry with something like: highlight = ['Singapore', 'Liechtenstein'] for country in countries: if country.attributes['NAME'] in highlight: if country.geometry.area < 2: geom = [country.geometry.buffer(2)] else: geom = [country.geometry] g = ax.add_geometries(geom, ccrs.PlateCarree(), facecolor=(0, 0.5, 0, 0.6), label="A", zorder=99) x = country.geometry.centroid.x y = country.geometry.centroid.y ax.text(x, y+5, country.attributes['NAME'], color='red', size=14, ha='center', va='center', transform=ccrs.PlateCarree(), path_effects=[PathEffects.withStroke(linewidth=3, foreground="k", alpha=.8)]) else: ax.add_geometries(country.geometry, ccrs.PlateCarree(), facecolor=(1, 1, 1), label=country.attributes['NAME']) You could split a specific country with something like this, It uses Shapely to perform an intersection at the middle of the geometry. Ultimately it might be "cleaner" to separate the plotting and spatial analysis (splitting etc) in to more distinct steps. Mixing it like this probably makes it harder to re-use the code for other cases. from shapely.geometry import LineString, MultiLineString for country in countries: if country.attributes['NAME'] in 'China': # line at the centroid y-coord of the country l = LineString([(-180, country.geometry.centroid.y), (180, country.geometry.centroid.y)]) north_poly = MultiLineString([l, north_line]).convex_hull south_poly = MultiLineString([l, south_line]).convex_hull g = ax.add_geometries([country.geometry.intersection(north_poly)], ccrs.PlateCarree(), facecolor=(0.8, 0.0, 0.0, 0.4), zorder=99) g = ax.add_geometries([country.geometry.intersection(south_poly)], ccrs.PlateCarree(), facecolor=(0.0, 0.0, 0.8, 0.4), zorder=99) x = country.geometry.centroid.x y = country.geometry.centroid.y ax.text(x, y, country.attributes['NAME'], color='k', size=16, ha='center', va='center', transform=ccrs.PlateCarree(), path_effects=[PathEffects.withStroke(linewidth=5, foreground="w", alpha=1)], zorder=100) else: ax.add_geometries(country.geometry, ccrs.PlateCarree(), facecolor=(1, 1, 1), label=country.attributes['NAME'])
10
13
60,149,801
2020-2-10
https://stackoverflow.com/questions/60149801/import-error-importerror-cannot-import-name-abc-from-bson-py3compat
How can I solve this error? It is raised while running the program. from bson import ObjectId class JSONEncoder(json.JSONEncoder): def default(self, o): if isinstance(o, ObjectId): return str(o) return json.JSONEncoder.default(self, o)
That's most likely due to version mismatches. This worked for me: pip uninstall bson pip uninstall pymongo pip install pymongo
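A quick hedged check (my addition) to confirm afterwards that bson is now the copy bundled with pymongo rather than the standalone package: import bson import pymongo print(pymongo.version) # e.g. 3.x print(bson.__file__) # should point inside the pymongo installation's bson package from bson import ObjectId # the import from the question should now succeed print(ObjectId())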
21
77
60,073,711
2020-2-5
https://stackoverflow.com/questions/60073711/how-to-build-c-extensions-via-poetry
To build a Python project managed with poetry I need to build C extensions first (an equivalent of python setup.py build). poetry is able to do this according to this GitHub issue. But it's not clear to me what to include in pyproject.toml so that the C extension build is executed when building with poetry build.
Add build.py to the repo root. E.g. if one has one header file directory and 2 source files: from distutils.command.build_ext import build_ext from distutils.core import Extension from distutils.errors import CCompilerError, DistutilsExecError, DistutilsPlatformError ext_modules = [ Extension("<module-path-imported-into-python>", include_dirs=["<header-file-directory>"], sources=["<source-file-0>", "<source-file-1>"], ), ] class BuildFailed(Exception): pass class ExtBuilder(build_ext): def run(self): try: build_ext.run(self) except (DistutilsPlatformError, FileNotFoundError): raise BuildFailed('File not found. Could not compile C extension.') def build_extension(self, ext): try: build_ext.build_extension(self, ext) except (CCompilerError, DistutilsExecError, DistutilsPlatformError, ValueError): raise BuildFailed('Could not compile C extension.') def build(setup_kwargs): """ This function is mandatory in order to build the extensions. """ setup_kwargs.update( {"ext_modules": ext_modules, "cmdclass": {"build_ext": ExtBuilder}} ) Add to pyproject.toml: [tool.poetry] build = "build.py" To build the extension execute poetry build. For an example refer to this PR.
13
8
60,158,357
2020-2-10
https://stackoverflow.com/questions/60158357/why-do-i-get-attributeerror-fields-set-when-subclassing-a-pydantic-basemo
I have this project where my base class and my sub-classes implement pydantic.BaseModel: from pydantic import BaseModel from typing import List from dataclasses import dataclass @dataclass class User(BaseModel): id: int @dataclass class FavoriteCar(User): car_names: List[str] car = FavoriteCar(id=1, car_names=["Acura"]) print(f"{car.id} {car.car_names[0]}") But this error appears: self.__fields_set__.add(name) E AttributeError: __fields_set__ Does someone mind explaining what is going on? The reason why I want to use pydantic is because I need a way to quickly convert Python objects to dict (or JSON) and back.
You need to decide whether to inherit from pydantic.BaseModel, or whether to use the @dataclass decorator (either from dataclasses, or from pydantic.dataclasses). Either is fine, but you cannot use both, according to the documentation (bold face added by myself): If you don't want to use pydantic's BaseModel you can instead get the same data validation on standard dataclasses
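For instance, here is a minimal sketch of the asker's classes using plain BaseModel inheritance (no @dataclass decorator), which still gives the quick object-to-dict conversion they want:
from typing import List
from pydantic import BaseModel

class User(BaseModel):
    id: int

class FavoriteCar(User):
    car_names: List[str]

car = FavoriteCar(id=1, car_names=["Acura"])
print(f"{car.id} {car.car_names[0]}")  # 1 Acura
print(car.dict())                      # {'id': 1, 'car_names': ['Acura']}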
43
39
60,158,618
2020-2-10
https://stackoverflow.com/questions/60158618/plotly-how-to-add-elements-to-hover-data-using-plotly-express-piechart
I am playing with examples from plotly.express piechart help page and trying to add an extra element iso_num to the hover_data property (iso_num is an int64 column in the gapminder dataframe) import plotly.express as px df = px.data.gapminder().query("year == 2007").query("continent == 'Americas'") fig = px.pie(df, values='pop', names='country', title='Population of American continent', hover_data=['lifeExp','iso_num'], labels={'lifeExp':'life expectancy','iso_num':'iso num' }) fig.update_traces(textposition='inside', textinfo='percent+label') fig.show() Hovering over the slice of the pie chart then gives this: where iso num value is %{customdata[1]} instead of the numeric value from the column. What am I missing? Thanks!
This seems to be a relic from back when it was stated that Oh pie hover is a big mess Which since seems to be have been resolved. But perhaps not for px.pie()? I've tried numerous approaches, but I'm only able to get the customdata + hovertemplate approach to work for go.Pie and not for px.Pie. Here's a demonstration on how assigning values to customdata will make any variable otherwise not assigned to go.Pie() available for a custom hovertamplate: Plot: Code: import plotly.graph_objects as go import plotly.express as px df = px.data.gapminder().query("year == 2007").query("continent == 'Americas'") fig = go.Figure(go.Pie( name = "", values = df['pop'], labels = df['country'], customdata=df['iso_num'], hovertemplate = "Country:%{label}: <br>Population: %{value} </br> iso num:%{customdata}" )) fig.show()
9
6
60,069,977
2020-2-5
https://stackoverflow.com/questions/60069977/sharing-gpu-memory-between-process-on-a-same-gpu-with-pytorch
I'm trying to implement an efficient way of doing concurrent inference in Pytorch. Right now, I start 2 processes on my GPU (I have only 1 GPU, both process are on the same device). Each process load my Pytorch model and do the inference step. My problem is that my model takes quite some space on the memory. I have 12Gb of memory on the GPU, and the model takes ~3Gb of memory alone (without the data). Which means together, my 2 processes takes 6Gb of memory just for the model. Now I was wondering if it's possible to load the model only once, and use this model for inference on 2 different processes. What I want is only 3Gb of memory is consumed by the model, but still have 2 processes. I came accross this answer mentioning IPC, but as far as I understood it means the process #2 will copy the model from process #1, so I will still end up with 6Gb allocated for the model. I also checked on the Pytorch documentation, about DataParallel and DistributedDataParallel, but it seems not possible. This seems to be what I want, but I couldn't find any code example on how to use with Pytorch in inference mode. I understand this might be difficult to do such a thing for training, but please note I'm only talking about the inference step (the model is in read-only mode, no need to update gradients). With this assumption, I'm not sure if it's possible or not.
The GPU itself has many threads. When performing an array/tensor operation, it uses each thread on one or more cells of the array. This is why it seems that an op that can fully utilize the GPU should scale efficiently without multiple processes -- a single GPU kernel is already massively parallelized. In a comment you mentioned seeing better results with multiple processes in a small benchmark. I'd suggest running the benchmark with more jobs to ensure warmup, ten kernels seems like too small of a test. If you're finding a thorough representative benchmark to run faster consistently though, I'll trust good benchmarks over my intuition. My understanding is that kernels launched on the default CUDA stream get executed sequentially. If you want them to run in parallel, I think you'd need multiple streams. Looking in the PyTorch code, I see code like getCurrentCUDAStream() in the kernels, which makes me think the GPU will still run any PyTorch code from all processes sequentially. This NVIDIA discussion suggests this is correct: https://devtalk.nvidia.com/default/topic/1028054/how-to-launch-cuda-kernel-in-different-processes/ Newer GPUs may be able to run multiple kernels in parallel (using MPI?) but it seems like this is just implemented with time slicing under the hood anyway, so I'm not sure we should expect higher total throughput: How do I use Nvidia Multi-process Service (MPS) to run multiple non-MPI CUDA applications? If you do need to share memory from one model across two parallel inference calls, can you just use multiple threads instead of processes, and refer to the same model from both threads? To actually get the GPU to run multiple kernels in parallel, you may be able to use nn.Parallel in PyTorch. See the discussion here: https://discuss.pytorch.org/t/how-can-l-run-two-blocks-in-parallel/61618/3
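As a rough sketch of the threads-instead-of-processes idea (not the asker's actual model; nn.Linear is a stand-in, and a CUDA device is assumed), both threads hold a reference to the same module, so its parameters live on the GPU only once:
import threading
import torch
import torch.nn as nn

model = nn.Linear(128, 10).cuda().eval()  # stand-in for the real 3 GB model

def infer(batch, results, idx):
    # no gradients needed for inference; both threads share the same weights
    with torch.no_grad():
        results[idx] = model(batch.cuda()).cpu()

batches = [torch.randn(4, 128) for _ in range(2)]
results = [None] * len(batches)
threads = [threading.Thread(target=infer, args=(b, results, i))
           for i, b in enumerate(batches)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print([r.shape for r in results])  # [torch.Size([4, 10]), torch.Size([4, 10])]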
11
2
60,150,031
2020-2-10
https://stackoverflow.com/questions/60150031/how-to-display-latex-f-strings-in-matplotlib
In Python 3.6, there is the new f-string to include variables in strings which is great, but how do you correctly apply these strings to get super or subscripts printed for matplotlib? (to actually see the result with the subscript, you need to draw the variable foo on a matplotlib plot) In other words how do I get this behaviour: var = 123 foo = r'text$_{%s}$' % var text<sub>123</sub> Using the new f-string syntax? So far, I have tried using a raw-string literal combined with an f-string, but this only seems to apply the subscript to the first character of the variable: var = 123 foo = fr'text$_{var}$' text<sub>1</sub>23 Because the { has an ambiguous function as delimiting what r should consider subscript and what f delimits as a place for the variable.
You need to escape the curly brackets by doubling them up, and then add in one more to use in the LaTeX formula. This gives: foo = f'text$_{{{var}}}$' Example: plt.figure() plt.plot([1,2,3], [3,4,5]) var = 123 plt.text(1, 4,f'text$_{{{var}}}$') Output: Incidentally, in this example, you don't actually need to use a raw-string literal.
28
46
60,156,202
2020-2-10
https://stackoverflow.com/questions/60156202/flask-app-wont-launch-importerror-cannot-import-name-cached-property-from-w
I've been working on a Flask app for a few weeks. I finished it today and went to deploy it... and now it won't launch. I haven't added or removed any code so assume something has changed in the deployment process? Anyway, here is the full error displayed in the terminal: Traceback (most recent call last): File "C:\Users\Kev\Documents\Projects\Docket\manage.py", line 5, in <module> from app import create_app, db File "C:\Users\Kev\Documents\Projects\Docket\app\__init__.py", line 21, in <module> from app.api import api, blueprint, limiter File "C:\Users\Kev\Documents\Projects\Docket\app\api\__init__.py", line 2, in <module> from flask_restplus import Api File "C:\Users\Kev\.virtualenvs\Docket-LasDxOWU\lib\site-packages\flask_restplus\__init_ _.py", line 4, in <module> from . import fields, reqparse, apidoc, inputs, cors File "C:\Users\Kev\.virtualenvs\Docket-LasDxOWU\lib\site-packages\flask_restplus\fields. py", line 17, in <module> from werkzeug import cached_property ImportError: cannot import name 'cached_property' from 'werkzeug' (C:\Users\Kev\.virtualen vs\Docket-LasDxOWU\lib\site-packages\werkzeug\__init__.py) Also here's the code in the three files mentioned. manage.py: from apscheduler.schedulers.background import BackgroundScheduler from flask_script import Manager from flask_migrate import Migrate, MigrateCommand from app import create_app, db app = create_app() app.app_context().push() manager = Manager(app) migrate = Migrate(app, db) manager.add_command('db', MigrateCommand) from app.routes import * from app.models import * def clear_data(): with app.app_context(): db.session.query(User).delete() db.session.query(Todo).delete() db.session.commit() print("Deleted table rows!") @manager.command def run(): scheduler = BackgroundScheduler() scheduler.add_job(clear_data, trigger='interval', minutes=15) scheduler.start() app.run(debug=True) if __name__ == '__main__': clear_data() manager.run() app/__init__.py: from flask import Flask from flask_sqlalchemy import SQLAlchemy from flask_login import LoginManager from config import Config db = SQLAlchemy() login = LoginManager() def create_app(): app = Flask(__name__) app.config.from_object(Config) db.init_app(app) login.init_app(app) login.login_view = 'login' from app.api import api, blueprint, limiter from app.api.endpoints import users, todos, register from app.api.endpoints.todos import TodosNS from app.api.endpoints.users import UserNS from app.api.endpoints.register import RegisterNS api.init_app(app) app.register_blueprint(blueprint) limiter.init_app(app) api.add_namespace(TodosNS) api.add_namespace(UserNS) api.add_namespace(RegisterNS) return app api/__init__.py: from logging import StreamHandler from flask_restplus import Api from flask import Blueprint from flask_limiter import Limiter from flask_limiter.util import get_remote_address blueprint = Blueprint('api', __name__, url_prefix='/api') limiter = Limiter(key_func=get_remote_address) limiter.logger.addHandler(StreamHandler()) api = Api(blueprint, doc='/documentation', version='1.0', title='Docket API', description='API for Docket. Create users and todo items through a REST API.\n' 'First of all, begin by registering a new user via the registration form in the web interface.\n' 'Or via a `POST` request to the `/Register/` end point', decorators=[limiter.limit("50/day", error_message="API request limit has been reached (50 per day)")]) I've tried reinstalling flask & flask_restplus but no-luck.
Try: from werkzeug.utils import cached_property https://werkzeug.palletsprojects.com/en/1.0.x/utils/
39
15
60,149,105
2020-2-10
https://stackoverflow.com/questions/60149105/userwarning-warn-box-bound-precision-lowered-by-casting-to-float32
I continuously get this error when I start my training session: UserWarning: WARN: Box bound precision lowered by casting to float32 warnings.warn(colorize('%s: %s'%('WARN', msg % args), 'yellow')) I guess it's coming from this line: self.action_space = spaces.Box(low, high) The code is running, but I want to stop this warning from showing up. I am using a CUDA PC to run the code.
Explicitly specify the dtype as float32 in the call like so... self.action_space = spaces.Box(low, high, dtype=np.float32) If that doesn't work, set the logger level lower in gym like so... import gym gym.logger.set_level(40)
14
8
60,153,981
2020-2-10
https://stackoverflow.com/questions/60153981/scikit-learn-one-hot-encoding-certain-columns-of-a-pandas-dataframe
I have a dataframe X with integer, float and string columns. I'd like to one-hot encode every column that is of "Object" type, so I'm trying to do this: encoding_needed = X.select_dtypes(include='object').columns ohe = preprocessing.OneHotEncoder() X[encoding_needed] = ohe.fit_transform(X[encoding_needed].astype(str)) #need astype bc I imputed with 0, so some rows have a mix of zeroes and strings. However, I end up with IndexError: tuple index out of range. I don't quite understand this as per the documentation the encoder expects X: array-like, shape [n_samples, n_features], so I should be OK passing a dataframe. How can I one-hot encode the list of columns specifically marked in encoding_needed? EDIT: The data is confidential so I cannot share it and I cannot create a dummy as it has 123 columns as is. I can provide the following: X.shape: (40755, 123) encoding_needed.shape: (81,) and is a subset of columns. Full stack: --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-90-6b3e9fdb6f91> in <module>() 1 encoding_needed = X.select_dtypes(include='object').columns 2 ohe = preprocessing.OneHotEncoder() ----> 3 X[encoding_needed] = ohe.fit_transform(X[encoding_needed].astype(str)) ~/anaconda3/envs/python3/lib/python3.6/site-packages/pandas/core/frame.py in __setitem__(self, key, value) 3365 self._setitem_frame(key, value) 3366 elif isinstance(key, (Series, np.ndarray, list, Index)): -> 3367 self._setitem_array(key, value) 3368 else: 3369 # set column ~/anaconda3/envs/python3/lib/python3.6/site-packages/pandas/core/frame.py in _setitem_array(self, key, value) 3393 indexer = self.loc._convert_to_indexer(key, axis=1) 3394 self._check_setitem_copy() -> 3395 self.loc._setitem_with_indexer((slice(None), indexer), value) 3396 3397 def _setitem_frame(self, key, value): ~/anaconda3/envs/python3/lib/python3.6/site-packages/pandas/core/indexing.py in _setitem_with_indexer(self, indexer, value) 592 # GH 7551 593 value = np.array(value, dtype=object) --> 594 if len(labels) != value.shape[1]: 595 raise ValueError('Must have equal len keys and value ' 596 'when setting with an ndarray') IndexError: tuple index out of range
# example data
import pandas as pd

X = pd.DataFrame({'int':[0,1,2,3], 'float':[4.0, 5.0, 6.0, 7.0], 'string1':list('abcd'), 'string2':list('efgh')})

   int  float string1 string2
0    0    4.0       a       e
1    1    5.0       b       f
2    2    6.0       c       g
3    3    7.0       d       h

Using pandas
With pandas.get_dummies, it will automatically select your object columns and drop these columns while appending the one-hot-encoded columns:
pd.get_dummies(X)

   int  float  string1_a  string1_b  string1_c  string1_d  string2_e  \
0    0    4.0          1          0          0          0          1
1    1    5.0          0          1          0          0          0
2    2    6.0          0          0          1          0          0
3    3    7.0          0          0          0          1          0

   string2_f  string2_g  string2_h
0          0          0          0
1          1          0          0
2          0          1          0
3          0          0          1

Using sklearn
Here we have to specify that we only need the object columns:
from sklearn.preprocessing import OneHotEncoder

ohe = OneHotEncoder()
X_object = X.select_dtypes('object')
ohe.fit(X_object)
codes = ohe.transform(X_object).toarray()
feature_names = ohe.get_feature_names(['string1', 'string2'])

X = pd.concat([X.select_dtypes(exclude='object'),
               pd.DataFrame(codes, columns=feature_names).astype(int)], axis=1)

   int  float  string1_a  string1_b  string1_c  string1_d  string2_e  \
0    0    4.0          1          0          0          0          1
1    1    5.0          0          1          0          0          0
2    2    6.0          0          0          1          0          0
3    3    7.0          0          0          0          1          0

   string2_f  string2_g  string2_h
0          0          0          0
1          1          0          0
2          0          1          0
3          0          0          1
9
21
60,145,652
2020-2-10
https://stackoverflow.com/questions/60145652/no-module-named-sklearn-neighbors-base
I have recently installed imblearn package in jupyter using !pip show imbalanced-learn But I am not able to import this package. from tensorflow.keras import backend from imblearn.over_sampling import SMOTE I get the following error --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-20-f19c5a0e54af> in <module> 1 # from sklearn.utils import resample 2 from tensorflow.keras import backend ----> 3 from imblearn.over_sampling import SMOTE 4 5 ~/.virtualenvs/p3/lib/python3.6/site-packages/imblearn/__init__.py in <module> 32 Module which allowing to create pipeline with scikit-learn estimators. 33 """ ---> 34 from . import combine 35 from . import ensemble 36 from . import exceptions ~/.virtualenvs/p3/lib/python3.6/site-packages/imblearn/combine/__init__.py in <module> 3 """ 4 ----> 5 from ._smote_enn import SMOTEENN 6 from ._smote_tomek import SMOTETomek 7 ~/.virtualenvs/p3/lib/python3.6/site-packages/imblearn/combine/_smote_enn.py in <module> 8 from sklearn.utils import check_X_y 9 ---> 10 from ..base import BaseSampler 11 from ..over_sampling import SMOTE 12 from ..over_sampling.base import BaseOverSampler ~/.virtualenvs/p3/lib/python3.6/site-packages/imblearn/base.py in <module> 14 from sklearn.utils.multiclass import check_classification_targets 15 ---> 16 from .utils import check_sampling_strategy, check_target_type 17 18 ~/.virtualenvs/p3/lib/python3.6/site-packages/imblearn/utils/__init__.py in <module> 5 from ._docstring import Substitution 6 ----> 7 from ._validation import check_neighbors_object 8 from ._validation import check_target_type 9 from ._validation import check_sampling_strategy ~/.virtualenvs/p3/lib/python3.6/site-packages/imblearn/utils/_validation.py in <module> 11 12 from sklearn.base import clone ---> 13 from sklearn.neighbors._base import KNeighborsMixin 14 from sklearn.neighbors import NearestNeighbors 15 from sklearn.utils.multiclass import type_of_target ModuleNotFoundError: No module named 'sklearn.neighbors._base' Other packages in the environment numpy==1.16.2 pandas==0.24.2 paramiko==2.1.1 matplotlib==2.2.4 scikit-learn==0.22.1 Keras==2.2.4 tensorflow==1.12.0 tensorboard==1.12.0 tensorflow-hub==0.4.0 xlrd==1.2.0 flask==1.0.2 wtforms==2.2.1 bs4==0.0.1 gensim==3.8.1 spacy==2.2.3 nltk==3.4.5 wordcloud==1.6.0 pymongo==3.10.1 imbalanced-learn==0.6.1 I checked the sklearn package, it contains base module, not _base. But modifying it may not be the right solution. Any other solution to fix this issue.
Previous sklearn.neighbors.base has been renamed to sklearn.neighbors._base in version 0.22.1. You have probably a version of scikit-learn older than that. Installing the latest release solves the problem: pip install -U scikit-learn or pip install scikit-learn==0.22.1
15
11
60,148,137
2020-2-10
https://stackoverflow.com/questions/60148137/what-happens-if-i-dont-join-a-python-thread
I have a query. I have seen examples where developers write something like the code as follows: import threading def do_something(): return true t = threading.Thread(target=do_something) t.start() t.join() I know that join() signals the interpreter to wait till the thread is completely executed. But what if I do not write t.join()? Will the thread get closed automatically and will it be reused later? Please let me know the answer. It's my first attempt at creating a multi-threaded application in Python 3.5.0.
A Python thread is just a regular OS thread. If you don't join it, it still keeps running concurrently with the current thread. It will eventually die when the target function completes or raises an exception. No such thing as "thread reuse" exists; once it's dead it rests in peace. Unless the thread is a "daemon thread" (via a constructor argument daemon or by assigning the daemon property), it will be implicitly joined before the program exits; otherwise, it is killed abruptly. One thing to remember when writing multithreaded programs in Python is that they only have limited use due to the infamous Global Interpreter Lock. In short, using threads won't make your CPU-intensive program any faster. They can be useful only when you perform something that involves waiting (e.g. you wait for a certain file system event to happen in a thread).
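A small sketch of the difference between joining, not joining, and daemon threads (timings are arbitrary):
import threading
import time

def work(label):
    time.sleep(2)
    print(label, "finished")

t = threading.Thread(target=work, args=("non-daemon",))
t.start()
# even without t.join(), the interpreter waits for this thread before exiting

d = threading.Thread(target=work, args=("daemon",), daemon=True)
d.start()
# a daemon thread is killed abruptly when the main program exits,
# so "daemon finished" may never be printed
print("main thread done")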
12
13
60,144,693
2020-2-10
https://stackoverflow.com/questions/60144693/show-image-in-its-original-resolution-in-jupyter-notebook
I have a very high resolution(3311, 4681, 3) image, which I want to show in my jupyter notebook using opencv but as other answers stated its not possible to use cv2.imshow in the jupyter notebook, so i used plt.imshow to do the same but the problem is I have to define the fig_size parameter if I want to display my image larger. How can I read the image in its original resolution in jupyter notebook or is it possible to open the image in another window? This is what I have tried : import cv2 from matplotlib import pyplot as plt %matplotlib inline img = cv2.imread(r"0261b27431-07_D_01.jpg") plt.figure(figsize= (20,20)) plt.imshow(img) plt.show() So basically I want my image to show in its original resolution in jupyter notebook or in another window.
You can imshow the image in its original resolution by calculating the corresponding figure size, which depends on the dpi (dots per inch) value of matplotlib. The default value is 100 dpi and is stored in matplotlib.rcParams['figure.dpi']. So imshowing the image like this import cv2 from matplotlib import pyplot as plt import matplotlib %matplotlib inline # Acquire default dots per inch value of matplotlib dpi = matplotlib.rcParams['figure.dpi'] img = cv2.imread(r'0261b27431-07_D_01.jpg') # Determine the figures size in inches to fit your image height, width, depth = img.shape figsize = width / float(dpi), height / float(dpi) plt.figure(figsize=figsize) plt.imshow(img) plt.show() prints it in its large resolution, but with the drawback, that the axis labels are tiny compared to the large image. You can workaround this by setting other rcParams to larger values, e.g. # Do the same also for the 'y' axis matplotlib.rcParams['xtick.labelsize'] = 50 matplotlib.rcParams['xtick.major.size'] = 15 matplotlib.rcParams['xtick.major.width'] = 5 ... Your second suggestion to open the image in another window would work like this, that you change the matplotlib backend using Ipython magic commands by replacing %matplotlib inline in the above example with, e.g. %matplotlib qt # opens the image in an interactive window with original resolution or %matplotlib notebook # opens the image in an interactive window 'inline' See here for more backend possibilites. Note that the calculation of the original figure size has to be done before also.
9
9
60,120,849
2020-2-7
https://stackoverflow.com/questions/60120849/outputting-attention-for-bert-base-uncased-with-huggingface-transformers-torch
I was following a paper on BERT-based lexical substitution (specifically trying to implement equation (2) - if someone has already implemented the whole paper that would also be great). Thus, I wanted to obtain both the last hidden layers (only thing I am unsure is the ordering of the layers in the output: last first or first first?) and the attention from a basic BERT model (bert-base-uncased). However, I am a bit unsure whether the huggingface/transformers library actually outputs the attention (I was using torch, but am open to using TF instead) for bert-base-uncased? From what I had read, I was expected to get a tuple of (logits, hidden_states, attentions), but with the example below (runs e.g. in Google Colab), I get of length 2 instead. Am I misinterpreting what I am getting or going about this the wrong way? I did the obvious test and used output_attention=False instead of output_attention=True (while output_hidden_states=True does indeed seem to add the hidden states, as expected) and nothing change in the output I got. That's clearly a bad sign about my understanding of the library or indicates an issue. import numpy as np import torch !pip install transformers from transformers import (AutoModelWithLMHead, AutoTokenizer, BertConfig) bert_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") config = BertConfig.from_pretrained('bert-base-uncased', output_hidden_states=True, output_attention=True) # Nothign changes, when I switch to output_attention=False bert_model = AutoModelWithLMHead.from_config(config) sequence = "We went to an ice cream cafe and had a chocolate ice cream." bert_tokenized_sequence = bert_tokenizer.tokenize(sequence) indexed_tokens = bert_tokenizer.encode(bert_tokenized_sequence, return_tensors='pt') predictions = bert_model(indexed_tokens) ########## Now let's have a look at what the predictions look like ############# print(len(predictions)) # Length is 2, I expected 3: logits, hidden_layers, attention print(predictions[0].shape) # torch.Size([1, 16, 30522]) - seems to be logits (shape is 1 x sequence length x vocabulary print(len(predictions[1])) # Length is 13 - the hidden layers?! There are meant to be 12, right? Is one somehow the attention? for k in range(len(predictions[1])): print(predictions[1][k].shape) # These all seem to be torch.Size([1, 16, 768]), so presumably the hidden layers? Explanation of what worked in the end inspired by accepted answer import numpy as np import torch !pip install transformers from transformers import BertModel, BertConfig, BertTokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') config = BertConfig.from_pretrained('bert-base-uncased', output_hidden_states=True, output_attentions=True) model = BertModel.from_pretrained('bert-base-uncased', config=config) sequence = "We went to an ice cream cafe and had a chocolate ice cream." 
tokenized_sequence = tokenizer.tokenize(sequence)
indexed_tokens = tokenizer.encode(tokenized_sequence, return_tensors='pt')
outputs = model(indexed_tokens)
print( len(outputs) )  # 4
print( outputs[0].shape )  # 1, 16, 768
print( outputs[1].shape )  # 1, 768
print( len(outputs[2]) )  # 13 = input embedding (index 0) + 12 hidden layers (indices 1 to 12)
print( outputs[2][0].shape )  # for each of these 13: 1,16,768 = input sequence, index of each input id in sequence, size of hidden layer
print( len(outputs[3]) )  # 12 (= attention for each layer)
print( outputs[3][0].shape )  # 0 index = first layer; 1,12,16,16 = batch, attention head, index of each input id in sequence, index of each input id in sequence
The reason is that you are using AutoModelWithLMHead which is a wrapper for the actual model. It calls the BERT model (i.e., an instance of BERTModel) and then it uses the embedding matrix as a weight matrix for the word prediction. In between the underlying model indeed returns attentions, but the wrapper does not care and only returns the logits. You can either get the BERT model directly by calling AutoModel. Note that this model does not return the logits, but the hidden states. bert_model = AutoModel.from_config(config) Or you can get it from the BertWithLMHead object by calling: wrapped_model = bert_model.base_model
10
3
60,140,174
2020-2-9
https://stackoverflow.com/questions/60140174/basic-flask-app-not-running-typeerror-required-field-type-ignores-missing-fr
I have a very basic flask app with dependencies installed from my requirements.txt. All of these dependencies are installed in my virtual environment. requirements.txt given below, aniso8601==6.0.0 Click==7.0 Flask==1.0.3 Flask-Cors==3.0.7 Flask-RESTful==0.3.7 Flask-SQLAlchemy==2.4.0 itsdangerous==1.1.0 Jinja2==2.10.1 MarkupSafe==1.1.1 # psycopg2-binary==2.8.2 pytz==2019.1 six==1.12.0 # SQLAlchemy==1.3.4 Werkzeug==0.15.4 python-dotenv requests authlib My code in NewTest.py file, from flask import Flask, request, jsonify, abort, url_for app = Flask(__name__) @app.route('/') def index(): return jsonify({ 'success': True, 'index': 'Test Pass' }) if __name__ == '__main__': app.run(debug=True) When I run the app through, export FLASK_APP=NewTest.py export FLASK_ENV=development export FLASK_DEBUG=true flask run or flask run --reload I get the following error, 127.0.0.1 - - [09/Feb/2020 12:43:40] "GET / HTTP/1.1" 500 - Traceback (most recent call last): File "/projects/env/lib/python3.8/site-packages/flask/_compat.py", line 36, i n reraise raise value File "/projects/NewTest.py", line 3, in <module> app = Flask(__name__) File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 559, in _ _init__ self.add_url_rule( File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 67, in wr apper_func return f(self, *args, **kwargs) File "/projects/env/lib/python3.8/site-packages/flask/app.py", line 1217, in add_url_rule self.url_map.add(rule) File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 1388, in add rule.bind(self) File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 730, in bind self.compile() File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 794, in compile self._build = self._compile_builder(False).__get__(self, None) File "/projects/env/lib/python3.8/site-packages/werkzeug/routing.py", line 951, in _compile_builder code = compile(module, "<werkzeug routing>", "exec") TypeError: required field "type_ignores" missing from Module Can anyone please point out what am I missing or doing wrong and how can I fix it? Thanks.
The bug was fixed in werkzeug 0.15.5. Upgrade from 0.15.4 to a later version.
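For example, something like this should do it (any version from 0.15.5 onwards):
pip install --upgrade werkzeug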
21
48
60,137,570
2020-2-9
https://stackoverflow.com/questions/60137570/explanation-of-generator-close-with-exception-handling
I am reading the python doc https://docs.python.org/3/reference/expressions.html about generator.close(). My translation of the documentation is: ##generator.close() Raises a GeneratorExit at the point where the generator function was paused. If the generator function then exits gracefully : 1.1 is already closed, 1.2 or raises GeneratorExit (by not catching the exception), close returns to its caller. If the generator yields a value, a RuntimeError is raised. If the generator raises any other exception, it is propagated to the caller. close() does nothing if the generator has already exited due to an exception or normal exit. I don't understand how the close() behavior corresponds to the documentation. >>> def echo(value=None): ... print("Execution starts when 'next()' is called for the first time.") ... try: ... while True: ... try: ... value = (yield value) ... except Exception as e: ... value = e ... finally: ... print("Don't forget to clean up when 'close()' is called.") ... >>> generator = echo(1) >>> next(generator) Execution starts when 'next()' is called for the first time. >>> generator.close() Don't forget to clean up when 'close()' is called. Which rule applies to generator.close() ? I am confused. My understanding: generator.close() raise a GeneratorExit exception GeneratorExit is catched by except Exception as e: and loop continues value = (yield value) executes according to rule 2 above, a RuntimeError will be raised. But that doesn't seem to be the case. Please tell me what's going on inside.
GeneratorExit does not inherit from Exception, but from the more fundamental BaseException. Thus, it is not caught by your except Exception block, so your assumption 2 is wrong. The generator exits gracefully via case 1.3, since GeneratorExit is not stopped. The GeneratorExit is thrown at (yield value). The try: except Exception as e: checks whether the current exception is a subclass of Exception. Since this is not the case, it unwinds. The while True: unwinds due to the current exception. The try: finally: unwinds, running its finally: block. This causes the message to be displayed. The generator exits with the current exception, i.e. GeneratorExit. generator.close detects and suppresses GeneratorExit.
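A quick check in the REPL shows why the except Exception block does not catch it:
>>> issubclass(GeneratorExit, Exception)
False
>>> issubclass(GeneratorExit, BaseException)
True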
11
12
60,107,946
2020-2-7
https://stackoverflow.com/questions/60107946/how-to-add-column-delimiter-to-pandas-dataframe-display
For example, define df=pd.DataFrame(np.random.randint(0,10,(6,6))) df Which gives below display in Jupyter notebook My question is that is it possible to add a column delimiter to the dataframe like Thank you for all the answers, currently I use below custom functions def css_border(x,pos): return ["border-left: 1px solid red" if i in pos else "border: 0px" for i, col in enumerate(x)] def display_df_with_delimiter(df,pos): return df.style.apply(partial(css_border,pos=pos), axis=1) and display_df_with_delimiter(df,[0,1,2,5]) gives
This piece of code should add the desired lines to the table. from IPython.display import display, HTML CSS = """ .rendered_html td:nth-child(even) { border-left: 1px solid red; } """ HTML('<style>{}</style>'.format(CSS)) Note that you can change the style of those linse by simply changing the definition of border-left attribute, i.e border-left: 2px solid green to make the lines thicker and green. Here is a snapshot demonstrating the output.
10
10
60,113,143
2020-2-7
https://stackoverflow.com/questions/60113143/how-to-properly-use-asyncio-run-coroutine-threadsafe-function
I am trying to understand asyncio module and spend about one hour with run_coroutine_threadsafe function, I even came to the working example, it works as expected, but works with several limitations. First of all I do not understand how should I properly call asyncio loop in main (any other) thread, in the example I call it with run_until_complete and give it a coroutine to make it busy with something until another thread will not give it a coroutine. What are other options I have? What are situations when I have to mix asyncio and threading (in Python) in real life? Since as far as I understand asyncio is supposed to take place of threading in Python (due to GIL for not IO ops), if I am wrong, do not be angry and share your suggestions. Python version is 3.7/3.8 import asyncio import threading import time async def coro_func(): return await asyncio.sleep(3, 42) def another_thread(_loop): coro = coro_func() # is local thread coroutine which we would like to run in another thread # _loop is a loop which was created in another thread future = asyncio.run_coroutine_threadsafe(coro, _loop) print(f"{threading.current_thread().name}: {future.result()}") time.sleep(15) print(f"{threading.current_thread().name} is Finished") if __name__ == '__main__': loop = asyncio.get_event_loop() main_th_cor = asyncio.sleep(10) # main_th_cor is used to make loop busy with something until another_thread will not send coroutine to it print("START MAIN") x = threading.Thread(target=another_thread, args=(loop, ), name="Some_Thread") x.start() time.sleep(1) loop.run_until_complete(main_th_cor) print("FINISH MAIN")
First of all I do not understand how should I properly call asyncio loop in main (any other) thread, in the example I call it with run_until_complete and give it a coroutine to make it busy with something until another thread will not give it a coroutine. What are other options I have? This is a good use case for loop.run_forever(). The loop will run and serve the coroutines you submit using run_coroutine_threadsafe. (You can even submit such coroutines from multiple threads in parallel; you never need to instantiate more than one event loop.) You can stop the loop from a different thread by calling loop.call_soon_threadsafe(loop.stop). What are situations when I have to mix asyncio and threading (in Python) in real life? Ideally there should be none. But in the real world, they do crop up; for example: When you are introducing asyncio into an existing large program that uses threads and blocking calls and cannot be converted to asyncio all at once. run_coroutine_threadsafe allows regular blocking code to make use of asyncio. When you are dealing with older "async" APIs which use threads under the hood and call the user-supplied APIs from other threads. There are many examples, such as Python's own multiprocessing. When you need to call blocking functions that have no async equivalent from asyncio - e.g. CPU-bound functions, legacy database drivers, things like that. This is not a use case for run_coroutine_threadsafe, here you'd use run_in_executor, but it is another example of mixing threads and asyncio.
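A minimal sketch of the run_forever() variant (dedicating a thread to the loop and submitting work to it from the main thread):
import asyncio
import threading

async def coro_func():
    return await asyncio.sleep(3, 42)

loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, name="Async_Loop", daemon=True).start()

# submit a coroutine to the loop from this (or any other) thread
future = asyncio.run_coroutine_threadsafe(coro_func(), loop)
print(future.result())  # blocks the calling thread until the coroutine is done -> 42

# stop the loop from outside its thread when you are finished with it
loop.call_soon_threadsafe(loop.stop)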
24
29
60,132,045
2020-2-8
https://stackoverflow.com/questions/60132045/fastapi-uvicorn-not-working-when-specifying-host
I'm running a FastAPI app in Python using uvicorn on a Windows machine without a frontend (e.g. Next.js, etc.) so there should NOT be any iteraction between a local frontend and backend like there is in this question. Plus the answer(s) to that question would not have solved my issue/question. That question was also asked AFTER this one so THIS QUESTION IS NOT A DUPLICATE! It works fine when I do any one of the following options: Run the following code on my mac, or When I don't specify the port for uvicorn (remove the host parameter from the uvicorn.run call) When I specify port '127.0.0.1', which is the host it uses when I don't specify a host at all. from fastapi import FastAPI import uvicorn app = FastAPI() @app.get("/") async def root(): return {"message": "Hello World"} if __name__ == '__main__': uvicorn.run(app, port=8080, host='0.0.0.0') When I go to 0.0.0.0:8080 on my browser, I get an error that says "This site can’t be reached". I have checked my current active ports to make sure I'm not getting a collision using netstat -ao |find /i "listening" and 0.0.0.0:8080 is not in use. My current file configuration looks like this: working_directory └── app ├── gunicorn_conf.py └── main.py My gunicorn_conf.py is super simple and just tries to set the host and port: host = "0.0.0.0" port = "8080" How can I get this to work when I specify host '0.0.0.0'?
As I was writing the question above, I found the solution and thought I would share in case someone else runs into this. To get it to work put "http://localhost:8080" into the web browser instead of "http://0.0.0.0:8080" and it will work fine. This also works if you're hitting the endpoint via the python requests package, etc.
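For instance, a quick sketch of checking the endpoint from another script (this assumes the app from the question is already running on port 8080):
import requests

response = requests.get("http://localhost:8080/")
print(response.json())  # {'message': 'Hello World'}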
29
28
60,130,622
2020-2-8
https://stackoverflow.com/questions/60130622/warningtensorflow-with-constraint-is-deprecated-and-will-be-removed-in-a-future
I am following TensorFlow's tutorial on building a simple neural network, and after importing the necessary libraries (tensorflow, keras, numpy & matplotlib) and datasets (fashion_mnist) I ran this code as per the tutorial:
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10, activation='softmax')
])
After running this code I received this warning message: WARNING:tensorflow:From /Applications/anaconda3/envs/tensorfloe/lib/python3.7/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.init (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers. How do I fix this? Your help is highly appreciated.
This is an internal TensorFlow message; you can safely ignore it. It will be gone in future versions of TensorFlow, and no action from your side is needed.
8
11
60,123,611
2020-2-8
https://stackoverflow.com/questions/60123611/how-to-position-legends-inside-a-plot-in-plotly
I have got this code from the Plotly page. I need to make the background transparent and the axis highlighted, and also the legends positioned inside the plot.
import plotly.graph_objects as go

fig = go.Figure()

fig.add_trace(go.Scatter(
    x=[1, 2, 3, 4, 5],
    y=[1, 2, 3, 4, 5],
    name="Increasing"
))

fig.add_trace(go.Scatter(
    x=[1, 2, 3, 4, 5],
    y=[5, 4, 3, 2, 1],
    name="Decreasing"
))

fig.update_layout(legend_title='<b> Trend </b>')
fig.show()
The code above shows the output below: My expected output: How can I convert the first image to get the features of the second image?
To change the background color you need specify it by plot_bgcolor='rgba(0,0,0,0)',, while to move the legend inside the plot, on the left, you need to explicitly define the position: import plotly.graph_objects as go trace0 = go.Scatter( x=[1, 2, 3, 4, 5], y=[1, 2, 3, 4, 5], name="Increasing" ) trace1 = go.Scatter( x=[1, 2, 3, 4, 5], y=[5, 4, 3, 2, 1], name="Decreasing" ) data = [trace0, trace1] layout = go.Layout( plot_bgcolor='rgba(0,0,0,0)', legend=dict( x=0, y=0.7, traceorder='normal', font=dict( size=12,), ), annotations=[ dict( x=0, y=0.75, xref='paper', yref='paper', text='Trend', showarrow=False ) ] ) fig = go.Figure(data = data, layout = layout) fig.update_xaxes(showgrid=True, gridwidth=1, gridcolor='LightGray') fig.update_yaxes(showgrid=True, gridwidth=1, gridcolor='LightGray') fig.show() and you get:
10
8
60,127,165
2020-2-8
https://stackoverflow.com/questions/60127165/pytest-test-function-that-creates-plots
I have several functions that create plots, which I use in Jupyter notebooks to visualise data. I want to create basic tests for these, checking that they still run without erroring on various inputs if I make changes. However, if I call these functions using pytest, creating the plots causes the program to hang until I manually minimise the plot. import pytest import matplotlib.pyplot as plt def plot_fn(): plt.plot([1,2,3]) plt.show() def test_plot_fn(): plot_fn() How can I test that functions like 'plot_fn' run without erroring using Pytest? I tried the following, but it doesn't work, I think because plt.show() causes the script to hang, and so not reach plt.close('all'). def test_plot_fn(): plot_fn() plt.close('all') I'm happy to change the behaviour of my plotting function, for example to return the plt object?
This works. from unittest.mock import patch import pytest import matplotlib.pyplot as plt def plot_fn(): plt.plot([1,2,3]) plt.show() @patch("matplotlib.pyplot.show") def test_plot_fn(mock_show): plot_fn() Based on this answer (possible duplicate) Turn off graphs while running unittests
10
4
60,082,546
2020-2-5
https://stackoverflow.com/questions/60082546/airflow-proper-way-to-run-dag-for-each-file
I have the following task to solve: Files are being sent at irregular times through an endpoint and stored locally. I need to trigger a DAG run for each of these files. For each file the same tasks will be performed Overall the flows looks as follows: For each file, run tasks A->B->C->D Files are being processed in batch. While this task seemed trivial to me, I have found several ways to do this and I am confused about which one is the "proper" one (if any). First pattern: Use experimental REST API to trigger dag. That is, expose a web service which ingests the request and the file, stores it to a folder, and uses the experimental REST api to trigger the DAG, by passing the file_id as conf Cons: REST apis are still experimental, not sure how Airflow can handle a load test with many requests coming at one point (which shouldn't happen, but, what if it does?) Second pattern: 2 dags. One senses and triggers with TriggerDagOperator, one processes. Always using the same ws as described before, but this time it justs stores the file. Then we have: First dag: Uses a FileSensor along with the TriggerDagOperator to trigger N dags given N files Second dag: Task A->B->C Cons: Need to avoid that the same files are being sent to two different DAG runs. Example: Files in folder x.json Sensor finds x, triggers DAG (1) Sensor goes back and scheduled again. If DAG (1) did not process/move the file, the sensor DAG might reschedule a new DAG run with the same file. Which is unwanted. Third pattern: for file in files, task A->B->C As seen in this question. Cons: This could work, however what I dislike is that the UI will probably get messed up because every DAG run will not look the same but it will change with the number of files being processed. Also if there are 1000 files to be processed the run would probably be very difficult to read Fourth pattern: Use subdags I am not yet sure how they completely work as I have seen they are not encouraged (at the end), however it should be possible to spawn a subdag for each file and have it running. Similar to this question. Cons: Seems like subdags can only be used with the sequential executor. Am I missing something and over-thinking something that should be (in my mind) quite straight-forward? Thanks
I found this article: https://medium.com/@igorlubimov/dynamic-scheduling-in-airflow-52979b3e6b13 where a new operator, namely TriggerMultiDagRunOperator is used. I think this suits my needs.
9
1
60,111,684
2020-2-7
https://stackoverflow.com/questions/60111684/geometry-must-be-a-point-or-linestring-error-using-cartopy
I'm trying to run a simple Cartopy example: import cartopy.crs as ccrs import matplotlib.pyplot as plt ax = plt.axes(projection=ccrs.PlateCarree()) ax.coastlines() plt.show() But I'm getting this error: Geometry must be a Point or LineString python: geos_ts_c.cpp:4179: int GEOSCoordSeq_getSize_r(GEOSContextHandle_t, const geos::geom::CoordinateSequence*, unsigned int*): Assertion0 != cs' failed`. I installed Cartopy using miniconda3: conda install -c conda-forge cartopy I also tried to install Cartopy with pip (in a virtual environment), but I get the same error. My OS is Debian Buster. Any idea?
The problem is a wrong version of shapely, with Cartopy the binary package shoudn't be used, it should be built from source instead. This is explained here and here. So I did: pip uninstall shapely pip install shapely --no-binary shapely
14
29
60,115,855
2020-2-7
https://stackoverflow.com/questions/60115855/difference-between-viewset-modelviewset-and-apiview
What are the advantages of ViewSet, ModelViewSet and APIView? The django-rest-framework documentation is not clear about this; it does not say when to use ViewSet, ModelViewSet or APIView. I want to implement an API that will contain a lot of business logic, with data processing as well. What should be used for this case? I researched a lot and managed to understand a little about routers and urlpatterns, but I still don't understand which of the view classes to use.
Summarizing: on one hand you have the APIView, which is the most generic of the three, but also in which you must do almost all business logic 'manually'. You have the class methods mapping http methods (get, post, ...) plus some class attributes to configure things like authentication, rendering, etc. Often you'll be developing endpoints to interact with resources (entities, like Users, Products, Orders, etc.) via CRUD operations, and that is what ViewSet is for: they have more semantic class methods like list, create, retrieve ... that the router can then automatically map to urls and http methods at the expense of making some assumptions: for example, the retrieve assumes the http call to be GET /you_resource/<pk>. It is more rigid than a generic APIView but it takes away from you some boilerplate/manual config that you would have to repeat again and again in most cases. One step further is the ModelViewSet, which is an extension of the ViewSet for when you are working with Django models. Just specifying a serializer_class and a queryset you have all the CRUD operations of the ViewSet ready to go. Obviously, you can also add your own methods to a ViewSet or customize the behavior of its default methods. In my experience, it pays off to use ViewSets. The code looks cleaner and you avoid some boilerplate code. The assumptions it makes are reasonable, and I would even say that you probably will end up with a cleaner API design following them.
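To make the trade-off concrete, here is a bare-bones sketch of a ModelViewSet plus router; the Product model and its app name are made up:
from rest_framework import routers, serializers, viewsets
from myapp.models import Product  # hypothetical model

class ProductSerializer(serializers.ModelSerializer):
    class Meta:
        model = Product
        fields = '__all__'

class ProductViewSet(viewsets.ModelViewSet):
    # these two attributes are enough to get list/create/retrieve/update/destroy
    queryset = Product.objects.all()
    serializer_class = ProductSerializer

router = routers.DefaultRouter()
router.register(r'products', ProductViewSet)
# urlpatterns = router.urls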
8
13
60,098,005
2020-2-6
https://stackoverflow.com/questions/60098005/fastapi-starlette-get-client-real-ip
I have an API on FastAPI and I need to get the client's real IP address when they request my page. I'm trying to use starlette's Request, but it returns my server IP, not the client's remote IP. My code:
@app.post('/my-endpoint')
async def my_endpoint(stats: Stats, request: Request):
    ip = request.client.host
    print(ip)
    return {'status': 1, 'message': 'ok'}
What am I doing wrong? How do I get the real IP (like Flask's request.remote_addr)?
request.client should work, unless you're running behind a proxy (e.g. nginx). In that case, use uvicorn's --proxy-headers flag to accept these incoming headers and make sure the proxy forwards them.
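For example, when starting the server programmatically, something along these lines (a sketch; the module and app names are assumed):
import uvicorn

# proxy_headers makes uvicorn trust X-Forwarded-For / X-Forwarded-Proto set by
# the reverse proxy, so request.client.host reflects the real client IP
uvicorn.run("main:app", host="127.0.0.1", port=8000,
            proxy_headers=True, forwarded_allow_ips="*")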
56
58
60,105,443
2020-2-7
https://stackoverflow.com/questions/60105443/how-do-i-correctly-use-mock-call-args-with-pythons-unittest-mock
Consider the following files: holy_hand_grenade.py def count(one, two, five='three'): print('boom') test_holy_hand_grenade.py from unittest import mock import holy_hand_grenade def test_hand_grenade(): mock_count = mock.patch("holy_hand_grenade.count", autospec=True) with mock_count as fake_count: fake_count(1, 2, five=5) # According to https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock.call_args # this should work assert fake_count.call_args.kwargs['five'] == 5 According to the docs, call_args should be: This is either None (if the mock hasn’t been called), or the arguments that the mock was last called with. This will be in the form of a tuple: the first member, which can also be accessed through the args property, is any ordered arguments the mock was called with (or an empty tuple) and the second member, which can also be accessed through the kwargs property, is any keyword arguments (or an empty dictionary). (emphasis mine) But this blows up in my face, with TypeError: tuple indices must be integers or slices, not str Um. No? The thing I really don't understand, is that if this is a call object, which it is, because assert isinstance(fake_count.call_args, (mock._Call,)) passes, it's supposed to have kwargs and args. And it... well, it sort of does. But they appear to not actually be the correct thing: assert isinstance(fake_count.call_args.kwargs, (mock._Call,)) #this works assert isinstance(fake_count.call_args.kwargs, (dict,)) # doesn't work What am I doing wrong here?
This is a feature introduced in Python 3.8 in this issue. The 3.7 documentation does not mention it (while the newest docs do) - so you have to access the arguments by index in Python < 3.8.
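On Python < 3.8 you can unpack the call_args tuple (or index into it) inside the asker's test instead:
args, kwargs = fake_count.call_args   # works on 3.7 as well
assert kwargs['five'] == 5

# equivalent, using indices
assert fake_count.call_args[1]['five'] == 5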
12
14
60,115,633
2020-2-7
https://stackoverflow.com/questions/60115633/pytorch-flatten-doesnt-maintain-batch-size
In Keras, using the Flatten() layer retains the batch size. For example, if the input shape to Flatten is (32, 100, 100), in Keras the output of Flatten is (32, 10000), but in PyTorch it is 320000. Why is that?
As OP already pointed out in their answer, the tensor operations do not default to considering a batch dimension. You can use torch.flatten() or Tensor.flatten() with start_dim=1 to start the flattening operation after the batch dimension. Alternatively since PyTorch 1.2.0 you can define an nn.Flatten() layer in your model which defaults to start_dim=1.
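A quick check of the shapes:
import torch
import torch.nn as nn

x = torch.randn(32, 100, 100)

print(torch.flatten(x, start_dim=1).shape)  # torch.Size([32, 10000])
print(x.flatten(start_dim=1).shape)         # torch.Size([32, 10000])
print(nn.Flatten()(x).shape)                # torch.Size([32, 10000]); start_dim=1 is the default
print(torch.flatten(x).shape)               # torch.Size([320000]) -- what the asker saw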
11
19
60,101,240
2020-2-6
https://stackoverflow.com/questions/60101240/finding-mean-and-standard-deviation-across-image-channels-pytorch
Say I have a batch of images in the form of tensors with dimensions (B x C x W x H) where B is the batch size, C is the number of channels in the image, and W and H are the width and height of the image respectively. I'm looking to use the transforms.Normalize() function to normalize my images with respect to the mean and standard deviation of the dataset across the C image channels, meaning that I want a resulting tensor in the form 1 x C. Is there a straightforward way to do this? I tried torch.view(C, -1).mean(1) and torch.view(C, -1).std(1) but I get the error: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead. Edit After looking into how view() works in PyTorch, I know realize why my approach doesn't work; however, I still can't figure out how to get the per-channel mean and standard deviation.
You just need to rearrange batch tensor in a right way: from [B, C, W, H] to [B, C, W * H] by: batch = batch.view(batch.size(0), batch.size(1), -1) Here is complete usage example on random data: Code: import torch from torch.utils.data import TensorDataset, DataLoader data = torch.randn(64, 3, 28, 28) labels = torch.zeros(64, 1) dataset = TensorDataset(data, labels) loader = DataLoader(dataset, batch_size=8) nimages = 0 mean = 0. std = 0. for batch, _ in loader: # Rearrange batch to be the shape of [B, C, W * H] batch = batch.view(batch.size(0), batch.size(1), -1) # Update total number of images nimages += batch.size(0) # Compute mean and std here mean += batch.mean(2).sum(0) std += batch.std(2).sum(0) # Final step mean /= nimages std /= nimages print(mean) print(std) Output: tensor([-0.0029, -0.0022, -0.0036]) tensor([0.9942, 0.9939, 0.9923])
11
10
60,107,982
2020-2-7
https://stackoverflow.com/questions/60107982/attributeerror-function-object-has-no-attribute-func-name-and-python-3
I downloaded the following code : from __future__ import print_function from time import sleep def callback_a(i, result): print("Items processed: {}. Running result: {}.".format(i, result)) def square(i): return i * i def processor(process, times, report_interval, callback): print("Entered processor(): times = {}, report_interval = {}, callback = {}".format( times, report_interval, callback.func_name)) # Can also use callback.__name__ instead of callback.func_name in line above. result = 0 print("Processing data ...") for i in range(1, times + 1): result += process(i) sleep(1) if i % report_interval == 0: # This is the call to the callback function # that was passed to this function. callback(i, result) processor(square, 20, 5, callback_a) It works fine under python 2, but I get the following error under python3: Traceback (most recent call last): File "test/python/cb_demo.py", line 33, in <module> processor(square, 20, 5, callback_a) File "test/python/cb_demo.py", line 21, in processor times, report_interval, callback.func_name)) AttributeError: 'function' object has no attribute 'func_name' I need to work under python3.
That behaviour in Python 3 is expected as it was changed from Python 2. Per the documentation here: https://docs.python.org/3/whatsnew/3.0.html#operators-and-special-methods The function attributes named func_X have been renamed to use the __X__ form, freeing up these names in the function attribute namespace for user-defined attributes. To wit, func_closure, func_code, func_defaults, func_dict, func_doc, func_globals, func_name were renamed to __closure__, __code__, __defaults__, __dict__, __doc__, __globals__, __name__, respectively. You will notice the mention of func_name as one of the attributes that were renamed. You will need to use __name__. Sample code in Python 3: >>> def foo(a): ... print(a.__name__) ... >>> def c(): ... pass ... >>> >>> foo(c) c
14
15
60,104,564
2020-2-6
https://stackoverflow.com/questions/60104564/when-and-why-to-use-self-dict-instead-of-self-variable
I'm trying to understand some code which is using this class below: class Base(object): def __init__(self, **kwargs): self.client = kwargs.get('client') self.request = kwargs.get('request') ... def to_dict(self): data = dict() for key in iter(self.__dict__): # <------------------------ this if key in ('client', 'request'): continue value = self.__dict__[key] if value is not None: if hasattr(value, 'to_dict'): data[key] = value.to_dict() else: data[key] = value return data I understand that it gets keyword arguments passed to the Base class like for example, Base(client="foo", request="bar"). My confusion is, why is it using self.__dict__ which turns variables inside __init__ to a dict (e.g {"client": "foo", "request": "bar"}) instead of just calling them by self.client & self.request inside other methods? When and why I should use self.__dict__ instead?
Almost all of the time, you shouldn't use self.__dict__. If you're accessing an attribute like self.client, i.e. the attribute name is known and fixed, then the only difference between that and self.__dict__['client'] is that the latter won't look up the attribute on the class if it's missing on the instance. There is very rarely any reason to do this, but the difference is demonstrated below: >>> class A: ... b = 3 # class attribute, not an instance attribute ... >>> A.b # the class has this attribute 3 >>> a = A() >>> a.b # the instance doesn't have this attribute, fallback to the class 3 >>> a.__dict__['b'] # the instance doesn't have this attribute, but no fallback Traceback (most recent call last): File "<stdin>", line 1, in <module> KeyError: 'b' The main use-case for self.__dict__ is when you don't want to access a fixed, known attribute name. In almost all code, you always know which attribute you want to access; and if you do need to look something up dynamically using an unknown string, you should create a dictionary yourself, and write self.that_dict[key] instead of self.__dict__[key]. So the only times you should really use __dict__ is when you are writing code which needs to work regardless of which attributes the instance might have; i.e. you specifically want code which will work even if you change the class's structure or its attribute names, or code which will work across multiple classes with different structures. I'll show one example below. The __repr__ method The __repr__ method is meant to return a string representing the instance, for the programmer's convenience when using a REPL. For debugging/testing purposes this string usually contains information about the object's state. Here's a common way to implement it: class Foo: def __init__(self, foo, bar, baz): self.foo = foo self.bar = bar self.baz = baz def __repr__(self): return 'Foo({!r}, {!r}, {!r})'.format(self.foo, self.bar, self.baz) This means if you write obj = Foo(1, 'y', True) to create an instance, then repr(obj) will be the string "Foo(1, 'y', True)", which is convenient because it shows the instance's entire state, and also the string itself is Python code which creates an instance with the same state. But there are a few issues with the above implementation: we have to change it if the class's attributes change, it won't give useful results for instances of subclasses, and we have to write lots of similar code for different classes with different attributes. If we use __dict__ instead, we can solve all of those problems: def __repr__(self): return '{}({})'.format( self.__class__.__name__, ', '.join('{}={!r}'.format(k, v) for k, v in self.__dict__.items()) ) Now repr(obj) will be Foo(foo=1, bar='y', baz=True), which also shows the instance's entire state, and is also executable Python code. This generalised __repr__ method will still work if the structure of Foo changes, it can be shared between multiple classes via inheritance, and it returns executable Python code for any class whose attributes are accepted as keyword arguments by __init__.
10
20
60,102,928
2020-2-6
https://stackoverflow.com/questions/60102928/pandas-fillna-only-numeric-int-or-float-columns
I would like to apply fillna only to numeric columns. Is that possible? Right now, I'm applying it to all columns:

df = df.replace(r"^\s*$", np.nan, regex=True)
You can select the numeric columns and then apply fillna to just those columns. E.g.:

import pandas as pd

df = pd.DataFrame({'a': [1, None] * 3, 'b': [True, None] * 3, 'c': [1.0, None] * 3})

# select numeric columns
numeric_columns = df.select_dtypes(include=['number']).columns

# fill all NaN in those columns with -1
df[numeric_columns] = df[numeric_columns].fillna(-1)

# print
print(df)
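If you prefer a single call, fillna also accepts a dictionary of per-column fill values, so you can build one for the numeric columns only. This is a small variation on the answer above, not something from the original:

# Build a {column: fill_value} mapping for the numeric columns only;
# columns missing from the dict are left untouched by fillna.
fill_values = {col: -1 for col in df.select_dtypes(include='number').columns}
df = df.fillna(fill_values)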
11
19
60,101,168
2020-2-6
https://stackoverflow.com/questions/60101168/pytorch-runtimeerror-dataloader-worker-pids-15332-exited-unexpectedly
I am a beginner at PyTorch and I am just trying out some examples on this webpage. But I can't seem to get the 'super_resolution' program running due to this error: RuntimeError: DataLoader worker (pid(s) 15332) exited unexpectedly I searched the Internet and found that some people suggest setting num_workers to 0. But if I do that, the program tells me that I am running out of memory (either with CPU or GPU): RuntimeError: [enforce fail at ..\c10\core\CPUAllocator.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 9663676416 bytes. Buy new RAM! or RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 4.00 GiB total capacity; 2.03 GiB already allocated; 0 bytes free; 2.03 GiB reserved in total by PyTorch) How do I fix this? I am using python 3.8 on Win10(64bit) and pytorch 1.4.0. More complete error messages (--cuda means using GPU, --threads x means passing x to the num_worker parameter): with command line arguments --upscale_factor 1 --cuda File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 761, in _try_get_data data = self._data_queue.get(timeout=timeout) File "E:\Python38\lib\multiprocessing\queues.py", line 108, in get raise Empty _queue.Empty During handling of the above exception, another exception occurred: Traceback (most recent call last): File "Z:\super_resolution\main.py", line 81, in <module> train(epoch) File "Z:\super_resolution\main.py", line 48, in train for iteration, batch in enumerate(training_data_loader, 1): File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 345, in __next__ data = self._next_data() File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 841, in _next_data idx, data = self._get_data() File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 808, in _get_data success, data = self._try_get_data() File "E:\Python38\lib\site-packages\torch\utils\data\dataloader.py", line 774, in _try_get_data raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) RuntimeError: DataLoader worker (pid(s) 16596, 9376, 12756, 9844) exited unexpectedly with command line arguments --upscale_factor 1 --cuda --threads 0 File "Z:\super_resolution\main.py", line 81, in <module> train(epoch) File "Z:\super_resolution\main.py", line 52, in train loss = criterion(model(input), target) File "E:\Python38\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "Z:\super_resolution\model.py", line 21, in forward x = self.relu(self.conv2(x)) File "E:\Python38\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__ result = self.forward(*input, **kwargs) File "E:\Python38\lib\site-packages\torch\nn\modules\conv.py", line 345, in forward return self.conv2d_forward(input, self.weight) File "E:\Python38\lib\site-packages\torch\nn\modules\conv.py", line 341, in conv2d_forward return F.conv2d(input, weight, self.bias, self.stride, RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 4.00 GiB total capacity; 2.03 GiB already allocated; 954.35 MiB free; 2.03 GiB reserved in total by PyTorch)
There is no "complete" fix for GPU out-of-memory errors, but there are quite a few things you can do to relieve the memory demand. Also, make sure that you are not passing the trainset and testset to the GPU at the same time!

Decrease the batch size to 1 (set where the DataLoader is created; see the sketch below)
Decrease the dimensionality of the fully-connected layers (they are the most memory-intensive)
(Image data) Apply centre cropping
(Image data) Transform RGB data to greyscale
(Text data) Truncate input at n chars (which probably won't help that much)

Alternatively, you can try running on Google Colaboratory (12 hour usage limit on a K80 GPU) and Next Journal, both of which provide up to 12GB for use, free of charge. Worst case scenario, you might have to conduct training on your CPU. Hope this helps!
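As a concrete illustration of the first suggestion: the batch size (and the number of worker processes) is set where the DataLoader is built. Below is a minimal sketch; the dataset variable name train_set and the exact values are assumptions for illustration, not taken from the tutorial code.

from torch.utils.data import DataLoader

training_data_loader = DataLoader(
    dataset=train_set,   # assumed dataset object built earlier in the tutorial script
    batch_size=1,        # smaller batches lower the peak GPU memory per forward/backward pass
    shuffle=True,
    num_workers=0,       # 0 sidesteps the worker-exit error on Windows, at the cost of loading speed
)

With batch_size=1 the tensors passed through the network are as small as they can be, which is usually the quickest way to check whether the model itself simply doesn't fit in 4 GB.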
35
17
60,079,644
2020-2-5
https://stackoverflow.com/questions/60079644/how-do-you-edit-an-existing-tensorboard-training-loss-summary
I've trained my network and generated some training/validation losses which I saved via the following code example (example of training loss only, validation is perfectly equivalent): valid_summary_writer = tf.summary.create_file_writer("/path/to/logs/") with train_summary_writer.as_default(): tf.summary.scalar('Training Loss', data=epoch_loss, step=current_step) After training I would then like to view the loss curves using Tensorboard. However because I saved the loss curves under the names 'Training Loss' and 'Validation Loss' these curves are plotted on separate graphs. I know that I should change the name to be simply 'loss' to solve this problem for future writes to the log directory. But how do I edit my existing log files for the training/validation losses to account for this? I attempted to modify the following post's solution: https://stackoverflow.com/a/55061404 which edits the steps of a log file and re-writes the file; where my version involves changing the tags in the file. But I had no success in this area. It also requires importing older Tensorflow code through 'tf.compat.v1'. Is there a way to achieve this (maybe in TF 2.X)? I had thought to simply acquire the loss and step values from each log directory containing the losses and write them to new log files via my previous working method, but I only managed to obtain the step, and not the loss value itself. Has anyone had any success here? ---=== EDIT ===--- I managed to fix the problem using the code from @jhedesa I had to slightly alter the way that the function "rename_events_dir" was called as I am using Tensorflow collaboratively inside of a Google Colab Notebook. To do this I changed the final part of the code which read: if __name__ == '__main__': if len(sys.argv) != 5: print(f'{sys.argv[0]} <input dir> <output dir> <old tags> <new tag>', file=sys.stderr) sys.exit(1) input_dir, output_dir, old_tags, new_tag = sys.argv[1:] old_tags = old_tags.split(';') rename_events_dir(input_dir, output_dir, old_tags, new_tag) print('Done') To read this: rootpath = '/path/to/model/' dirlist = [dirname for dirname in os.listdir(rootpath) if dirname not in ['train', 'valid']] for dirname in dirlist: rename_events_dir(rootpath + dirname + '/train', rootpath + '/train', 'Training Loss', 'loss') rename_events_dir(rootpath + dirname + '/valid', rootpath + '/valid', 'Validation Loss', 'loss') Notice that I called "rename_events_dir" twice, once for editing the tags for the training loss, and once for the validation loss tags. I could have used the previous method of calling the code by setting "old_tags = 'Training Loss;Validation Loss'" and using "old_tags = old_tags.split(';')" to split the tags. I used my method simply to understand the code and how it processed the data.
As mentioned in How to load selected range of samples in Tensorboard, TensorBoard events are actually stored record files, so you can read them and process them as such. Here is a script similar to the one posted there but for the purpose of renaming events, and updated to work in TF 2.x. #!/usr/bin/env python3 # -*- coding: utf-8 -*- # rename_events.py import sys from pathlib import Path import os # Use this if you want to avoid using the GPU os.environ['CUDA_VISIBLE_DEVICES'] = '-1' import tensorflow as tf from tensorflow.core.util.event_pb2 import Event def rename_events(input_path, output_path, old_tags, new_tag): # Make a record writer with tf.io.TFRecordWriter(str(output_path)) as writer: # Iterate event records for rec in tf.data.TFRecordDataset([str(input_path)]): # Read event ev = Event() ev.MergeFromString(rec.numpy()) # Check if it is a summary if ev.summary: # Iterate summary values for v in ev.summary.value: # Check if the tag should be renamed if v.tag in old_tags: # Rename with new tag name v.tag = new_tag writer.write(ev.SerializeToString()) def rename_events_dir(input_dir, output_dir, old_tags, new_tag): input_dir = Path(input_dir) output_dir = Path(output_dir) # Make output directory output_dir.mkdir(parents=True, exist_ok=True) # Iterate event files for ev_file in input_dir.glob('**/*.tfevents*'): # Make directory for output event file out_file = Path(output_dir, ev_file.relative_to(input_dir)) out_file.parent.mkdir(parents=True, exist_ok=True) # Write renamed events rename_events(ev_file, out_file, old_tags, new_tag) if __name__ == '__main__': if len(sys.argv) != 5: print(f'{sys.argv[0]} <input dir> <output dir> <old tags> <new tag>', file=sys.stderr) sys.exit(1) input_dir, output_dir, old_tags, new_tag = sys.argv[1:] old_tags = old_tags.split(';') rename_events_dir(input_dir, output_dir, old_tags, new_tag) print('Done') You would use it like this: > python rename_events.py my_log_dir renamed_log_dir "Training Loss;Validation Loss" loss
9
13
60,095,973
2020-2-6
https://stackoverflow.com/questions/60095973/find-peaks-does-not-identify-a-peak-at-the-start-of-the-array
I am trying to find a vectorized approach to finding the first position in an array where the values do not get higher than the maximum of the n previous numbers.

I thought about using the find_peaks method of scipy.signal to find a local maximum. I think it does exactly that if you set the distance to, let's say, 10 (n being 10). But unfortunately, the condition for the distance has to be fulfilled in both directions: previous and upcoming numbers. Is there any other method or approach for finding such positions?

Example:

arr1 = np.array([1.        , 0.73381293, 0.75649351, 0.77693474, 0.77884614,
                 0.81055903, 0.81402439, 0.78798586, 0.78839588, 0.82967961,
                 0.8448    , 0.83276451, 0.82539684, 0.81762916, 0.82722515,
                 0.82101804, 0.82871127, 0.82825041, 0.82086957, 0.8347826 ,
                 0.82666665, 0.82352942, 0.81270903, 0.81191224, 0.83180428,
                 0.84975767, 0.84044236, 0.85057473, 0.8394649 , 0.80000001,
                 0.83870965, 0.83962262, 0.85039371, 0.83359748, 0.84019768,
                 0.83281732, 0.83660132])

from scipy.signal import find_peaks
peaks, _ = find_peaks(arr1, distance=10)

In this case, it finds positions 10 and 27. But position 0 also has 10 following elements which are not higher. How can I find those positions as well?
The idea is to view the array as overlapping windows of length window and keep the positions where the first element of a window is its maximum, i.e. none of the following window - 1 values are higher.

import numpy as np

def rolling_window(a, window):
    # View `a` as overlapping windows of length `window` (stride trick, no copy)
    shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
    strides = a.strides + (a.strides[-1],)
    return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)

def get_peaks(arr, window):
    # argmax == 0 means the first element of the window is its maximum,
    # i.e. none of the next `window - 1` values exceed it
    maxss = np.argmax(rolling_window(arr, window), axis=1)
    return np.where(maxss == 0)[0]

>>> arr1 = np.array([1.        , 0.73381293, 0.75649351, 0.77693474, 0.77884614,
                     0.81055903, 0.81402439, 0.78798586, 0.78839588, 0.82967961,
                     0.8448    , 0.83276451, 0.82539684, 0.81762916, 0.82722515,
                     0.82101804, 0.82871127, 0.82825041, 0.82086957, 0.8347826 ,
                     0.82666665, 0.82352942, 0.81270903, 0.81191224, 0.83180428,
                     0.84975767, 0.84044236, 0.85057473, 0.8394649 , 0.80000001,
                     0.83870965, 0.83962262, 0.85039371, 0.83359748, 0.84019768,
                     0.83281732, 0.83660132])
>>> get_peaks(arr1, 10)
array([ 0, 10, 27])

Credit for the rolling window function: Rolling window for 1D arrays in Numpy?
9
2
60,086,741
2020-2-6
https://stackoverflow.com/questions/60086741/docker-so-slow-while-installing-pip-requirements
I am trying to set up Docker for a dummy local Django project. I am using docker-compose as a tool for defining and running multiple containers. Here I tried to containerize two services: the Django web app and PostgreSQL.

Configuration used in Dockerfile and docker-compose.yml

Dockerfile

# Pull base image
FROM python:3.7-alpine

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# Set work directory
WORKDIR /code

# Install dependencies
COPY requirements.txt /code/
RUN pip install -r requirements.txt

# Copy project
COPY . /code/

docker-compose.yml

version: '3.7'

services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:11
    volumes:
      - postgres_data:/var/lib/postgresql/data/

volumes:
  postgres_data:

Everything seems okay (the paths, the Postgres integration and so on) except for one thing: pip install -r requirements.txt. Installing the requirements takes far too much time. Last time I almost gave up on this; the installation did eventually complete, but it took a very long time.

So the only question is why pip install is so slow here. Is there anything that I am missing? I am new to Docker and any help on this topic will be highly appreciated. Thank you.

I was following this Link.
This is probably because PyPI wheels don't work on Alpine. Instead of using precompiled wheel files, pip on Alpine downloads the source code and compiles it. Try using the python:3.7-slim image instead:

# Pull base image
FROM python:3.7-slim

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# Set work directory
WORKDIR /code

# Install dependencies
COPY requirements.txt /code/
RUN pip install -r requirements.txt

# Copy project
COPY . /code/

Check this article for more details: Alpine makes Python Docker builds 50× slower.
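If you want to confirm that prebuilt wheels are actually being used after switching images, pip can be told to refuse source builds entirely. This is just a quick diagnostic idea on my part, not something from the original answer:

RUN pip install --only-binary=:all: -r requirements.txt

With this flag the install fails fast for any requirement that would have to be compiled from source, so a successful build tells you everything came from wheels.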
11
31
60,079,783
2020-2-5
https://stackoverflow.com/questions/60079783/difference-between-keras-batchnormalization-and-pytorchs-batchnorm2d
I have a tiny sample CNN implemented in both Keras and PyTorch. When I print the summaries of both networks, the total numbers of trainable parameters are the same, but the total numbers of parameters and the numbers of parameters for Batch Normalization don't match.

Here is the CNN implementation in Keras:

inputs = Input(shape = (64, 64, 1))  # Channel Last: (NHWC)

model = Conv2D(filters=32, kernel_size=(3, 3), padding='SAME', activation='relu', input_shape=(IMG_SIZE, IMG_SIZE, 1))(inputs)
model = BatchNormalization(momentum=0.15, axis=-1)(model)
model = Flatten()(model)

dense = Dense(100, activation = "relu")(model)
head_root = Dense(10, activation = 'softmax')(dense)

And the summary printed for the above model is:

Model: "model_8"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_9 (InputLayer)         (None, 64, 64, 1)         0
_________________________________________________________________
conv2d_10 (Conv2D)           (None, 64, 64, 32)        320
_________________________________________________________________
batch_normalization_2 (Batch (None, 64, 64, 32)        128
_________________________________________________________________
flatten_3 (Flatten)          (None, 131072)            0
_________________________________________________________________
dense_11 (Dense)             (None, 100)               13107300
_________________________________________________________________
dense_12 (Dense)             (None, 10)                1010
=================================================================
Total params: 13,108,758
Trainable params: 13,108,694
Non-trainable params: 64
_________________________________________________________________

Here's the implementation of the same model architecture in PyTorch:

# Image format: Channel first (NCHW) in PyTorch
class CustomModel(nn.Module):
    def __init__(self):
        super(CustomModel, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=32, kernel_size=(3, 3), padding=1),
            nn.ReLU(True),
            nn.BatchNorm2d(num_features=32),
        )
        self.flatten = nn.Flatten()
        self.fc1 = nn.Linear(in_features=131072, out_features=100)
        self.fc2 = nn.Linear(in_features=100, out_features=10)

    def forward(self, x):
        output = self.layer1(x)
        output = self.flatten(output)
        output = self.fc1(output)
        output = self.fc2(output)
        return output

And following is the output of the summary of the above model:

----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1           [-1, 32, 64, 64]             320
              ReLU-2           [-1, 32, 64, 64]               0
       BatchNorm2d-3           [-1, 32, 64, 64]              64
           Flatten-4                [-1, 131072]              0
            Linear-5                  [-1, 100]      13,107,300
            Linear-6                   [-1, 10]           1,010
================================================================
Total params: 13,108,694
Trainable params: 13,108,694
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.02
Forward/backward pass size (MB): 4.00
Params size (MB): 50.01
Estimated Total Size (MB): 54.02
----------------------------------------------------------------

As you can see in the above results, Batch Normalization in Keras has more parameters than in PyTorch (2x to be exact). So what's the difference between the above CNN architectures? If they are equivalent, what am I missing here?
Keras treats as parameters (weights) many things that will be "saved/loaded" in the layer.

While both implementations naturally have the accumulated "mean" and "variance" of the batches, these values are not trainable with backpropagation.

Nevertheless, these values are updated every batch, and Keras treats them as non-trainable weights, while PyTorch simply hides them. The term "non-trainable" here means "not trainable by backpropagation", but doesn't mean the values are frozen.

In total there are 4 groups of "weights" for a BatchNormalization layer, considering the selected axis (default = -1, size = 32 for your layer):

scale (32) - trainable
offset (32) - trainable
accumulated means (32) - non-trainable, but updated every batch
accumulated variance (32) - non-trainable, but updated every batch

The advantage of having it like this in Keras is that when you save the layer, you also save the mean and variance values the same way you save all other weights in the layer automatically. And when you load the layer, these weights are loaded together.
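If you want to see this split directly, you can inspect a standalone layer in each framework. A small sketch (the layers here are created from scratch for illustration and are not taken from the question's models):

import tensorflow as tf
import torch.nn as nn

# Keras: four weight groups, two trainable, two not
bn_keras = tf.keras.layers.BatchNormalization(axis=-1)
bn_keras.build((None, 64, 64, 32))
for w in bn_keras.weights:
    # gamma (scale) and beta (offset) are trainable;
    # moving_mean and moving_variance are non-trainable weights
    print(w.name, tuple(w.shape), "trainable" if w.trainable else "non-trainable")

# PyTorch: only weight (scale) and bias (offset) are parameters;
# the running statistics are buffers and never appear in the parameter count
bn_torch = nn.BatchNorm2d(num_features=32)
print([name for name, _ in bn_torch.named_parameters()])  # ['weight', 'bias']
print([name for name, _ in bn_torch.named_buffers()])     # ['running_mean', 'running_var', 'num_batches_tracked']

This is exactly where the 128 vs 64 difference in the summaries comes from: Keras counts all four 32-element groups, PyTorch counts only the two trainable ones.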
15
20
60,077,401
2020-2-5
https://stackoverflow.com/questions/60077401/rotate-x-axis-labels-facetgrid-seaborn-not-working
I'm attempting to create a faceted plot using seaborn in python, but I'm having issues with a number of things, one thing being rotating the x-axis labels. I am currently attempting to use the following code: import pandas as pd import seaborn as sns import matplotlib.pyplot as plt vin = pd.Series(["W1","W1","W2","W2","W1","W3","W4"]) word1 = pd.Series(['pdi','pdi','tread','adjust','fill','pdi','fill']) word2 = pd.Series(['perform','perform','fill','measure','tire','check','tire']) date = pd.Series(["01-07-2020","01-07-2020","01-07-2020","01-07-2020","01-08-2020","01-08-2020","01-08-2020"]) bigram_with_dates = pd.concat([vin,word1,word2,date], axis = 1) names = ["vin", "word1","word2","date"] bigram_with_dates.columns = names bigram_with_dates['date'] = pd.to_datetime(bigram_with_dates['date']) bigram_with_dates['text_concat'] = bigram_with_dates['word1'] + "," + bigram_with_dates['word2'] plot_params = sns.FacetGrid(bigram_with_dates, col="date", height=3, aspect=.5, col_wrap = 10,sharex = False, sharey = False) plot = plot_params.map(sns.countplot, 'text_concat', color = 'c', order = bigram_with_dates['text_concat']) plot_adjust = plot.fig.subplots_adjust(wspace=0.5, hspace=0.5) for axes in plot.axes.flat: axes.set_xticklabels(axes.get_xticklabels(), rotation=90) When I use this I get an error that states: AttributeError: 'NoneType' object has no attribute 'axes' Which I think I understand to mean that there is no returned object so setting axes on nothing does nothing. This code seems to work in other SO posts I've come across, but I can't seem to get it to work. Any suggestions as to what I'm doing wrong would be greatly appreciated. Thanks, Curtis
Try this; it seems you were overwriting the plot variable:

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

vin = pd.Series(["W1","W1","W2","W2","W1","W3","W4"])
word1 = pd.Series(['pdi','pdi','tread','adjust','fill','pdi','fill'])
word2 = pd.Series(['perform','perform','fill','measure','tire','check','tire'])
date = pd.Series(["01-07-2020","01-07-2020","01-07-2020","01-07-2020","01-08-2020","01-08-2020","01-08-2020"])

bigram_with_dates = pd.concat([vin,word1,word2,date], axis = 1)
names = ["vin", "word1","word2","date"]
bigram_with_dates.columns = names

bigram_with_dates['date'] = pd.to_datetime(bigram_with_dates['date']).dt.strftime('%m-%d-%Y')

bigram_with_dates['text_concat'] = bigram_with_dates['word1'] + "," + bigram_with_dates['word2']

plot = sns.FacetGrid(bigram_with_dates, col="date", height=3, aspect=.5, col_wrap = 10,sharex = False, sharey = False)

plot1 = plot.map(sns.countplot, 'text_concat', color = 'c', order = bigram_with_dates['text_concat'].value_counts(ascending = False).iloc[:5].index)\
.fig.subplots_adjust(wspace=0.5, hspace=12)

for axes in plot.axes.flat:
    _ = axes.set_xticklabels(axes.get_xticklabels(), rotation=90)

plt.tight_layout()

Output:
10
14
60,077,695
2020-2-5
https://stackoverflow.com/questions/60077695/how-to-get-the-last-row-value-in-pandas-through-dataframe-get-value
I followed this instruction https://www.geeksforgeeks.org/python-pandas-dataframe-get_value/ and know how to get a value from a dataframe in pandas:

df.get_value(10, 'Salary')

My question is: how do I get the value of 'Salary' in the last row?
First of all I would advise against using get_value since it is/will be deprecated. (see: https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.DataFrame.get_value.html )

There are a couple of solutions:

df['Salary'].iloc[-1]
df.Salary.iloc[-1]

are synonymous. iloc retrieves items from a pandas DataFrame or Series by integer position.

df['Salary'].values[-1]

takes the underlying array of the Salary column and returns its last item.

df['Salary'].tail(1)
df.Salary.tail(1)

return the last row of the Salary column as a one-row Series.

Which of the solutions is best depends on the context and your personal preference.
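A quick sanity check with a throwaway frame (the salary numbers are made up purely for illustration):

import pandas as pd

df = pd.DataFrame({'Salary': [3000, 3500, 4200]})

print(df['Salary'].iloc[-1])    # 4200, a scalar
print(df['Salary'].values[-1])  # 4200, a scalar taken from the underlying array
print(df['Salary'].tail(1))     # a one-row Series (index 2, value 4200), not a scalar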
8
22
60,071,680
2020-2-5
https://stackoverflow.com/questions/60071680/django-shell-api-keyerror
I am importing models in the Django shell, but I get the error below. Here is how it occurs:

python manage.py shell

from .models import Device

I get:

  File "<console>", line 1, in <module>
KeyError: "'__name__' not in globals"
Try putting the app name before ".models". Here .models tries to import from a models.py in the current directory, but models.py is actually located in the app directory.

>>> from [app_name].models import Device
8
24
60,067,548
2020-2-5
https://stackoverflow.com/questions/60067548/clone-base-environment-in-anaconda
My conda version is 4.7.11. I am trying to clone the base env to a new one so I can install some specific packages without messing up the base environment. I tried what some other answers suggested:

conda create --name <myenv> --clone base

and

conda create --name <myenv> --clone root

But neither of them works. The message from the terminal is "The system cannot find the file specified". Below is my current env list:

base                  *  D:\LabTest\Dave\Anaconda
dlc-windowsCPU           D:\LabTest\Dave\Anaconda\envs\dlc-windowsCPU
dlc-windowsGPU           D:\LabTest\Dave\Anaconda\envs\dlc-windowsGPU
dlc-windowsGPU-dave      D:\LabTest\Dave\Anaconda\envs\dlc-windowsGPU-dave
dlc-windowsGPU-yc        D:\LabTest\Dave\Anaconda\envs\dlc-windowsGPU-yc

I also cannot clone from Anaconda Navigator. I don't know what to do.
I would recommend trying the method shown in this official documentation. In summary, you can get the full list of packages installed in the virtual environment, save it as a .txt file, and create a new environment from that .txt file. For example,

conda list --explicit > spec-file.txt

Then create a new environment using that specification:

conda create --name myenv --file spec-file.txt

While this is not exactly "cloning" the base environment, you should be able to reproduce a virtual environment identical to the base through this process.
33
30
59,981,914
2020-1-30
https://stackoverflow.com/questions/59981914/missing-dependancies-of-rtree
I am currently using Spyder for Python, and I have this error message when I open the program: Error: You have missing dependencies! rtree>= 0.8.3: None (NOK) Please install them to avoid this message. Note: Spyder could work without some of these dependencies, however to have a smooth experience, we strongly recommend. I tried pip install rtree and got: Collecting rtree Downloading https://files.pythonhosted.org/packages/11/1d/42d6904a436076df813d1df632575529991005b33aa82f169f01750e39e4/Rtree-0.9.3.tar.gz (520kB) |████████████████████████████████| 522kB 467kB/s ERROR: Command errored out with exit status 1: command: 'C:\Users\gitte\Anaconda3\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\gitte\\AppData\\Local\\Temp\\pip-install-kmbt5h2t\\rtree\\setup.py'"'"'; __file__='"'"'C:\\Users\\gitte\\AppData\\Local\\Temp\\pip-install-kmbt5h2t\\rtree\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base pip-egg-info cwd: C:\Users\gitte\AppData\Local\Temp\pip-install-kmbt5h2t\rtree\ Complete output (11 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\gitte\AppData\Local\Temp\pip-install-kmbt5h2t\rtree\setup.py", line 3, in <module> import rtree File "C:\Users\gitte\AppData\Local\Temp\pip-install-kmbt5h2t\rtree\rtree\__init__.py", line 1, in <module> from .index import Rtree File "C:\Users\gitte\AppData\Local\Temp\pip-install-kmbt5h2t\rtree\rtree\index.py", line 6, in <module> from . import core File "C:\Users\gitte\AppData\Local\Temp\pip-install-kmbt5h2t\rtree\rtree\core.py", line 128, in <module> raise OSError("could not find or load %s" % lib_name) OSError: could not find or load spatialindex_c-64.dll ---------------------------------------- ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. Please advise what I can do. Spyder works great so far, I just don't want to have issues along the way. Thanks!
It looks like Rtree requires libspatialindex (https://libspatialindex.org), which is not installed automatically. It seems some devs are aware of the problem and are working on a fix:

https://github.com/Toblerity/rtree/issues/146
https://github.com/Toblerity/rtree/issues/147
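In the meantime, on an Anaconda setup the conda-forge package usually pulls in libspatialindex as a dependency, so something along these lines is worth trying (this is my own workaround suggestion, not something confirmed in those issue threads):

conda install -c conda-forge rtree

If it installs cleanly, restarting Spyder should make the missing-dependency warning go away.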
9
8