GUI and Desktop Applications | A_Id | Networking and APIs | Python Basics and Environment | Other | Database and SQL | Available Count | is_accepted | Q_Score | CreationDate | Users Score | AnswerCount | System Administration and DevOps | Title | Q_Id | Score | Tags | Answer | Question | Web Development | Data Science and Machine Learning | ViewCount |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 55,169,600 | 0 | 0 | 0 | 0 | 1 | true | 7 | 2019-03-14T18:08:00.000 | 5 | 3 | 0 | How to make Altair plots responsive | 55,169,344 | 1.2 | python,vega-lite,altair | There is no way to do this. The dimensions of Altair/Vega-Lite charts are pre-determined by the chart specification and data, and cannot be made to scale with the size of the browser window. | Can one make Altair plots fit the screen size, rather than have a pixel-defined width and height? I've read things about autosize "fit", but I am unsure about where to specify these. | 0 | 1 | 2,245 |
0 | 55,172,773 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-03-14T21:59:00.000 | 0 | 2 | 0 | No module named scipy, spacy, nltk | 55,172,651 | 0 | python,pip,jupyter-notebook | Did you install Anaconda alongside a separate Python? Python doesn't come with these packages preinstalled; maybe Jupyter is running from the plain Python path instead of the Anaconda one. | (base) C:\Users\Kevin>pip install scipy Requirement already satisfied:
scipy in c:\programdata\anaconda3\lib\site-packages (1.1.0)
etc
Suddenly my Jupyter notebook refuses to import several packages. pandas and numpy work, but all the other packages do not (spacy, nltk, scipy, requests).
I tried reinstalling the packages, but it says they are already installed.
Why is this happening? | 0 | 1 | 279 |
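The answer above comes down to Jupyter possibly using a different interpreter than the one pip installed into. A minimal, hedged way to check is to compare the two; the package names below are simply the ones from the question.

```python
# Run this in a notebook cell to see which interpreter the kernel uses,
# then compare it with what `where python` / `which python` reports in the terminal.
import sys
print(sys.executable)   # e.g. an Anaconda python.exe vs. a separately installed Python

# If they differ, install the packages into the kernel's own interpreter:
# !{sys.executable} -m pip install scipy spacy nltk requests
```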
0 | 55,189,726 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-03-15T19:42:00.000 | 1 | 1 | 0 | Weird Indexing by Python and Numpy | 55,189,686 | 0.197375 | python,numpy | X[:100] means slice X from 0 to 100 or the end (whichever comes first)
But X[100] means the element at index 100 of X, and if it doesn't exist it throws an index-out-of-range error | I have a variable X; it contains a Python list of 10 NumPy 1-D arrays (basically vectors).
If I ask for X[100], it throws an error saying: IndexError: list index out of range
Which makes total sense, but, when I ask for X[:100], it doesn't throw an error and it returns the entire list!
Why is that? | 0 | 1 | 156 |
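To make the answer's distinction concrete, here is a tiny sketch of the two operations on a short list.

```python
X = list(range(10))   # only 10 elements, as in the question

print(X[:100])   # slicing clamps to the end: returns all 10 elements
print(X[5])      # element at index 5: works
print(X[100])    # raises IndexError: list index out of range
```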
0 | 55,225,618 | 0 | 1 | 0 | 0 | 1 | false | 2 | 2019-03-17T09:02:00.000 | 1 | 2 | 0 | Pandas timestamp to and from json | 55,205,436 | 0.099668 | python,json,pandas,numpy | If I have correctly understood your problem, you are looking for a serialization way preserving the data types of a dataframe.
The problem is that the interchange formats internally use few types: only strings for csv, and strings and numbers for json. Of course there are ways to give formatting hints at read time (a date format for date columns in csv), and it is generally easy to convert back to the proper type after extraction, but I think you would hope for a more natural way. As suggested by Attack68 you could use a database, but for example a SQLite database would be off the table because it has no internal date type.
IMHO a simple way would be to rely on the good old pickle module. After all, a dataframe is a Python object that contains other Python objects, so pickle is good at serializing that. The only point to remember is that, at deserialization time, pandas will have to be imported before calling pickle.load.
But I have just tested with a (tiny) dataframe containing various datatypes, and pickle was great at correctly saving and restoring them. | Objects cannot be serialised to json and therefore need to be converted or parsed through a custom JsonEncoder class.
pandas Dataframe has a number of methods, like from_records to read json data. Yet when you read that json data back it is returned as int64 instead of timestamp.
There are many ways to skin a cat in pandas. What is the best way to preserve data structures when reading and writing json? | 0 | 1 | 2,822 |
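A minimal sketch of the pickle round-trip the answer describes, using pandas' own wrappers; the file name is only an example.

```python
import pandas as pd

df = pd.DataFrame({
    "when": pd.to_datetime(["2019-03-17", "2019-03-18"]),
    "value": [1.5, 2.5],
})

df.to_pickle("frame.pkl")            # serializes the DataFrame, dtypes included
restored = pd.read_pickle("frame.pkl")
print(restored.dtypes)               # "when" is still datetime64[ns], not int64
```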
0 | 55,211,330 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-03-17T19:55:00.000 | 5 | 1 | 0 | Mask RCNN uses CPU instead of GPU | 55,211,277 | 1.2 | python,tensorflow,machine-learning,keras | It is either because GPU_COUNT is set to 0 in config.py or you don't have tensorflow-gpu installed (which is required for TensorFlow to run on a GPU) | I'm using the Mask RCNN library which is based on TensorFlow and I can't seem to get it to run on my GPU (1080TI). The inference time is 4-5 seconds, during which I see a usage spike on my CPU but not my GPU. Any possible fixes for this? | 0 | 1 | 3,990 |
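A quick, hedged way to confirm whether TensorFlow can see the GPU at all (TF 1.x style, matching the era of that library); if this prints False, installing tensorflow-gpu is the first fix, and GPU_COUNT in the Mask R-CNN config is a separate knob.

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# TF 1.x: True only if a CUDA-enabled GPU is visible to TensorFlow
print(tf.test.is_gpu_available())

# Lists the devices TensorFlow actually registered (CPU and, if found, GPU)
print([d.name for d in device_lib.list_local_devices()])
```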
0 | 55,231,076 | 0 | 1 | 0 | 0 | 2 | false | 1 | 2019-03-18T19:47:00.000 | 0 | 2 | 0 | Python tensorflow: asyncio or threading | 55,229,009 | 0 | python,tensorflow,websocket,python-asyncio,python-multithreading | I am not an expert in threading/asyncio but maybe it would be easier to spawn an instance of Kafka and have a piece of code that would listen to a Kafka topic? To this topic you would push images, or paths to images if you already store them locally. Moreover, using consumer groups you would get load balancing more or less for free, as it is part of Kafka. | I am implementing a server for recognizing objects in photos using tensorflow-gpu in "semi-real" time. It will listen for new photos on a websocket connection, then enqueue each one into a list for the detector to process when it is free. Would it be simpler to use asyncio or threading to handle the websocket listener and the recognition queue? | 0 | 1 | 924 |
0 | 61,762,278 | 0 | 1 | 0 | 0 | 2 | true | 1 | 2019-03-18T19:47:00.000 | 0 | 2 | 0 | Python tensorflow: asyncio or threading | 55,229,009 | 1.2 | python,tensorflow,websocket,python-asyncio,python-multithreading | Ultimately I used asyncio to handle the websocket connection, enqueuing incoming images to a queue. I used threading, with one thread reading each image into RAM, extracting some metadata, and queuing it for the object detector. The detector, running in another thread, tagged the images and queued the tags for the database handler (yet another thread). | I am implementing a server for recognizing objects in photos using tensorflow-gpu in "semi-real" time. It will listen for new photos on a websocket connection, then enqueue each one into a list for the detector to process when it is free. Would it be simpler to use asyncio or threading to handle the websocket listener and the recognition queue? | 0 | 1 | 924 |
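A rough sketch of the hand-off described in the accepted answer: an asyncio coroutine receives payloads and puts them on a thread-safe queue, while the detector runs in its own thread. The detect() call and the way messages arrive are placeholders, not part of any specific library.

```python
import asyncio
import queue
import threading

jobs = queue.Queue()          # thread-safe hand-off between asyncio and the worker

def detector_loop():
    """Runs in a plain thread; pulls images and runs the (placeholder) model."""
    while True:
        image_bytes = jobs.get()
        # detect(image_bytes)   # placeholder for the TensorFlow inference call
        jobs.task_done()

async def handle_message(image_bytes):
    # Called from the websocket handler; enqueueing never blocks the event loop
    jobs.put_nowait(image_bytes)

threading.Thread(target=detector_loop, daemon=True).start()
# A websocket server (e.g. the `websockets` package) would call
# `await handle_message(payload)` for each incoming frame.
```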
0 | 55,440,002 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-18T19:56:00.000 | 0 | 1 | 0 | Get this error when trying to run tensorboard? | 55,229,123 | 0 | python,tensorflow | Is your tensorboard version 1.13.1?
If so, downgrade it to 1.12.1; that solved the problem for me.
I couldn't find out the underlying reason, though. | File "C:\ProgramData\Anaconda3\Scripts\tensorboard-script.py", line 10, in
sys.exit(run_main())
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorboard\main.py", line 57, in run_main
app.run(tensorboard.main, flags_parser=tensorboard.configure)
File "C:\ProgramData\Anaconda3\lib\site-packages\absl\app.py", line 300, in run
_run_main(main, args)
File "C:\ProgramData\Anaconda3\lib\site-packages\absl\app.py", line 251, in _run_main
sys.exit(main(argv))
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorboard\program.py", line 228, in main
self._register_info(server)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorboard\program.py", line 274, in _register_info
manager.write_info_file(info)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorboard\manager.py", line 269, in write_info_file
payload = "%s\n" % _info_to_string(tensorboard_info)
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorboard\manager.py", line 129, in _info_to_string
for k in _TENSORBOARD_INFO_FIELDS
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorboard\manager.py", line 129, in
for k in _TENSORBOARD_INFO_FIELDS
File "C:\ProgramData\Anaconda3\lib\site-packages\tensorboard\manager.py", line 51, in
(dt - datetime.datetime.fromtimestamp(0)).total_seconds()),
OSError: [Errno 22] Invalid argument | 0 | 1 | 669 |
0 | 55,229,729 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-03-18T20:10:00.000 | 0 | 1 | 0 | How is quantization applied/simulated in software? | 55,229,311 | 0 | python,c,approximation | In general there are three approaches:
Analysis
Simulation
Testing
To analyze you must, of course, understand the calculation, and be a skilled mathematician.
To simulate you must still understand the calculation since you need to re-write it in the simulation language, but you don't need to be so good at math ;-)
Testing is the easiest since you need neither an understanding of the calculation nor deep math skills. In your case this should be pretty trivial: since there are only 16-bit parameters, you can test all combinations of 2 arguments with 2^16 x 2^16 = 2^32 iterations of your test... a blink of an eye on a modern processor. Compare the result using 16-bit floats with 6-bit ints and keep some simple stats (mean error, max error, etc.). If you have more than two arguments you can save time over an exhaustive test by trying a large number of random inputs, but otherwise the same approach. | How is quantization applied/simulated in software in practice? Suppose for example that I'd like to compute how much error I will get in the output of some function if, instead of using 16-bit floating point values, I were to use 6-bit integer values for the parameters of the function. If it matters for this question, I am interested in applying quantization to neural networks and the like.
My naive thoughts about this: either somehow force the machine to use reduced bit precision (doesn't seem feasible or easy to do on general purpose OS like Linux, but I'd be interested to know if it is done in practice), or artificially simulate the quantization by mapping ranges of floats to a single integer value, where the integer value represents one quantized value.
I put C and Python as tags because I can only understand those languages if you'd like to answer with code. | 0 | 1 | 77 |
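A minimal sketch of the "simulate quantization in software" idea from the answer: map floats onto a small signed integer grid and back, then measure the error introduced. The 6-bit range and the toy tanh function are illustrative assumptions.

```python
import numpy as np

def quantize(x, bits=6, scale=None):
    """Round x onto a signed integer grid with the given bit width."""
    levels = 2 ** (bits - 1) - 1                   # e.g. 31 for 6 bits
    scale = scale or np.max(np.abs(x)) / levels    # simple per-array scaling
    q = np.clip(np.round(x / scale), -levels - 1, levels)
    return q * scale                               # back to float for comparison

x = np.random.randn(10000).astype(np.float32)
y_ref = np.tanh(x)                 # toy "function" evaluated on the original inputs
y_q = np.tanh(quantize(x))         # same function on quantized inputs

print("mean abs error:", np.mean(np.abs(y_ref - y_q)))
print("max abs error: ", np.max(np.abs(y_ref - y_q)))
```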
0 | 55,230,793 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-03-18T21:53:00.000 | 0 | 1 | 0 | Is there an alternative to fully loading pre-trained word embeddings in memory? | 55,230,575 | 1.2 | python,machine-learning,memory-management,nlp,word-embedding | What task do you have in mind? If this is a similarity-based task, you could simply use the load_word2vec_format method in gensim; this allows you to pass in a limit on the number of vectors loaded. The vectors in something like the GoogleNews set are ordered by frequency, so this will give you the most important vectors.
This also makes sense theoretically, as the words with low frequency will usually have relatively bad representations. | I want to use pre-trained word embeddings in my machine learning model. The word embeddings file I have is about 4GB. I currently read the entire file into memory in a dictionary and whenever I want to map a word to its vector representation I perform a lookup in that dictionary.
The memory usage is very high and I would like to know if there is another way of using word embeddings without loading the entire data into memory.
I have recently come across generators in Python. Could they help me reduce the memory usage?
Thank you! | 0 | 1 | 237 |
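A short sketch of the gensim approach from the answer; the file path is a placeholder, and limit keeps only the first N (most frequent) vectors in memory.

```python
from gensim.models import KeyedVectors

# Load only the 200k most frequent vectors instead of the full 4 GB file
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True, limit=200_000
)

print(vectors["computer"].shape)   # look-ups work as usual, with far less RAM
```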
0 | 55,238,524 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-03-18T22:14:00.000 | 0 | 5 | 0 | Python, how to combine integer matrix to a list | 55,230,862 | 0 | python,numpy | using numpy:
list(np.array(a).flatten()) | say I have a matrix : a = [[1,2,3],[4,5,6],[7,8,9]]. How can I combine it to b = [1,2,3,4,5,6,7,8,9]?
Many thanks | 0 | 1 | 61 |
0 | 55,241,306 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-03-19T12:35:00.000 | -1 | 1 | 0 | can we generate loss curve for mlpregressor with lbfgs solver | 55,241,237 | -0.197375 | python,regression | You can plot model.loss_curve_ on your dataframe, and you're good to go! | is it possible to generate loss curve for MLPregressor with lbfgs solver? it has been specified that it can be generated only for 'adam' solver.
if it can be done, kindly help me in this regard. | 0 | 1 | 286 |
0 | 57,216,578 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-03-19T12:43:00.000 | 0 | 1 | 0 | WinError 193] %1 is not a valid Win32 application | 55,241,360 | 1.2 | pandas,python-3.7 | Another thing might have happened. VS Code automatically searches for numpy and the other packages in predefined OS locations. It might have found a 32-bit version of numpy instead of a 64-bit version.
To fix this, uninstall numpy from all OS locations:
* In the VS Code terminal, type pip uninstall numpy or conda uninstall numpy (if you use Anaconda)
* Restart VS Code
* Voila! (Reinstall numpy if the problem persists) | I'm using Spyder and trying to import pandas as pd and it's giving me the following error:
import pandas as pd
Traceback (most recent call last):
File "", line 1, in
import pandas as pd
File "C:\Users\omer qureshi\AppData\Roaming\Python\Python37\site-packages\pandas__init__.py", line 13, in
import(dependency)
File "C:\Users\omer qureshi\AppData\Roaming\Python\Python37\site-packages\numpy__init__.py", line 142, in
from . import core
File "C:\Users\omer qureshi\AppData\Roaming\Python\Python37\site-packages\numpy\core__init__.py", line 23, in
WinDLL(os.path.abspath(filename))
File "C:\Users\omer qureshi\Anaconda3\lib\ctypes__init__.py", line 356, in init
self._handle = _dlopen(self._name, mode)
OSError: [WinError 193] %1 is not a valid Win32 application
Can someone explain what's wrong? | 0 | 1 | 1,376 |
0 | 55,662,072 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2019-03-19T16:50:00.000 | 0 | 3 | 0 | Remove default formatting in header when converting pandas DataFrame to excel sheet | 55,246,202 | 0 | python,excel,pandas,dataframe,xlsxwriter | The key explanation is that: pandas writes a df's header with set_cell(). A cell format (in xlsxwriter speak, a "format" is a FormatObject that you have to add to the worksheetObject) can NOT be overridden with set_row(). If you are using set_row() to your header row, it will not work, you have to use set_cell(). | This is something that has been answered and re-answered time and time again because the answer keeps changing with updates to pandas. I tried some of the solutions I found here and elsewhere online and none of them have worked for me on the current version of pandas. Does anyone know the current, March 2019, pandas 0.24.2, fix for removing the default styling that a DataFrame gives to its header when converting it to an excel sheet? Simply using xlsxwriter to overwrite the styling does not work because of an issue with precedence. | 0 | 1 | 11,224 |
0 | 55,250,331 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-19T21:08:00.000 | 1 | 1 | 0 | Pyomo warm start | 55,250,019 | 0.197375 | python,cplex,pyomo | Make sure that the solution you give to CPLEX is feasible. Otherwise, CPLEX will reject it and start from scratch.
If your solution is feasible, it is possible that CPLEX simply found a better solution than yours, since, after all, it is CPLEX's job, and in my own experience, CPLEX is very good at it. Is this a maximization problem? If so, in your example, CPLEX found a better solution (objective=60) than yours (objective=16), which is the expected behavior.
Sadly, CPLEX's log is not very explicit about this, so it is hard to know from the solver log whether the warm start was used or not (unlike its competitor GUROBI, where it is clearly written in the log). However, it seems like you started the warm start correctly, using the warmstart=True parameter.
If, however, your problem isn't a maximization problem, it is possible that CPLEX will not make a differentiation between the variables that you gave a value and the variables that still hold a solution from the last solve. Plus, giving values to only a fraction of your variables might make the problem infeasible, considering that all values not manually specified are the values previously found by CPLEX. E.g.: constraint x <= 2y. The solver found x=2, y=1 as a feasible solution. You define x := 3; then your constraint is not respected (y is still = 1 for CPLEX, so the constraint x <= 2y becomes 3 <= 2, which is false). CPLEX will see it as infeasible and will reject your solution.
One alternative that I can give you, if you absolutely want to use your own values in the final solution, is, instead of assigning values to your variables, to create a constraint that explicitly fixes each variable's value. This constraint can afterward be "deactivated" if needed. But be careful, as this does not necessarily yield the optimal solution, but rather the "optimal solution when some variables have the specific value". | I have a MIP to solve with Pyomo and I want to set an initial solution for CPLEX.
So, after some googling, I found that I can set some variables of the instance to a value and then execute this:
solver.solve(instance,warmstart=True,tee=True)
But when I run CPLEX it seems that it doesn't use the warm start, because, for example, I pass a solution with value 16, but in 5 seconds it returns a solution with value 60.
So I don't know whether there is some error or something else that doesn't work.
P.S.
I don't know if it is a problem, but my warm-start solution sets only some variables to a value, not all of them. Could that be a problem? | 0 | 1 | 1,303 |
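A hedged sketch of the two options discussed in this answer, on a tiny made-up MIP (not the asker's model); it assumes a CPLEX installation is reachable by Pyomo.

```python
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeIntegers, maximize, SolverFactory)

# Tiny illustrative MIP: maximize x + y subject to x + 2y <= 14
model = ConcreteModel()
model.x = Var(domain=NonNegativeIntegers)
model.y = Var(domain=NonNegativeIntegers)
model.obj = Objective(expr=model.x + model.y, sense=maximize)
model.c = Constraint(expr=model.x + 2 * model.y <= 14)

solver = SolverFactory("cplex")

# Option 1: warm start -- give every variable a value from a feasible point
model.x.value = 2
model.y.value = 1
results = solver.solve(model, warmstart=True, tee=True)

# Option 2 (the answer's alternative): hard-fix a variable so it cannot change
model.x.fix(2)
results = solver.solve(model, tee=True)
model.x.unfix()   # release it again afterwards if needed
```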
0 | 55,257,099 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-03-20T01:46:00.000 | 0 | 1 | 0 | What are the reasons to use MonitoredTrainingSession vs Estimator in TensorFlow | 55,252,406 | 1.2 | python,tensorflow,machine-learning,tensorflow-estimator | Short answer is that MonitoredTrainingSession allows user to access Graph and Session objects, and training loop, while Estimator hides the details of graphs and sessions from the user, and generally, makes it easier to run training, especially, with train_and_evaluate, if you need to evaluate periodically.
MonitoredTrainingSession differs from a plain tf.Session() in that it handles variable initialization, sets up file writers, and also incorporates functionality for distributed training.
The Estimator API, on the other hand, is a high-level construct just like Keras. It is perhaps used less in examples because it was introduced later. It also allows you to distribute training/evaluation with DistributionStrategy, and it has several canned estimators which allow rapid prototyping.
In terms of model definition they are pretty equal, both allow to use either keras.layers, or define completely custom model from the ground up. So, if, for whatever reason, you need to access graph construction or customize training loop, use MonitoredTrainingSession. If you just want to define model, train it, run validation and prediction without additional complexity and boilerplate code, use Estimator | I see many examples with either MonitoredTrainingSession or tf.Estimator as the training framework. However it's not clear why I would use one over the other. Both are configurable with SessionRunHooks. Both integrate with tf.data.Dataset iterators and can feed training/val datasets. I'm not sure what the benefits of one setup would be. | 0 | 1 | 283 |
0 | 55,256,209 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-03-20T04:25:00.000 | 0 | 1 | 0 | Is Feature Scaling recommended for AutoEncoder? | 55,253,587 | 1.2 | python,neural-network,deep-learning,pytorch,autoencoder | With a few exceptions, you should always apply feature scaling in machine learning, especially when working with gradient descent as in your SAE. Scaling your features will ensure a much smoother cost function and thus faster convergence to global (hopefully) minima.
Also worth noting that your much smaller loss after 1 epoch with scaling should be a result of much smaller values used to compute the loss.
No, it should not impact accuracy during decoding. | Problem:
The Stacked Autoencoder (SAE) is being applied to a dataset with 25K rows and 18 columns, all float values.
SAE is used for feature extraction with encoding & decoding.
When I train the model without feature scaling, the loss is around 50K, even after 200 epochs. But, when scaling is applied the loss is around 3 from the first epoch.
My questions:
Is it recommended to apply feature scaling when SAE is used for feature extraction
Does it impact accuracy during decoding? | 0 | 1 | 535 |
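A small sketch of the scaling step being discussed, fitted on the training split only; scikit-learn's MinMaxScaler is one common choice, and the array shapes are just stand-ins for the asker's data.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_train = np.random.rand(20000, 18) * 1000   # stand-in for the 25K x 18 dataset
X_test = np.random.rand(5000, 18) * 1000

scaler = MinMaxScaler()                           # or StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)    # fit on training data only
X_test_scaled = scaler.transform(X_test)          # reuse the same scaling

# After training the autoencoder on scaled data, reconstructions can be
# mapped back to the original units with scaler.inverse_transform(...)
```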
0 | 55,263,805 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-03-20T11:16:00.000 | 1 | 1 | 0 | Hash computation vs bucket walkthrough | 55,259,486 | 0.197375 | python,algorithm,optimization | You need to benchmark the code in somewhat realistic scenario.
The reason why it's so hard to say is that you are not just comparing division (by the way, modern compilers avoid divisions with a large number of tricks). On modern CPUs you have large caches so likely the list will fit into L2 or L3 which decreases the run-time dramatically. There's also the fancy vector/SIMD instructions that might be used to speed up all the checks in the linear case.
I would guess that going through the list sequentially will be faster, in addition the code will be simpler.
But don't take my word for it, take a real example and benchmark the two versions and pick based on the results. Especially if this is critical for your system's performance. | I have a nested r-tree like datastructure in Python (list of lists). The key is a large number (about 10 digits). On each level there are about x number of items (eg:10) in the list. Then within each list, it recurses and has x items and so on. The height of the tree is h levels (eg: 5). Each level also has an indication of what range of keys it contains (like r-tree).
For a given key, I need to locate the corresponding entry in the tree. This can be trivially done by scanning through each level, check if the given key lies within the range. If so, then step into that layer and recurse till it reaches the leaf.
This can also be done by successively dividing the key by x and taking the quotient as list index.
So the question is: what is more efficient, walking through the list sequentially (complexity = depth * x, e.g. 50) or successively dividing the large number by x to get the actual list indices (complexity = h divisions, e.g. 5 divisions)?
That is, 50 range checks or 5 divisions?
This needs to be scalable. So if this code is being accessed in the cloud by a very large number of users, what is more efficient? Maybe division is more expensive to perform at scale than range checks? | 0 | 1 | 24 |
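The answer recommends benchmarking rather than guessing; here is a rough sketch of how that comparison could look with timeit, using made-up parameters (x = 10 children per level, depth 5) rather than the asker's real structure.

```python
import timeit

x, depth = 10, 5            # branching factor and tree height (made-up)
key = 12_345                # a key inside the root's range [0, x**depth)

def by_division(key):
    """depth integer divisions to get the child index at each level."""
    idxs = []
    for _ in range(depth):
        key, idx = divmod(key, x)
        idxs.append(idx)
    return idxs

def by_scan(key):
    """Walk each level and range-check every child until one matches."""
    lo, hi = 0, x ** depth
    idxs = []
    for _ in range(depth):
        step = (hi - lo) // x
        for i in range(x):                      # up to x range checks per level
            if lo + i * step <= key < lo + (i + 1) * step:
                idxs.append(i)
                lo = lo + i * step
                hi = lo + step
                break
    return idxs

print("divisions:", timeit.timeit(lambda: by_division(key), number=100_000))
print("scans:    ", timeit.timeit(lambda: by_scan(key), number=100_000))
```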
0 | 55,267,001 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2019-03-20T17:13:00.000 | 1 | 5 | 0 | bias and variance calculation for linear regression | 55,266,588 | 0.039979 | python,python-3.x,linear-regression | So in terms of a function to approximate your population, high bias means underfit, high variance overfit. To detect which, partition dataset into training, cross validation and test sets.
A low training error but a high cross-validation error means it's overfit.
A high training error means it's underfit.
High Bias: add polynomial features, get more samples. High Variance: increase regularisation (squeeze polynomial parameters small), or gather more data so it trains better | If we have 4 parameters of X_train, y_train, X_test, and y_test, how can we calculate the bias and variance of a machine learning algorithm like linear regression?
I have searched a lot but I could not find a single code for this. | 0 | 1 | 4,892 |
0 | 62,696,336 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2019-03-20T17:13:00.000 | 0 | 5 | 0 | bias and variance calculation for linear regression | 55,266,588 | 0 | python,python-3.x,linear-regression | In real life, we cannot calculate bias & variance. Recap: Bias measures how much the estimator (can be any machine learning algorithm) is wrong with respect to varying samples, and similarly variance measures how much the estimator fluctuate around the expected value of the estimator. To calculate the bias & variance, we need to generate a number of datasets from some known function by adding noise and train a separate model (estimator) using each dataset. Since we don't know neither the above mentioned known function nor the added noise, we cannot do it. In practise, we can only calculate the overall error. In order to combat with bias/variance dilemma, we do cross-validation. | If we have 4 parameters of X_train, y_train, X_test, and y_test, how can we calculate the bias and variance of a machine learning algorithm like linear regression?
I have searched a lot but I could not find a single code for this. | 0 | 1 | 4,892 |
0 | 61,523,772 | 0 | 0 | 0 | 0 | 3 | false | 0 | 2019-03-20T17:13:00.000 | 0 | 5 | 0 | bias and variance calculation for linear regression | 55,266,588 | 0 | python,python-3.x,linear-regression | Evaluation of Variance:
Variance = np.var(Prediction)  # Prediction is the vector returned by the classifier's predict() function.
SSE = np.mean((np.mean(Prediction) - Y) ** 2)  # Y is the dependent variable; SSE: sum of squared errors.
Bias = SSE - Variance | If we have 4 parameters of X_train, y_train, X_test, and y_test, how can we calculate the bias and variance of a machine learning algorithm like linear regression?
I have searched a lot but I could not find a single code for this. | 0 | 1 | 4,892 |
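There is no single agreed-upon code for this, but one common hedged approach is to approximate bias and variance empirically with bootstrap resampling: train many models on resampled training sets and decompose the test error. The synthetic data below is only for illustration, and the "bias" term also absorbs the irreducible noise.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=300)    # noisy ground truth
X_train, y_train, X_test, y_test = X[:200], y[:200], X[200:], y[200:]

preds = []
for _ in range(200):                       # bootstrap rounds
    idx = rng.randint(0, len(X_train), len(X_train))
    model = LinearRegression().fit(X_train[idx], y_train[idx])
    preds.append(model.predict(X_test))
preds = np.array(preds)                    # shape: (rounds, n_test)

avg_pred = preds.mean(axis=0)
bias_sq = np.mean((avg_pred - y_test) ** 2)    # squared bias (plus noise)
variance = np.mean(preds.var(axis=0))          # spread across resampled fits

print("bias^2 ~", bias_sq, " variance ~", variance)
```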
0 | 55,269,360 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-03-20T19:26:00.000 | 0 | 2 | 0 | Decision Tree Learning | 55,268,748 | 1.2 | python,decision-tree | Let's assume that after some splits you are left with two records with 3 features/attributes (the last column being the truth label)
1 1 1 2
2 2 2 1
Now you are about to select the next best feature to split on, so you call this method remainder(examples, attribute) as part of selection which internally calls nk1, pk1 = pk_nk(1, examples, attribute).
The value returned by pk_nk for the above-mentioned rows and features will be 0, 0, which will result in a divide-by-zero exception for e1 = b(pk1/(pk1 + nk1)). This is a valid scenario based on how you coded the DT, and you should handle that case. | I want to implement the decision-tree learning algorithm.
I am pretty new to coding so I know it's not the best code, but I just want it to work. Unfortunately I get the error: e2 = b(pk2/(pk2 + nk2))
ZeroDivisionError: division by zero
Can someone explain to me what I am doing wrong? | 0 | 1 | 104 |
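One hedged way to handle the case the answer describes is to guard the entropy helper so an empty split (pk + nk == 0) contributes nothing instead of dividing by zero. The names b and split_entropy mirror the helpers mentioned above but are otherwise assumptions about the asker's code.

```python
import math

def b(q):
    """Entropy of a Boolean variable that is true with probability q."""
    if q in (0.0, 1.0):             # log2(0) is undefined; entropy is 0 here
        return 0.0
    return -(q * math.log2(q) + (1 - q) * math.log2(1 - q))

def split_entropy(pk, nk):
    """Entropy contribution of one branch; empty branches contribute nothing."""
    total = pk + nk
    if total == 0:                  # the situation that caused ZeroDivisionError
        return 0.0
    return b(pk / total)

print(split_entropy(0, 0))   # 0.0 instead of ZeroDivisionError
print(split_entropy(3, 1))   # ~0.811
```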
0 | 55,297,814 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-03-21T14:22:00.000 | 0 | 1 | 0 | Training with mixed-precision using the tensorflow estimator api | 55,282,603 | 0 | python,tensorflow,tensorflow-estimator | I found the issue: I used tf.get_variable to store the learning rate. This variable has no gradient. Normal optimizers do not care, but tf.contrib.mixed_precision.LossScaleOptimizer crashes. Therefore, make sure these variables are not added to tf.GraphKeys.TRAINABLE_VARIABLES. | Does anyone have experience with mixed-precision training using the tensorflow estimator api?
I tried casting my inputs to tf.float16 and the results of the network back to tf.float32. For scaling the loss I used tf.contrib.mixed_precision.LossScaleOptimizer.
The error messages I get are relatively uninformative: "Tried to convert 'x' to a tensor and failed. Error: None values not supported", | 0 | 1 | 293 |
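A minimal sketch of the fix described above, in TF 1.x style: create the learning-rate variable with trainable=False so it never ends up in the trainable collection that the loss-scale optimizer differentiates.

```python
import tensorflow as tf

# trainable=False keeps it out of tf.GraphKeys.TRAINABLE_VARIABLES entirely
learning_rate = tf.get_variable(
    "learning_rate", shape=[], dtype=tf.float32,
    initializer=tf.constant_initializer(1e-3), trainable=False
)

print([v.name for v in tf.trainable_variables()])   # learning_rate is not listed
```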
0 | 55,283,054 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-03-21T14:35:00.000 | 0 | 1 | 0 | How to prevent Dataframe.to_dict() to generate timestamps | 55,282,891 | 0 | python,dataframe,dictionary,timestamp | Simply convert those columns to object type with .astype(str) before calling to_dict(). | I am trying to use the python Dataframe to_dict() method without generating timestamps.
My problem: I have a dataframe with cells containing dates such as this: "2019-06-01". When I call the dataframe method "to_dict()" to generate a dictionary, it converts the date value into something like: "Timestamp('2019-06-01 00:00:00')".
This is not what I would like. I would just like it to return a simple string such as: "2019-06-01"
Is it possible with optional parameters ?
Thank you | 0 | 1 | 252 |
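A tiny sketch of the conversion suggested in the answer; the column names are just examples.

```python
import pandas as pd

df = pd.DataFrame({"day": pd.to_datetime(["2019-06-01", "2019-06-02"]),
                   "v": [1, 2]})

# Format the dates as plain strings before exporting, as the answer suggests
# (df["day"].astype(str) is the simpler variant of the same idea).
df["day"] = df["day"].dt.strftime("%Y-%m-%d")
print(df.to_dict(orient="records"))   # dates come out as strings, not Timestamps
```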
0 | 55,286,759 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-03-21T17:23:00.000 | 2 | 1 | 0 | Handling Categorical Data with Many Values in sklearn | 55,285,986 | 1.2 | python,pandas,scikit-learn,categorical-data | org_id does not seem to be a feature that brings any information for the classification; you should drop this column and not pass it into the classifier.
In a classifier you only want to pass features that are discriminative for the task that you are trying to perform: here the elements that can impact the retention or churn. The ID of a company does not bring any valuable information in this context therefore it should not be used.
Edit following OP's comment:
Before going further let's state something: with respect to the number of samples (12000) and the relative simplicity of the model, one can make multiple attempts to try different configurations of features easily.
So, as a baseline, I would do as I said before and drop this feature altogether. This gives you a baseline score, i.e., a score you can compare your other combinations of features against.
I think it costs nothing to try one-hot encoding org_id; whichever result you observe is going to add to your experience and knowledge of how the Random Forest behaves in such cases. As you only have 10 other features, the Boolean features is_org_id_1, is_org_id_2, ... will be highly preponderant and the classification results may be highly influenced by these features.
Then I would try to reduce the number of Boolean features by finding new features that can "describe" these 400+ organizations. For instance, if they are only US organizations, their state which is ~50 features, or their number of users (which would be a single numerical feature), their years of existence (another single numerical feature). Let's note that these are only examples to illustrate the process of creating new features, only someone knowing the full problematic can design these features in a smart way.
Also, I would find interesting that, once you solve your problem, you come back here and write another answer to your question as I believe, many people run into such problems when working with real data :) | I am trying to predict customer retention with a variety of features.
One of these is org_id which represents the organization the customer belongs to. It is currently a float column with numbers ranging from 0.0 to 416.0 and 417 unique values.
I am wondering what the best way of preprocessing this column is before feeding it to a scikit-learn RandomForestClassifier. Generally, I would one-hot-encode categorical features, but there are so many values here so it would radically increase my data dimensionality. I have 12,000 rows of data, so I might be OK though, and only about 10 other features.
The alternatives are to leave the column with float values, convert the float values to int values, or convert the floats to pandas' categorical objects.
Any tips are much appreciated. | 0 | 1 | 257 |
0 | 55,295,961 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-03-21T20:01:00.000 | 0 | 1 | 0 | Python: Iterate through every pixel in an image for image recognition | 55,288,421 | 0 | python-3.x,algorithm,image-recognition | Comparing every pixel with a "pattern" can be done with convolution. You should take a look at the Haar cascade algorithm. | I'm a newbie in image processing and python in general. For an image recognition project, I want to compare every pixel with one another. For that, I need to create a program that iterates through every pixel, takes its value (for example "[28, 78, 72]") and creates some kind of values through comparing it to every other pixel. I did manage to access one single number in an array element/pixel (output: 28) through a bunch of for loops, but I just couldn't figure out how to access every number in every pixel, in every row. Does anyone know a good algorithm to solve my problem? I use OpenCV for reading in the image by the way. | 0 | 1 | 63 |
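To make the answer's "compare every pixel with a pattern via convolution" concrete, here is a hedged OpenCV sketch using template matching, which slides a small patch over the image and scores every position; the file names are placeholders.

```python
import cv2
import numpy as np

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)      # the full image
patch = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)  # the pattern to find

# One score per pixel position: how well the patch matches the neighborhood there
scores = cv2.matchTemplate(img, patch, cv2.TM_CCOEFF_NORMED)

y, x = np.unravel_index(np.argmax(scores), scores.shape)
print("best match at (x, y) =", (x, y), "score =", scores[y, x])
```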
0 | 55,289,028 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-21T20:38:00.000 | 1 | 1 | 0 | numpy.savetxt() rounding values | 55,288,883 | 0.197375 | python,save | You can set the precision by changing the fmt parameter. For example np.savetxt('tmp.txt',a, fmt='%1.3f') would leave you with an output with three decimal places of precision | I'm using numpy.savetxt() to save an array, but it's rounding my values to the first decimal point, which is a problem. Anyone have any clue how to change this? | 0 | 1 | 789 |
0 | 55,294,058 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-03-22T03:06:00.000 | 0 | 1 | 0 | Training SVM in Python with pictures | 55,292,341 | 0 | python,svm | As far as I understood, you want to train your SVM to classify these images into the classes named a, b, c, d, e. For that you can use any good image-processing technique to extract features from your images (such as HOG, which is nicely implemented in OpenCV) and then use these features, together with the label, as the input to your SVM training (the corresponding label would be the name of the folder, i.e. a, b, c, d, e). You train your SVM using the features only, and at inference time you simply compute the HOG features of the image and feed them to your SVM, and it will give you the desired output. | I have basic knowledge of SVM, but now I am working with images. I have images in 5 folders; each folder, for example, has images for letters a, b, c, d, e. The folder 'a' has images of handwritten letters for 'a', folder 'b' has images of handwritten letters for 'b', and so on.
Now how can I use the images as my training data for an SVM in Python? | 0 | 1 | 72 |
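A rough sketch of the pipeline the answer describes: compute a HOG descriptor for each image with OpenCV and train a scikit-learn SVM on the descriptors, using the folder name as the label. The folder layout, file extension, and resize target are assumptions.

```python
import glob, os
import cv2
import numpy as np
from sklearn.svm import SVC

hog = cv2.HOGDescriptor()          # default descriptor expects 64x128 windows

features, labels = [], []
for folder in ["a", "b", "c", "d", "e"]:                 # one folder per letter
    for path in glob.glob(os.path.join(folder, "*.png")):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img, (64, 128))                 # match the HOG window
        features.append(hog.compute(img).ravel())
        labels.append(folder)

clf = SVC(kernel="linear").fit(np.array(features), labels)

# Inference on a new image follows the same resize + HOG steps:
# pred = clf.predict([hog.compute(cv2.resize(new_img, (64, 128))).ravel()])
```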
0 | 55,298,046 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-22T10:44:00.000 | 1 | 1 | 0 | Adding numpy arrays of different shape | 55,297,902 | 0.197375 | python,numpy | With .squeeze() you can convert an (n,1) array into an (n,) array; then adding should work | I would like to add two vectors, one of which is (n,1) and the other (n,) such that the type is (n,)
Just adding them with + gives the type (n,1).
What is the function to convert it to a vector (same type as np.zeros(n))?
Or to compute the sum directly into this format? | 0 | 1 | 65 |
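A tiny sketch of the reshaping the answer refers to; np.ravel() behaves the same way here.

```python
import numpy as np

a = np.zeros((5, 1))   # shape (5, 1)
b = np.ones(5)         # shape (5,)

c = a.squeeze() + b    # squeeze drops the length-1 axis -> result has shape (5,)
print(c.shape)         # (5,)
```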
0 | 55,310,781 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-22T11:09:00.000 | 0 | 1 | 0 | can we find the required string in image using CNN/LSTM? or do we need to apply NLP after extracting text using CNN/LSTM. can someone please clarify? | 55,298,360 | 0 | deep-learning,lstm,python-tesseract | NLP is used to allow the network to try and "understand" text. I think what you want here is to see if a picture contains text. For this, NLP would not be required, since you are not trying to get the network to analyze or understand the text. Instead, this should be more of an object detection type problem.
There are many models that do object detection.
Some off the top of my head are YOLO, R-CNN, and Mask R-CNN. | I'm building a parsing algorithm for images. Tesseract is not giving good accuracy, so I'm thinking of building a CNN+LSTM-based model for image-to-text conversion. Is my approach the right one? Can we extract only the required string directly from the CNN+LSTM model instead of applying NLP? Or do you see any other ways to improve Tesseract accuracy? | 0 | 1 | 39 |
0 | 55,303,353 | 0 | 1 | 0 | 0 | 1 | false | 3 | 2019-03-22T13:48:00.000 | 1 | 1 | 0 | opencv_createsamples command it is not recognized in pycharm | 55,301,045 | 0.197375 | python,opencv,pycharm | I tried this quickly and I got the same message. I also use PyCharm. Are you sure you're using the right version of OpenCV? I'm on 2.4, which is quite old; maybe this is something that was added in a later version. If you can import cv2, it shouldn't be the PYTHONPATH. | I just started fiddling with OpenCV and Python using PyCharm. I followed a tutorial on how to create a Haar Cascade file, but when I reached the step where I had to use the 'opencv_createsamples' command, it returned:
"is not recognized as an internal or external command"
I searched for a solution. Most of them said to add OpenCV to the path in the environment variables, so I downloaded OpenCV, extracted it in the C directory and added it to the path, but it still did not work. Could someone help me? | 0 | 1 | 1,017 |
0 | 67,337,028 | 0 | 0 | 0 | 0 | 1 | false | 203 | 2019-03-23T12:14:00.000 | 2 | 14 | 0 | ImportError: libGL.so.1: cannot open shared object file: No such file or directory | 55,313,610 | 0.028564 | python,ubuntu-14.04 | I had the same issue on CentOS 8 after using pip3 install opencv on a non-GUI server that lacks all sorts of graphics libraries.
dnf install opencv
pulls in all needed dependencies. | I am trying to run cv2, but when I try to import it, I get the following error:
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
The suggested solution online is installing
apt install libgl1-mesa-glx
but this is already installed and the latest version.
NB: I am actually running this on Docker, and I am not able to check the OpenCV version. I tried importing matplotlib and that imports fine. | 0 | 1 | 205,048 |
0 | 71,321,056 | 0 | 0 | 0 | 0 | 1 | false | 203 | 2019-03-23T12:14:00.000 | 0 | 14 | 0 | ImportError: libGL.so.1: cannot open shared object file: No such file or directory | 55,313,610 | 0 | python,ubuntu-14.04 | For me, the problem was related to a proxy setting. For PyPI I was using a Nexus mirror; for opencv nothing worked until I connected to a different network. | I am trying to run cv2, but when I try to import it, I get the following error:
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
The suggested solution online is installing
apt install libgl1-mesa-glx
but this is already installed and the latest version.
NB: I am actually running this on Docker, and I am not able to check the OpenCV version. I tried importing matplotlib and that imports fine. | 0 | 1 | 205,048 |
0 | 55,328,727 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-24T21:05:00.000 | 0 | 1 | 0 | Pandas Data Frame Data Type Recognized as object and not Numeric | 55,328,568 | 0 | python | After digging around some more, I found a better way to debug the data.
pd.to_numeric(model_data['Value2SPY'])
That did the trick, because when it bombed out it told me the offending line item:
ValueError: Unable to parse string "#DIV/0!" at position 241396
The code I was using before, "if not isinstance(val, int):", was just a bad way to debug the data. | I looked at the data and it seemed numeric. I wrote a little loop and it displays values like 84 as not int, or 214.56 as not float. It just seems broken. Do Pandas Data Frames just have a randomness to them?
My data set has this shape:
(622380, 45)
When I isolate the column it still has a problem. But when I shorten the column it seems to be OK.
Is there a length at which the data frame becomes unstable? Can I force the data types? | 0 | 1 | 349 |
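Building on this answer, errors="coerce" turns unparsable entries into NaN, which makes it easy to list every offending value at once instead of stopping at the first one; the column name simply mirrors the one in the answer.

```python
import pandas as pd

s = pd.Series(["84", "214.56", "#DIV/0!", "12"], name="Value2SPY")

numeric = pd.to_numeric(s, errors="coerce")   # bad values become NaN
bad_rows = s[numeric.isna()]                  # the original offending strings
print(bad_rows)                               # shows the "#DIV/0!" entry and its index
```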
0 | 55,332,549 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-25T03:15:00.000 | 0 | 1 | 0 | how to drop multiple (~5000) columns in the pandas dataframe? | 55,330,844 | 0 | python-3.x,pandas,dataframe | Let us assume your DataFrame is named df and you have a list cols of the column indices you want to retain. Then you should use:
df1 = df.iloc[:, cols]
This statement will drop all the columns other than the ones whose indices have been specified in cols. Use df1 as your new DataFrame. | I have a dataframe with 5632 columns, and I only want to keep 500 of them. I have the column names (that I want to keep) in a dataframe as well, with the names as the row index. Is there any way to do this? | 0 | 1 | 84 |
0 | 72,260,042 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2019-03-25T13:31:00.000 | 0 | 4 | 0 | OSError: [E050] Can't find model 'fr_core_web_md'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory | 55,338,972 | 0 | python,spacy | First install the package, then import it (not the other way around):
first : !python3 -m spacy download fr_core_news_md
then : nlp = spacy.load("fr_core_news_md") | I am working on an NLP project, so I am using spaCy. The problem is that when I run nlp = spacy.load('fr_core_news_md'), it doesn't work for me and I get this error:
OSError: [E050] Can't find model 'fr_core_news_md'. It doesn't seem to
be a shortcut link, a Python package or a valid path to a data
directory."
Despite the use of python -m spacy download fr_core_news_md | 1 | 1 | 7,779 |
0 | 69,374,319 | 0 | 0 | 0 | 0 | 2 | false | 3 | 2019-03-25T13:31:00.000 | 1 | 4 | 0 | OSError: [E050] Can't find model 'fr_core_web_md'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory | 55,338,972 | 0.049958 | python,spacy | It is worth mentioning a bug I ran into recently.
I installed fr_core_news_md, but then I tried to load fr_core_news_sm.
It was around 2:00 AM and I wasn't able to spot the mismatch.
I slept, came back in the morning, and found the solution. | I am working on an NLP project, so I am using spaCy. The problem is that when I run nlp = spacy.load('fr_core_news_md'), it doesn't work for me and I get this error:
OSError: [E050] Can't find model 'fr_core_news_md'. It doesn't seem to
be a shortcut link, a Python package or a valid path to a data
directory."
Despite the use of python -m spacy download fr_core_news_md | 1 | 1 | 7,779 |
0 | 55,343,775 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-03-25T17:42:00.000 | 0 | 1 | 0 | Count Name list (mixed with Number) | 55,343,651 | 0 | python | Courtesy of @JohnGordon:
Use if val == 'Jacob Lee' or val.startswith('Jacob Lee ') or val == '30220' or val.startswith('30220 '): | I'm trying to count the Customer's Name in my data.
For example, if there are ["Jacob Lee", "Jacob Lee 30220", "30220"] in the column, I want to count these cases as the same person, because 30220 is Jacob Lee's account number.
I'm not sure how to code this function.
FYI: I'm using python 3. | 0 | 1 | 53 |
0 | 55,377,993 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-03-26T13:57:00.000 | 0 | 1 | 0 | Update trained object detection Models to correspond to TF updates | 55,358,952 | 1.2 | python,tensorflow,object-detection-api | After doing some looking, the graph has to be updated. Since I no longer had the training checkpoints, I was able to update the graph by exporting from the previously frozen graph, using it as the checkpoint.
python3 export_inference_graph.py --input_type image_tensor --pipeline_config_path FROZENGRAPHDIRECTORY/pipeline.config --trained_checkpoint_prefix FROZENGRAPHDIRECTORY/model.ckpt --output_directory FROZENGRAPHDIRECTORY_tfNEWTFVERSION | I am transitioning to new version of TF for stability reasons (I was using a nightly docker build on Ubuntu 18.04 from before mainline switched to CUDA 10). When I attempt to run my models in the new version I get the following error, which I assume to mean that there is an incompatibility with the models trained on the older version.
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py", line 426, in import_graph_def
graph._c_graph, serialized, options) # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef mentions attr 'explicit_paddings' not in Op<name=Conv2D; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_HALF, DT_BFLOAT16, DT_FLOAT, DT_DOUBLE]; attr=strides:list(int); attr=use_cudnn_on_gpu:bool,default=true; attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]; attr=dilations:list(int),default=[1, 1, 1, 1]>; NodeDef: {{node FirstStageFeatureExtractor/resnet_v1_101/resnet_v1_101/conv1/Conv2D}}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
What do I need to do to update the previously trained models to work with the new version of TF or do I need to continue running that version until my next training session? | 0 | 1 | 1,299 |
0 | 58,124,093 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-26T16:48:00.000 | 0 | 1 | 0 | Highest Polarity Score (Sentiment Analysis) using the TextBlob library | 55,362,335 | 0 | python,textblob | You can use .sentiment_assessments to get some more idea of how your sentence is being evaluated.
Sentiment(polarity=0.6, subjectivity=0.6000000000000001, assessments=[(['really', 'really', 'really', 'love'], 0.5, 0.6, None), (['good'], 0.7, 0.6000000000000001, None)]) | I've started to use the TextBlob library; for sentiment analysis.
I have run a few tests on a few phrases and I have the polarity and subjectivity score - fine.
What sentence would return the highest polarity value within TextBlob?
For instance
"I really, really, really love and admire your beauty, my good friend"
returns a polarity score of 0.6.
I understand that +1.0 is the highest score (-1.0) is the least.
What sentence, have you found which returns a score closer to +1.0?
TextBlob("I really, really, really love and admire your beauty my good friend").sentiment
Sentiment(polarity=0.6, subjectivity=0.6000000000000001)
TextBlob("I really, really, really love my place of work").sentiment
Sentiment(polarity=0.5, subjectivity=0.6)
TextBlob("I really love my place of work").sentiment
Sentiment(polarity=0.5, subjectivity=0.6)
I expect that the "really" should increase the sentiment score, at least a bit. (i.e. really, really like = at least 0.9)
I expect that the score overall, without the really (I really like my work) should return a score closer to 1.0. | 0 | 1 | 247 |
0 | 55,368,146 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-03-26T23:03:00.000 | 0 | 1 | 0 | Tower of colored cubes | 55,367,429 | 0 | python,artificial-intelligence,evolutionary-algorithm | First, I'm not sure how you get 12 rotations; I get 24: 4 orientations with each of the 6 faces on the bottom. Use a standard D6 (6-sided die) and see how many different layouts you get.
Apparently, the first thing you need to build is something (a class?) that accurately represents a cube in any of the available orientations. I suggest that you use a simple structure that can return the four faces in order -- say, front-right-back-left -- given a cube and the rotation number.
I think you can effectively represent a cube as three pairs of opposing sides. Once you've represented that opposition, the remaining organization is arbitrary numbering: any valid choice is isomorphic to any other. Each rotation will produce an interleaved sequence of two opposing pairs. For instance, a standard D6 has opposing pairs [(1, 6), (2, 5), (3, 4)]. The first 8 rotations would put 1 and 6 on the hidden faces (top and bottom), giving you the sequence 2354 in each of its 4 rotations and their reverses.
That class is one large subsystem of your problem; the other, the genetic algorithm, you seem to have well in hand. Stack all of your cubes randomly; "fitness" is a count of the most prevalent 4-show (sequence of 4 sides) in the stack. At the start, this will generally be 1, as nothing will match.
From there, you seem to have an appropriate handle on mutation. You might give a higher chance of mutating a non-matching cube, or perhaps see if some cube is a half-match: two opposite faces match the "best fit" 4-show, so you merely rotate it along that axis, preserving those two faces, and swapping the other pair for the top-bottom pair (note: two directions to do that).
Does that get you moving? | Consider a set of n cubes with colored facets (each one with a specific color
out of 4 possible ones - red, blue, green and yellow). Form the highest possible tower of k cubes ( k ≤ n ) properly rotated (12 positions of a cube), so the lateral faces of the tower will have the same color, using an evolutionary algorithm.
What I did so far:
I thought that the following representation would be suitable: an Individual could be an array of n integers, each number having a value between 1 and 12, indicating the current position of the cube (an input file contains n lines, each line shows information about the color of each face of the cube).
Then, the Population consists of multiple Individuals.
The Crossover method should create a new child(Individual), containing information from its parents (approximately half from each parent).
Now, my biggest issue is related to the Mutate and Fitness methods.
In the Mutate method, with some probability of mutation (say 0.01), I should change the position of a random cube to another random position (for example, the third cube can have its position (rotation) changed from 5 to 12).
In Fitness method, I thought that I could compare, two by two, the cubes from an Individual, to see if they have common faces. If they have a common face, a "count" variable will be incremented with the number of common faces and if all the 4 lateral faces will be the same for these 2 cubes, the count will increase with another number of points. After comparing all the adjacent cubes, the count variable is returned. Our goal is to obtain as many adjacent cubes having the same lateral faces as we can, i.e. to maximize the Fitness method.
My question is the following:
How can a rotation be implemented? I mean, if a cube changes its position (rotation) from 3 to 10, how do we know the new arrangement of the faces? Or, if I perform a mutation on a cube, what is the process of rotating this cube if a random rotation number is selected?
I think that I should create a vector of 6 elements (the colors of each face) for each cube, but when the rotation value of a cube is modified, I don't know in what manner the elements of its vector of faces should be rearranged.
Shuffling them is not correct, because by doing this, two opposite faces could become adjacent, meaning that the vector doesn't represent that particular cube anymore (obviously, two opposite faces cannot be adjacent). | 0 | 1 | 227 |
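To make the answer's representation concrete, here is a hedged sketch that stores a cube as three opposing face pairs and enumerates the lateral (front, right, back, left) sequence for each of the 24 orientations: choose which pair is top/bottom (3 choices), which of its two faces is up (2), and the rotation about the vertical axis (4). Face labels are arbitrary color strings.

```python
def lateral_sequences(pairs):
    """pairs: three opposing face pairs, e.g. [("r", "g"), ("b", "y"), ("r", "b")].
    Yields the 24 lateral face sequences (front, right, back, left)."""
    for up in range(3):                              # which pair is top/bottom
        a, b = [p for i, p in enumerate(pairs) if i != up]
        for cycle in ([a[0], b[0], a[1], b[1]],      # one face of that pair on top...
                      [a[0], b[1], a[1], b[0]]):     # ...or the cube flipped over
            for r in range(4):                       # rotation about the vertical axis
                yield tuple(cycle[r:] + cycle[:r])

cube = [("red", "green"), ("blue", "yellow"), ("red", "blue")]
orientations = list(lateral_sequences(cube))
print(len(orientations))      # 24
print(orientations[0])        # ('blue', 'red', 'yellow', 'blue')
```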
0 | 55,368,249 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-03-27T00:15:00.000 | 1 | 1 | 0 | Can I get [0 0] from Categorical Labeling in CNN's? | 55,367,994 | 1.2 | python,keras,conv-neural-network | That should not be possible. Your "garbage" would be a third class, requiring labels of [1 0 0], [0 1 0], and [0 0 1].
Very simply, the model you've described will return one of two categories, whichever has a higher rating in your final layer. This happens whether the input values are 0.501 and 0.499, or 0.011 and 0.010 with a large "not sure" portion. If you don't explicitly code "not sure" into your model, then that portion of the decision will not be considered in the classification. | From what I understand from keras labeling, one hot encoding does not permit the values to be [0 0]? is this assumption correct?
We are trying to classify 2 classes and we want to be able to detect garbage when a garbage image is fed. However, it always detects either
[0 1] or [1 0]. Is it possible to get [0 0] as a label without introducing a class the will handle the garbage or no?
So basically, can the CNN predict it to be something else if its not the 2 classes? | 0 | 1 | 74 |
0 | 55,376,254 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-03-27T09:49:00.000 | 0 | 3 | 0 | What's the difference between shape(150,) and shape (150,1)? | 55,374,185 | 0 | python,numpy | Although they both occupy same space and positions in memory,
I think they are the same, I mean they both represent a column vector.
No they are not and certainly not according to NumPy (ndarrays).
The main difference is that the
shape (150,) => is a 1D array, whereas
shape (150,1) => is a 2D array | What's the difference between shape(150,) and shape (150,1)?
I think they are the same, I mean they both represent a column vector. | 0 | 1 | 331 |
0 | 55,629,323 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-03-28T13:20:00.000 | 1 | 1 | 0 | Feed complex64 data into Keras sequential model | 55,398,697 | 1.2 | python,keras,sequential | Adding an InputLayer(... dtype='complex64') layer, i.e. an InputLayer() with data type specified as 'complex64' as the first layer of the sequential model allowed me to pass complex64 data to the model. | I am working in training a CNN in fourier domain. To speed up training, I thought of taking the fft of the entire dataset before training and feeding this data to the sequential model. But inside the first layer of the model, which is a custom Keras layer, the training data is shown to have float32 data type. Does the sequential model take in only real input data?
Thanks. | 0 | 1 | 127 |
0 | 55,557,061 | 0 | 0 | 1 | 0 | 1 | true | 1 | 2019-03-28T15:05:00.000 | 1 | 1 | 0 | Finding any feasible flow in graph as fast as possible | 55,400,911 | 1.2 | python,graph,network-flow | So I finally got time to sum this up. The solution I used is to take the initial graph and transform it in these steps.
(Weights are in this order: lower bound, current flow, upper bound.)
1. Connect t to s with an edge of bounds (0, 0, infinity).
2. To each node of the initial graph, attach a balance value equal to (sum of lower bounds of incoming edges - sum of lower bounds of outgoing edges).
3. Set the upper bound of every edge to (upper bound - lower bound). Set the lower bound and current flow of each edge to 0.
4. Now make a new s (s') and a new t (t'), which will be our new start and end (DO NOT delete the s and t already in the graph; they just became normal nodes).
5. Create an edge from s' to every vertex with positive balance, with bounds (0, 0, vertex.balance).
6. Create an edge from every vertex with negative balance to t', with bounds (0, 0, abs(vertex.balance)).
7. Run Ford-Fulkerson (or another maximum-flow algorithm of your choice) on the new graph.
8. For every edge of the initial graph, add the resulting flow on that edge to its original lower bound from before the transformation, and you have the initial flows for every edge of the initial graph.
This problem is actually a bit harder than maximizing flow when a feasible flow is already provided. | I have a flow graph with lower and upper bounds and my task is to find any feasible solution as fast as possible. I found many algorithms and approaches to maximum/minimum flow and so on (many of them use a feasible solution as a starting point) but nothing specific for finding just any feasible solution. Is there any algorithm/approach that is specific for this and fast? | 0 | 1 | 861 |
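A hedged sketch of the transformation above using networkx; the toy edge bounds, the s2/t2 names for s'/t', and the use of nx.maximum_flow in place of a hand-written Ford-Fulkerson are all illustrative choices.

```python
import networkx as nx

# toy instance: edge (u, v) -> (lower bound, upper bound)
bounds = {("s", "a"): (1, 4), ("a", "t"): (1, 4), ("s", "t"): (0, 2)}

G = nx.DiGraph()
balance = {}
for (u, v), (lo, hi) in bounds.items():
    G.add_edge(u, v, capacity=hi - lo)          # step 3: shifted capacities
    balance[u] = balance.get(u, 0) - lo         # step 2: node balances
    balance[v] = balance.get(v, 0) + lo
G.add_edge("t", "s")                            # step 1: no capacity attr = infinite

for node, bal in balance.items():               # steps 5 and 6
    if bal > 0:
        G.add_edge("s2", node, capacity=bal)
    elif bal < 0:
        G.add_edge(node, "t2", capacity=-bal)

value, flow = nx.maximum_flow(G, "s2", "t2")    # step 7
assert value == sum(b for b in balance.values() if b > 0), "no feasible flow exists"

# step 8: add the lower bounds back to recover a feasible flow on the original edges
feasible = {e: flow[e[0]][e[1]] + lo for e, (lo, hi) in bounds.items()}
print(feasible)
```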
0 | 55,402,030 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-03-28T15:43:00.000 | 1 | 1 | 0 | How does image resolution affect result and accuracy in Keras? | 55,401,716 | 1.2 | python-2.7,tensorflow,keras | 1- Of course it will affect the training speed as the spatial dimensions is one of the most important key of the model speed performance.
2- We can say sure it'll affect the accuracy, but how much exactly that depends on many of other aspects like what objects are you classifying and what dataset are you working with. | I'm using Keras (with Tensorflow backend) for an image classification project. I have a total of almost 40 000 hi-resolution (1920x1080) images that I use as training input data. Training takes about 45 minutes and this is becoming a problem so I was thinking that I might be able to speed things up by lowering the resolution of the image files. Looking at the code (I didn't write it myself) it seems all images are re-sized to 30x30 pixels anyway before processing
I have two general questions about this.
Is it reasonable to expect this to improve the training speed?
Would resizing the input image files affect the accuracy of the image classification? | 0 | 1 | 404 |
0 | 55,794,331 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-03-28T15:58:00.000 | 1 | 1 | 0 | What is TargetEncoder and BinaryEncoder in sklearn category_encoders? | 55,402,010 | 0.197375 | python,python-3.x,scikit-learn,categorical-data | Target encoding maps the categorical variable to the mean of the target variable. As it uses the target, steps must be taken to avoid overfitting (usually done with smoothing).
Binary encoding converts each integer into binary digits with each binary digit having its one column. It is essentially a form of feature hashing.
Both help with lowering the cardinality of categorical variables which helps improve some model performance, most notably with tree-based models. | I've been looking for a way to vectorize categorical variable and then I've come across category_encoders. It supports multiple ways to categorize.
I tried TargetEncoder and BinaryEncoder but the docs doesn't explain much about the working of it?
I really appreciate if anyone could explain how target encoder and binary encoder work and how they are different from one hot encoding? | 0 | 1 | 754 |
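A small hedged sketch of the two encoders from category_encoders on toy data; the column and target values are made up.

```python
import pandas as pd
import category_encoders as ce

X = pd.DataFrame({"city": ["tokyo", "paris", "paris", "oslo", "tokyo", "oslo"]})
y = pd.Series([1, 0, 1, 0, 1, 1])

# Target encoding: each category becomes (a smoothed version of) the mean of y
print(ce.TargetEncoder(cols=["city"]).fit_transform(X, y))

# Binary encoding: categories -> integers -> binary digits, one column per bit
print(ce.BinaryEncoder(cols=["city"]).fit_transform(X))
```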
0 | 55,404,739 | 0 | 0 | 1 | 0 | 1 | false | 1 | 2019-03-28T17:45:00.000 | 1 | 3 | 0 | Concepts to measure text "relevancy" to a subject? | 55,403,920 | 0.066568 | python,machine-learning,nlp,data-science | There are many many ways to do this, and the best method changes depending on the project. Perhaps the easiest way to do this is to keyword search in your articles and then empirically choose a cut off score. Although simple, this actually works pretty well, especially in a topic like this one where you can think of a small list of words that are highly likely to appear somewhere in a relevant article.
When a topic is more broad with something like 'business' or 'sports', keyword search can be prohibitive and lacking. This is when a machine learning approach might start to become the better idea. If machine learning is the way you want to go, then there are two steps:
Embed your articles into feature vectors
Train your model
Step 1 can be something simple like a TFIDF vector. However, embedding your documents can also be deep learning on its own. This is where CBOW and Skip-Gram come into play. A popular way to do this is Doc2Vec (PV-DM). A fine implementation is in the Python Gensim library. Modern and more complicated character, word, and document embeddings are much more of a challenge to start with, but are very rewarding. Examples of these are ELMo embeddings or BERT.
Step 2 can be a typical model, as it is now just binary classification. You can try a multilayer neural network, either fully-connected or convolutional, or you can try simpler things like logistic regression or Naive Bayes.
My personal suggestion would be to stick with TFIDF vectors and Naive Bayes. From experience, I can say that this works very well, is by far the easiest to implement, and can even outperform approaches like CBOW or Doc2Vec depending on your data. | I do side work writing/improving a research project web application for some political scientists. This application collects articles pertaining to the U.S. Supreme Court and runs analysis on them, and after nearly a year and half, we have a database of around 10,000 articles (and growing) to work with.
One of the primary challenges of the project is being able to determine the "relevancy" of an article - that is, the primary focus is the federal U.S. Supreme Court (and/or its justices), and not a local or foreign supreme court. Since its inception, the way we've addressed it is to primarily parse the title for various explicit references to the federal court, as well as to verify that "supreme" and "court" are keywords collected from the article text. Basic and sloppy, but it actually works fairly well. That being said, irrelevant articles can find their way into the database - usually ones with headlines that don't explicitly mention a state or foreign country (the Indian Supreme Court is the usual offender).
I've reached a point in development where I can focus on this aspect of the project more, but I'm not quite sure where to start. All I know is that I'm looking for a method of analyzing article text to determine its relevance to the federal court, and nothing else. I imagine this will entail some machine learning, but I've basically got no experience in the field. I've done a little reading into things like tf-idf weighting, vector space modeling, and word2vec (+ CBOW and Skip-Gram models), but I'm not quite seeing a "big picture" yet that shows me how just how applicable these concepts can be to my problem. Can anyone point me in the right direction? | 0 | 1 | 336 |
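Following the answer's closing suggestion, a minimal scikit-learn sketch of a TF-IDF + Naive Bayes relevance classifier; the example articles and labels are fabricated placeholders, not project data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

articles = [
    "The U.S. Supreme Court issued a major ruling on federal law today.",
    "The Indian Supreme Court heard arguments in a land dispute.",
    "Justices of the federal Supreme Court questioned both attorneys.",
    "The state supreme court overturned a local ordinance.",
]
relevant = [1, 0, 1, 0]          # 1 = about the federal U.S. Supreme Court

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), stop_words="english"),
    MultinomialNB(),
)
clf.fit(articles, relevant)
print(clf.predict(["The federal Supreme Court justices ruled on the appeal."]))
print(clf.predict_proba(["The supreme court of India dismissed the petition."]))
```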
0 | 55,405,781 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-03-28T19:44:00.000 | 1 | 1 | 0 | Sometimes it is necessary to show my dataframe to properly ask the question. How can I do that? | 55,405,730 | 0.197375 | python,pandas,dataframe | You could either provide the code to generate sample data, or you could do print(df) and paste the result in code format as a part of your question. For us it is possible to copy a dataframe as text and load it into a proper dataframe. Usually you can provide fewer than 20 rows of sample data and that should be enough to replicate the desired output. | I need to ask a question related to a DataFrame. I tried to add screenshots before but I got -3 reputation and it says I am not allowed to upload the image. What is the best way then? I am new to Stack Overflow. Please help. | 0 | 1 | 531
0 | 55,413,219 | 0 | 0 | 0 | 0 | 1 | false | 8 | 2019-03-29T06:50:00.000 | 5 | 2 | 0 | Is there a pytorch method to check the number of cpus? | 55,411,921 | 0.462117 | python,neural-network,deep-learning,pytorch | At present pytorch doesn't support multiple cpu cluster in DistributedDataParallel implementation. So, I am assuming you mean number of cpu cores.
There's no direct equivalent for the gpu count method but you can get the number of threads which are available for computation in pytorch by using
torch.get_num_threads() | I can use this torch.cuda.device_count() to check the number of GPUs. I was wondering if there was something equivalent to check the number of CPUs. | 0 | 1 | 6,996 |
0 | 55,742,886 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-03-29T08:44:00.000 | 0 | 1 | 0 | Installing tensorflow-gpu on a Laptop with two graphic cards | 55,413,488 | 1.2 | python,tensorflow,installation | At what phase are you getting the "Download wasn't..." message? did
you try manually downloading the wheel file and installing it directly
and locally? – Ido_f
Downloading the local CUDA installation solved my issues. | A lot of people have issues installing tensorflow-gpu on their computers and I have read a lot of them and tried out a lot of them as well. So I'm not coming for an easy answer without searching the web beforehand.
I'm running W10 with an NVIDIA Quadro P600 which can supposedly run CUDA.
The thing is whenever I'm trying to install CUDA (10.0 as suggested from tensorflow) the installation breaks with no clear indication of the error ("The download wasn't successfully completed. Try again")
I have the feeling it breaks because my Laptop has two GPU's. The mentioned NVIDIA and the onboard Intel UHD Graphics 630 card.
Does anyone have a clue? Please share your workflow if you have installed tensorflow-gpu on your laptop! | 0 | 1 | 235 |
0 | 55,415,405 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-03-29T10:17:00.000 | 0 | 3 | 0 | Number of 2D array to 3D array, extending third dimension | 55,415,158 | 0 | python,arrays,numpy,append | Try creating a new array that you fill with your 2D arrays
new3DArray = numpy.empty((10, 60, 100)) | I have 10 different matrices of size (60, 100). I want to put them along the third dimension inside a for loop, so that the final shape is (10, 60, 100).
I tried with concatenate and ended up with size (600, 100). | 0 | 1 | 402
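A short sketch of the filling approach from the answer plus np.stack, which adds the new leading axis directly (np.concatenate joins along an existing axis, which is why it produced (600, 100)); the random matrices are placeholders.

```python
import numpy as np

matrices = [np.random.rand(60, 100) for _ in range(10)]   # ten (60, 100) arrays

stacked = np.stack(matrices, axis=0)
print(stacked.shape)              # (10, 60, 100)

# loop-and-fill version of the answer's numpy.empty idea
out = np.empty((10, 60, 100))
for i, m in enumerate(matrices):
    out[i] = m
print(out.shape)                  # (10, 60, 100)
```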
1 | 56,324,221 | 0 | 0 | 0 | 0 | 3 | false | 4 | 2019-03-30T18:02:00.000 | 2 | 8 | 0 | Matplotlib with Pydroid 3 on Android: how to see graph? | 55,434,255 | 0.049958 | android,python,matplotlib,pydroid | I also had this problem a while back, and managed to fix it by using plt.show()
at the end of your code. With matplotlib.pyplot as plt. | I'm currently using an Android device (of Samsung), Pydroid 3.
I tried to see some graphs, but it doesn't work.
When I run the code, it just shows me a black, blank screen temporarily and then goes back to the source code editing window.
(This means that I can't even see the terminal screen, which always showed me [Program Finished].)
Well, even the basic sample code which Pydroid gives me doesn't show me the graph :(
I've seen many tutorials which successfully showed graphs, but well, mine can't do those things.
Unfortunately, I cannot grab any errors.
I'm using the same code which worked on Windows, so I don't think the code has a problem.
Of course, matplotlib is installed, and numpy is also installed.
If there are any possible problems, please let me know. | 0 | 1 | 8,119
1 | 60,702,515 | 0 | 0 | 0 | 0 | 3 | true | 4 | 2019-03-30T18:02:00.000 | 0 | 8 | 0 | Matplotlib with Pydroid 3 on Android: how to see graph? | 55,434,255 | 1.2 | android,python,matplotlib,pydroid | After reinstalling it worked.
The problem was that I forced Pydroid to update matplotlib via Terminal, not the official PIP tab.
The version of matplotlib was too high for Pydroid. | I'm currently using an Android device (of Samsung), Pydroid 3.
I tried to see some graphs, but it doesn't work.
When I run the code, it just shows me a black, blank screen temporarily and then goes back to the source code editing window.
(This means that I can't even see the terminal screen, which always showed me [Program Finished].)
Well, even the basic sample code which Pydroid gives me doesn't show me the graph :(
I've seen many tutorials which successfully showed graphs, but well, mine can't do those things.
Unfortunately, I cannot grab any errors.
I'm using the same code which worked on Windows, so I don't think the code has a problem.
Of course, matplotlib is installed, and numpy is also installed.
If there are any possible problems, please let me know. | 0 | 1 | 8,119
1 | 66,386,763 | 0 | 0 | 0 | 0 | 3 | false | 4 | 2019-03-30T18:02:00.000 | 0 | 8 | 0 | Matplotlib with Pydroid 3 on Android: how to see graph? | 55,434,255 | 0 | android,python,matplotlib,pydroid | You just need to add a line
plt.show()
Then it will work. You can also save the file before showing
plt.savefig("*imageName*.png") | I'm currently using an Android device (of Samsung), Pydroid 3.
I tried to see some graphs, but it doesn't work.
When I run the code, it just shows me a black, blank screen temporarily and then goes back to the source code editing window.
(This means that I can't even see the terminal screen, which always showed me [Program Finished].)
Well, even the basic sample code which Pydroid gives me doesn't show me the graph :(
I've seen many tutorials which successfully showed graphs, but well, mine can't do those things.
Unfortunately, I cannot grab any errors.
I'm using the same code which worked on Windows, so I don't think the code has a problem.
Of course, matplotlib is installed, and numpy is also installed.
If there are any possible problems, please let me know. | 0 | 1 | 8,119
0 | 55,441,333 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-03-31T12:57:00.000 | 0 | 2 | 0 | My trained image classifer model classify all the images that are not even in the category | 55,441,123 | 0 | java,android,python-3.x,tensorflow | What about checking the probability score? Even though a cup is classified as Dog, it will have a lower score, so you can set a threshold. If the probability score > threshold value, then it will be displayed as an animal; otherwise not. | I have already trained a model to recognise animals and it is working, deployed into an Android application. I'm looking for a solution to make the image classifier classify only the trained categories. I'm not sure whether to do this through the model training or whether any code should be added to solve this.
For example, if a picture of a cup is sent for classification, the result shows up as Dog or some other animal name. How do I make it classify only the given categories, and for anything other than that show it as "not an animal"?
I'm using Tensorflow 1.12, MobileNet Model | 0 | 1 | 115
0 | 55,451,075 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-01T08:34:00.000 | -1 | 2 | 0 | Do I need a test-train split for K-means clustering even if I'm not looking to predict anything? | 55,450,949 | -0.099668 | python-3.x,machine-learning,cluster-analysis,k-means | No, in clustering (i.e unsupervised learning ) you do not need to split the data | I have a set of 2000 points which are basically x,y coordinates of pass origins from association football. I want to run a k-means clustering algorithm on it to just classify it to get which 10 passes are the most common (k=10). However, I don't want to predict any points for future values. I simply want to work with the existing data. Do I still need to split it into testing-training sets? I assume they're only done when we want to train the model on a particular set to calculate for future values (?)
I'm new to clustering (and Python as a whole) so any help would be appreciated. | 0 | 1 | 8,449 |
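A hedged sketch of clustering all 2,000 points without a train/test split, as the answer says; the pass coordinates here are random stand-ins for the real data.

```python
import numpy as np
from sklearn.cluster import KMeans

passes = np.random.rand(2000, 2) * [105, 68]    # fake (x, y) pitch coordinates

km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(passes)
print(km.cluster_centers_)                      # the 10 most common pass-origin regions
print(np.bincount(km.labels_))                  # how many passes fall in each cluster
```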
0 | 55,471,123 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-01T20:01:00.000 | 1 | 2 | 0 | converting an array of size (n,n,m) to (None,n,n,m) | 55,462,718 | 0.099668 | python,arrays,numpy,conv-neural-network,reshape | The shape (None, 14, 14, 3) represents (batch_size, imgH, imgW, imgChannel); imgH and imgW can be used interchangeably, depending on the network and the problem.
The batch size is given as "None" in the neural network because we don't want to restrict our batch size to some specific value, as the batch size depends on a lot of factors, like the memory available for our model to run, etc.
So let's say you have 4 images of size 14x14x3; you can append each image into an array, say L1, and now L1 will have the shape 4x14x14x3, i.e. you made a batch of 4 images, and now you can feed this to your neural network.
NOTE: here None will be replaced by 4, and for the whole training process it will be 4. Similarly, when you feed your network only one image, it assumes a batch size of 1 and sets None equal to 1, giving you the shape (1x14x14x3). | I am trying to reshape an array of size (14,14,3) to (None, 14,14,3). I have seen that the output of each layer in a convolutional neural network has a shape in the format (None, n, n, m).
Consider that the name of my array is arr
I tried arr[None,:,:] but it converts it to a dimension of (1,14,14,3).
How should I do it? | 0 | 1 | 163 |
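A small numpy sketch of what the answer describes: the None slot is just the batch axis, so a single (14, 14, 3) array becomes (1, 14, 14, 3), and several arrays stacked together become (k, 14, 14, 3).

```python
import numpy as np

arr = np.random.rand(14, 14, 3)

single = arr[np.newaxis, ...]          # same as np.expand_dims(arr, axis=0)
print(single.shape)                    # (1, 14, 14, 3)

batch = np.stack([arr, arr, arr, arr], axis=0)
print(batch.shape)                     # (4, 14, 14, 3) -- the "None" becomes 4
```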
0 | 55,472,980 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-04-02T08:12:00.000 | 1 | 1 | 0 | I need to remove multi co-linearity between features | 55,469,874 | 0.197375 | python,machine-learning | You don't need to transform the data. Instead, you can change the way that you are calculating the correlation between variables. As these are categorical features, you have to use the Chi-Squared test of independence. Then you won't be facing this issue. | I have categorical variables such as Gender, Anxiety and Alcoholic, and when I convert these categorical variables into numerical values using encoder techniques, all these variables end up looking the same in value and then multicollinearity exists. How can I convert these variables to numbers so that multicollinearity doesn't exist? All three variables are important for the prediction of the target variable. | 0 | 1 | 35
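A brief sketch of the chi-squared test of independence the answer recommends, with made-up Gender/Anxiety columns; scipy and pandas are assumed available.

```python
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.DataFrame({
    "Gender":  ["M", "F", "F", "M", "F", "M", "F", "M"],
    "Anxiety": ["Y", "Y", "N", "N", "Y", "N", "N", "Y"],
})

table = pd.crosstab(df["Gender"], df["Anxiety"])    # contingency table of counts
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.3f}, p-value={p:.3f}")          # large p -> no evidence of association
```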
0 | 55,488,118 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-02T09:36:00.000 | 0 | 2 | 0 | Code conversion from python 2 to python 3 | 55,471,459 | 0 | python-2.7,pytorch | The problem is (or can be) that a CPU object is expected but a GPU object is given. Try moving the object to the CPU:
mask.cpu() | I'm setting up a new algorithm which combines an object detector (bounding box detector) which is in Python 3 and a mask generator which is in Python 2. The problem here is that I have several Python 2 files which are required for the mask generation algorithm. So I tried 2to3 to convert all my Python 2 files to Python 3. The script seemed to be working, but as it was a deep learning algorithm (for mask generation when bounding box coordinates are given as input) which needs some PyTorch weights to be loaded, while testing the model in Python 3 the program was throwing out an error like
"RuntimeError: Expected object of type torch.FloatTensor but found
type torch.cuda.FloatTensor for argument #2 ‘weight’"
I have searched the PyTorch forums but none of the posts were useful to me. Is it because my mask generation code was trained in Python 2?
Does that mean that while loading the weights and testing the model I should use Python 2, not Python 3? It would be great if someone could shed some light on this. As a workaround I can still use the object detector code downgraded to Python 2. But I still want to know why it was throwing the error. | 0 | 1 | 174
0 | 55,475,569 | 0 | 1 | 0 | 0 | 1 | true | 3 | 2019-04-02T12:52:00.000 | 3 | 3 | 0 | Creating a big matrix where every element is the same | 55,475,303 | 1.2 | python,matrix,sage | @JamesKPolk gave me a working solution.
T = matrix(RDF, 6000, 6000, lambda i,j: 1/6000) | I'm trying to create a matrix of dimension nxn in Sage. But every element in the matrix has to be 1/n. The size of n is around 7000.
First I tried creating a matrix of ones with the built-in Sage method, and then multiplying the matrix by 1/n. This is very slow and crashes my Jupyter notebook kernel.
T =matrix.ones(7000) * 1/n
A second thing I tried is creating all the elements by list comprehension.
T = matrix(RDF,[[1/l for x in range(l)] for row in range(l)])
This also seems to be something my pc can't handle. | 0 | 1 | 140 |
0 | 55,482,982 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-02T20:11:00.000 | -1 | 2 | 0 | Python: (uniform) sampling from a rectangle | 55,482,837 | -0.099668 | python,random | I think the modulo operator (%) is your friend to check whether x and y are in [a,c] and [b,d].
If you can't use random between 2 numbers (other than 0 and 1), you can try to make x = random() * (c - a) + a.
Same with y :)
EDIT: Oh, I sent it just after Merig.
I would like to produce k points (x,y) sampled uniformly from this rectangle, i.e. a <= x <= c and b <= y <= d.
Obviously, if sampling from [0,1]x[0,1] is possible, then
the problem is solved. How to achieve any of the two goals, in python?
Or, another tool such as R, for example? | 0 | 1 | 577 |
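A short sketch of the sampling described above, following the question's convention that x lies in [a, c] and y lies in [b, d]; the bounds and k are placeholder values.

```python
import random
import numpy as np

a, c = 2.0, 5.0      # x range
b, d = -1.0, 3.0     # y range
k = 100

# standard-library version: scale a uniform draw into each interval
points = [(random.uniform(a, c), random.uniform(b, d)) for _ in range(k)]

# vectorised numpy version: a (k, 2) array of uniform samples
pts = np.column_stack((np.random.uniform(a, c, k), np.random.uniform(b, d, k)))
print(points[:3], pts.shape)
```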
0 | 55,487,120 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-03T04:28:00.000 | 0 | 2 | 0 | Does the accuracy of the deep learning program drop if I do not put in the default input shape into the pretrained model? | 55,487,087 | 1.2 | python,keras,conv-neural-network,pre-trained-model,transfer-learning | Usually, with convolutional neural networks, differences in the image shape (the width/height of an image) will not matter. However, differences in the # of channels in the image (equivalently the depth of the image), will affect the performance. In fact, there will usually be dimension mismatch errors you get if the model was trained for greyscale/colour and you put in the other type. | As the title says, I want to know whether input shape affects the accuracy of the deep learning model.
Also, can pre-trained models (like Xception) be used on grayscale images?
P.S. : I recently started learning deep learning so if possible please explain in simple terms. | 0 | 1 | 54 |
0 | 55,496,284 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-03T12:55:00.000 | 0 | 1 | 0 | Shape of passed values blah indices imply blah | 55,495,746 | 1.2 | python-3.x,pandas,scikit-learn | So after implementing @Quang Hoang's suggestion, panda.reshape(array_name, (-1, 78)), on x_test, y_test and x_train, this finally converted all necessary arrays into the required 2D format. | I am attempting to pass my custom dataset, which is loaded in from a CSV file using
pandas.read_csv(), through sklearn's MLPRegressor.
My initial error was my 1D array needed to become a 2D array. Expected 2D array, got 1D array instead: array=[0. 0. 1. ... 0. 0. 1.].
So I used panda.reshape(x_test, (-1, 1)) on both the x_test and y_train to solve this issue. However this now presents me with the following error.
Shape of passed values is (16209258, 1), indices imply (207811, 78)
Have looked around a few other posts without success. | 0 | 1 | 44 |
0 | 55,506,296 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-04T00:37:00.000 | 0 | 1 | 0 | Need to find the MEAN of Col 6 based on the value of Col 5 (Col 5 is 0/1) | 55,506,207 | 0 | python | If I understand your question correctly:
First you need to import your spreadsheet into Python with the csv module.
Then you need a "for" loop to sum all your columns per person.
Calculate the mean of each observation.
If the result is greater than half of the total number, give your student a 1; else give them a 0. | I have a spreadsheet with 10 columns and 727 obs. Col 5 is 0/1 whether a student is economically disadvantaged or not. I need to find the mean of Col 6 (test score) based on whether the student is economically disadvantaged or not. Help! | 0 | 1 | 16
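As a complement to the csv-module approach above, a pandas-based sketch of the grouped mean the question asks for; the tiny inline DataFrame stands in for the real spreadsheet, which would normally be loaded with pd.read_csv.

```python
import pandas as pd

# stand-ins for Col 5 (0/1 disadvantaged flag) and Col 6 (test score)
df = pd.DataFrame({"disadvantaged": [0, 1, 1, 0, 1],
                   "score": [82, 74, 69, 90, 77]})

print(df.groupby("disadvantaged")["score"].mean())
# disadvantaged 0 -> 86.0, 1 -> 73.33
```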
0 | 55,534,189 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-04-05T11:04:00.000 | 2 | 2 | 0 | Choosing the right NN model (speed / performance) | 55,533,997 | 0.197375 | python,machine-learning,neural-network | TinyYOLO is a smaller version of the original YOLO network. You could try that one. | I'm a beginner and these are my first steps.
I'm already learning about different neural network architectures and I have a question:
Which model should I choose for Raspberry Pi / Android?
I already tried "ResNet" with 98x98 resolution images, and that model requires almost the full power of my PC. Exactly:
The model takes 2 GB of video card memory and 1.4~2 GB of RAM.
This model is not suitable for Android / Raspberry (low power).
Which model should I choose for my task?
P.S. I expect at least 5~10 FPS on Raspberry and 10~15 on Android. | 0 | 1 | 78
0 | 55,534,144 | 0 | 0 | 0 | 0 | 2 | true | 1 | 2019-04-05T11:04:00.000 | 3 | 2 | 0 | Choosing the right NN model (speed / performance) | 55,533,997 | 1.2 | python,machine-learning,neural-network | Object detection on a Raspberry Pi at 5-10 FPS is highly unrealistic.
You can have a look at YOLO or SSD; for example, YOLO also has a smaller implementation which can run on an RPi, but you will have to be happy with about 1 FPS. | I'm a beginner and these are my first steps.
I'm already learning about different neural network architectures and I have a question:
Which model should I choose for Raspberry Pi / Android?
I already tried "ResNet" with 98x98 resolution images, and that model requires almost the full power of my PC. Exactly:
The model takes 2 GB of video card memory and 1.4~2 GB of RAM.
This model is not suitable for Android / Raspberry (low power).
Which model should I choose for my task?
P.S. I expect at least 5~10 FPS on Raspberry and 10~15 on Android. | 0 | 1 | 78
0 | 55,540,865 | 0 | 0 | 0 | 0 | 1 | false | 4 | 2019-04-05T16:28:00.000 | 2 | 1 | 0 | How do apply Q-learning to an OpenAI-gym environment where multiple actions are taken at each time step? | 55,539,820 | 0.379949 | python,reinforcement-learning,openai-gym,q-learning | You can take one of two approaches - depending on the problem:
Think of the set of actions you need to pass to the environment as independent and make the network output action values for each one (apply softmax separately) - so if you need to pass two actions, the network will have two heads, one for each axis.
Think of them as dependent and look at the Cartesian product of the sets of actions, and then make the network output a value for each element of the product - so if you have two actions that you need to pass and 5 options for each, the size of the output layer will be 5*5=25, and you just use softmax on that.
The book "Deep Reinforcement Learning Hands-On" by Maxim Lapan mentions this but does not give a clear answer, see quotation below.
Of course, we're not limited to a single action to perform, and the environment could have multiple actions, such as pushing multiple buttons simultaneously or steering the wheel and pressing two pedals (brake and accelerator). To support such cases, Gym defines a special container class that allows the nesting of several action spaces into one unified action.
Does anybody know how to deal with multiple actions in Q learning?
PS: I'm not talking about the issue "continuous vs discrete action space", which can be tackled with DDPG. | 0 | 1 | 937 |
0 | 61,877,221 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-04-06T10:08:00.000 | 0 | 1 | 0 | Is there any difference between del and pop for column in python pandas dataframe? | 55,548,010 | 0 | python,python-3.x,pandas,dataframe,del | The difference between deleting and popping a column is that pop will return the deleted column back to you while del method won't. | I have just learned how to work with DataFrame in python's Pandas through an online course and there is this question:
"What is the difference between deleting and popping column?"
I thought they work the same way but most of the answers are
"You can store a popped column"
What does that mean?
I saw from the documentation that the only use of the pop function is deleting a single column, so I'm kind of confused here. | 0 | 1 | 939
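A tiny sketch of the difference the answer describes: pop returns the removed column (so you can store it), while del does not.

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4], "c": [5, 6]})

popped = df.pop("b")          # "b" is removed from df AND returned as a Series
print(popped.tolist())        # [3, 4]

del df["c"]                   # "c" is removed; nothing is returned to store
print(df.columns.tolist())    # ['a']
```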
0 | 55,552,428 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-04-06T18:52:00.000 | 3 | 1 | 0 | Python - Error at equals sign for no reason? | 55,552,402 | 0.53705 | python | eval evaluates expressions. Assignment in Python is not an expression, it's a statement.
But you don't need this anyway. Make a list or dict to hold all of your values. | I'm using eval and pybrain to make neural networks. Here's it stripped down. Using python 3.6
from pybrain import *
numnn = 100
eval("neuralNetwork" + chr(numnn) + " = buildNetwork(2, 3, 1, bias=True)") | 0 | 1 | 127 |
0 | 66,080,521 | 0 | 0 | 0 | 0 | 1 | false | 5 | 2019-04-07T08:18:00.000 | 1 | 5 | 0 | Importing COCO datasets to google colaboratory | 55,556,965 | 0.039979 | python,computer-vision,google-colaboratory,semantic-segmentation | Using Drive is better for further use. Also, unzip the zip using Colab (!unzip), because using the zip extractor on Drive takes longer. I've tried it :D | The COCO dataset is too large for me to upload to Google Colab. Is there any way I can directly download the dataset to Google Colab? | 0 | 1 | 10,788
0 | 55,602,989 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-04-08T06:39:00.000 | 0 | 1 | 0 | Windowed writes in python, e.g. to NetCDF | 55,567,542 | 0 | python,large-data,netcdf | This can be done using netCDF4 (the python library of low level NetCDF bindings). Simply assign to a slice of a dataset variable, and optionally call the dataset .sync() method afterward to ensure no delay before those changes are flushed to the file.
Note this approach also provides the opportunity to progressively grow a dimension of the array (by calling createDimension with size None, making it the first dimension of a variable, and iteratively assigning to incrementally larger indices along that dimension of the variable).
Although random-access window (i.e. subset) writes appear to require the lower level package, more systematic subset writes (eventually covering the entire array) can be done incrementally with xarray (by specifying a chunk size parameter to trigger use of the dask.array backend), and provided that your algorithm is refactored so that the main loop occurs in the dask/xarray store-to-file call. This means you will not have explicit control over the sequence in which chunks are generated and written. | In python how can I write subsets of an array to disk, without holding the entire array in memory?
The xarray input/output docs note that xarray does not support incremental writes, only incremental reads except by streaming through dask.array. (Also that modifying a dataset only affects the in-memory copy, not the connected file.) The dask docs suggest it might be necessary to save the entire array after each manipulation? | 0 | 1 | 100 |
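A hedged sketch of the windowed-write pattern the answer describes, using the low-level netCDF4 package; the file name, variable name and sizes are invented.

```python
import numpy as np
from netCDF4 import Dataset

ds = Dataset("incremental.nc", "w")
ds.createDimension("time", None)           # unlimited dimension: can grow as we append
ds.createDimension("x", 100)
var = ds.createVariable("data", "f4", ("time", "x"))

for step in range(5):                      # write one window (subset) at a time
    var[step, :] = np.random.rand(100)     # only this slice is held in memory
    ds.sync()                              # flush the change to the file promptly

ds.close()
```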
0 | 55,602,759 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-08T11:11:00.000 | 1 | 2 | 0 | Delay in Savitzky-Golay filtering | 55,572,128 | 0.099668 | python,scipy,filtering,signal-processing | You are asking about lag/latency of a digital filter: the only possible answer for a real-time filter is that the latency is determined entirely by the window size of the filter.
Non-realtime filters (e.g. where the full set of samples is provided to the filter, as for the scipy Savitzky-Golay filter) can pretend/simulate filtering at the ‘time’ of the current sample, but only by looking ahead at the full window.
Some might protest that this is demonstrably how e.g. the scipy Savitzky-Golay filter works, and that’s entirely correct, but nevertheless if you are asking about the latency of a filter, which can only mean the delay that a real-time real-world filter will apply to real-time samples, the only possible answer is: this is only and undeniably/incontrovertibly determined by the window size. | I am applying a Savitzky-Golay filter to a signal, using the scipy function.
I need to calculate the lag of the filtered signal, and how much it is behind the original signal.
Could someone shed some light on this matter? How could I calculate it with scipy? How should I interpret the result correctly?
I would be very grateful! | 0 | 1 | 1,092 |
0 | 55,577,000 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-08T14:47:00.000 | 0 | 3 | 0 | How to check the p values of parameters in OLS | 55,576,097 | 0 | python,statsmodels | The p-value corresponds to the probability of observing this value of a under the null hypothesis (which is typically 0 as this is the case when there is no effect of the covariate x on the outcome y).
This is under the assumptions of linear regression which among other things state that a follows a normal distribution. Therefore, if you really want to change your null hypothesis to be a=2 then just transform a such that a_ = a - 2 now when a=2, a_ will be 0 as per the usual assumption.
So you can achieve this by fitting y - 2x = a_*x + b, and you will have a p-value for the probability that a=2 would occur by chance. As I said though this is a fairly unusual test...
I expect the OLS summary gives me the p value of whether a is different from 2. | 0 | 1 | 3,001 |
0 | 55,655,008 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-04-08T15:01:00.000 | 1 | 2 | 0 | Hardware for python multiprocessing | 55,576,373 | 0.099668 | python,pandas,multiprocessing,gpu,xeon-phi | Took a while, but after changing it all to numpy and achieving a little more vectorization I managed to get a speed increase of over 20x - so thanks Paul.
max9111 thanks too, I'll have a look into numba. | I have a task where I need to run the same function on many different pandas dataframes. I load all the dataframes into a list then pass it to Pool.map using the multiprocessing module. The function code itself has been vectorized as much as possible, contains a few if/else clauses and no matrix operations.
I'm currently using a 10-core xeon and would like to speed things up, ideally passing from Pool(10) to Pool(xxx). I see two possibilities:
GPU processing. From what I have read though I'm not sure if I can achieve what I want and would in any case need lots of code modification.
Xeon-Phi. I know it's being discontinued, but supposedly code adaptation is easier and if that's really the case I'd happily get one.
Which path should I concentrate on? Any other alternatives?
Software: Ubuntu 18.04, Python 3.7. Hardware: X99 chipset, 10-core xeon (no HT) | 0 | 1 | 428 |
0 | 55,679,352 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-09T14:44:00.000 | 0 | 1 | 0 | YOLO v2 bad accuracy in Tensorflow | 55,595,512 | 1.2 | python,tensorflow,keras,computer-vision,artificial-intelligence | Okay, so it turned out that YOLOv2 was performing very well on unseen data except that the unseen data has to be the same size of images as the ones it's trained on. Don't feed Yolo with 800x800 images if it's been trained on 400x400and 300x400 images. Also the Keras accuracy measure is meaningless for detection. It might say 2% accuracy and actually be detecting all objects. Passing unseen data of the same size solved the problem. | I'm currently using a custom version of YOLO v2 from pjreddie.com written with Tensorflow and Keras. I've successfully got the model to start and finish training over 100 epochs with 10000 training images and 2400 testing images which I randomly generated along with the associated JSON files all on some Titan X gpus with CUDA. I only wish to detect two classes. However, after leaving the training going, the loss function decreases but the test accuracy hovers at below 3%. All the images appear to be getting converted to black and white. The model seems to perform reasonably on one of the classes when using the training data, so the model appears overfitted. What can I do to my code to get the model to become accurate? | 0 | 1 | 260 |
0 | 55,598,802 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-09T18:02:00.000 | 1 | 1 | 0 | How to apply a single fully connected layer to each point in an image | 55,598,702 | 1.2 | python,tensorflow,machine-learning,keras | What you're describing is a 1x1 convolution with output depth 1. You can implement it just as you implement the rest of the convolution layers. You might want to apply tf.squeeze afterwards to remove the depth, which should have size 1. | I'm trying to set up a non-conventional neural network using keras, and am having trouble efficiently setting this up.
The first few layers are standard convolutional layers, and the output of these have d channels, which each have image shapes of n x n.
What I want to do is use a single dense layer to map this d x n x n tensor onto a single image of size n x n. I want to define a single dense layer, with input size d, and output size 1, and apply this function to each "pixel" on the input (with the inputs taken depthwise across channels).
So far, I have not found a efficient solution to this. I have tried first defining a fully connected layer, then looping over each "pixel" in the input, however this takes many hours to initialize the model, and I am worried that it will slow down backprop, as the computations are likely not properly parallelized.
Is there an efficient way to do this? | 0 | 1 | 234 |
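A minimal Keras sketch of the answer's point: a 1x1 convolution with a single filter is the same dense layer applied at every pixel across the d channels. The values of d and n are assumed, and the final Reshape plays the role of the tf.squeeze the answer mentions.

```python
import tensorflow as tf

d, n = 64, 28                                    # channel depth and spatial size (assumed)
inputs = tf.keras.Input(shape=(n, n, d))
x = tf.keras.layers.Conv2D(filters=1, kernel_size=1)(inputs)   # shape (n, n, 1)
x = tf.keras.layers.Reshape((n, n))(x)           # drop the trailing size-1 channel
model = tf.keras.Model(inputs, x)
model.summary()                                  # output shape: (None, 28, 28)
```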
0 | 70,400,820 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-04-09T19:37:00.000 | 0 | 2 | 0 | how do I implement ssim for loss function in keras? | 55,600,106 | 0 | python,tensorflow,keras,loss-function | Another choice would be
ssim_loss = 1 - tf.reduce_mean(tf.image.ssim(target, output, max_val=self.max_val))
and then
combine_loss = mae (or mse) + ssim_loss
In this way, you are minimizing both of them. | I need SSIM as a loss function in my network, but my network has 2 outputs. I need to use SSIM for the first output and cross-entropy for the next. The loss function is a combination of them. However, I need to have a higher SSIM and lower cross-entropy, so I think a plain combination of them isn't right. Another problem is that I could not find an implementation of SSIM in Keras.
Tensorflow has tf.image.ssim, but it accepts the image and I do not think I can use it in a loss function, right? Could you please tell me what I should do? I am a beginner in Keras and deep learning and I do not know how I can make SSIM a custom loss function in Keras. | 0 | 1 | 3,145
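A hedged sketch of turning tf.image.ssim into a Keras-compatible loss (minimise 1 - SSIM so that SSIM is maximised) and combining it with cross-entropy for the second output; max_val=1.0 and the per-output loss weights are assumptions about the model.

```python
import tensorflow as tf

def ssim_loss(y_true, y_pred):
    # tf.image.ssim is higher-is-better, so minimise 1 - mean SSIM
    return 1.0 - tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=1.0))

# quick sanity check on random "images" in [0, 1]
imgs = tf.random.uniform((4, 64, 64, 3))
print(ssim_loss(imgs, imgs).numpy())      # ~0.0 for identical images

# For a two-output model, one loss per output can be given at compile time, e.g.:
# model.compile(optimizer="adam",
#               loss=[ssim_loss, "categorical_crossentropy"],
#               loss_weights=[1.0, 1.0])
```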
0 | 69,070,357 | 0 | 0 | 0 | 0 | 1 | false | 3 | 2019-04-10T03:32:00.000 | 1 | 1 | 0 | How to limit the number of threads that opencv uses in Python? | 55,604,373 | 0.197375 | python,multithreading,opencv | You can use cv2.setNumThreads(n) (Where n = number of threads)
But it didn't work for me, its still using all the CPU. | I am designing a program which will run continuously on a ROC64. It includes the usage of BackgroudsubtractorMOG2(a background-subtracting algorithm implemented in opencv). Opencv seems to use multithreading optimization in this algorithm and it eats up all the CPU resources. I understand that in C++ we can limit the number of threads by using setNumThreads().Is there a similar thing in Python or I must find another way to work around it? | 0 | 1 | 3,919 |
0 | 55,613,124 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-10T12:40:00.000 | 0 | 2 | 0 | Found duplicate column when trying to query with Spark SQL | 55,612,813 | 1.2 | python,apache-spark,dataframe,pyspark,apache-spark-sql | I've got the solution now. What I needed to use was:
Add from pyspark.sql.functions import * at the file header
Simply use col()'s alias function like so:
filtered_df2 = filtered_df.select(col("li"),col("result.li").alias("result_li"), col("fw")).orderBy("fw") | I want to do a filter on a dataframe like so:
filtered_df2 = filtered_df.select("li", "result.li", "fw").orderBy("fw")
However, the nested column, result.li has the same name as li and this poses a problem. I get the following error:
AnalysisException: 'Found duplicate column(s) when inserting into hdfs://...: `li`;'
How can I filter both fields successfully ? | 0 | 1 | 2,453 |
0 | 55,619,709 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-10T19:02:00.000 | 0 | 1 | 0 | Is there a module which tries to fit different functions to set of data points? | 55,619,624 | 0 | python,curve-fitting,data-fitting | In scipy there is curve_fit but I believe you have to define the curve that is going into it. | Let’s say I have 100 data points, consisting of two values (x,y or V1, V2).
Right now I am defining a bunch of functions (like log, exp, poly, sigmoid etc.) with a bunch of parameters to scale the data and/or adapt the base-equation. Then I use scipy.optimize.minimize to fit them to the data. After that I compare the fits visually and by their rms to choose the best one.
Is there a python module which does that? | 0 | 1 | 65 |
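A small sketch of the "fit several candidate functions and compare their RMS error" workflow with scipy.optimize.curve_fit; the candidate functions and the synthetic data are illustrative, and a failed fit is simply skipped.

```python
import numpy as np
from scipy.optimize import curve_fit

def linear(x, a, b):      return a * x + b
def exponential(x, a, b): return a * np.exp(b * x)
def logarithmic(x, a, b): return a * np.log(x) + b

x = np.linspace(1, 10, 50)
y = 2.0 * np.log(x) + 1.0 + np.random.normal(scale=0.1, size=x.size)

for f in (linear, exponential, logarithmic):
    try:
        params, _ = curve_fit(f, x, y, maxfev=5000)
        rms = np.sqrt(np.mean((f(x, *params) - y) ** 2))
        print(f"{f.__name__}: params={params}, rms={rms:.4f}")
    except RuntimeError:          # a candidate may fail to converge
        print(f"{f.__name__}: did not converge")
```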
0 | 55,621,089 | 0 | 0 | 0 | 0 | 1 | true | 2 | 2019-04-10T19:53:00.000 | 1 | 1 | 0 | Algorithm used in KNeighborsClassifier with sparse input? | 55,620,365 | 1.2 | python,scikit-learn | No, it means that if the input is sparse, whichever value is passed to the algorithm argument will be ignored and the brute-force algorithm will be used (which is equivalent to algorithm='brute'). | For the classification algorithm KNeighborsClassifier, what does fitting on a sparse input mean?
Does it mean that if I have x_train and x_test as sparse csr matrices, and if I fit on x_train and don't specify algorithm, it will automatically choose brute? Can anyone clear up this confusion?
algorithm : {‘auto’, ‘ball_tree’, ‘kd_tree’, ‘brute’}, optional
Algorithm used to compute the nearest neighbors:
Note: fitting on sparse input will override the setting of this parameter,
using brute force. | 0 | 1 | 133 |
0 | 58,826,866 | 0 | 0 | 0 | 0 | 1 | false | 2 | 2019-04-11T00:35:00.000 | 1 | 1 | 0 | Suggestions for feature engineering | 55,623,095 | 0.197375 | python,machine-learning,data-science,feature-engineering | You can extract the following features:
Simple Moving Averages for day 2 and day 3 respectively. This means you now have two extra columns.
Percentage Change from previous day
Percentage Change from day 1 to 3 | I am having a problem during feature engineering. Looking for some suggestions. Problem statement: I have usage data of multiple customers for 3 days. Some have just 1 day usage some 2 and some 3. Data is related to number of emails sent / contacts added on each day etc.
I am converting this time series data to column-wise ie., number of emails sent by a customer on day1 as one feature, number of emails sent by a customer on day2 as one feature and so on.
But problem is that, the usage can be of either increasing order or decreasing order for different customers.
i.e., example 1: customer 'A' --> number of emails sent on day 1 = 100, number of emails sent on day 2 = 0
example 2: customer 'B' --> number of emails sent on day 1 = 0, number of emails sent on day 2 = 100
example 3: customer 'C' --> number of emails sent on day 1 = 0, number of emails sent on day 2 = 0
example 4: customer 'D' --> number of emails sent on day 1 = 100, number of emails sent on day 2 = 100
In the first two cases => My new feature will have "-100" and "100" as values. Which I guess is good for differentiating.
But the problem arises for the 3rd and 4th cases, where the new feature value will be "0" in both scenarios.
Can anyone suggest a way to handle this | 0 | 1 | 106 |
0 | 55,625,183 | 0 | 0 | 0 | 1 | 2 | false | 0 | 2019-04-11T02:25:00.000 | 0 | 2 | 0 | Syntax Error In Python When Trying To Refer To Range Of Columns | 55,623,798 | 0 | python,pandas | Steven Burnap's explanation is correct, but the solution can be simplified - just remove the internal parentheses:
db = db.drop(db.columns[12:22], axis = 1)
this way, db.columns[12:22] is a 'slice' of the columns array (actually index, but doesn't matter here), which goes into the drop method. | I am trying to remove the last several columns from a data frame. However I get a syntax error when I do this:
db = db.drop(db.columns[[12:22]], axis = 1)
This works but it seems clumsy...
db = db.drop(db.columns[[12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]], axis = 1)
How do I refer to a range of columns? | 0 | 1 | 99 |
0 | 55,623,865 | 0 | 0 | 0 | 1 | 2 | false | 0 | 2019-04-11T02:25:00.000 | 1 | 2 | 0 | Syntax Error In Python When Trying To Refer To Range Of Columns | 55,623,798 | 0.099668 | python,pandas | The first example uses [12:22] as a "slice" of nothing. It's not a meaningful statement, so as you say, it gives a syntax error. It seems that what you want is a list containing the numbers 12 through 22. You need to either write it out fully as you did, or use some generator function to create it.
The simplest is range, which generates a sequence of sequential values. So you can rewrite your example like:
db = db.drop(db.columns[list(range(12, 23))], axis = 1)
Though it looks like you are using some sort of library. If you want more detailed control, you need to look the documentation of that library. It seems that db.columns is an object of a class that has defined an array operator. Perhaps that class's documentation shows a way of specifying ranges in a way other than a list. | I am trying to remove the last several columns from a data frame. However I get a syntax error when I do this:
db = db.drop(db.columns[[12:22]], axis = 1)
This works but it seems clumsy...
db = db.drop(db.columns[[12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]], axis = 1)
How do I refer to a range of columns? | 0 | 1 | 99 |
0 | 70,600,860 | 0 | 0 | 0 | 0 | 1 | false | 53 | 2019-04-11T08:16:00.000 | 1 | 3 | 0 | Evaluating pytorch models: `with torch.no_grad` vs `model.eval()` | 55,627,780 | 0.066568 | python,machine-learning,deep-learning,pytorch,autograd | If you're reading this post because you've been encountering RuntimeError: CUDA out of memory, then with torch.no_grad(): will likely help save the memory.
The reason for this is that torch.no_grad() disables autograd completely (you can no longer backpropagate), reducing memory consumption and speeding up computations.
However, you will still be able to compute gradients when using model.eval(). Personally, I find this design decision intriguing. So, what is the purpose of .eval()? It seems its main functionality is to deactivate Dropout during evaluation time.
To summarize, if you use torch.no_grad(), no intermediate tensors are saved, and you can possibly increase the batch size in your inference. | When I want to evaluate the performance of my model on the validation set, is it preferred to use with torch.no_grad: or model.eval()? | 0 | 1 | 16,936
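A short sketch of the usual evaluation pattern the thread discusses: model.eval() switches layers such as Dropout to inference behaviour, while torch.no_grad() turns off gradient tracking; the tiny model and data are placeholders.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 10), nn.Dropout(0.5), nn.Linear(10, 2))
val_x = torch.randn(32, 10)

model.eval()                   # changes Dropout / BatchNorm behaviour
with torch.no_grad():          # no autograd graph -> less memory, faster
    preds = model(val_x)
print(preds.requires_grad)     # False

model.train()                  # switch back before resuming training
```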
0 | 55,628,823 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-04-11T09:06:00.000 | 0 | 2 | 0 | how can I fix Memory Error on np.arange(0.01*1e10,100*1e10,0.5)? | 55,628,673 | 0 | python,numpy,memo | You are trying to create an array of roughly 2e12 elements. If every element were just one byte, you would need approximately 2 TB of free memory to allocate it. You probably don't have that much RAM available, and that is why you get the memory error.
Note: the array you are trying to allocate contains floats, so it is even bigger. Do you really need so many elements?
Hope it helps. | I get a Memory Error when I run np.arange() with a large number like 1e10.
How can I fix the Memory Error on np.arange(0.01*1e10, 100*1e10, 0.5)?
0 | 55,629,272 | 0 | 0 | 0 | 0 | 2 | false | 2 | 2019-04-11T09:30:00.000 | 1 | 3 | 0 | Shape of tensor | 55,629,163 | 0.066568 | python,tensorflow,machine-learning,pytorch | Your understanding of the shapes is correct. From the context probably the x_train are 60k images of handwritten numbers (with resolution 28x28 pixel) and the y_train are simply the 60k true number, which the images show. | I came across this piece of code
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print("Shape of x_train: " + str(x_train.shape))
print("Shape of y_train: " + str(y_train.shape))
And found that the output looks like this
(60000, 28, 28)
(60000,)
For the first line of output
As far as I understand, does it mean that in the 1st dimension it can hold 60k items, then in the next dimension it can hold 28 "arrays of 60k items",
and finally, in the last dimension, it can hold 28 "arrays of 28 "arrays of 60k items""?
What I want to clarify is: is this 60k samples of 28x28 data or something else?
For the second line of output, it seems like it's just a 1D array of 60k items. So what does it actually represent? (I know that in x_train it was handwritten numbers and each number represents the intensity of grey in that cell.)
Please note I have taken this code from some online example (I don't remember which, and won't mind if you want your credit to be added to this) and a public dataset
tf.keras.datasets.mnist | 0 | 1 | 67 |
0 | 55,629,310 | 0 | 0 | 0 | 0 | 2 | true | 2 | 2019-04-11T09:30:00.000 | 1 | 3 | 0 | Shape of tensor | 55,629,163 | 1.2 | python,tensorflow,machine-learning,pytorch | You are right: the first line gives 60K items of 28x28 data, thus (60000, 28, 28).
The y_train are the labels of x_train. Thus they are one-dimensional and 60k in number.
For example: if the first item of x_train is a handwritten image of 3, then the first item of y_train will be '3', which is the label.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print("Shape of x_train: " + str(x_train.shape))
print("Shape of y_train: " + str(y_train.shape))
And found that the output looks like this
(60000, 28, 28)
(60000,)
For the first line of output
As far as I understand, does it mean that in the 1st dimension it can hold 60k items, then in the next dimension it can hold 28 "arrays of 60k items",
and finally, in the last dimension, it can hold 28 "arrays of 28 "arrays of 60k items""?
What I want to clarify is: is this 60k samples of 28x28 data or something else?
For the second line of output, it seems like it's just a 1D array of 60k items. So what does it actually represent? (I know that in x_train it was handwritten numbers and each number represents the intensity of grey in that cell.)
Please note I have taken this code from some online example (I don't remember which, and won't mind if you want your credit to be added to this) and a public dataset
tf.keras.datasets.mnist | 0 | 1 | 67 |
0 | 55,648,613 | 0 | 0 | 0 | 0 | 1 | false | 1 | 2019-04-11T11:57:00.000 | 1 | 1 | 0 | KMeans: Extracting the parameters/rules that fill up the clusters | 55,631,944 | 0.197375 | python,scikit-learn,k-means | Got the answer in a different topic:
Just record the cluster means. Then when new data comes in, compare it to each mean and put it in the one with the closest mean. | I have created a 4-cluster k-means customer segmentation in scikit learn (Python). The idea is that every month, the business gets an overview of the shifts in size of our customers in each cluster.
My question is how to make these clusters 'durable'. If I rerun my script with updated data, the 'boundaries' of the clusters may slightly shift, but I want to keep the old clusters (even though they fit the data slightly worse).
My guess is that there should be a way to extract the parameters that decide which case goes to which cluster, but I haven't found the solution yet.
I would appreciate any help | 0 | 1 | 454 |
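A hedged sketch of the "record the cluster means" advice from the answer: save the fitted centres once, then assign each month's new data to the nearest stored centre instead of refitting; the random data and the four features are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

X_old = np.random.rand(500, 4)                      # stand-in for the original customer features
centres = KMeans(n_clusters=4, random_state=0).fit(X_old).cluster_centers_
np.save("centres.npy", centres)                     # freeze the segmentation

# ... next month ...
centres = np.load("centres.npy")
X_new = np.random.rand(200, 4)
dists = np.linalg.norm(X_new[:, None, :] - centres[None, :, :], axis=2)
labels = dists.argmin(axis=1)                       # nearest stored centre per customer
print(np.bincount(labels, minlength=4))             # cluster sizes for the monthly report
```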
0 | 55,637,346 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-04-11T16:35:00.000 | 2 | 1 | 0 | Applying a permutation along one axis in TensorFlow | 55,637,345 | 1.2 | python,tensorflow | tf.gather can be used to that end. In fact, it is even more general, as the indices it takes as one of its inputs don't need to represent a permutation. | How to permute "dimensions" along a single axis of a tensor?
Something akin to tf.transpose, but at the level of "dimensions" along an axis, instead of at the level of axes.
To permute them randomly (along the first axis), there is tf.random.shuffle, and to shift them, there is tf.roll. But I can't find a more general function that would apply any given permutation. | 0 | 1 | 321
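A tiny sketch of applying a fixed permutation along one axis with tf.gather, as the answer suggests; the tensor and permutation are arbitrary examples.

```python
import tensorflow as tf

t = tf.reshape(tf.range(12), (3, 4))       # shape (3, 4)
perm = tf.constant([3, 0, 2, 1])           # permutation of the 4 entries on axis 1

print(tf.gather(t, perm, axis=1).numpy())
# [[ 3  0  2  1]
#  [ 7  4  6  5]
#  [11  8 10  9]]
```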
0 | 55,645,447 | 0 | 1 | 0 | 0 | 1 | false | 0 | 2019-04-11T18:22:00.000 | 1 | 2 | 0 | similarity score between phrases | 55,638,949 | 0.099668 | python,similarity,levenshtein-distance,sentence-similarity | You can also measure the similarity between two phrases using Levenshtein distance, treating each word as a single element. When you have strings of unequal sizes you can use the Smith-Waterman or the Needleman-Wunsch algorithm. Those algorithms are widely used in bioinformatics and implementations can be found in the biopython package.
You can also tokenize the words in the phrases and measure the frequency of each token in each phrase, that will result in an array of frequencies for each phrase. From that array you can measure the pairwise similarity using any vector distance such as euclidean distance or cosine similarity. The tokenization of the phrases can be done with the nltk package, and the distances can be measured with scipy.
Hope it helps. | Levenshtein distance is an approach for measuring the difference between words, but not so for phrases.
Is there a good distance metric for measuring differences between phrases?
For example, if phrase 1 is made of n words x1 x2 x_n, and phrase 2 is made of m words y1 y2 y_m. I'd think they should be fuzzy aligned by words, then the aligned words should have a score about how similar they are, and some kind of gap penalty should be applied for non aligned words. These positive scores and negative scores should be aggregated in some way. There seem to be some heuristics involved.
Is there an existing solution for measuring the similarity between phrases? Python is preferred but other solution is also fine. Thanks. | 0 | 1 | 574 |
0 | 55,657,266 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-04-12T17:54:00.000 | 1 | 1 | 0 | How to use Pandas in Pycharm | 55,657,228 | 0.197375 | python,pandas,pycharm | It simply means that pandas is not installed properly, or not installed at all.
The timeout error generally indicates a connection problem; retry after some time or try resetting your connection. | I tried to install Pandas in Project Interpreter under my Project -> clicked on '+', but it says "Time out" and shows nothing. So I installed it using "py -m pip install pandas" in cmd, but I don't see it under Project Interpreter - there is only pip and setuptools.
What should I do to make it work ?
I am still getting an error:
import pandas as pd
ModuleNotFoundError: No module named 'pandas'
Thank you | 0 | 1 | 130 |
0 | 55,660,859 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-13T00:02:00.000 | 1 | 2 | 0 | How to select columns from a matrix with an algorithm | 55,660,812 | 1.2 | python,numpy-slicing | The column index i should satisfy 0 <= i modulo (210+70) <= 70-1 | I am writing a user defined function in python to extract specific chunks of columns from a matrix efficiently.
My matrix is 48 by 16240. The data is organised in some pattern column wise.
My objective is to make 4 matrices out of it. The first matrix is extracted by selecting the first 70 columns, skip the next 210, select the next 70, skip the next 210, till the end of the matrix.
The second matrix is extracted by selecting the second 70 columns, skip the next 210, select the next 70, skip the next 210, till the end of the matrix.
The third and the fourth matrix are extracted by selecting the third and the fourth 70 columns respectively, in the same manner as described above.
As can be observed, 16240 is divisible by 70.
Is there a way to have this efficiently done? | 0 | 1 | 82 |
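A numpy sketch of the answer's modulo rule, generalised to all four matrices: within every 280-column block (70 selected + 210 skipped), the first 70 columns go to matrix 1, the next 70 to matrix 2, and so on; the 48 x 16240 array here is random placeholder data.

```python
import numpy as np

data = np.random.rand(48, 16240)
cols = np.arange(data.shape[1])

# (cols % 280) // 70 is 0, 1, 2 or 3 inside each 280-column block
matrices = [data[:, (cols % 280) // 70 == k] for k in range(4)]
for k, m in enumerate(matrices):
    print(k, m.shape)        # each is (48, 4060) because 16240 = 58 * 280
```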
0 | 55,670,156 | 0 | 0 | 0 | 0 | 1 | true | 1 | 2019-04-13T11:57:00.000 | 3 | 1 | 0 | How do I calculate the similarity of a word or couple of words compared to a document using a doc2vec model? | 55,665,180 | 1.2 | python,gensim,doc2vec | There's a number of possible approaches, and what's best will likely depend on the kind/quality of your training data and ultimate goals.
With any Doc2Vec model, you can infer a vector for a new text that contains known words – even a single-word text – via the infer_vector() method. However, like Doc2Vec in general, this tends to work better with documents of at least dozens, and preferably hundreds, of words. (Tiny 1-3 word documents seem especially likely to get somewhat peculiar/extreme inferred-vectors, especially if the model/training-data was underpowered to begin with.)
Beware that unknown words are ignored by infer_vector(), so if you feed it a 3-word documents for which two words are unknown, it's really just inferring based on the one known word. And if you feed it only unknown words, it will return a random, mild initialization vector that's undergone no inference tuning. (All inference/training always starts with such a random vector, and if there are no known words, you just get that back.)
Still, this may be worth trying, and you can directly compare via cosine-similarity the inferred vectors from tiny and giant documents alike.
Many Doc2Vec modes train both doc-vectors and compatible word-vectors. The default PV-DM mode (dm=1) does this, or PV-DBOW (dm=0) if you add the optional interleaved word-vector training (dbow_words=1). (If you use dm=0, dbow_words=0, you'll get fast training, and often quite-good doc-vectors, but the word-vectors won't have been trained at all - so you wouldn't want to look up such a model's word-vectors directly for any purposes.)
With such a Doc2Vec model that includes valid word-vectors, you could also analyze your short 1-3 word docs via their individual words' vectors. You might check each word individually against a full document's vector, or use the average of the short document's words against a full document's vector.
Again, which is best will likely depend on other particulars of your need. For example, if the short doc is a query, and you're listing multiple results, it may be the case that query result variety – via showing some hits that are really close to single words in the query, even when not close to the full query – is as valuable to users as documents close to the full query.
Another measure worth looking at is "Word Mover's Distance", which works just with the word-vectors for a text's words, as if they were "piles of meaning" for longer texts. It's a bit like the word-against-every-word approach you entertained – but working to match words with their nearest analogues in a comparison text. It can be quite expensive to calculate (especially on longer texts) – but can sometimes give impressive results in correlating alternate texts that use varied words to similar effect. | In gensim I have a trained doc2vec model, if I have a document and either a single word or two-three words, what would be the best way to calculate the similarity of the words to the document?
Do I just do the standard cosine similarity between them as if they were 2 documents? Or is there a better approach for comparing small strings to documents?
On first thought, I could get the cosine similarity between each word in the 1-3 word string and every word in the document and take the averages, but I don't know how effective this would be. | 0 | 1 | 343
0 | 55,680,593 | 0 | 0 | 0 | 0 | 1 | true | 0 | 2019-04-14T20:15:00.000 | 0 | 1 | 0 | Comparing feature extractors (or comparing aligned images) | 55,679,644 | 1.2 | python,opencv,computer-vision | From your question, it seems like the task is not to compare the feature extractors themselves, but rather to find which type of feature extractor leads to the best alignment.
For this, you need two things:
a way to perform the alignment using the features from different extractors
a way to check the accuracy of the alignment
The algorithm you suggested is a good approach for doing the alignment. To check if accuracy, you need to know what is a good alignment.
You may start with an alignment you already know. And the easiest way to know the alignment between two images is if you made the inverse operation yourself. For example, starting with one image, you rotate it some amount, you translate/crop/scale or combine all this operations. Knowing how you obtained the image, you can obtain your ideal alignment (the one that undoes your operations).
Then, having the ideal alignment and the alignment generated by your algorithm, you can use one metric to evaluate its accuracy, depending on your definition of "good alignment". | I'd like to compare ORB, SIFT, BRISK, AKAZE, etc. to find which works best for my specific image set. I'm interested in the final alignment of images.
Is there a standard way to do it?
I'm considering this solution: take each algorithm, extract the features, compute the homography and transform the image.
Now I need to check which transformed image is closer to the target template.
Maybe I can repeat the process with the target template and the transformed image and look for the homography matrix closest to the identity, but I'm not sure how to compute this closeness exactly. And I'm not sure which algorithm I should use for this check; I suppose a fixed one.
Or I could do some pixel-level comparison between the images using a perceptual difference hash (dHash). But I suspect the resulting Hamming distance may not be very good for images that will be nearly identical.
I could blur them and do a simple subtraction, but that sounds quite weak.
Thanks for any suggestions.
EDIT: I have thousands of images to test. These are real-world pictures of documents of different kinds, some with a lot of graphics, others mostly geometrical. I have about 30 different templates. I suspect different templates work best with different algorithms (I know the template in advance, so I could pick the best one).
Right now I use cv2.matchTemplate to find some reference patches in the transformed images and I compare their locations to the reference ones. It works but I'd like to improve over this. | 0 | 1 | 89 |
0 | 55,689,959 | 0 | 0 | 0 | 0 | 1 | false | 0 | 2019-04-15T12:45:00.000 | 0 | 1 | 0 | Neural Network with different input shapes | 55,689,510 | 0 | python,tensorflow,machine-learning,deep-learning,computer-vision | In my experience, you cannot train any network with different sample sizes in the same batch.
A fully convolutional network is similar to a fully connected network with fully connected layers at the end. As such, any input image in the batch must have the same dims (w, h, d).
The difference is that fully connected layers output a single vector for every sample in the input batch, while a fully convolutional net outputs a probability map for each class.
This goes deeper than just the image size: when fitting any data, its size must be fixed and cannot change during training. I guess you could vary it across different batches, as I stated, but I've never tried it.
An encoder/decoder could help "rebuild" the image at a specific size.
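Since the question asks for TensorFlow code, here is a minimal Keras sketch of a fully convolutional encoder/decoder that accepts variable spatial sizes; the layer widths and the two-channel output (e.g. predicting chroma channels) are my assumptions, not a tuned colorization architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Grayscale input of any height/width: the spatial dims are left as None.
inp = layers.Input(shape=(None, None, 1))

x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)            # downsample
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)   # upsample
out = layers.Conv2D(2, 3, padding="same", activation="tanh")(x)  # e.g. two chroma channels

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")
model.summary()

# Caveats: within one batch all images must still share a size (use batch_size=1 or
# bucket images of equal size), and odd dimensions may need cropping/padding after upsampling.
```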
These tips, again, come from my experience in object detection, so I could be wrong :) | I'm currently designing the architecture of a neural network for the colorization of grayscale images. Later on it should be able to colorize images with different sizes and different aspect ratios. I read that this would not be possible with a common CNN. I also read that the only options are to downscale the images to one specific size or to use a big fixed size (like 3000x3000 px) and fill the remaining space with black. Neither of these options seems very elegant: the first is the opposite of what I want, and the second would make the neural network slower.
Then I read about Fully Convolutional Networks, where this problem would not exist. This would be great if it really works. I would like to know why this kind of network can deal with different input shapes, and maybe you could show me some TensorFlow code for such a network.
By the way, I thought about an Autoencoder combined with a GAN for the architecture. | 0 | 1 | 863 |
0 | 55,692,831 | 0 | 1 | 0 | 0 | 1 | true | 0 | 2019-04-15T14:53:00.000 | 1 | 1 | 0 | Python Array Data Structure with History | 55,691,861 | 1.2 | python,data-structures | Have you considered writing a log file? A good use of memory would be to have the arrays contain only the current relevant values but build in a procedure where the update statement could trigger a logging function. This function could write to a text file, database or an array/dictionary of some sort. These types of audit trails are pretty common in the database world. | I recently needed to store large array-like data (sometimes numpy, sometimes key-value indexed) whose values would be changed over time (t=1 one element changes, t=2 another element changes, etc.). This history needed to be accessible (some time in the future, I want to be able to see what t=2’s array looked like).
An easy solution was to keep a list of arrays for all timesteps, but this became too memory-intensive. I ended up writing a small class that handled this by keeping all data “elements” in a dict, with each element represented by a list of (this_value, timestamp_for_this_value) pairs. That let me recreate things for arbitrary timestamps by looking for the last change before some time t, but it was surely not as efficient as it could have been.
Are there data structures available for Python that have these properties natively, or some class of data structure meant for this kind of thing? | 0 | 1 | 272
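A minimal sketch of the audit-trail idea from the answer above (class and method names are made up for illustration): keep only the current values plus an append-only, timestamped log per element, and reconstruct any past state with a binary search.

```python
import bisect

class HistoryDict:
    """Current values plus an append-only, timestamped log per element."""
    def __init__(self):
        self.current = {}   # key -> latest value
        self._log = {}      # key -> ([timestamps], [values]); timestamps appended in increasing order

    def set(self, key, value, t):
        self.current[key] = value
        times, values = self._log.setdefault(key, ([], []))
        times.append(t)
        values.append(value)

    def at(self, key, t):
        """Value of `key` as of time t (last change at or before t)."""
        times, values = self._log[key]
        i = bisect.bisect_right(times, t) - 1
        if i < 0:
            raise KeyError(f"{key!r} had no value yet at t={t}")
        return values[i]

h = HistoryDict()
h.set("x", 1.0, t=1)
h.set("x", 2.5, t=4)
print(h.at("x", 2))      # 1.0
print(h.current["x"])    # 2.5
```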
0 | 55,694,836 | 0 | 1 | 0 | 0 | 1 | false | 1 | 2019-04-15T17:53:00.000 | 0 | 2 | 0 | Why does ~pd.isnull() return -2? | 55,694,800 | 0 | python | It's because you used the arithmetic (bitwise) negation operator instead of a logical negation. | I was doing some quick tests for handling missing values and came across this weird behavior. When looking at ~pd.isnull(np.nan), I expect it to return False, but instead it returns -2. Why is this? | 0 | 1 | 59
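A quick illustration of that difference (~ on a plain Python bool does integer bitwise negation, because bool subclasses int):

```python
import numpy as np
import pandas as pd

flag = pd.isnull(np.nan)         # True
print(~True)                     # -2: True is the integer 1, and ~1 == -2
print(not flag)                  # False: logical negation, which is what was intended
print(~np.array([True, False]))  # [False  True]: on boolean numpy arrays, ~ is elementwise logical NOT
```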
0 | 55,711,407 | 0 | 0 | 0 | 0 | 2 | false | 1 | 2019-04-16T14:48:00.000 | 1 | 2 | 0 | Dealing with new words in gensim not found in model | 55,710,967 | 0.099668 | python,nlp,gensim | The models are defined on vectors, which, by default, depend only on known words, so I would not expect them to depend on new words.
It is still possible, depending on the code, for new words to affect results. To be on the safe side, I recommend testing your particular model and/or metrics on a small text (with and without a bunch of new words). | Let's say I am trying to compute the average distance between a word and a document using distances(), or the cosine similarity between two documents using n_similarity(). However, let's say these new documents contain words that the original model has never seen. How does gensim deal with that?
I have been reading through the documentation and cannot find what gensim does with unfound words.
I would prefer gensim not to count those towards the average. So, in the case of distances(), it should simply not return anything, or return something I can easily delete later before I compute the mean using numpy. In the case of n_similarity(), gensim of course has to handle it by itself...
I am asking because the documents and words that my program will have to classify will in some instances contain unknown words, names, brands, etc. that I do not want to be taken into consideration during classification. So, I want to know whether I'll have to preprocess every document that I am trying to classify. | 0 | 1 | 179
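To illustrate the preprocessing mentioned above, one option is to drop out-of-vocabulary tokens before calling distances() or n_similarity(); model and the example tokens below are placeholders, and the `in model.wv` membership test is how gensim's KeyedVectors expose vocabulary lookup:

```python
def known(tokens, model):
    """Keep only tokens the model has a vector for."""
    return [t for t in tokens if t in model.wv]

doc_a = ["nice", "fast", "car", "brandxyz"]   # suppose "brandxyz" is unknown to the model
doc_b = ["great", "quick", "vehicle"]

a, b = known(doc_a, model), known(doc_b, model)
if a and b:
    sim = model.wv.n_similarity(a, b)                  # would raise KeyError on unknown words otherwise
    dists = model.wv.distances("car", other_words=a)   # distances from one known word to the filtered list
```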