Column schema (22 fields per record; min/max shown, string columns give the length range):

Column                              Type     Min        Max
GUI and Desktop Applications        int64    0          1
A_Id                                int64    5.3k       72.5M
Networking and APIs                 int64    0          1
Python Basics and Environment       int64    0          1
Other                               int64    0          1
Database and SQL                    int64    0          1
Available Count                     int64    1          13
is_accepted                         bool     2 classes
Q_Score                             int64    0          1.72k
CreationDate                        string   len 23     len 23
Users Score                         int64    -11        327
AnswerCount                         int64    1          31
System Administration and DevOps    int64    0          1
Title                               string   len 15     len 149
Q_Id                                int64    5.14k      60M
Score                               float64  -1         1.2
Tags                                string   len 6      len 90
Answer                              string   len 18     len 5.54k
Question                            string   len 49     len 9.42k
Web Development                     int64    0          1
Data Science and Machine Learning   int64    1          1
ViewCount                           int64    7          3.27M
0
56,563,697
0
0
0
0
1
false
0
2019-06-12T12:43:00.000
0
2
0
Can I avoid annotating my dataset for object recognition?
56,562,536
0
python,tensorflow,deep-learning,dataset,object-detection-api
No, not really: deep learning is not magic, you need bounding-box annotations. The field of research that deals with this problem is Weakly Supervised Object Detection, and it is still an open research field; there are no solutions that perform as well as using an annotated dataset.
I've been working on image classification with deep learning models (CNN with keras and tensorflow as back end) such as AlexNet and ResNet. I learned a lot about the whole dataset, learning, testing processes. I'm now shifting to object detection and have done a lot of research. I came across R-CNN, Fast R-CNN, Faster R-CNN, Mask R-CNN and different versions of YOLO networks. I noticed that these object detection networks require dataset annotation instead of the former simply needing to have cropped images stored in corresponding files. Is there any way to accomplish object detection without having to annotate dataset?
0
1
872
0
56,839,312
0
0
0
0
1
false
0
2019-06-12T14:54:00.000
0
1
0
How to get a voxel array from a list of 3D points that make up a line in a voxalized volume?
56,565,073
0
python,3d,line,voxel,bresenham
I ended up solving this problem myself. For anyone who is wondering how, I'll explain it briefly. First, since all the catheters point in the general direction of the z-axis, I got the thickness of the slices along that axis. Both input points land on a slice. I then got the coordinates of every intersection between the line connecting the two input points and the z-slices. Next, since I know the radius of the catheter and I can calculate the angle between the two points, I was able to draw ellipse paths on each slice around the points I had previously found (when you cut a cone at an angle, the cross-section is an ellipse). Then I got the coordinates of all the voxels on every slice along the z-axis and checked which voxels were within my ellipse paths. Those voxels are the ones that describe the volume of the catheter. If you would like to see my code, please let me know.
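A minimal sketch of the simpler first problem (which voxels does a straight segment cross), assuming unit voxel spacing and axis-aligned voxels; it densely samples the segment rather than implementing the slice-and-ellipse method described above:

```python
import numpy as np

def line_voxels(p0, p1, n_samples=1000):
    # Sample points along the segment, take each sample's voxel index,
    # and deduplicate. Dense enough sampling approximates 3D Bresenham.
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    pts = p0 + t * (p1 - p0)
    return np.unique(np.floor(pts).astype(int), axis=0)

print(line_voxels((0, 0, 0), (3, 2, 5)))
```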
I have a list of points that represent a needle/catheter in a 3D volume. This volume is voxelized. I want to get all the voxels that the line connecting the points intersects; the line needs to go through all the points. Ideally, since the round needle/catheter has a width, I would like to be able to get the voxels that intersect the actual three-dimensional object that is the needle/catheter. (I imagine this is much harder, so if I could get an answer to the first problem I would be very happy!) I am using the latest version of Anaconda (Python 3.7). I have seen some similar problems, but the code is always in C++ and none of it seems to be what I'm looking for. I am fairly certain that I need to use raycasting or a 3D Bresenham algorithm, but I don't know how. I would appreciate your help!
0
1
628
0
56,583,582
0
0
0
0
1
false
0
2019-06-12T18:44:00.000
0
2
0
We have many mainframe files which are in EBCDIC format, is there a way in Python to parse or convert the mainframe file into csv file or text file?
56,568,561
0
excel,python-3.x,export-to-csv,mainframe,ebcdic
Who said anything about EBCDIC? The OP didn't. If it is all text, then FTP'ing with EBCDIC-to-ASCII translation is doable, including within Python. If not, then either the extraction and conversion to CSV needs to happen on z/OS (perhaps with a COBOL program), and then the CSV can be FTP'ed down; or the data has to be FTP'ed BINARY and then parsed and bits of it translated. But, as is so often the case, we need more information.
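For the all-text case, a minimal sketch of decoding in Python with a built-in EBCDIC codec (cp037 is US EBCDIC; the right code page depends on the source system, and the file names here are hypothetical). This will not work for records containing packed-decimal or binary fields:

```python
with open("mainframe_dump.dat", "rb") as f:       # hypothetical input file
    raw = f.read()

text = raw.decode("cp037")                        # EBCDIC bytes -> str
with open("converted.txt", "w", encoding="utf-8") as out:
    out.write(text)
```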
I need to read the records from a mainframe file and apply some filters on the record values. So I am looking for a solution to convert the mainframe file to csv, text or an Excel workbook so that I can easily perform operations on the file. I also need to validate the record count.
0
1
1,297
0
56,592,187
0
0
0
0
1
false
0
2019-06-14T04:26:00.000
0
1
0
Stack Data Frames on top of one another dataframe
56,591,367
0
python,python-3.x
You could use pandas.concat() or DataFrame.append(); check the pandas documentation, which covers both with examples.
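A minimal sketch of the pandas.concat() route, with tiny stand-in frames shaped like the ones in the question (DataFrame.append() also works, but pandas has since deprecated it in favour of concat):

```python
import pandas as pd

df1 = pd.DataFrame({"2000": [779, 1311.2], "2001": [771, 1285.2]},
                   index=["canada", "mexico"])
df2 = pd.DataFrame({"2000": [1.1], "2001": [1.2]}, index=["data"])

stacked = pd.concat([df2, df1])   # puts df2's row on top
stacked.loc["Sum"] = df1.sum()    # column-wise sum of the country rows
print(stacked)
```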
I have two dataframes.

1st dataframe:
Country  2000    2001    2002    2003    2004    2005
canada   779     771     754     740     760     747
mexico   1311.2  1285.2  1271.2  1276.5  1270.6  1281
usa      836     814     810     800     802     799
India    914     892     888     878     880     877
China    992     970     966     956     958     955

2nd dataframe:
year  2000  2001  2002  2003  2004  2005
data  1.1   1.2   1.3   1.4   1.5   1.6

I would like to merge these two dataframes in the following way; is it possible?

data     1.1     1.2     1.3     1.4     1.5     1.6
Country  2000    2001    2002    2003    2004    2005
canada   779     771     754     740     760     747
mexico   1311.2  1285.2  1271.2  1276.5  1270.6  1281
usa      836     814     810     800     802     799
India    914     892     888     878     880     877
China    992     970     966     956     958     955
Sum      4832.2  4732.2  4689.2  4650.5  4670.6  4659

I also want the sum of each column.
0
1
561
0
68,484,466
0
0
0
0
1
false
9
2019-06-14T08:46:00.000
2
7
0
Change 1's to 0 and 0's to 1 in numpy array without looping
56,594,598
0.057081
python,numpy
inverted = ~arr + 2 should do the trick
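Note that ~arr + 2 rewrites every value, while the question says the array holds other values too. A sketch that swaps only the 0s and 1s, using the np.where() the asker mentions:

```python
import numpy as np

arr = np.array([0, 1, 2, 1, 0, 5])
# Outer where handles 0 -> 1; inner where handles 1 -> 0, else keep value.
swapped = np.where(arr == 0, 1, np.where(arr == 1, 0, arr))
print(swapped)  # [1 0 2 0 1 5]
```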
Let's say I have a numpy array where I would like to swap all the 1's to 0 and all the 0's to 1 (the array will have other values, and there is nothing special about the 0's and 1's). Of course, I can loop through the array and change the values one by one. Is there an efficient method you can recommend using? Does the np.where() method have an option for this operation?
0
1
14,516
0
56,815,637
0
0
0
0
1
false
0
2019-06-14T09:02:00.000
0
1
0
Train object detection with large images
56,594,824
0
python-3.x,object-detection
Yes, you can train the object detection model with large images size 1600*1216 and obtain good accuracy.
Can I train the object detection model with large images (size 1600*1216) and obtain good accuracy? Thanks.
0
1
84
0
56,596,649
0
0
0
0
1
false
0
2019-06-14T10:25:00.000
0
2
0
Merge column data into row in python
56,596,244
0
python,python-3.x
It's not hard at all. You have to transpose the second dataframe and append its values as a new row of the first dataframe; use the .T attribute.
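A short sketch of that .T idea, with stand-in frames shaped like the ones in the question (the column labels are assumed to match after transposing):

```python
import pandas as pd

df1 = pd.DataFrame({"2000": [779, 1311.2], "2001": [771, 1285.2]},
                   index=[1, 2])
df2 = pd.DataFrame({"data": [1, 4]}, index=["2000", "2001"])

merged = pd.concat([df1, df2.T])  # df2.T is one row with years as columns
print(merged)
```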
I have two dataframes.

1st dataframe:
cat  2000    2001    2002    2003    2004    2005
1    779     771     754     740     760     747
2    1311.2  1285.2  1271.2  1276.5  1270.6  1281
3    836     814     810     800     802     799
4    914     892     888     878     880     877
5    992     970     966     956     958     955

2nd dataframe:
year  data
2000  1
2001  4
2002  7
2003  10
2004  6
2005  3

I would like to merge these two dataframes in the following way; is it possible?

cat               2000    2001    2002    2003    2004    2005
1                 779     771     754     740     760     747
2                 1311.2  1285.2  1271.2  1276.5  1270.6  1281
3                 836     814     810     800     802     799
4                 914     892     888     878     880     877
5                 992     970     966     956     958     955
6 (merged entry)  1       4       7       10      6       3
0
1
38
0
56,603,296
0
0
0
0
1
false
0
2019-06-14T17:11:00.000
0
1
0
Model unable to identify distant objects
56,602,594
0
python-3.x,tensorflow,object-detection-api
You can use data augmentation to resize and pad images in which the objects are clearly visible, so that the objects appear to be at a large distance, and then train your model further on those images.
I have made a object recognition and detection model using tensorflow. It identifies the images which are clearly visible but its unable to identify if the same object is at a large distance. I am using Faster RCNN model. the model is able to identify the same object when it is closer but not when it is at a far distance. It has been trained already for the same object. How can i make the model identify objects at a distance?
0
1
32
0
57,694,779
0
0
1
0
1
false
0
2019-06-14T21:09:00.000
1
1
0
Design Choice: how to wrap up large files with 100k+ vectors
56,605,145
0.197375
python,performance,oop
It seems you want to "wrap paths" and store many attributes for those paths, and there are lots of "paths". Instead of defining classes that require lots of custom objects creation just for the paths themselves, store the paths (strings I suppose) as keys in a dict, and the attributes in any appropriate form inside.
I am comparing design choice of wrapping either each "vector" into Object, or each whole "matrix" of vectors into Object. I realize there will be more overhead if I try to wrap up each path by a class object, but it would make the system a lot easier to understand and implement. However, I thought this might cost us the performance. What would be a convention when it comes to loading big data as attributes? I appreciate your thoughts in advance, q
0
1
20
0
56,609,093
0
0
0
0
1
false
0
2019-06-15T08:50:00.000
0
3
0
Which loss function to use when there are many outputs?
56,608,876
0
python,keras,neural-network,loss-function
You can use binary cross-entropy loss and set the nearest n bins to the ground truth as labels. For example, say you have 10 pixels, the ground-truth label is at pixel 3, and you selected 3 neighbours. In typical categorical cross-entropy, you would set the label as follows, using a one-hot encoded vector:

[0 0 1 0 0 0 0 0 0 0]

In the solution I suggested, you would use this:

[0 1 1 1 0 0 0 0 0 0]

Or it can be this, basically imposing a Gaussian instead of flat labels:

[0 0.5 1 0.5 0 0 0 0 0 0]

The object detection architectures suggested in the comments also essentially behave the way I described, except that they use a quantized scheme:

[0 1 0 0 0 0 0 0 0 0] (actual pixels)
[- - 1 - - - - 0 - -] (group into 2 groups of 5; your network only has two outputs now. Think of this as a binning stage: the actual pixel belongs to group 1, and this subnetwork uses binary cross-entropy.)
[1 0] (first-stage classification output)
[-1 0] (the second stage can be thought of as a delta network: it takes the classified bin from the first stage and outputs a correction value. As the first bin is anchored at index 2, you need to predict -1 to move it to index 1. This network is trained using a smoothed L1 loss.)

Now there is immediately a problem: what if there are two objects in group 1? This is an unfortunate problem which also exists in object detection architectures. The workaround is to define slightly shifted and scaled bin (or anchor) positions; this way you can detect at one pixel a maximum of N objects, where N is the number of anchors defined at that pixel.
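A minimal sketch of the suggested setup, soft neighbour labels trained with binary cross-entropy on per-pixel sigmoid outputs (10 pixels here; the layer sizes and data are illustrative only):

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    keras.layers.Dense(10, activation="sigmoid"),  # one output per pixel
])
model.compile(optimizer="adam", loss="binary_crossentropy")

x = np.random.rand(4, 10).astype("float32")
y = np.array([[0, 0.5, 1, 0.5, 0, 0, 0, 0, 0, 0]] * 4, dtype="float32")
model.fit(x, y, epochs=1, verbose=0)   # soft labels around pixel index 2
```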
Data similar to the images of 1000 x 1 pixels come from the equipment. Somewhere in the image may be 1, 2 or more objects. I'm going to build a neural network to detect objects. I want to make 1,000 outputs. Each output will indicate whether there is an object in that output or not. Advise me which loss function to use. It seems to me that "categorical crossentropy" is not suitable, because for example: in the training data, I will indicate that the objects are at 10 and 90 pixels. And the neural network will predict that the objects are at 11 and 89 pixels. It's not a big loss. But for the network, it will be the same loss as if it predict objects at 500 and 900 pixels. What loss function is suitable for such a case ? I'm using Keras
0
1
469
0
56,632,620
0
0
0
0
1
false
0
2019-06-15T10:41:00.000
0
1
0
tensor flow is not working in anaconda gettting CUDA runtime version error on window system
56,609,576
0
python,tensorflow
I uninstalled the tensorflow and then created a new environment and then reinstalled the tensorflow then the issue was resolved.
The Tensorflow library was working on my system; after I installed another library, tensorflow stopped working. I am using Windows 10, Python 3.7, tensorflow 1.13.1. I am getting the following error:

File "", line 1, in
    runfile('C:/Users/Yogesh/.spyder-py3/MNISTBasicClassification.py', wdir='C:/Users/Yogesh/.spyder-py3')
File "C:\Users\Yogesh\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 704, in runfile
    execfile(filename, namespace)
File "C:\Users\Yogesh\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 108, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/Yogesh/.spyder-py3/MNISTBasicClassification.py", line 58, in
    model.fit(train_images, train_labels, epochs=5)
File "C:\Users\Yogesh\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py", line 880, in fit
    validation_steps=validation_steps)
File "C:\Users\Yogesh\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py", line 251, in model_iteration
    model.reset_metrics()
File "C:\Users\Yogesh\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\training.py", line 1119, in reset_metrics
    m.reset_states()
File "C:\Users\Yogesh\Anaconda3\lib\site-packages\tensorflow\python\keras\metrics.py", line 460, in reset_states
    K.set_value(v, 0)
File "C:\Users\Yogesh\Anaconda3\lib\site-packages\tensorflow\python\keras\backend.py", line 2847, in set_value
    get_session().run(assign_op, feed_dict={assign_placeholder: value})
File "C:\Users\Yogesh\Anaconda3\lib\site-packages\tensorflow\python\keras\backend.py", line 479, in get_session
    session = _get_session()
File "C:\Users\Yogesh\Anaconda3\lib\site-packages\tensorflow\python\keras\backend.py", line 457, in _get_session
    config=get_default_session_config())
File "C:\Users\Yogesh\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1551, in init
    super(Session, self).init(target, graph, config=config)
File "C:\Users\Yogesh\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 676, in init
    self._session = tf_session.TF_NewSessionRef(self._graph._c_graph, opts)
InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version
0
1
154
0
56,613,628
0
0
0
0
1
false
0
2019-06-15T18:22:00.000
0
1
0
from sklearn.cross_validation import KFold renaming and depreaction of cross_validation
56,613,045
0
python-3.7
The issue was resolved by importing from the supported package, from sklearn.model_selection import KFold, and then calling KFold with the correct parameters, as below: KFold(n_splits=2, random_state=None, shuffle=False)
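The replacement import and new-style API, for reference (sklearn >= 0.20: the constructor takes n_splits, and the data goes to split()):

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(10).reshape(5, 2)
kf = KFold(n_splits=5, shuffle=False)
for train_idx, test_idx in kf.split(X):   # yields index arrays per fold
    print(train_idx, test_idx)
```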
I used from sklearn.cross_validation import KFold in my code and it was working with no issue. Now I get the error ModuleNotFoundError: No module named 'sklearn.cross_validation'. I googled it and found that the subpackage has been renamed to model_selection instead of cross_validation, but I have to use the KFold function from sklearn.cross_validation import KFold. The sklearn version I have is 0.20.1 with Python 3.7.1. Using KFold, which is a method already implemented in sklearn.cross_validation, I expected from sklearn.cross_validation import KFold to run successfully as before, but I got the error below: ModuleNotFoundError: No module named 'sklearn.cross_validation'
0
1
815
0
56,627,542
0
1
0
0
1
false
0
2019-06-17T08:20:00.000
0
2
0
Difference between modulus (%) and floor division(//) in NumPy?
56,627,393
0
python,numpy,integer-division
Assume a = 10 and b = 6. a % b will give you the remainder, that is 4; a // b will give you the quotient, that is 1.
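The same pair of operators elementwise in NumPy, including a negative value to show the floor semantics; the two always recombine to the original number:

```python
import numpy as np

a = np.array([10, -7])
b = np.array([6, 3])
print(a % b)               # [4 2]   remainder
print(a // b)              # [ 1 -3] floor quotient
print(a // b * b + a % b)  # [10 -7] quotient*b + remainder == a
```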
Recently, I read a book on NumPy which mentions different types of ufuncs, where I encountered two of them, namely 'modulus', denoted by the % symbol, and 'floor division', //. Can someone please explain the difference between them, and why two operators are provided to do what looks like the same thing (return the remainder of a division, as far as I can tell)? Please correct me if I am wrong.
0
1
8,915
0
56,656,088
0
0
0
1
1
true
0
2019-06-17T14:39:00.000
2
1
0
I want to write a 75000x10000 matrix with float values effectively into a database
56,633,576
1.2
python,sql,django,pandas,bigdata
SQLite is quite impressive for what it is, but it's probably not going to give you the performance you are looking for at that scale, so even though your existing project is Django on SQLite I would recommend simply writing a Python wrapper for a different data backend and just using that from within Django. More importantly, forget about using Django models for something like this; they are an abstraction layer built for convenience (mapping database records to Python objects), not for performance. Django would very quickly choke trying to build 100s of millions of objects since it doesn't understand what you're trying to achieve. Instead, you'll want to use a database type / engine that's suited to the type of queries you want to make; if a typical query consists of a hundred point queries to get the data in particular 'cells', a key-value store might be ideal; if you're typically pulling ranges of values in individual 'rows' or 'columns' then that's something to optimize for; if your queries typically involve taking sub-matrices and performing predictable operations on them then you might improve the performance significantly by precalculating certain cumulative values; and if you want to use the full dataset to train machine learning models, you're probably better off not using a database for your primary storage at all (since databases by nature sacrifice fast-retrieval-of-full-raw-data for fast-calculations-on-interesting-subsets), especially if your ML models can be parallelised using something like Spark. No DB will handle everything well, so it would be useful if you could elaborate on the workload you'll be running on top of that data -- the kind of questions you want to ask of it?
thanks for hearing me out. I have a dataset that is a matrix of shape 75000x10000 filled with float values. Think of it like heatmap/correlation matrix. I want to store this in a SQLite database (SQLite because I am modifying an existing Django project). The source data file is 8 GB in size and I am trying to use python to carry out my task. I have tried to use pandas chunking to read the file into python and transform it into unstacked pairwise indexed data and write it out onto a json file. But this method is eating up my computational cost. For a chunk of size 100x10000 it generates a 200 MB json file. This json file will be used as a fixture to form the SQLite database in Django backend. Is there a better way to do this? Faster/Smarter way. I don't think a 90 GB odd json file written out taking a full day is the way to go. Not even sure if Django databases can take this load. Any help is appreciated!
1
1
124
0
56,644,886
0
0
0
0
1
false
0
2019-06-18T05:24:00.000
0
1
1
Enabling Cuda on Azure Datascience VM
56,642,210
0
python,azure,tensorflow
What is the size of the VM you are using on Azure?
Azure Datascience VM with Nvidia Gpu extension Ubuntu 16.04.6 LTS Python 3.5.5 NVCC V10.0.130 Tensorflow-gpu 1.13.1 When runnning an op, I get E tensorflow/stream_executor/cuda/cuda_driver.cc:300] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
0
1
101
0
62,995,790
0
0
0
0
1
false
3
2019-06-18T12:45:00.000
3
2
0
Is there any difference between using Dataframe.columns and Dataframe.keys() to obtain the column names?
56,649,500
0.291313
python-3.x,pandas,dataframe
One difference I noticed: you can use .keys() with a Series, but you cannot use .columns with a Series.
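A quick check of both claims — agreement on a DataFrame, and .keys() on a Series, where it returns the index:

```python
import pandas as pd

df = pd.DataFrame({"a": [1], "b": [2]})
print(list(df.columns) == list(df.keys()))  # True

s = pd.Series([1, 2, 3])
print(s.keys())   # RangeIndex(start=0, stop=3, step=1)
# s.columns would raise AttributeError
```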
For the sake of curiosity, is there any practical difference between getting the column names of a DataFrame (let's say df) by using df.columns or df.keys()? I've checked the outputs by type and they seem to be exactly the same. Am I missing something, or are these two methods just as redundant as they seem? Is one more appropriate to use than the other? Thanks.
0
1
1,291
0
56,665,504
0
0
0
0
1
true
1
2019-06-19T10:20:00.000
2
2
0
What happens when you transform the test set using MinMaxScaler
56,665,409
1.2
python,scikit-learn,sklearn-pandas
For a given feature x, your min-max scaling to (0, 1) will effectively map x to (x - min_train_x) / (max_train_x - min_train_x), where min_train_x and max_train_x are the minimum and maximum values of x in the training set. If a value of x in the test set is larger than max_train_x, the scaling transformation will return a value > 1. That is usually not a big problem, except if the input has to be in the (0, 1) range.
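A tiny demonstration of that mapping — test values outside the training range land outside (0, 1):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

train = np.array([[0.0], [10.0]])          # training min 0, max 10
test = np.array([[12.0], [-2.0]])

scaler = MinMaxScaler(feature_range=(0, 1)).fit(train)
print(scaler.transform(test))              # [[ 1.2] [-0.2]]
```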
i am currently in the process of pre-processing my data and I understand that i have to use the same scaling parameters I have used on my training set, on my test set. However, when i applied the transform method from sklearn library, i noticed something weird. I first used preprocessing.MinMaxScaler(feature_range=(0,1)) on my training set which sets the maximum to be 1 and minimum to be 0. Next, i used minmax_scaler.transform(data) on my test set and I've noticed when i printed out the data-frame, I have values that are greater than 1. What can this possibly mean?
0
1
1,278
0
57,010,899
0
0
0
0
1
false
1
2019-06-20T13:04:00.000
1
1
0
How to display number of epochs in tensorflow object detection api with Faster Rcnn?
56,686,630
0.197375
python,tensorflow,object-detection-api
So you are right with your assumption that you have 200 epochs. I had a similar problem with the loss not showing. My solution was to go to the model_main.py file and insert tf.logging.set_verbosity(tf.logging.INFO) after the imports; then it shows you the loss after every 100 steps. You could change the set_verbosity call if you want it after every epoch ;)
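The epoch arithmetic from the question, plus the logging tweak described above (TF 1.x API; tf.logging is gone in TF 2.x):

```python
import tensorflow as tf

num_images, batch_size, num_steps = 500, 5, 20_000
print(num_steps * batch_size / num_images)  # 200.0 epochs

tf.logging.set_verbosity(tf.logging.INFO)   # loss reported every 100 steps
```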
I am using Tensorflow Object detection api. What I understood reading the faster_rcnn_inception_v2_pets.config file is that num_steps mean the total number of steps and not the epochs. But then what is the point of specifying batch_size?? Lets say I have 500 images in my training data and I set batch size = 5 and num_steps = 20k. Does that mean number of epochs are equal to 200 ?? When I run model_main.py it shows only the global_steps loss. So if these global steps are not the epochs then how should I change the code to display train loss and val loss after each step and also after each epoch.
0
1
2,207
0
56,690,623
0
0
0
0
1
false
0
2019-06-20T15:23:00.000
0
2
0
Is it acceptable to scale target values for regressors?
56,689,243
0
python-3.x,regression,data-science,prediction
It is actually common practice to scale target values in many cases. For example, a highly skewed target may give better results if a log or log1p transform is applied. I don't know the characteristics of your data, but there could be a transformation that decreases your RMSE. Secondly, the test set is meant to be a sample of unseen data, to give a final estimate of your model's performance. When you look at the unseen data and tune to perform better on it, it becomes a cross-validation set. You should try to split your data into three parts: train, cross-validation and test sets. Train on your data and tune parameters according to performance on the cross-validation set; then, after you are done tuning, run the model on the test set to get an estimate of how it works on unseen data and report that as the accuracy of your model.
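A minimal sketch of one such target transform — log1p before fitting and expm1 to invert predictions (the model and data here are placeholders):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.random((100, 3))
y = rng.random(100) * 30_000              # large-scale target

model = LinearRegression().fit(X, np.log1p(y))
preds = np.expm1(model.predict(X))        # back on the original scale
```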
I am getting very high RMSE and MAE for MLPRegressor , ForestRegression and Linear regression with only input variables scaled (30,000+) however when i scale target values aswell i get RMSE (0.2) , i will like to know if that is acceptable thing to do. Secondly is it normal to have better R squared values for Test (ie. 0.98 and 0.85 for train) Thank You
0
1
4,385
0
60,075,464
0
1
0
0
1
false
2
2019-06-21T14:46:00.000
0
4
0
i tried installing tensorflow using 'pip install tensorflow ' in anaconda prompt and command prompt. its showing following output
56,705,686
0
python-3.x,tensorflow,anaconda
I was getting this error while installing from a conda environment. Always upgrade conda or pip before a new installation. The following worked for me:
1. [Optional] If installing in a conda environment, temporarily remove the conda env: conda remove --name myenv --all
2. Update all conda packages: conda update --all
3. Create the conda env again: conda create -n myenv
4. Activate the conda env: conda activate myenv
5. Install tensorflow: pip install tensorflow
Found existing installation: wrapt 1.10.11 Cannot uninstall 'wrapt'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
0
1
5,569
0
56,706,524
0
0
0
0
1
false
1
2019-06-21T15:20:00.000
0
2
0
How to remove regularisation from pre-trained model?
56,706,219
0
python,python-3.x,keras,regularized
Create a model with your required hyper-parameters and load the saved parameters into it using load_weights().
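A sketch of that pattern in Keras — rebuild with the new dropout/regularizer settings, then pull weights by layer name (this assumes the weights live in a hypothetical 'weights.h5' and that layer names and shapes match):

```python
from tensorflow import keras

new_model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(100,),
                       name="dense_1"),        # regularizer removed here
    keras.layers.Dropout(0.1),                 # new dropout rate
    keras.layers.Dense(1, name="out"),
])
new_model.load_weights("weights.h5", by_name=True)  # dropout has no weights
```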
I've got a partially trained model in Keras, and before training it any further I'd like to change the parameters for the dropout, l2 regularizer, gaussian noise etc. I have the model saved as a .h5 file, but when I load it, I don't know how to remove these regularizing layers or change their parameters. Any clue as to how I can do this?
0
1
356
0
56,855,229
0
0
0
0
2
true
10
2019-06-21T15:58:00.000
22
2
0
How to convert a spark dataframe into a databrick koalas dataframe?
56,706,860
1.2
python-3.x,dataframe,databricks
To go straight from a pyspark dataframe (I am assuming that is what you are working with) to a koalas dataframe you can use: koalas_df = ks.DataFrame(your_pyspark_df) Here I've imported koalas as ks.
I know that you can convert a spark dataframe df into a pandas dataframe with df.toPandas() However, this is taking very long, so I found out about a koala package in databricks that could enable me to use the data as a pandas dataframe (for instance, being able to use scikit learn) without having a pandas dataframe. I already have the spark dataframe, but I cannot find a way to make it into a Koalas one.
0
1
8,396
0
58,230,046
0
0
0
0
2
false
10
2019-06-21T15:58:00.000
3
2
0
How to convert a spark dataframe into a databrick koalas dataframe?
56,706,860
0.291313
python-3.x,dataframe,databricks
Well, first of all, you have to understand the reason why toPandas() takes so long: Spark dataframes are distributed across different nodes, and when you run toPandas() it pulls the distributed dataframe back to the driver node (that is why it takes a long time). You are then able to use pandas or scikit-learn on the single (driver) node for faster analysis and modeling, because it's like modeling on your own PC. Koalas is the pandas API on Spark, and when you convert to a koalas dataframe, it is still distributed: it will not shuffle data between different nodes, and you can use pandas-like syntax for distributed dataframe transformations.
I know that you can convert a spark dataframe df into a pandas dataframe with df.toPandas() However, this is taking very long, so I found out about a koala package in databricks that could enable me to use the data as a pandas dataframe (for instance, being able to use scikit learn) without having a pandas dataframe. I already have the spark dataframe, but I cannot find a way to make it into a Koalas one.
0
1
8,396
0
56,708,040
0
0
0
0
1
true
0
2019-06-21T17:21:00.000
0
1
0
.vocabulary_ vs .get_feature_names()
56,707,957
1.2
python,python-3.x,scikit-learn,tfidfvectorizer
Basically, I think they contain exactly the same information. However, if you have the name of a term and you are looking for its column position in the tf-idf matrix, then you go for .vocabulary_: it has the names of the terms as keys and their column positions in the tf-idf matrix as values. Whereas, if you know the column position of a term in the tf-idf matrix and you are looking for its name, then you go for .get_feature_names(): the position of each term in .get_feature_names() corresponds to its column position in the tf-idf matrix.
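A quick round trip showing the two directions (API as of sklearn 0.2x; newer releases replace get_feature_names() with get_feature_names_out()):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

vec = TfidfVectorizer()
vec.fit(["the cat sat", "the dog sat down"])

col = vec.vocabulary_["cat"]               # term name -> column index
print(col, vec.get_feature_names()[col])   # column index -> term name
```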
These are related to the TfidfVectorizer of sklearn. Could some explain please the similarities and differences between these two and when each one is useful. It is quite confusing because they look very similar to each other but also quite different. Also the rather limited sklearn documentation does not help much in this case either.
0
1
509
0
56,730,467
0
0
0
0
1
false
1
2019-06-24T05:10:00.000
1
2
0
How to efficiently delete data from Redshift?
56,730,152
0.099668
python,amazon-web-services,pyspark,bigdata,amazon-redshift
Deleting data stored in Redshift with the DELETE command will take time. The reason is that you are doing a soft delete: you mark existing rows as deleted and then insert new rows representing the updated form of the data. So one approach is executing DELETE on chunks of data: instead of deleting rows one by one, you should try to address multiple rows per statement. Since each write takes place in 1 MB blocks, we should be minimizing those data reads and writes. If you have good information about the topology of the data stored on the Redshift compute nodes and slices, in addition to information about the distribution key and sort key, you can separate your DELETE command into multiple statements. (Anyhow, we would expect the Redshift SQL engine to do this for the SQL developer.)
I have data in my Redshift cluster. I need to find the best and efficient way to delete the previously stored data when I re run the job. I have these two column to determine previous data previous_key (column that corresponds to run_dt) and creat_ts (time when we load the data) I found two approaches so far but they don't work in efficient way: Use sql DELETE command – might be slow, eventually requires Vacuum the table to reclaim storage space and resort rows Unload the data from a table into a file on S3 and then load table back (truncate and insert) with max clndr_key filtered out. Not really good either, might be risky. Please suggest any good approach to rerun jobs on Redshift cluster. Note: partitions functionality is not available.
0
1
6,918
0
56,775,827
0
0
0
0
1
true
0
2019-06-24T14:19:00.000
0
1
0
Plotting heat-map in python with multiple data sets
56,738,558
1.2
python,heatmap
The problem was caused by the fact that the data was on a non-uniform grid. I created a common domain (x-axis values) for all the functions and then interpolated all the data sets to the new x-values. I used the numpy.interp() function to achieve this.
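A sketch of that interpolation step — resample each (x, y) series onto a shared grid with np.interp and stack the rows into one matrix for the heatmap (the series here are random stand-ins):

```python
import numpy as np

datasets = [                                   # stand-ins for the FFT data
    (np.linspace(0, 1, 50), np.random.rand(50)),
    (np.linspace(0, 1, 80), np.random.rand(80)),
]

common_x = np.linspace(0.0, 1.0, 200)          # shared frequency axis
heat = np.vstack([np.interp(common_x, x, y) for x, y in datasets])
print(heat.shape)                              # (2, 200)
```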
I have a list FFT data which I want to plot in a single heat map. Each data set has its own X and Y. Usually I make use of Seaborn and Panda dataframe to plot the heatmap as the Frequency will be common for all the sets, but now the frequency column is unique for each data set. How can we plot a heatmap from this kind of data? that is , I have a number of data sets as X1,Y1 X2,Y2 X3,Y3 ... ... Xn,Yn each X is unique and I want to plot these N graphs as a heatmap.
0
1
497
0
56,748,008
0
0
0
0
1
false
0
2019-06-25T06:18:00.000
0
1
0
Distance of the centre of the object from the centre of the image
56,747,903
0
python,opencv,image-processing
If the bounding box has bottom-left coordinate (x1, y1) and top-right (x2, y2), then its center will be ((x1+x2)/2, (y1+y2)/2); the same goes for the whole image. Now determine the distance using sqrt((c1x-c2x)^2 + (c1y-c2y)^2), where (c1x, c1y) is the center of the image and (c2x, c2y) is that of the bounding box.
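The same computation as code, with made-up image and box coordinates:

```python
from math import hypot  # hypot(dx, dy) == sqrt(dx**2 + dy**2)

def center(x1, y1, x2, y2):
    return ((x1 + x2) / 2, (y1 + y2) / 2)

c1 = center(0, 0, 640, 480)        # whole image
c2 = center(100, 120, 220, 300)    # detected card's bounding box
print(hypot(c1[0] - c2[0], c1[1] - c2[1]))
```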
If we apply object detection on an image containing a playing card, then it will make bounding box around that card. Now my question is this that is there any way to find the distance between the center of the card(bounding box) and the center of the whole image?
0
1
580
0
56,755,246
0
1
0
0
1
true
0
2019-06-25T13:23:00.000
0
1
0
No module named 'numpy' Even When Installed
56,755,112
1.2
python,numpy,import,package
python3 is not supported under NumPy 1.16.4. Try to install a more recent version of NumPy: pip uninstall numpy pip install numpy
I'm using windows with Python 3.7.3, I installed NumPy via command prompt with "pip install NumPy", and it installed NumPy 1.16.4 perfectly. However, when I run "import numpy as np" in a program, it says "ModuleNotFoundError: No module named 'numpy'" I only have one version of python installed, and I don't know how I can fix this. How do I fix this?
0
1
696
0
56,757,810
0
0
0
0
1
false
0
2019-06-25T15:22:00.000
0
1
0
Does the BBox labelling for object detection has to be done manually on Images or is there any ways to automate it
56,757,346
0
python,tensorflow
Short answer: you will probably have to do a lot of it manually, or maybe hire an intern. If you already had a way to automatically annotate your data, you wouldn't have to build another one. You can try using another method as an initial solution, which you then still need to go through by hand to fine-tune or correct the annotations; depending on the size of the project, that might be worth it. Ways to do this include using annotated data for similar classes, using a non-deep-learning method that kind of works, or using video footage where you know consecutive frames have similar objects and bounding boxes, so annotating is faster.
I'm doing a project on custom object detection. I have to use a Bbox labelling tool and yolov3 weights to train my dataset. Still I'm confused about where to start and proceed with the same.
0
1
25
0
56,760,620
0
0
0
0
1
false
0
2019-06-25T17:41:00.000
0
1
0
Python/Pandas - Read multiple files in a folder based on dates in filenames
56,759,396
0
python,pandas
Just put the read command in a try/except statement so that missing days are skipped:
try:
    read_file()
    append_file()
except FileNotFoundError:
    pass
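A slightly fuller sketch for the week-of-files case — build the expected filenames from the question's data%d%m%Y.csv pattern and skip the days that are missing:

```python
from datetime import date, timedelta
import pandas as pd

frames = []
for back in range(1, 8):                       # previous seven days
    day = date.today() - timedelta(days=back)
    fname = day.strftime("data%d%m%Y.csv")
    try:
        frames.append(pd.read_csv(fname))
    except FileNotFoundError:
        pass                                   # no file for that day

weekly = pd.concat(frames, ignore_index=True) if frames else pd.DataFrame()
```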
I have a folder with hundreds of csv files with the filenames in the format of file_nameddmmyyyy (e.g. data25062019.csv). I am using python/pandas to read the files and manipulating the dataframes. I want to be able to pull multiple csv files based on the dates on the file names. e.g. I want to open the daily files from the previous one week and append them to a dataframe. The problem is there are not files for everyday of the week, so there may only be 4-5 files for the last week. Any suggestions on best way to approach this?
0
1
141
0
56,769,956
0
0
0
0
1
true
0
2019-06-26T09:33:00.000
0
2
0
How to set up data collection for small-scale algorithmic trading software
56,769,559
1.2
python,database,algorithmic-trading
CSV is a nice exchange format, but as it is based on a text file, it is not good for real-time updates. It's only my opinion, but I cannot imagine a reason to prefer it to a database. In order to handle real-time conflicts, you will later need a professional-grade database. PostgreSQL has a reputation for being robust, and MariaDB is probably a correct choice too. You could use a lighter database such as SQLite in development mode, but beware of the slight differences: it is easy to write something that will work on one database and break on another. On the other hand, if portability across databases is important, you should use at least 2 databases: one at development time and a different one at integration time. A question to ask yourself immediately is whether you want a relational database or a NoSQL one. The former ensures ACID (Atomicity, Consistency, Isolation, Durability) transactions; the latter offers greater scalability.
This is a question on a conceptual level. I'm building a piece of small-scale algorithmic trading software, and I am wondering how I should set up the data collection/retrieval within that system. The system should be fully autonomous. Currently my algorithm that I want to trade live is doing so on a very low frequency, however I would like to be able to trade with higher frequency in the future and therefore I think that it would be a good idea to set up the data collection using a websocket to get real time trades straight away. I can aggregate these later if need be. My first question is: considering the fact that the data will be real time, can I use a CSV-file for storage in the beginning, or would you recommend something more substantial? In any case, the data collection would proceed as a daemon in my application. My second question is: are there any frameworks available to handle real-time incoming data to keep the database constant while the rest of the software is querying it to avoid conflicts? My third and final question is: do you believe it is a wise approach to use a websocket in this case or would it be better to query every time data is needed for the application?
1
1
88
0
56,778,812
0
0
0
0
1
false
0
2019-06-26T17:58:00.000
1
2
0
When is it better to use npz files instead of csv?
56,778,614
0.099668
python,csv,numpy
It depends on the expected usage. If a file is expected to have broad use cases, including direct access from ordinary client machines, then csv is fine because it can be loaded directly in Excel or LibreOffice Calc, which are widely deployed. But it is just a good old text file with no indexes nor any additional features. On the other hand, if a file is only expected to be used by data scientists or, generally speaking, numpy-aware users, then npz is a much better choice because of the additional features (compression, lazy loading, etc.). Long story short, you exchange a larger audience for richer features.
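A small comparison sketch — the same array written both ways (npz keeps dtype and shape and compresses; csv is plain text):

```python
import numpy as np

arr = np.random.rand(1000, 50).astype("float32")
np.savetxt("data.csv", arr, delimiter=",")        # human-readable text
np.savez_compressed("data.npz", features=arr)     # binary, compressed

loaded = np.load("data.npz")["features"]
print(loaded.dtype, loaded.shape)                 # float32 (1000, 50)
```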
I'm looking at some machine learning/forecasting code using Keras, and the input data sets are stored in npz files instead of the usual csv format. Why would the authors go with this format instead of csv? What advantages does it have?
0
1
817
0
59,459,541
0
0
0
0
1
false
2
2019-06-26T18:26:00.000
0
1
0
How to fix 'KeyError: dtype('float32')' in LDAviz
56,779,011
0
python,lda,amazon-sagemaker
I know this is late but I just fixed a similar problem by updating my gensim library from 3.4 to the current version which for me is 3.8.
I use the LDAvis library to visualize my LDA topics. It worked fine before, but I get this error after downloading the saved model files from SageMaker to my local computer. I don't know why this happens; does it relate to SageMaker? If I run locally, save the model locally, and then run the LDAvis library, it works fine.

KeyError Traceback (most recent call last)
in ()
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pyLDAvis\gensim.py in prepare(topic_model, corpus, dictionary, doc_topic_dist, **kwargs)
    116     See pyLDAvis.prepare for **kwargs.
    117     """
--> 118     opts = fp.merge(_extract_data(topic_model, corpus, dictionary, doc_topic_dist), kwargs)
    119     return vis_prepare(**opts)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pyLDAvis\gensim.py in _extract_data(topic_model, corpus, dictionary, doc_topic_dists)
     46     gamma = topic_model.inference(corpus)
     47     else:
---> 48     gamma, _ = topic_model.inference(corpus)
     49     doc_topic_dists = gamma / gamma.sum(axis=1)[:, None]
     50     else:
~\AppData\Local\Continuum\anaconda3\lib\site-packages\gensim\models\ldamodel.py in inference(self, chunk, collect_sstats)
    665     # phinorm is the normalizer.
    666     # TODO treat zeros explicitly, instead of adding epsilon?
--> 667     eps = DTYPE_TO_EPS[self.dtype]
    668     phinorm = np.dot(expElogthetad, expElogbetad) + eps
    669
KeyError: dtype('float32')
0
1
520
0
56,781,883
0
0
0
0
1
false
0
2019-06-26T21:37:00.000
1
2
0
How to reshape a 4 dimensional Numpy array into different dimensions?
56,781,237
0.099668
python,numpy
np.squeeze will collapse all the dimensions having length 1, or you can use the reshape function
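Both options on the exact shape from the question:

```python
import numpy as np

a = np.zeros((8, 1, 1, 102))
print(np.squeeze(a).shape)       # (8, 102) - drops all length-1 axes
print(a.reshape(8, 102).shape)   # (8, 102) - explicit target shape
```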
I have a 4 dimensional Numpy array, of (8, 1, 1, 102). Now, for instance, I simply want to ignore the middle two dimensions and have an array of shape (8,102), what may be the suitable way to accomplish this?
0
1
358
0
56,785,434
0
0
0
0
1
false
2
2019-06-26T22:26:00.000
3
3
0
Number of epochs to be used in a Keras sequential model
56,781,680
0.197375
python,keras,conv-neural-network
If the number of epochs is very high, your model may overfit, and your training accuracy will reach 100%. One common approach is to plot the error rate on the training and validation data, with the number of epochs on the horizontal axis and the error rate on the vertical axis: you should stop training when the error rate on the validation data reaches its minimum. You also need a trade-off between your regularization parameters, since the major problem in deep learning is an overfitting model. Various regularization techniques are used, such as: i) reducing batch size, ii) data augmentation (only if your data is not diverse), iii) batch normalization, iv) reducing complexity in the architecture (mainly the convolutional layers), v) introducing a dropout layer (only if you are using a dense layer), vi) reducing the learning rate, vii) transfer learning. The batch-size vs. epochs trade-off is quite important; it depends on your data and varies from application to application, so you have to play with your data a little to find the right figure. Normally a batch size of 32 medium-size images requires about 10 epochs for good feature extraction in the convolutional layers. Again, it is relative.
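The plot-and-stop idea above is what Keras automates with the EarlyStopping callback; a minimal sketch:

```python
from tensorflow.keras.callbacks import EarlyStopping

# Stop when validation loss stops improving instead of fixing the epoch
# count in advance; the best weights are restored afterwards.
stopper = EarlyStopping(monitor="val_loss", patience=5,
                        restore_best_weights=True)
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=200, callbacks=[stopper])
```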
I'm building a Keras sequential model to do a binary image classification. Now when I use like 70 to 80 epochs I start getting good validation accuracy (81%). But I was told that this is a very big number to be used for epochs which would affect the performance of the network. My question is: is there a limited number of epochs that I shouldn't exceed, note that I have 2000 training images and 800 validation images.
0
1
2,256
0
59,754,774
0
0
0
0
1
false
0
2019-06-27T08:57:00.000
0
1
0
Multiple header in Pandas DataFrame to_excel
56,787,460
0
python,pandas,export-to-excel
Had a similar issue. Solved by persisting cell-by-cell using worksheet.write(i, j, df.iloc[i,j]), with i starting after the header rows.
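An alternative worth knowing: pandas writes a two-row header natively when the columns are a MultiIndex (group labels on top, column labels below); the groupings here are illustrative:

```python
import pandas as pd

cols = pd.MultiIndex.from_tuples([
    ("group1", "a"), ("group1", "b"), ("group1", "c"),
    ("group2", "d"), ("group2", "e"), ("group2", "f"),
])
df = pd.DataFrame([[1, 2, 3, 4, 5, 6]], columns=cols)
df.to_excel("out.xlsx")   # produces two header rows in the sheet
```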
I need to export my DataFrame to Excel. Everything is good but I need two "rows" of headers in my output file. That mean I need two columns headers. I don't know how to export it and make double headers in DataFrame. My DataFrame is created with dictionary but I need to add extra header above. I tried few dumb things but nothing gave me a good result. I want to have on first level header for every three columns and on second level header for each column. They must be different. I expect output with two headers above columns.
0
1
1,093
0
56,789,899
0
0
0
0
1
false
1
2019-06-27T09:51:00.000
1
4
0
How can I find the index of a tuple inside a numpy array?
56,788,435
0.049958
python,numpy
It does not mean searching for the tuple ('Species2', 'Species3') in groups: when you use np.where(groups == ('Species2', 'Species3')), it searches for 'Species2' and 'Species3' separately, as it would if you had a complete (non-ragged) array like groups = np.array([('Species1', ''), ('Species2', 'Species3')], dtype=object).
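One way to get the index with a ragged object array is to compare element by element in plain Python:

```python
import numpy as np

groups = np.array([('Species1',), ('Species2', 'Species3')], dtype=object)
idx = [i for i, g in enumerate(groups) if g == ('Species2', 'Species3')]
print(idx)  # [1]
```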
I have a numpy array as: groups=np.array([('Species1',), ('Species2', 'Species3')], dtype=object). When I ask np.where(groups == ('Species2', 'Species3')) or even np.where(groups == groups[1]) I get an empty reply: (array([], dtype=int64),) Why is this and how can I get the indexes for such an element?
0
1
1,220
0
58,783,825
0
0
0
0
1
false
0
2019-06-27T12:15:00.000
0
1
0
Aggregate Ranking using Khatri-Rao product
56,790,761
0
python,data-mining,matrix-multiplication,data-analysis,ranking
Rank both graphs separately, so each node gets a rank in both graphs, then do a simple matrix addition and normalize the result. This should keep relationships like rank1 > rank2 > rank3 > rank4 true, and relationships like rank1 + rank1 > rank1 + rank2 true. I don't see how taking the Khatri-Rao product of the matrices would help you: you would end up with more than 400 nodes and would then need to compress them back to 400 nodes in order to have 400 ranked nodes at the end. Who told you to use the Khatri-Rao product?
I have constructed 2 graphs and calculated the eigenvector centrality of each node. Each node can be considered as an individual project contributor. Consider 2 different rankings of project contributors. They are ranked based on the eigenvector of the node. Ranking #1: Rank 1 - A Rank 2 - B Rank 3 - C Ranking #2: Rank 1 - B Rank 2 - C Rank 3 - A This is a very small example but in my case, I have almost 400 contributors and 4 different rankings. My question is how can I merge all the rankings and get an aggregate ranking. Now I can't just simply add the eigenvector centralities and divide it by the number of rankings. I was thinking to use the Khatri-Rao product or Kronecker Product to get the result. Can anyone suggest me how can I achieve this? Thanks in advance.
0
1
83
0
56,797,251
0
0
0
0
1
false
0
2019-06-27T15:56:00.000
0
1
0
How to get cluster of lines using python
56,794,801
0
python,cluster-analysis
Yes, almost all of them can be used; it's just probably not the most efficient way. It depends on how you formalize your problem, which is not yet clear enough: do the points pairwise have to fulfill this property, or is it enough if they "mostly" do, or if some of them do? In many cases it will be easiest (but slow) to build a graph where you connect any two lines that have the desired property, then search for desirable structures such as cliques (and then you'll quickly see why it's not clear which solution you are looking for).
I have a lines dataset where each lines have several points (minimum 2). All the coordinates are known and on the same metric reference. I would like to merge the lines with the same azimut +/- 10° and a maximal distance of 5cm between lines. I think a clustering algorithm can do that, I found on web some clustering algorithms working with points and I would like to know if there is any existing function/algorithm to do that with lines ? If no, I will try to adapt a code myself.
0
1
577
0
56,815,137
0
0
0
0
1
false
0
2019-06-28T07:14:00.000
0
1
0
How to customize a pre-trained model with our own dataset?
56,802,475
0
python,deep-learning,conv-neural-network,transfer-learning
Load the saved model with the pretrained weights. Remove the last dense and 2-class softmax layers and add a new dense layer with a 4-class softmax, since your low-level features are already trained. Now train this model with the modified data.
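A sketch of that head swap in plain Keras (the question's model comes from the TF Object Detection API, which has its own config-driven workflow, so this only shows the general pattern; the file name and layer index are illustrative):

```python
from tensorflow import keras

base = keras.models.load_model("two_class_model.h5")
features = base.layers[-3].output               # output just before old head
out = keras.layers.Dense(4, activation="softmax")(features)
new_model = keras.Model(base.input, out)

for layer in new_model.layers[:-1]:             # freeze the trained backbone
    layer.trainable = False
```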
I am currently working on Object detection. I am using Amazon Workspace for training my model. I am using the model to detect cars and bikes. I have used a pre-trained model (which is Faster-RCNN-Inception-V2 model) and customized it with my own dataset for 2 labels namely car and bike. It took me 5 hours to complete the training. Now I want to modify my model for 2 more labels (keeping to old ones) namely bus and auto. But I don't want to do the training from scratch as my model is already trained for cars and bikes. So is there any way that I can train my model only with the dataset of bus and auto, and after training it will detect all 4 objects(car, bike, bus, and auto)?
0
1
469
0
56,817,681
0
0
0
0
1
false
0
2019-06-29T12:40:00.000
0
1
0
How to "push and shift" an element in a numpy array most efficiently?
56,817,603
0
python,numpy
You're describing a ring buffer. Just use a normal NumPy array plus an integer which tells you which row is the "top." To push, simply replace the "top" row with a new row and increment the top index.
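A minimal sketch of that ring buffer:

```python
import numpy as np

class RingBuffer:
    def __init__(self, n, width=4):
        self.buf = np.zeros((n, width))
        self.top = 0                          # index of the oldest row

    def push(self, row):
        self.buf[self.top] = row              # overwrite the oldest row
        self.top = (self.top + 1) % len(self.buf)

    def ordered(self):
        return np.roll(self.buf, -self.top, axis=0)  # oldest -> newest

rb = RingBuffer(3)
for i in range(5):
    rb.push([i, i, i, i])
print(rb.ordered())   # rows for 2, 3, 4
```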
I am trying to add a numpy array x = [1,2,3,4] to "the end" of a numpy array of shape (n,4) (i.e. "push" x onto the array) while removing the first element (i.e. "shift" the array). So basically, after adding my array x, the (4) shaped array that was added last should be removed. How does one do this in the computationally most efficient manner?
0
1
140
0
56,822,417
0
0
0
0
1
false
0
2019-06-29T22:55:00.000
0
1
0
How to calculate the number of queues needed in a box classification/packaging problem?
56,821,368
0
python,queue,classification,bins
Your question is frustratingly vague. I think I understand what you're asking for, but a little more detail would help. As I understand it, you have something like an order processing facility, where each order is a queue in which you hold items until all the items for the order arrive. Then you release that queue at some rate (x seconds per item, or something like that). The number of queues you need will depend on: The expected delivery rate: how many orders per minute do you want to complete? Call this OPM: Orders Per Minute. The average amount of time a queue lives. That is, time from when the first item arrives in the queue until the last item leaves. QLT: Queue Lifetime The number of queues you need is OPM * QLT. If you want to deliver 100 orders per minute, and the average queue lifetime is 3 minutes. You will need 300 queues. If the average queue lifetime is 30 seconds, then you only need 50 queues. The queue lifetime is a combination of how long it takes to fill the queue, and how long it takes to empty it. Call those QFT and QET. QET is easy: the average number of items in an order, divided by the queue empty rate. You said that items are released from the queue at a fixed rate. If the average order size is 5 items, and you empty the queue at a rate of 12 items per minute, then it will take 5/12 minutes (25 seconds) to empty the queue. QFT (queue fill time) depends on the average order size and the average time it takes an item to be picked and delivered to the queue. If you can't get that from your manufacturing data, then you'll have to estimate it yourself. Doing those calculations gives you a good estimate of how your system should react on average. You can then build a simple simulation using those numbers, and begin varying one or more of the parameters. For example, what happens if the average number of items per order gets larger or smaller for a period. If it gets larger, then QLT will probably increase. If it gets smaller, QLT will decrease, but you'll probably have more concurrent orders (and thus need more queues).
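The sizing arithmetic above, as a few lines of Python (the fill time is the quantity that has to be estimated):

```python
opm = 100                      # orders completed per minute
avg_items = 5                  # items per order
empty_rate = 12                # items leaving a queue per minute

qet = avg_items / empty_rate   # queue-empty time, minutes
qft = 2.5                      # queue-fill time, minutes (estimated)
qlt = qft + qet                # queue lifetime

print(round(opm * qlt))        # ~292 queues needed
```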
I need to size a box classification system where boxes get on a queue until the packaging quantity is reached. Then they all leave the queue at a fixed rate. This is done real time with production. I can estimate the volume for each SKU but I can't predict the order in which they will arrive at the classification/sorting facility. However I can look at previous manufacturing data to test the algorithm. The key point is how would you estimate the needed bins/queues to accomplish the sorting (minimizing the "all queues used" condition) I thought of queue theory but I want to run some simulations with the known data (data is not totally random) and most of what I searched assumes random entry to queues. I'm starting to write a python script to model the queue behavior myself with given fixed times for queue evacuation. Any suggestions? Thanks in advance. Ideally should be python based The expected output should be the used queues vs time and, in case of a limited number of queues, the number of boxes "discarded" vs time
0
1
35
0
56,825,630
0
0
0
0
1
false
0
2019-06-30T14:14:00.000
0
1
0
the way to do the cross validation
56,825,542
0
python,cross-validation,k-fold
It depends on what your end goal is. K-fold CV is used for finding model hyperparameters; after this phase you may change your validation dataset and train your model on it. If you want to harness as much data as you can (for performing predictions), it might be a good idea to train N models on N different folds and ensemble their predictions. This is similar to bootstrapping: all in all, your ensemble saw all the data, yet it didn't overfit. This approach is N times more computationally intensive, though, so it still comes down to your goals. Finally, you should get better results by fitting different models to your folds instead of a single one, but this would require a separate hyperparameter space for each algorithm.
Let say I have fold1, fold2 , fold3. I trained fold1,fold2,fold3 with modelA. A) modelA(fold1) -> modelA(fold2) -> modelA(fold3) B) modelA(fold1) -> saved weight modelA(fold1) -> modelA(fold2)-> saved weight modelA(fold2) -> modelA(fold3)-> saved weight modelA(fold3) -> ensemble 3 weight which way is the right way to do the k-fold cross validation and why?
0
1
33
0
57,441,048
0
0
0
0
1
true
3
2019-07-02T11:17:00.000
0
1
0
Pandas numpy.lookfor equivalent
56,851,264
1.2
python,pandas,numpy
There is probably no such thing for the pandas library as of today. However, a useful tip I found is that you can press Tab while typing a pandas method in a Jupyter Notebook, and it will suggest possible methods. By pressing Shift+Tab you can get information about the method within the notebook, and pressing Shift+Tab four times conveniently opens a simple window that contains all the information about a specific method. Not exactly numpy.lookfor, but still useful.
Numpy has great help function that allows me to search through the numpy documentation from terminal. Is there something similar to numpy.lookfor function that works for pandas library? I know that numpy.info works with pandas functions and methods but I have to know the name of the method beforehand.
0
1
79
0
56,852,299
0
0
0
0
1
false
0
2019-07-02T11:56:00.000
1
1
0
NLP Structure Question (best way for doing feature extraction)
56,851,945
0.197375
python-3.x,pandas,nlp,jupyter-notebook,spacy
The preprocessing pipeline depends mainly upon your problem which you are trying to solve. The use of TF-IDF, word embeddings etc. have their own restrictions and advantages. You need to understand the problem and also the data associated with it. In order to make the best use of the data, we need to implement the proper pipeline. Specifically for text related problems, you will find word embeddings to be very useful. TF-IDF is useful when the problem needs to be solved emphasising the words with lesser frequency. Word embeddings, on the other hand, convert the text to a N-dimensional vector which may show up similarity with some other vector. This could bring a sense of association in your data and the model can learn the best features possible. In simple cases, we can use a bag of words representation to tokenize the texts. So, you need to discover the best approach for your problem. If you are solving a problems which closely resembles the famous NLP problems like IMDB review classification, sentiment analysis on Twitter data, then you can find a number of approaches on the internet.
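A bag-of-words baseline matching the pipeline sketched in the question — TF-IDF features straight from the text column into a LinearSVC (toy data; real preprocessing would slot in before or inside the vectorizer):

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

clf = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, stop_words="english")),
    ("svm", LinearSVC()),
])
clf.fit(["good movie", "bad movie"], [1, 0])
print(clf.predict(["really good"]))
```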
I am building an NLP pipeline and I am trying to get my head around in regards to the optimal structure. My understanding at the moment is the following: Step1 - Text Pre-processing [a. Lowercasing, b. Stopwords removal, c. stemming, d. lemmatisation,] Step 2 - Feature extraction Step 3 - Classification - using the different types of classifier(linearSvC etc) From what I read online there are several approaches in regard to feature extraction but there isn't a solid example/answer. a. Is there a solid strategy for feature extraction ? I read online that you can do [a. Vectorising usin ScikitLearn b. TF-IDF] but also I read that you can use Part of Speech or word2Vec or other embedding and Name entity recognition. b. What is the optimal process/structure of using these? c. On the text pre-processing I am ding the processing on a text column on a df and the last modified version of it is what I use as an input in my classifier. If you do feature extraction do you do that in the same column or you create a new one and you only send to the classifier the features from that column? Thanks so much in advance
0
1
52
0
56,921,855
0
0
0
0
1
false
0
2019-07-03T10:13:00.000
0
1
0
Disparity value difference in Stereo SGBM vs. WLS disparity maps
56,867,814
0
python-3.x
Anyone who might have an idea please?
I have been working on a stereo imaging project for a few months now. The goal is to track a defined object, compute its position (by using triangulation with stereo camera) and its orientation (by using an IMU). I get really good results, but there is something I don't understand and it's bothering me : the minimum depth in the disparity map obtained after using the WLS filter function available in OpenCV is not the same as the minimum depth in the disparity map obtained with the stereo SGBM algorithm. Why ?
0
1
151
0
56,869,092
0
1
0
0
1
false
0
2019-07-03T11:04:00.000
0
1
0
Pros and cons of an object with big dataframe vs a list of lots of class objects in Python
56,868,707
0
python,list,dataframe,object
The question is rather broad, so I will not go beyond generalities. If you intend to process user by user, then it makes sense to have one object per user. On the other hand, if you mainly process all users at the same time, attribute by attribute, then it makes sense to have one object per attribute, each containing that attribute for all users. That way, if memory becomes scarce, you can save everything to disk and keep only one user (resp. attribute) in memory.
I'm writing a python program to perform certain processing on data for couple of millions (N) users. Outcome of the process would be several 1D arrays for each user (the process would apply on each user separately). I need set and get functions for each user output data. I have two options for implementing this: 1. Create a class with attributes of size N by column_size, so one object that contains big arrays 2. Create a class for each user and store instances of this class in a list, so a list of N objects My question is that what are pros and cons of each approach in terms of speed and memory consumption?
0
1
90
0
58,441,617
0
0
0
0
1
true
0
2019-07-03T18:40:00.000
0
1
0
How to apply lstm in speech emotion feature
56,876,204
1.2
python,speech-recognition,lstm
After my internship I learned how to fix this error and where to look. Here's what you have to check. If the reported layer is the first one, the cause is the input data: the training data must have the same shape as the input you declared when creating the model. If the bug is in the last layer, then it is the labels that are not encoded correctly: either you used a sigmoid but the labels are not binary, or you used a softmax and the labels must be in one-hot format, e.g. [0,1,0] for 3 classes, meaning this element is of class 2. So either the labels are badly encoded, or you picked the wrong function (sigmoid / softmax) for your output layer. Hope this helps.
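As a concrete sketch of the one-hot fix (assuming Keras and 8 emotion classes, matching the expected shape (8,) in the error; the label array here is hypothetical):

import numpy as np
from keras.utils import to_categorical

# Integer class labels in [0, 7], one emotion id per sample
y = np.array([0, 3, 7, 2])

# One-hot encode so each target has shape (8,), matching a softmax output layer
y_onehot = to_categorical(y, num_classes=8)
print(y_onehot.shape)  # (4, 8)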
I'd like to apply an LSTM to my speech emotion dataset (a dataset of numeric features with one column of targets). I've done a train/test split. Do I need to apply some other transformation to the dataset before fitting the model? I ask this question because when I compile and fit the model I get an error in the last dense layer: "Error when checking model target: expected activation_2 to have shape (8,) but got array with shape (1,)." Thanks.
0
1
26
0
56,882,432
0
0
0
0
1
true
0
2019-07-03T22:16:00.000
1
1
0
How to write each dataframe partition into different tables
56,878,553
1.2
python-3.x,pyspark,azure-databricks
If your ID has a small number of distinct values (like a type/country column), you can use partitionBy when storing, and saving the groups to different tables will then be faster. Otherwise, create a derived column (using withColumn) from your ID column, using the same logic you want to apply when dividing the data across tables. Then you can use that derived column as a partition column in order to have a faster load.
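A minimal sketch of that idea in PySpark (the data, key-derivation logic and output path are all made up for illustration):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("A1", "body1"), ("B7", "body2")], ["ID", "body"])

# Derive a partition key from the ID; this mapping logic is purely hypothetical
df = df.withColumn("table_key", F.substring("ID", 1, 1))

# partitionBy groups rows sharing a key into their own folder on write
df.write.partitionBy("table_key").mode("append").parquet("/mnt/output/events")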
I am using Databricks to connect to an Event Hub, where each message coming from the Event Hub may be very different from another. In the message, I have a body and an ID. I am looking for performance, so I am avoiding collecting data or doing unnecessary processing, and I also want to do the saving in parallel by partition. However, I am not sure how to do this in a proper way. I want to append the body for each ID to a different AND SPECIFIC table, in batches; the ID gives me the information I need to save to the right table. So in order to do that I have been trying 2 approaches: Partitioning: repartition(numPartitions, ID) -> foreachPartition. Grouping: groupBy('ID').apply(myFunction) #@pandas_udf GROUPED_MAP. Approach 1 doesn't look very attractive to me; the repartition process seems kind of unnecessary, and I saw in the docs that even if I set a column as a partition key, many IDs of that column may be saved in a single partition. It only guarantees that all data related to that ID is in the partition and not split. Approach 2 forces me to output, from the pandas_udf, a dataframe with the same schema as the input, which is not going to happen, since I am transforming the Event Hub message from CSV to a dataframe in order to save it to the table. I could return the same dataframe that I received, but it sounds weird. Is there any nice approach I am not seeing?
0
1
134
0
56,925,320
0
0
0
0
1
false
1
2019-07-04T06:33:00.000
-1
1
0
Find the outliers or anomaly in gps data (time, latitude, longitude, altitude)
56,881,873
-0.197375
python-3.x,machine-learning,data-science
First of all, if you use Python, then use scikit-learn. For this problem there are multiple possibilities. One way is indeed to use a clustering algorithm; to get the anomalies too, you can use DBSCAN, an algorithm designed to find clusters as well as outliers. Another way (assuming you have all the positions for each device) would be something more elaborate: run a clustering algorithm on all the positions to get the important places, and then an LDA (latent Dirichlet allocation) to get the main topics (here the "words" would be the cluster indices, the "documents" would be each device's list of positions, and so the "topics" would be the main routes).
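A minimal sketch of the DBSCAN route with scikit-learn (the coordinates are invented; label -1 marks outliers):

import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical (lat, lon) points; a real pipeline would use a proper distance metric
points = np.array([[48.85, 2.35], [48.86, 2.34], [48.85, 2.36], [40.71, -74.00]])

labels = DBSCAN(eps=0.05, min_samples=2).fit_predict(points)
print(labels)  # points labelled -1 are outliers (here the lone fourth point)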
I have GPS data (time, latitude, longitude, altitude). Based on the data, I want to determine the typical routes a device makes during a full week. After determining the baseline routes, or the typical area frequented by the device, we can start flagging an anomaly when the device travels outside its frequent route/area. Action: the process will then send an "alert" when the device is traveling outside its frequent area/route. Please suggest which machine learning algorithm is useful; I am going to start with a clustering algorithm. Also, please tell me which Python libraries are useful for applying machine learning algorithms.
0
1
493
0
57,056,265
0
0
0
0
1
false
1
2019-07-04T11:42:00.000
0
2
0
Is there an option like generator in keras with scikit to process large records of data?
56,887,202
0
python,machine-learning,keras,scikit-learn,training-data
The Gaussian process implementation (regression/classification) from scikit-learn isn't capable of handling a big dataset; it can only run on up to about 15,000 rows of data. So I decided to use a different algorithm instead, as this seems to be a limitation of the algorithm itself.
I have a training dataset of shape (90000, 50) and I am trying to fit a model (Gaussian process regression) to it. This fails with a memory error. I understand the computation involved, but is there a way to pass the data in batches using scikit-learn? I am using the scikit-learn implementation of the GPR algorithm.
0
1
350
0
56,888,637
0
1
0
0
1
false
0
2019-07-04T12:38:00.000
0
1
0
jupyter notebook - problem no module named 'pandas'
56,888,145
0
python,pandas,numpy,jupyter-notebook,anaconda
Let's try the following in Jupyter: run !pip install pandas --upgrade, then import pandas as pd and print(pd.__version__). What do you see?
I am trying to learn pandas, and I haven't been able to import it into my code. I have looked at other answers on this site and none of them have worked. My Jupyter notebook is not importing the module.
0
1
76
0
56,973,276
0
0
0
0
1
true
1
2019-07-04T12:44:00.000
0
1
0
Keras preprocessing for 3D semantic segmentation task
56,888,245
1.2
python,image,keras,deep-learning,conv-neural-network
The Input() function defines the shape of the input tensor of a given model. For 3D images, often a 5D tensor is expected, e.g. (None, 32, 32, 32, 1), where None refers to the batch size. Therefore the training images and labels have to be reshaped. Keras offers the to_categorical function to one-hot encode the label data (which is necessary). The use of generators helps to feed in the data. In this case, I cannot use the ImageDataGenerator from Keras, as it can only deal with RGB and grayscale images, and I therefore have to write a custom script.
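A small sketch of that label preparation (assuming 4 classes and a recent Keras, where to_categorical preserves the input shape; the arrays are random placeholders):

import numpy as np
from keras.utils import to_categorical

# Placeholder 3D label volume with integer class ids 0..3
labels = np.random.randint(0, 4, size=(32, 32, 32))

# One-hot encoding appends a class axis: (32, 32, 32) -> (32, 32, 32, 4)
onehot = to_categorical(labels, num_classes=4)

# Add batch and channel axes to the image: (1, 32, 32, 32, 1)
image = np.random.rand(32, 32, 32).reshape(1, 32, 32, 32, 1)
print(onehot.shape, image.shape)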
For semantic image segmentation, I understand that you often have a folder with your images and a folder with the corresponding masks. In my case, I have gray-scale images with the dimensions (32, 32, 32). The masks naturally have the same dimensions. The labels are saved as intensity values (value 1 = label 1, value 2 = label 2, etc.), with 4 classes in total. Imagine I have found a model that was built with the Keras model API. How do I know how to prepare my label data for it to be accepted by the model? Does it depend on the loss function? Is it defined in the model (Input parameter)? Do I just add another dimension, (4, 32, 32, 32), in which the 4 represents the 4 different classes, and one-hot encode it? I want to build a 3D convolutional neural network for semantic segmentation, but I fail to understand how to feed in the data correctly in Keras. The predicted output is supposed to be a 4-channel 3D image, each channel showing the probability values of each pixel belonging to a certain class.
0
1
207
0
56,893,072
0
0
0
0
1
false
0
2019-07-04T16:46:00.000
0
1
0
Uploading images folders from my PC to Google Colab
56,891,871
0
python,deep-learning,dataset,google-colaboratory
You can try zipping and then unzipping on Colab. Step 1: zip the whole folder. Step 2: upload the zip file. Step 3: run !unzip myfoldername.zip. Step 4: type ls and look at the folder names to see if it was successful. It would also be better to compress or resize the images to reduce the file size, using OpenCV or something similar.
I want to train a deep learning (CNN) model on a dataset containing around 100,000 images. Since the dataset is huge (approx. 82 GB), I want to use Google Colab, since it is GPU-supported. How do I upload this full image folder into my notebook and use it? I cannot use Google Drive or GitHub, since my dataset is too large.
0
1
308
0
56,897,110
0
0
0
0
1
false
1
2019-07-05T03:31:00.000
0
3
0
Early Stopping, Model has gone through how many epochs?
56,896,221
0
python-3.x,tensorflow,keras,neural-network
Yes, you get the model (weights) corresponding to the epoch when it stops. A commonly used strategy is to save the model whenever the validation loss/acc improves.
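A hedged sketch of that strategy in Keras (assumes Keras >= 2.2.3, where EarlyStopping gained restore_best_weights; the model and data are placeholders):

from keras.callbacks import EarlyStopping, ModelCheckpoint

callbacks = [
    # Stop after 10 stagnant epochs and roll back to the best weights seen
    EarlyStopping(monitor="val_loss", patience=10, restore_best_weights=True),
    # Also keep the best epoch's weights on disk
    ModelCheckpoint("best_model.h5", monitor="val_loss", save_best_only=True),
]
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, callbacks=callbacks)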
I am using Keras. I am training my neural network and using early stopping. My patience is 10 and the epoch with the lowest validation loss is 15. My network runs until epoch 25 and then stops; however, if I understand correctly, my model is the one from epoch 25, not epoch 15. Is there an easy way to revert to the epoch-15 model, or do I need to re-instantiate the model and run 15 epochs?
0
1
1,076
0
56,896,809
0
0
0
0
2
false
2
2019-07-05T04:50:00.000
0
2
0
How to represent the Null class in Multilabel Classification with Convolutional Neural Nets?
56,896,712
0
python,tensorflow,machine-learning,keras
The model outputs a probability for each class, and we assign the input to the class with the highest probability during prediction. The last layer is normally a softmax layer for multiclass classification and a sigmoid for binary classification. Both squash the input into the range 0 to 1, so we can interpret the outputs as probabilities. So no, you cannot have all zeros as a Null class, because with softmax the values will sum up to 1 (a probability distribution); you have to define a new class for Null.
I'm trying to label images with the various categories that they belong to with a convolutional neural net. For my problem, the image can be in a single category, multiple categories, or zero categories. Is it standard practice to set the zero category as all zeroes or should I add an additional null class neuron to the final layer? As an example, let's say there are 5 categories (not including the null class). Currently, I'm representing that with [0,0,0,0,0]. The alternative is adding a null category, which would look like [0,0,0,0,0,1]. Won't there also be some additional unnecessary parameters in this second case or will this allow the model to perform better? I've looked on Stackoverflow for similar questions, but they pertain to Multiclass Classification, which uses the Categorical Crossentropy with softmax output instead of Binary Crossentropy with Sigmoid output, so the obvious choice there is to add the null class (or to do thresholding).
0
1
791
0
56,897,025
0
0
0
0
2
true
2
2019-07-05T04:50:00.000
5
2
0
How to represent the Null class in Multilabel Classification with Convolutional Neural Nets?
56,896,712
1.2
python,tensorflow,machine-learning,keras
Yes, the "null" category should be represented as just zeros. In the end multi-label classification is a set of C binary classification problems, where C is the number of classes, and if all C problems output "no class", then you get a vector of just zeros.
I'm trying to label images with the various categories that they belong to with a convolutional neural net. For my problem, the image can be in a single category, multiple categories, or zero categories. Is it standard practice to set the zero category as all zeroes or should I add an additional null class neuron to the final layer? As an example, let's say there are 5 categories (not including the null class). Currently, I'm representing that with [0,0,0,0,0]. The alternative is adding a null category, which would look like [0,0,0,0,0,1]. Won't there also be some additional unnecessary parameters in this second case or will this allow the model to perform better? I've looked on Stackoverflow for similar questions, but they pertain to Multiclass Classification, which uses the Categorical Crossentropy with softmax output instead of Binary Crossentropy with Sigmoid output, so the obvious choice there is to add the null class (or to do thresholding).
0
1
791
0
59,382,368
0
0
0
0
1
false
1
2019-07-06T18:05:00.000
1
1
0
Adanet running out of memory
56,916,504
0.197375
python,tensorflow,tensorflow-estimator,adanet
Three hypotheses: You might have too many DNNs in your ensemble, which can happen if max_iteration_steps is too small and max_iterations is not set (both of those are constructor arguments to AutoEnsembleEstimator). If you want to train each DNN for N steps, and you want an ensemble with 2 DNNs, you should set max_iteration_steps=N, set max_iterations=2, and train the AutoEnsembleEstimator for 2N steps. You might have been on adanet-0.6.0-dev, which had a memory leak. To fix this, try updating to the latest release and seeing if this problem still arises. Your batch size might have been too large. Try lowering your batch size.
I tried training an AutoEnsembleEstimator with two DNNEstimators (with hidden units of 1000, 500, 100) on a dataset with around 1850 features (after feature engineering), and I kept running out of memory (even on larger 400 GB+ high-mem GCP VMs). I'm using the above for binary classification. Initially I had trained various models and combined them by training a traditional ensemble classifier over the trained models. I was hoping that AdaNet would simplify the generated model graph and make the inference easier, rather than having separate graphs/pickles for various scalers/scikit-learn models/Keras models.
0
1
39
0
56,926,165
0
1
0
0
1
false
0
2019-07-07T07:51:00.000
0
1
0
Is there a pre-defined custom indexed array in python?
56,920,278
0
python-3.x
No. Feel free to write a wrapper class that adjusts slices by such an offset.
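A tiny sketch of such a wrapper (the class name is made up; only plain integer indexing is handled here):

import numpy as np

class OffsetArray:
    """Wrap an ndarray so indexing starts at `start` instead of 0."""
    def __init__(self, data, start):
        self.data = np.asarray(data)
        self.start = start

    def __getitem__(self, i):
        return self.data[i - self.start]

    def __setitem__(self, i, value):
        self.data[i - self.start] = value

a = OffsetArray(np.array([10, 20, 30]), start=-1)
print(a[-1], a[0], a[1])  # 10 20 30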
I am a beginner in Python programming, specifically for scientific computing. In Python, the index of an np.array starts from 0 by default. Is it possible to change the index to start from any number, such as -1, like you can do in Fortran? Thanks a lot.
0
1
36
0
59,574,175
0
0
0
0
1
false
1
2019-07-07T10:05:00.000
0
1
0
How to create a CNN cross filter for Tensorflow in python?
56,921,148
0
python,tensorflow,filter,conv-neural-network
If I understand your question correctly, you want to use a non-square filter, e.g. one of shape 3*4 rather than a square filter of shape 3*3. If that is the case, this is allowed by tf.keras.layers.Conv2D: in the kernel_size argument, you provide (3, 4) instead of just 3 or (3, 3).
In general, the CNN model uses a square filter. I would like to use a cross filter or X filter. The function of creating a square filter in tensorflow is provided, but information on other filter-types is not available. How can I make a cross filter and/or an X-filter?
0
1
35
0
56,931,866
0
0
0
0
1
false
0
2019-07-08T09:04:00.000
0
1
0
Predict the probability of someone leaving based on two conditions
56,931,584
0
python,dataframe,statistics,probability
A direct and simple approach would be to use the mean of the percentages you get from the age and tax bracket they are in. The downside is that you assume both variables are independent and have the same weight in deciding whether an employee will quit or not. A better approach would be to use a classifier to give a more accurate prediction of the probability of an employee quitting based on their tax bracket and age. You could start with common classifiers like a random forest.
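A minimal sketch of the classifier route with scikit-learn (the toy rows are invented; predict_proba yields the quitting probability):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical rows: [age, tax_bracket], label 1 = quit that year
X = np.array([[25, 1], [34, 2], [45, 3], [29, 1], [52, 4], [38, 2]])
y = np.array([1, 0, 0, 1, 0, 1])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Probability of quitting for a 30-year-old in tax bracket 2
print(clf.predict_proba([[30, 2]])[0, 1])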
I am currently working on a dataset that provides information over several years about employees at a big company. The information includes whether or not the employee quit that year (True or False for every year), what tax bracket they're in, and what age they are. Based on the dataset, I have plotted the percentage of people quitting against their age, and the percentage of people quitting against their tax bracket. Assuming those numbers can be considered the probability of someone quitting given their age, and the probability of someone quitting given their tax bracket, I would like to find a way to predict the probability of someone quitting given both age and tax bracket. I cannot use our dataset for that directly, because it is too small and most combinations do not occur in it (so I simply get 0% for everything). Is there a way to predict it using some kind of model?
0
1
33
0
56,950,836
0
0
0
0
1
false
0
2019-07-08T10:31:00.000
1
2
0
Which algorithm to use for percentage features in my DV and IV, in regression?
56,933,059
0.099668
python,statistics,regression,percentage,feature-extraction
Having a mix of predictors doesn't matter for any form of regression; this will only change how you interpret the coefficients. What does matter, however, is the type/distribution of your Y variable. Case 1: Predict a continuous-valued Y. a. Will using a Lasso regression suffice? Regular OLS regression will work fine for this. b. How do I interpret the X-coefficient if X is standardized and is a numeric value? The interpretation of coefficients always follows a format like "for a 1 unit change in X, we expect an X-coefficient amount of change in Y, holding the other predictors constant". Because you have standardized X, your unit is a standard deviation, so the interpretation will be "for a 1 standard deviation change in X, we expect an X-coefficient amount of change in Y...". c. How do I interpret the X-coefficient if X is standardized and is a %? Same as above; your units are still standard deviations, despite the variable originally being a percentage. Case 2: Predict a %-valued Y, like % resource used. a. Should I use beta regression? If so, which package in Python offers this? This is tricky; the typical recommendation is to use something like binomial logistic regression when your Y outcome is a percentage. b. How do I interpret the X-coefficient if X is standardized and is a numeric value? c. How do I interpret the X-coefficient if X is standardized and is a %? Same as the interpretations above, but if you use logistic regression the coefficients are in units of log odds; I would recommend reading up on logistic regression to get a deeper sense of how this works. If I am wrong in standardizing the Xs which are already a %, is it fine to use these numbers as 0.30 for 30% so that they fall within the range 0-1? Standardizing is perfectly fine for variables in regression, but like I said, it changes your interpretation, as your unit is now a standard deviation. Final aim for both cases 1 and 2: to find the % impact of the IVs on Y, e.g. "when X1 increases by 1 unit, Y increases by 21%". If your Y is a percentage and you use something like OLS regression, then that is exactly how you would interpret the coefficients (for a 1 unit change in X1, Y changes by some percent).
I am using regression to analyze server data to find feature importance. Some of my IVs (independent variables), or Xs, are percentages, like % of time, % of cores, % of resource used, while others are numbers, like number of bytes, etc. I standardized all my Xs with (X - X_mean) / X_stddev. (Am I wrong in doing so?) Which algorithm should I use in Python given that my IVs are a mix of numeric values and percentages, and I predict Y in the following cases? Case 1: Predict a continuous-valued Y. a. Will using a Lasso regression suffice? b. How do I interpret the X-coefficient if X is standardized and is a numeric value? c. How do I interpret the X-coefficient if X is standardized and is a %? Case 2: Predict a %-valued Y, like "% resource used". a. Should I use beta regression? If so, which package in Python offers this? b. How do I interpret the X-coefficient if X is standardized and is a numeric value? c. How do I interpret the X-coefficient if X is standardized and is a %? If I am wrong in standardizing the Xs which are already a %, is it fine to use these numbers as 0.30 for 30% so that they fall within the range 0-1? That would mean I do not standardize them, while still standardizing the other numeric IVs. Final aim for both Cases 1 and 2: to find the % impact of the IVs on Y, e.g. "when X1 increases by 1 unit, Y increases by 21%". I understand from other posts that we can NEVER add up all coefficients to a total of 100 to assess the % impact of each and every IV on the DV. I hope I am correct in this regard.
0
1
173
0
56,952,479
0
0
0
0
1
true
2
2019-07-08T18:02:00.000
2
2
1
Remote video streaming with Pepper and NAO robots
56,940,339
1.2
python,opencv,gstreamer,nao-robot,pepper
gstreamer should already be installed on the robot, so you could run it on the robot with a command like this: gst-launch-0.10 -v v4l2src device=/dev/video-top ! video/x-raw-yuv,width=640,height=480,framerate=30/1 ! ffmpegcolorspace ! jpegenc ! multipartmux ! tcpserversink port=3000 ... and then you can open the stream from your computer, for example with VLC: vlc tcp://ip.of.the.robot:3000
I am trying to remotely stream videos and images from the cameras of the Pepper and NAO robots to my laptop. First, I used a while loop to repeatedly capture images from the NAO's camera and processed the images through opencv. However, as you can imagine, this only provided me with a framerate of about 1 fps. Then I tried to access the camera through opencv's videocapture, but it is not working properly. Next, I attempted to use gstreamer 1.0 for python on Windows, but the Windows version seems to be missing a number of elements, even though I have all of the required plugins (base, good, bad, ugly). Also, I am trying to avoid using ROS because I am having issues using it with the python 2.7 naoqi SDK of the Pepper and NAO robots. Any help would be greatly appreciated. Thanks
0
1
1,257
0
56,969,631
0
0
0
0
1
false
2
2019-07-10T11:12:00.000
0
4
0
how do I produce unique random numbers as an array in Python?
56,969,477
0
python,arrays,numpy,random
If you need all possible arrays in random order, consider enumerating them in any arbitrary deterministic order and then shuffling them to randomize the order. If you don't want all arrays in memory, you could write a function to generate the array at a given position in the deterministic list, then shuffle the positions. Note that Fisher-Yates may not even need a dense representation of the list to shuffle... if you keep track of where the already shuffled entries end up you should have enough.
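A small sketch of that enumeration idea (each integer 0..65535 maps to a unique 4x4 bit pattern through its 16-bit representation):

import numpy as np

# Shuffle the positions rather than materializing all arrays in random order
order = np.random.permutation(2 ** 16)

def array_at(n):
    """Deterministically map n in [0, 65535] to a unique (4, 4) 0/1 array."""
    bits = (n >> np.arange(16)) & 1
    return bits.reshape(4, 4)

unique_arrays = (array_at(n) for n in order)  # all 65536 arrays, no repeats
print(next(unique_arrays))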
I have an array of size (4,4) whose entries can be 0 or 1, so there are 65536 different possible arrays. I need to produce all these arrays without repetition. I use wt_random = np.random.randint(2, size=(65536,4,4)), but I am worried they are not unique. Could you please tell me whether this code is correct, and what I should do to produce all possible arrays? Thank you.
0
1
452
0
56,970,162
0
1
0
0
2
false
0
2019-07-10T11:43:00.000
0
3
0
How to add a column of seconds to a column of times in python?
56,970,023
0
python,datetime,time,add,timedelta
Probably the easiest way is to read your file into a pandas DataFrame and parse each row as a datetime object. Then you create a datetime.timedelta object, passing the fractional seconds. A datetime object plus a timedelta handles wrapping around to the next day quite nicely, so this should work without any additional code. Finally, write your updated DataFrame back to a file.
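A minimal sketch of that idea with pandas (the column names are hypothetical; note how the day rolls over automatically):

import pandas as pd

df = pd.DataFrame({"time": ["2014-08-26 19:49:32", "2014-08-28 23:59:59"],
                   "extra_seconds": [2.13266, 2.21784]})

# Parse the timestamps, then add the fractional seconds as timedeltas
df["time"] = pd.to_datetime(df["time"])
df["time"] = df["time"] + pd.to_timedelta(df["extra_seconds"], unit="s")
print(df)  # the second row rolls over to 2014-08-29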
I have a file containing some columns, of which the second column is time, like what I show below. I need to add a column of times that are all in seconds, like "2.13266 2.21784 2.20719 2.02499 2.16543", to the time column in the first file (below). My question is how to add these two times to each other. Also, in some cases adding these times rolls over to the next day; how do I change the date in the related row in that case? 2014-08-26 19:49:32 0 2014-08-28 05:43:21 0 2014-08-30 11:47:54 0 2014-08-30 03:26:10 0
0
1
673
0
56,973,402
0
1
0
0
2
false
0
2019-07-10T11:43:00.000
0
3
0
How to add a column of seconds to a column of times in python?
56,970,023
0
python,datetime,time,add,timedelta
OK, finally it is done via this code: d = 2.13266; dd = pd.to_timedelta(int(d), unit='s'); df = pd.Timestamp('2014-08-26 19:49:32'); new = df + dd
I have a file containing some columns, of which the second column is time, like what I show below. I need to add a column of times that are all in seconds, like "2.13266 2.21784 2.20719 2.02499 2.16543", to the time column in the first file (below). My question is how to add these two times to each other. Also, in some cases adding these times rolls over to the next day; how do I change the date in the related row in that case? 2014-08-26 19:49:32 0 2014-08-28 05:43:21 0 2014-08-30 11:47:54 0 2014-08-30 03:26:10 0
0
1
673
0
56,981,888
0
0
0
0
1
true
0
2019-07-10T15:34:00.000
0
1
0
how to average in a specific dimension with numpy.mean?
56,974,114
1.2
python,numpy
The solution is to specify the correct axis and use keepdims=True, which was noted by several commenters (if you add your answer I will delete mine). This can be done with either pos.mean(axis=0, keepdims=True) or np.mean(pos, axis=0, keepdims=True).
I have a matrix called POS of shape (10, 132), and I need to average over the first axis (the 10 elements) so that my averaged matrix has shape (1, 132). I have tried means = pos.mean(axis=1) and means = np.mean(pos), but the result in the first case is a matrix of shape (10,) and in the second it is a single number. I expect the output to be a matrix of shape (1, 132).
0
1
95
0
56,978,241
0
0
0
0
1
false
0
2019-07-10T18:50:00.000
1
2
0
Can I obtain Word2Vec and Doc2Vec matrices to calculate a cosine similarity?
56,976,941
0.099668
python,gensim,word2vec,doc2vec
Yes, you could train a Word2Vec or Doc2Vec model on your texts. (Though, your data is a bit small for these algorithms.) Afterwards, with a Word2Vec model (or some modes of Doc2Vec), you would have word-vectors for all the words in your texts. One simple way to then create a vector for a longer text is to average together all the vectors for the text's individual words. Then, with a vector for each text, you can compare texts by calculating the cosine-similarity of their vectors. Alternatively, with a Doc2Vec model, you can either (a) look up the learned doc-vectors for texts that were in the training set; or (b) use infer_vector() to feed in new text, which should be tokenized the same way as the training data, and get a model-compatible vector for that new text.
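A hedged sketch of the averaging approach with gensim (assumes gensim 3.x, where Word2Vec takes size=; the tiny corpus is a placeholder for your 6000 documents):

import numpy as np
from gensim.models import Word2Vec

# Toy tokenized corpus, purely for illustration
docs = [["cat", "sat", "mat"], ["dog", "sat", "log"], ["cat", "dog", "play"]]
model = Word2Vec(docs, size=50, min_count=1, seed=0)

def doc_vector(tokens):
    """Average the word vectors of a document's tokens."""
    return np.mean([model.wv[t] for t in tokens if t in model.wv], axis=0)

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(doc_vector(docs[0]), doc_vector(docs[1])))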
I am working with text data, and at the moment I have put my data into a term-document matrix and calculated the TF (term frequency) and TF-IDF (term frequency-inverse document frequency). From here my matrix looks like: columns = document names, rownames = words, filled with their TF and TF-IDF scores. I have been using the tm package in R for much of my current analysis, but to take it further I have started playing around with the gensim library in Python. It's not clear to me whether what I have are word embeddings, as with TF and TF-IDF. I am hoping to use Word2Vec/Doc2Vec to obtain a matrix similar to what I currently have, and then calculate the cosine similarity between documents. Is this one of the outputs of the models? I basically have about 6000 documents; I want to calculate the cosine similarity between them and then rank these cosine similarity scores.
0
1
1,708
0
56,980,001
0
0
0
0
1
true
1
2019-07-10T21:59:00.000
1
1
0
Is it bad to not remove stopwords when I've already set a ceiling on document frequency?
56,979,185
1.2
python,scikit-learn,nlp,text-mining,text-processing
You are right; that could serve as the definition of stop words. However, do not forget that one reason to remove stop words in an early phase is to avoid counting them at all and so reduce computation time. Your intuition behind stop words is correct.
I'm using sklearn.feature_extraction.text.TfidfVectorizer. I'm processing text. It seems standard to remove stop words. However, it seems to me that if I already have a ceiling on document frequency, meaning I will not include tokens that are in a large percent of the document (eg max_df=0.8), dropping stop words doesn't seem necessary. Theoretically, stop words are words that appear often and should be excluded. This way, we don't have to debate on what to include in our list of stop words, right? It's my understanding that there is disagreement over what words are used often enough that they should be considered stop words, right? For example, scikit-learn includes "whereby" in its built-in list of English stop words.
0
1
40
0
56,988,032
0
0
0
0
2
false
0
2019-07-11T10:03:00.000
0
2
0
Applying "reinforcement learning" on a supervised learning model
56,986,663
0
python,linear-regression,reinforcement-learning,supervised-learning
Most (or maybe all) iterative supervised learning methods already use a feedback loop on the outputs of the prediction. In fact, this feedback is very informative, since it provides the exact amount of error for each sample. Think, for example, of stochastic gradient descent, where you compute the error of each sample to update the model parameters. In reinforcement learning, the feedback signal (i.e., the reward) is much more limited than in supervised learning. Therefore, in the typical setup of adjusting some model parameters, if you have a set of input-output pairs (i.e., a training data set), it probably makes no sense to apply reinforcement learning. If you are thinking of a more specific case/problem, you should be more specific in your question.
Is it possible to use "reinforcement learning" or a feedback loop on a supervised model? I have worked on a machine learning problem using a supervised learning model, more precisely a linear regression model, but I would like to improve the results by creating a feedback loop on the outputs of the prediction, i.e., telling the algorithm if it made mistakes on some examples. As far as I know, this is basically how reinforcement learning works: the model learns from positive and negative feedback. I found out that we can implement supervised learning and reinforcement learning algorithms using PyBrain, but I couldn't find a way to relate the two.
0
1
361
0
57,064,604
0
0
0
0
2
false
0
2019-07-11T10:03:00.000
0
2
0
Applying "reinforcement learning" on a supervised learning model
56,986,663
0
python,linear-regression,reinforcement-learning,supervised-learning
Reinforcement learning has been used to tune hyper-parameters and/or select optimal supervised learning models. There's also a paper on it: "Learning to Optimize with Reinforcement Learning". Reading Pablo's answer, you may also want to read up on "backpropagation"; it may be what you are looking for.
Is it possible to use "reinforcement learning" or a feedback loop on a supervised model? I have worked on a machine learning problem using a supervised learning model, more precisely a linear regression model, but I would like to improve the results by creating a feedback loop on the outputs of the prediction, i.e., telling the algorithm if it made mistakes on some examples. As far as I know, this is basically how reinforcement learning works: the model learns from positive and negative feedback. I found out that we can implement supervised learning and reinforcement learning algorithms using PyBrain, but I couldn't find a way to relate the two.
0
1
361
0
68,777,974
0
0
0
0
1
false
10
2019-07-11T11:48:00.000
1
1
0
Tensorflow 2.0: Accessing a batch's tensors from a callback
56,988,498
0.197375
python,tensorflow,keras,tensorflow2.0,tf.keras
No, there is no way to access the actual values for input and output in a callback; that's just not part of the design goal of callbacks. Callbacks only have access to the model, the args to fit, the epoch number and some metric values. As you found, model.input and model.output only point to the symbolic KerasTensors, not actual values. To do what you want, you could take the input, stack it (maybe with a RaggedTensor) with the output you care about, and then make it an extra output of your model. Then implement your functionality as a custom metric that only reads y_pred; inside your metric, unstack y_pred to get the input and output, and then visualize / serialize / etc. Another way might be to implement a custom Layer that uses py_function to call a function back in Python. This will be super slow during serious training but may be enough for use during diagnostics / debugging.
I'm using Tensorflow 2.0 and trying to write a tf.keras.callbacks.Callback that reads both the inputs and outputs of my model for the batch. I expected to be able to override on_batch_end and access model.inputs and model.outputs, but they are not EagerTensors with a value that I could access. Is there any way to access the actual tensor values that were involved in a batch? This has many practical uses, such as outputting these tensors to TensorBoard for debugging, or serializing them for other purposes. I am aware that I could just run the whole model again using model.predict, but that would force me to run every input twice through the network (and I might also have a non-deterministic data generator). Any idea on how to achieve this?
0
1
354
0
57,031,365
0
0
0
0
1
false
0
2019-07-11T15:48:00.000
0
1
0
VTK ObbTree.IntersectWIthLine too slow for large-scale collision detection
56,993,004
0
python,gpu,vtk
IntersectWithLine only intersects a segment with a polygonal mesh, as far as I know. You could maybe build a convex hull of the first cloud and then ask for the internal points of the second; in this case one would use vtkSelectEnclosedPoints.
I want to check the collision between a set of points and a point cloud (containing around 1M points). I actually want to know which parts of the point cloud collide with those outside points, and to store those collided points in the point cloud. I loop through each outside point and use Obbtree.IntersectWithLine to check the collision with each point in the point cloud, but it is too slow... I enabled CPU parallel computing but the improvement is limited. I was wondering if there is any GPU-accelerated API that is compatible with VTK objects, or a more efficient way to check the collision?
0
1
173
0
57,639,340
0
0
0
0
1
false
0
2019-07-11T16:21:00.000
0
1
0
Python Streamplot from large 1D arrays
56,993,521
0
python,arrays,python-3.x,matplotlib,plot
Try to vectorize your loops; that may help greatly with large-scale data.
I need to plot streamtraces from a CFD analysis with Python, over a 2D contour plot. My problem is that I'm dealing with 4 large 1D arrays (x, y coordinates and u, v velocity components), say over 100k points, arising from an external CFD simulation (so I cannot manipulate them). Creating 2D arrays from them (e.g. with scipy.interpolate.griddata, as I found suggested) causes my computer to crash due to excessive memory usage. I've also tried quiver, but I can't get a size for the arrows that scales with the dimensions of the plot: they are either too big or too small, and in any case too many. I've looked at all the solutions I could find, but none worked.
0
1
48
0
58,046,297
0
0
0
0
1
false
0
2019-07-11T17:26:00.000
0
2
0
Save colab results
56,994,495
0
python-2.7
After mounting your Google Drive, you can do the following to save your work on the drive: 1. Click File. 2. Click Save File in Drive.
I am using Google Colab, but I cannot save the results directly from Colab to my Google Drive or even to my computer. Can anybody give me a hand? Regards
0
1
67
0
56,997,616
0
1
0
0
1
false
0
2019-07-11T21:17:00.000
1
2
0
check for consecutive values membership in arrays
56,997,410
0.099668
python,arrays
The best way to maintain a mapping from elements in an array to a value is a hash table. As @JacobIRR mentioned, {[4, 5, 6]: 6} is not valid, because a list cannot be used as a dictionary key (it is unhashable). I can't give you an exact solution to your problem, since you didn't mention how your function works, but if your algorithm doesn't depend on a list as a key, then {6: [4, 5, 6]} would be valid, since a dict doesn't allow duplicate keys but does allow duplicate values.
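A small sketch of a working alternative: use tuples (which are hashable) as keys and count sliding windows (the data mirrors the question's example):

from collections import Counter

data = [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 4, 5, 6],
        [2, 4, 5, 6, 9, 1, 4, 5, 6, 4, 5, 6],
        [1, 2, 9, 4, 5, 6, 7, 7, 8, 5]]

# Count every consecutive run of 3 values across all sublists
counts = Counter(tuple(row[i:i + 3]) for row in data for i in range(len(row) - 2))
print(counts[(4, 5, 6)])  # 6, keyed by a hashable tuple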
There are several similar questions around that suggest using any() with sets, however mine is a bit different in a way that I want to get the most effective and Pythonic way of forming a new array out of existing arrays based on membership. I have the following array [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 4, 5, 6], [2, 4, 5, 6, 9, 1, 4, 5, 6, 4, 5, 6], [1, 2, 9, 4, 5, 6, 7, 7, 8, 5]] What I need is a way to form a new object {[4, 5, 6] : 6} where [4, 5, 6] is the key, and 6 is the number of times the key sequence appears in the array above. I have achieved the result by making a function with simple for loops, but I don't think it is the most Pythonic and efficient way of achieving it. I was thinking of using map() and filter() as well. What would be the most efficient and Pythonic way of achieving the result?
0
1
54
0
57,009,082
0
0
0
0
1
false
0
2019-07-12T14:00:00.000
0
1
0
How does fsolve in scipy work? What methods does it use to find the roots?
57,008,554
0
python,scipy
So, after looking at the source code, it seems that the solver fsolve uses Powell's hybrid method (via MINPACK's hybrd/hybrj routines) to find the roots.
I'm trying to find out how fsolve in scipy works. So far, all I have found is that it is a numerical solver that finds the root of non linear sets of equations. But I can't find what method it uses to find the roots anywhere. Does anyone know how the roots are found?
0
1
122
0
57,012,785
0
0
0
0
1
true
1
2019-07-12T17:34:00.000
1
2
0
Force scipy.stats.cauchy between an interval
57,011,662
1.2
python,python-3.x,scipy
Short answer: not directly, no. The issue is that a distribution is normalized: the integral of the pdf over the support is unity. When you change the support, you are effectively changing the distribution. For the truncated Cauchy distribution, you can easily roll your own little generator, using the inverse function transform of a uniform random variate.
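A compact sketch of that inverse-CDF trick with scipy (center and sigma stand for the loc/scale in the question; it is vectorized, with no rejection loop):

import numpy as np
from scipy.stats import cauchy

center, sigma = 0.0, 0.5  # hypothetical loc/scale

# Map uniforms into the CDF values of [-1, 1], then invert the CDF
lo, hi = cauchy.cdf(-1, center, sigma), cauchy.cdf(1, center, sigma)
u = np.random.uniform(lo, hi, size=10000)
samples = cauchy.ppf(u, center, sigma)

assert samples.min() >= -1 and samples.max() <= 1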
I would like to force the scipy.stats.cauchy probability density function to only generate values between -1 and 1. Currently I am doing it the hacky way: running y = cauchy.rvs(center, sigma) inside a while loop, recomputing it while it is lower than -1 or higher than 1, and returning y once it enters the desired interval. So essentially I am drawing a new random variable until the condition is met. I am wondering whether it's possible to do this in a simpler way; the scipy documentation is not very helpful and is quite ambiguous. Is there any way to specify the min/max range of the random variables inside the function arguments, like via **kwargs or something?
0
1
507
0
57,016,163
0
0
0
0
1
false
0
2019-07-13T01:06:00.000
0
2
0
Quickest way to insert zeros into numpy array
57,015,429
0
python-3.x,numpy
You can use np.zeros and append it to your existing array, like newid = np.append(np.zeros((4,), dtype=int), ids). Good luck!
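Note that this only prepends four zeros once; a fully vectorized sketch for inserting four zeros before every id group (reusing the question's arrays) could look like this:

import numpy as np

ids = np.array([1, 1, 1, 1, 2, 2, 2, 3, 4, 4])
vals = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])

# Index where each new id group starts, including the very first element
starts = np.r_[0, np.flatnonzero(np.diff(ids)) + 1]

# np.insert with a repeated index puts 4 zeros before each group start
out = np.insert(vals, np.repeat(starts, 4), 0)
print(out)  # [0 0 0 0 1 2 3 4 0 0 0 0 5 6 7 0 0 0 0 8 0 0 0 0 9 10]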
I have a numpy array ids = np.array([1,1,1,1,2,2,2,3,4,4]) and another array of equal length vals = np.array([1,2,3,4,5,6,7,8,9,10]). Note: the ids array is sorted in ascending order. I would like to insert 4 zeros before the beginning of each new id - i.e. new array = np.array([0,0,0,0,1,2,3,4,0,0,0,0,5,6,7,0,0,0,0,8,0,0,0,0,9,10]). The only way I am able to produce this is by iterating through the array, which is very slow - and I am not quite sure how to do this using insert, pad, or expand_dims...
0
1
126
0
57,053,572
0
0
0
0
1
true
1
2019-07-13T23:37:00.000
0
1
0
How can I put my curvilinear coordinate data on a map projection?
57,023,555
1.2
python,interpolation,netcdf
I am pretty sure you can already use methods like contour, contourf, pcolormesh from Python's matplotlib without re-gridding the data. The same methods work for Basemap.
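A brief sketch with matplotlib (TLAT/TLON stand for the 2D curvilinear coordinate arrays; random data stands in for sea-ice thickness):

import numpy as np
import matplotlib.pyplot as plt

# Placeholder curvilinear grid: 2D lat/lon arrays with the same shape as the data
TLON, TLAT = np.meshgrid(np.linspace(-180, 180, 100), np.linspace(60, 90, 50))
thickness = np.random.rand(50, 100)

# pcolormesh accepts 2D coordinate arrays, so no re-gridding is needed
plt.pcolormesh(TLON, TLAT, thickness)
plt.colorbar(label="sea-ice thickness (m)")
plt.show()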
I'm working with NetCDF files from NCAR and I'm trying to plot sea-ice thickness. This variable is on a curvilinear (TLAT,TLON) grid. What is the best way to plot this data on a map projection? Do I need to re-grid it to a regular grid or is there a way to plot it directly? I'm fairly new to Python so any help would be appreciated. Please let me know if you need any more information. Thank you! I've tried libraries like iris, scipy, and basemap, but I couldn't really get a clear explanation on how to implement them for my case.
0
1
276
0
57,029,334
0
0
0
0
1
false
0
2019-07-14T12:11:00.000
0
2
0
Add coordinates to points after changing a df CRS in Geopandas
57,027,276
0
python,pandas,gis,qgis,geopandas
Alright, problem solved. It was a simple EPSG issue. Here's the solution, in case it can help any other newbie like me: even if Google uses EPSG 3857, if you need to convert the X, Y of your point to something exploitable (such as a URL parameter), you need to use EPSG 4326. You'll then get your long/lat the right way, like 5.859162,49.003710. So: Step 1: convert your original shapefile to EPSG 4326 with pointswgs84 = points.to_crs({'init': 'epsg:4326'}). Step 2: simply extract x and y and add them as new columns: pointswgs84['lon'] = pointswgs84.geometry.x and pointswgs84['lat'] = pointswgs84.geometry.y.
Basically the idea is to automate some workflows without using QGIS. I'm failing to achieve results similar to the QGIS feature 'Add coordinates to points' in GeoPandas, which allows you to grab the x, y coordinates of points in their current projection and add them as new attributes in the table. So I have a set of points to play with. The original shapefile's CRS is EPSG 2154 (Lambert 93). I need to get the latitude and longitude in a format compatible with Google Maps; Google uses EPSG 3857 for Google Maps. points = pd.DataFrame({'Id': ['001', '002', '003'],'geometry': ['POINT (909149.3986619939 6881986.232659903)', 'POINT (909649.3986619939 6882486.232659903)', 'POINT (909149.3986619939 6882486.232659903)']}) The idea is to switch to EPSG 3857 (WGS84) and, from there, create lat/long columns filled with WGS84 coordinates, such as 47.357955,1.7317783. So what I did was obviously changing the CRS: pointswgs84 = points.to_crs(epsg=3857) And then pointswgs84['lon'] = pointswgs84.geometry.x pointswgs84['lat'] = pointswgs84.geometry.y But my lat/long columns then get filled with coordinates corresponding to the original points dataframe: points = pd.DataFrame({'Id': ['001', '002', '003'],'geometry':['POINT (909149.3986619939 6881986.232659903)', 'POINT (909649.3986619939 6882486.232659903)', 'POINT (909149.3986619939 6882486.232659903)'],'long': ['6881986.232659903', '6882486.232659903', '6882486.232659903'], 'lat': ['909149.3986619939', '909649.3986619939', '909149.3986619939']}) It looks like I'm missing something here, but since I'm rather new to Python and GeoPandas in general, I'm not sure what... Thank you for your help.
0
1
659
0
57,031,293
0
0
0
0
1
false
1
2019-07-14T14:49:00.000
0
2
0
Create a non-hdfs csv from spark dataframe
57,028,445
0
python-3.x,pandas,apache-spark,pyspark,pyspark-sql
With 45 million records you will likely need to create a set of CSV files, which Spark will do automatically. The path will vary depending on where you want to save the data; for example, to write to S3 you would provide a path like df.write.csv("s3://my-bucket/path/to/folder/"). You may also want to manually repartition the data before writing to get an exact number of output files.
I want to create a non-HDFS .csv file using a Spark DataFrame. How can I do it? The purpose of this non-HDFS .csv file is to use read_csv() on it and load it back into a pandas DataFrame. I tried using toPandas(), but I have 45 million records in my Spark DataFrame and it is very slow.
0
1
438
0
57,029,035
0
1
0
0
1
true
1
2019-07-14T15:37:00.000
1
1
0
Grid Search and computer in sleep mode
57,028,814
1.2
python,machine-learning,grid-search,gridsearchcv
Sleep mode suspends all processor activity and places RAM in a low-power state just sufficient to retain its contents. So yes, grid search, or any process for that matter, is suspended.
Does GridSearchCV stop when my computer is in sleep mode? Should I turn off sleep mode when running it? Regards, Vikrant
0
1
68
0
57,142,523
0
0
0
0
1
false
0
2019-07-15T07:29:00.000
-1
1
0
how to use natural language generation from a csv file input .which python module we should use.can any one share a sample tutorial?
57,035,069
-0.197375
python,nlp
There are not many Python libraries for NLG. Try out nlglib, a Python wrapper around SimpleNLG. For tutorial purposes, you could read Building Natural Language Generation Systems by E. Reiter.
I want to take a CSV file as input and generate text/sentences using NLG. I have tried pynlg and Markov chains, but nothing worked. What else can I use?
0
1
311
0
57,045,925
0
1
0
0
1
false
0
2019-07-15T18:30:00.000
0
2
0
Maximizing GCD with some condition
57,045,356
0
java,c++,python-3.x,python-2.7,greatest-common-divisor
Here is one way of doing it. Create a mutable class MinMax for storing the min. and max. index. Create a Map<Integer, MinMax> for storing the min. and max. index for a particular divisor. For each value in a, find all divisors of a[i] and update the map accordingly, so that the MinMax object stores the min. and max. i of the numbers with that particular divisor. When done, iterate the map and find the entry with the largest result of key + value.max - value.min. The min. and max. indices of that entry are your answer.
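A compact sketch of that idea in Python (divisors are found by trial division up to sqrt(a[i]), so the whole thing is roughly O(N * sqrt(max_a))):

import math

def best_pair_value(a):
    """max over i < j of gcd(a[i], a[j]) + (j - i), via per-divisor min/max indices."""
    minmax = {}  # divisor -> [first index seen, last index seen]
    for i, x in enumerate(a):
        for d in range(1, int(math.sqrt(x)) + 1):
            if x % d == 0:
                for div in (d, x // d):
                    if div in minmax:
                        minmax[div][1] = i  # update the max index
                    else:
                        minmax[div] = [i, i]
    # only divisors seen at two distinct indices form a valid pair
    return max(d + hi - lo for d, (lo, hi) in minmax.items() if hi > lo)

print(best_pair_value([2, 4, 6, 3]))  # 4, e.g. gcd(2, 6) + (2 - 0)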
This is a problem given in HackWithInfy2019 on HackerRank. I have been stuck on it since yesterday. Question: You are given an array of N integers. You have to find a pair (i, j) which maximizes the value of GCD(a[i], a[j]) + (j - i), where 1 <= i < j <= n. Constraints: 2 <= N <= 10^5, 1 <= a[i] <= 10^5. I've tried this problem using Python.
0
1
289
0
57,048,809
0
0
0
0
1
true
0
2019-07-15T19:03:00.000
0
1
0
How to do multi-objective optimization using neural networks?
57,045,756
1.2
python,matlab,neural-network,artificial-intelligence
There are many methods, but the easiest is "linear scalarization": you add the objectives together to make a single objective, weighting each one according to its priority (i.e., taking a linear combination of the multiple objectives). See, for example, the variational autoencoder loss (regularization loss + reconstruction loss) or the associative domain adaptation loss (classification loss + walker loss + visit loss).
I have five decision variables, each having a specific range. I need to find a combination of these variables so as to maximize one of my objectives while minimizing the other at the same time. I have prepared a datasheet of randomly generated variables with respective values of the 2 objective functions. Please suggest me how to approach the solution using neural networks. My objective function involves thermodynamic calculations. If interested you can have a look at the objective functions here :
0
1
244
0
57,053,491
0
0
0
0
1
false
0
2019-07-16T08:24:00.000
0
1
0
How to include quarterly regressor in Prophet for monthly time series?
57,053,017
0
python,forecasting,facebook-prophet
You have to evaluate based on your business problem, but there are some questions you can ask yourself. How are the external regressors making their predictions? Are they trained on completely different data? If not, are they worth including? How quickly do we expect those regressors to get "stale"? How far into the future are their predictions available? How well do they perform more than one quarter into the future? Interpolation can be reasonable based on these factors... but don't leak information about the future to your model at training time. Do they relate to subsets of your features? If so, some feature engineering could be fun: combine the external regressor's output with your other data in meaningful ways.
I have a monthly time series which I want to forecast using Prophet. I also have external regressors which are only available on a quarterly basis. I have thought of the following possibilities: repeat the quarterly values to make them monthly, or linearly interpolate between the quarterly values for the intermediate months. What other options can I evaluate? Which would be the most sensible thing to do in this situation?
0
1
189
0
67,595,461
0
0
0
0
1
false
0
2019-07-16T13:11:00.000
0
1
0
Keras, Tensorflow are reserving all GPU memory on model build
57,058,071
0
python-3.x,tensorflow,keras
I met a similar problem when loading a pre-trained ResNet50: GPU memory usage surged to 11 GB while ResNet50 usually consumes less than 150 MB. The problem in my case was that I also imported PyTorch without actually using it in my code. After commenting it out, everything worked fine. But I have another PC where the same code works just fine, so I uninstalled and reinstalled TensorFlow and PyTorch with the correct versions; then everything worked fine even with PyTorch imported.
My GPU is an NVIDIA RTX 2080 Ti, with Keras 2.2.4, tensorflow-gpu 1.12.0 and CUDA 10.0. Once I build a model (before compilation), I find that GPU memory is fully allocated: [0] GeForce RTX 2080 Ti | 50'C, 15 % | 10759 / 10989 MB | issd/8067(10749M). What could be the reason, and how can I debug it? I don't have spare memory to load the data, even if I load it via generators. I have monitored GPU memory usage and found it is full just after building the layers (before compiling the model).
0
1
502
0
57,084,810
0
0
0
0
1
false
0
2019-07-16T20:08:00.000
0
1
0
Using pandas.merge_asof when one merge column contains NaNs
57,064,741
0
python,pandas,merge
I tried two methods: 1) I set the NaNs to -1 (there was no ID of -1 in the other dataset), then put them back to NaN afterwards. 2) I removed the records associated with the NaNs in that column, and put the records back in afterwards. I tried to compare the results (after resetting the indices and sorting by timestamp), but I kept getting False. Both should give the same result regardless.
New to pandas - I have been trying to use pandas.merge_asof to join two datasets together by a shared ID first, then merge by the timestamp nearest to the timestamp in df1. The issue I have discovered is that both left_on and right_on must be int. I have one column that contains NaNs, and they must remain. Floats were also ineffective. From my research on Stack Overflow, I found out that the latest version of pandas, 0.24, has this functionality, where you simply convert the column to Int64. However, the version of pandas I have available at work is 0.23.x and cannot be upgraded at this time. What is my easiest option? If I were to simply remove the rows associated with the NaN values in the one column, could I simply add them back later, and then change the dtype back from int to object? Would this disrupt anything?
0
1
486
0
57,066,365
0
0
0
0
1
true
0
2019-07-16T22:30:00.000
0
1
0
How to apply trained model on images of shape/size larger than what the model was trained on (in Tensorflow)?
57,066,263
1.2
python,python-2.7,tensorflow
I once had the same question about how to deal with larger or smaller images. From my experience, a possible method is to resize images to the input size of the network. For example, if your current image size is 982x982 and the network input size is 512x512, then you can just use a library like Pillow or OpenCV to resize the original image from 982x982 down to 512x512. Your sliding-window method is also a possible solution. I would say that there are many possible solutions other than the resizing operation, but you should first try this simplest method to see if your network works well. What I have learned from my projects is that we can always try the simplest solution, and in most cases it works perfectly. Generally speaking, there is no universally perfect way to do it: start with the simplest method, and move to a more complicated one if it does not work.
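A one-line sketch of that resizing step with OpenCV (the file name is hypothetical; note that cv2.resize takes (width, height)):

import cv2

img = cv2.imread("large_image.png")   # e.g. 982 x 982
small = cv2.resize(img, (512, 512))   # down to the network's input size
# prediction = model.predict(small[None, ...])  # add a batch axis before predicting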
I trained a model on size 512x512 images. I currently have size 982x982 (or any other size) images that I need to have the model predict on. When I run on these images, I get errors about unexpected input array shape. Does Tensorflow provide any way to conveniently deploy a model on images of size/shape larger than what the model was trained on? More Details: Specifically, this is a model used for image segmentation. I assume one workaround to the issue is creating my own sliding-window script that, instead, inputs into the model windows of the expected size taken at different intervals from the original image, and then somehow pasting all those output windows back into a single image after they have gone through the model. However, I want to know if there is a more standard approach. I am brand new to Tensorflow (and image segmentation) so it is very possible I just missed something, or that perhaps my question is unclear. Thank you for any advice :)
0
1
234
0
57,085,156
0
0
0
0
1
false
0
2019-07-17T04:15:00.000
0
1
0
Converting multiple string columns to enum in H2o Flow
57,068,435
0
python,pyspark,h2o
I was not able to find a coding solution, but I found an easier way: save the PySpark df as Parquet in HDFS and import it into H2O. All string columns are then auto-recognised as enum.
I have more than 100 string columns that I need to convert to enum so that the ML model identifies these columns as categories. In PySpark there is no category type (as in pandas), hence I cast all categories as string. I don't want to click 'Convert to enum' more than 100 times, and I'm sure there is an easier way to perform this task. Any help would be greatly appreciated.
0
1
228
0
57,077,151
0
0
0
0
1
true
0
2019-07-17T11:43:00.000
2
1
0
Training Keras model without validation set and normalization of images
57,075,037
1.2
python,keras,conv-neural-network,keras-2,standardized
Yes, you can train a Keras model without validation data, but it's not good practice, because then you would not know whether the model can generalize or not. The same applies to autoencoders: they can overfit to the training set. It is always recommended to normalize your inputs, especially if the ranges are large or small. There is no preferred method; any normalization generally works the same.
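A small sketch of min-max normalization to [0, 1] for such heat maps (the array is a placeholder; the statistics should come from the training set only):

import numpy as np

# Placeholder batch of 2D heat maps with values roughly in [-0.04, 0]
images = -0.04 * np.random.rand(100, 64, 64, 1)

# Min-max scale to [0, 1] using training-set statistics
lo, hi = images.min(), images.max()
images_scaled = (images - lo) / (hi - lo)
print(images_scaled.min(), images_scaled.max())  # ~0.0 ~1.0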
I'm using Keras on Python to train a CNN autoencoder. In the fit() method I have to provide validation_split or validation_data. First, I would like to use 80% of my data as training data and 20% as validation data (random split). As soon as I have found the best parameters, I would like to train the autoencoder on all the data, i.e. no more using a validation set. Is it possible to train a Keras model without using a validation set, i.e. using all data to train? Moreover, the pixels in my images are all in the range [0, -0.04]. Is it still recommended to normalize the values of all pixels in all images in the training and validation set to the range [0,1] or to [-1,1] or to standardize it (zero mean, unit variance)? If so, which method is prefered? By the way, my images are actually 2D heat maps (one color channel).
0
1
3,996
0
57,209,941
0
0
0
0
1
false
0
2019-07-17T20:35:00.000
1
1
0
How to display a DataFrame without Jupyter Notebook crashing?
57,083,727
0.197375
python,pandas,numpy
I think your machine is not powerful enough to handle that much data. I can display 57,623 rows of my data (with 16 columns) without any problem, but my machine has 252 GB of memory, and it still takes about 1 minute to display them in a Jupyter notebook. If I try to scroll through the data, it is slow and sometimes gets stuck for a while. On the other hand, what do you want to achieve by displaying all the data here? There's definitely another way to achieve the same thing you are doing now.
Every time I try to display a complete DataFrame in my Jupyter notebook, the notebook crashes. The file won't start up, so I had to make a new Jupyter notebook file. When I do display(df) it only shows a few of the rows, when I need to show 57,623 rows. I need to show results for all of these rows and put them into an HTML file. I tried setting the max rows and max columns with pd.set_option('display.max_columns', 24) and pd.set_option('display.max_rows', 57623), but the entire DataFrame would not print out without the notebook crashing. The expected result was for the entire DataFrame to print out, but instead the notebook showed an hourglass next to it and nothing loaded.
0
1
534
0
62,209,904
0
0
0
0
1
false
3
2019-07-18T02:38:00.000
0
5
0
How do you clear Google Colab's output periodically
57,086,398
0
python,tensorflow,google-colaboratory,object-detection-api
This is a partial solution, but you could also set the verbose parameter in your fit method to 0 to get no output, or to 2 to get reduced output, and use TensorBoard to still see what is going on.
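If you do want to clear the output from inside a loop, a hedged sketch using IPython (clear_output is part of IPython.display and works in Colab):

import time
from IPython.display import clear_output

for step in range(1000):
    if step % 100 == 0:
        clear_output(wait=True)  # wipe the cell's output before printing more
    print("step", step)
    time.sleep(0.01)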
I am using Google Colab to train an object detection model, using the tensorflow object detection api. When I run the cell train.py, it keeps printing diegnostic output. After 30 minutes or so the browser crashes, because of the high number of lines printed in the cell's output. Is there any script which one can use to clear the output periodically (say every 30 min) instead of manually pressing the clear output button?
0
1
8,171
0
57,088,849
0
0
0
0
1
false
3
2019-07-18T07:01:00.000
0
2
0
How to pass the 'training' argument to tf.keras.Model when using model.fit
57,088,786
0
python,tensorflow,keras
As far as I know, there is no argument for this. model.fit simply trains the model on whatever training data is provided and, at the end of each epoch, evaluates on either the provided validation data or the split produced by validation_split.
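That said, in TensorFlow 2.x fit() does set the flag internally when call() accepts it (and evaluate()/predict() pass training=False), so you usually don't need to pass it yourself. A minimal sketch of the usual signature, as an illustration rather than the poster's actual model:

```python
# Sketch: a subclassed model whose call() accepts the training flag.
# In TF 2.x, fit() invokes call(..., training=True) and evaluate()/predict()
# invoke call(..., training=False) automatically when the parameter exists.
import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(10)
        self.dropout = tf.keras.layers.Dropout(0.5)

    def call(self, x, training=None):
        x = self.dense(x)
        # Dropout is active only when training is truthy.
        return self.dropout(x, training=training)
```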
So I have a model written with the subclassing API whose call signature looks like call(x, training), where the training argument is needed to switch between training and non-training behavior for batch norm and dropout. How do I make the forward pass know whether I am in training mode or eval mode when I use model.fit? Thanks!
0
1
1,713
0
57,093,852
0
0
0
0
1
false
6
2019-07-18T11:24:00.000
0
2
0
How to add a custom Keras model to OpenCV in Python
57,093,345
0
python,opencv,keras
You would save the model to an H5 file with model.save("modelname.h5"), then load it in your OpenCV code with load_model("modelname.h5"). Then, in a loop, classify the objects you find via model.predict(imageROI).
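A rough sketch of that pipeline; the file name, the 224x224 input size, and camera index 0 are all assumptions that depend on how the model was trained:

```python
# Sketch: run a saved Keras classifier on webcam frames with OpenCV.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("modelname.h5")  # assumed file name
cap = cv2.VideoCapture(0)           # assumed camera index

while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = cv2.resize(frame, (224, 224)) / 255.0   # must match training preprocessing
    pred = model.predict(roi[np.newaxis, ...])    # add a batch dimension
    label = int(np.argmax(pred))
    cv2.putText(frame, f"class {label}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("frame", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```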
I have created a model for classifying two types of shoes. Now, how do I deploy it with OpenCV (video object detection)? Thanks in advance.
0
1
5,883
0
57,110,360
0
0
0
0
1
false
2
2019-07-18T19:41:00.000
0
1
0
Can my training and testing data for a neural network be of different lengths?
57,101,619
0
python,keras,neural-network,time-series,lstm
You have two options. First, as @stormzhou commented above, you can pad your test data with zeros (not recommended; sketched below). Second, you can use only 200 timesteps for training and validation.
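A sketch of the padding option using the Keras helper; the feature count and all shapes below are illustrative, not from the original post:

```python
# Sketch: zero-pad shorter test sequences along the time axis up to the
# training length (600). Shapes and feature count are assumptions.
import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences

n_features = 4                                   # placeholder feature count
test_seq = np.random.rand(10, 200, n_features)   # 10 samples, 200 timesteps

# Pad each sequence from 200 to 600 timesteps with trailing zeros.
padded = pad_sequences(test_seq, maxlen=600, dtype="float32",
                       padding="post", value=0.0)
print(padded.shape)  # (10, 600, 4)
```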
I am trying to use a recurrent neural network for a regression problem. I have 50 samples of 600 timesteps each. I am planning to use 40 for training and 10 for validation. After my network gets trained, can I use it to predict on a time series of a smaller length (200 timesteps)? The input and output dimensions (i.e. features) will remain the same; just the length (i.e. number of timesteps) is smaller.
0
1
182
0
57,103,655
0
0
0
0
1
false
1
2019-07-18T22:58:00.000
1
1
0
How to deal with infrequent data in a time series prediction model
57,103,601
0.197375
python,deep-learning,dataset,missing-data
I would be tempted to put the last-quarter revenue in a separate table, with a date field representing when that quarter began (or ended; it doesn't really matter). Then you can write queries to work whichever way suits your application. You could certainly reconstitute the view you mention above using that table, as long as you can relate it to the main table: you would just need to join the main table by company name while selecting the max() of the last-quarter revenue table.
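If you end up doing this in pandas rather than SQL, the same "join each day to the most recent quarter" idea can be sketched with merge_asof; every column name and value here is an illustrative assumption:

```python
# Sketch: attach the most recently published quarterly revenue to each
# daily row, carrying the figure forward until the next report.
import pandas as pd

daily = pd.DataFrame({
    "Date": pd.to_datetime(["2019-01-02", "2019-04-01", "2019-07-01"]),
    "ClosePrice": [100.0, 105.0, 110.0],
})
quarterly = pd.DataFrame({
    "ReportDate": pd.to_datetime(["2019-01-01", "2019-04-01"]),
    "LastQrtrRevenue": [1.2e9, 1.3e9],
})

# For each daily Date, merge_asof picks the latest ReportDate <= Date,
# i.e. the last-published quarterly figure is carried forward.
merged = pd.merge_asof(daily.sort_values("Date"),
                       quarterly.sort_values("ReportDate"),
                       left_on="Date", right_on="ReportDate")
print(merged)
```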
I am trying to create a basic model for stock price prediction, and some of the features I want to include come from the company's quarterly earnings report (every 3 months). For example, if my data features are Date, OpenPrice, ClosePrice, Volume, and LastQrtrRevenue, how do I include LastQrtrRevenue if I only have a value for it every 3 months? Do I leave the other days blank (or null), or should I carry the LastQrtrRevenue value forward as a constant and just update it on the day the new figures are released? If anyone has any feedback on dealing with data that is released infrequently but is important to include, please share. Thank you in advance.
0
1
161
0
57,112,843
0
1
0
0
1
true
13
2019-07-19T04:54:00.000
28
2
0
ModuleNotFoundError: No module named 'plotly.graph_objects'
57,105,747
1.2
python,plotly,plotly-python
You should use this instead: from plotly import graph_objs as go
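Alternatively, plotly.graph_objects does exist in sufficiently new plotly releases, so checking your version (and upgrading if it is old) may also resolve the error. A small sketch that tries the new name and falls back to the old one:

```python
# Sketch: report the installed plotly version, then import graph_objects
# with a fallback to the older module name for old releases.
import plotly
print(plotly.__version__)

try:
    import plotly.graph_objects as go    # newer plotly releases
except ModuleNotFoundError:
    from plotly import graph_objs as go  # older plotly releases
```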
I'm trying to use 'plotly.graph_objects', but I get this error: ModuleNotFoundError: No module named 'plotly.graph_objects'. How do I download the module and apply it using Anaconda Navigator?
0
1
24,260
0
57,165,757
0
0
0
0
1
true
0
2019-07-19T08:13:00.000
0
2
0
How to Bokeh plot a combination of Miles and Metres on an axis (e.g. 7.5 Miles 100m, 7.5 Miles 200m, 8 Miles 100m, ...)
57,108,242
1.2
python,pandas,bokeh
I figured it out in the end: I converted the miles (e.g. 7.5) and meters (e.g. 400 m) into decimal miles (e.g. 7.75 miles). Then I used a tick formatter to display the specific points as meters. Thanks for the help.
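For anyone wanting the same thing, here is a rough sketch of that approach; it assumes Bokeh before 3.0, where the JS tick formatter is called FuncTickFormatter (later renamed CustomJSTickFormatter), and the y-values are placeholders:

```python
# Sketch: convert "miles + metres" into decimal miles, then label ticks as
# whole miles plus the metre remainder via a JS tick formatter.
from bokeh.models import FuncTickFormatter
from bokeh.plotting import figure, show

METRES_PER_MILE = 1609.34

def to_decimal_miles(miles, metres):
    return miles + metres / METRES_PER_MILE

xs = [to_decimal_miles(7, 100), to_decimal_miles(7, 200), to_decimal_miles(8, 100)]
ys = [1, 2, 3]  # placeholder values

p = figure()
p.line(xs, ys)
p.xaxis.formatter = FuncTickFormatter(code="""
    const miles = Math.floor(tick);
    const metres = Math.round((tick - miles) * 1609.34);
    return miles + " mi " + metres + " m";
""")
show(p)
```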
I need to plot some data with Bokeh which has distance on the x-axis. However, the distance data I have been provided with is in the format "7 Miles 100 Meters, 7 Miles 200m", etc. What's the best way to do this?
0
1
123