Column                              Dtype          Min        Max
GUI and Desktop Applications        int64          0          1
A_Id                                int64          5.3k       72.5M
Networking and APIs                 int64          0          1
Python Basics and Environment       int64          0          1
Other                               int64          0          1
Database and SQL                    int64          0          1
Available Count                     int64          1          13
is_accepted                         bool           2 classes
Q_Score                             int64          0          1.72k
CreationDate                        stringlengths  23         23
Users Score                         int64          -11        327
AnswerCount                         int64          1          31
System Administration and DevOps    int64          0          1
Title                               stringlengths  15         149
Q_Id                                int64          5.14k      60M
Score                               float64        -1         1.2
Tags                                stringlengths  6          90
Answer                              stringlengths  18         5.54k
Question                            stringlengths  49         9.42k
Web Development                     int64          0          1
Data Science and Machine Learning   int64          1          1
ViewCount                           int64          7          3.27M
(For stringlengths columns, Min and Max are string lengths.)
0
49,714,303
0
0
0
0
1
false
0
2018-04-07T23:44:00.000
1
1
0
Graphs from two sets of data files with different number of datapoints
49,713,065
0.197375
python,r,pandas,matplotlib,graph
Ok, addressing the points as best I can without an example of what you tried that didn't work... How to import two sets of data and how to choose a specific column in each of them (would it be exactly the same as if dealing with one file)? You'd import each file separately. Assuming your files have headers, something like d1 <- read.csv("your_file1_name.csv", header=TRUE) d2 <- read.csv("your_file2_name.csv", header=TRUE) If your headers have useful names (e.g. "Time", "88height", "number of octopus", etc.), your data frame will have the same column names after running the headers through make.names(), which converts the titles to legal R data frame column names, e.g. d1$Time d1$number.of.octopus d2$X88height If you want the data frames merged into one big data frame, use rbind(). If you want a vector of all the data from a particular column from each data frame, use c(), e.g. total.octopus <- c(d1$number.of.octopus, d2$number.of.octopus) One of the files has many more data points than the other (3,600,000 vs 80,000). How can I select every nth row in the csv file? To select every 9th row of, say, d1, you'd index: idx <- seq(1, nrow(d1), by=9) d1_samp <- d1[idx,] #note the comma and blank - means "every column" Because of the lack of examples, this is only my interpretation of your needs. If it doesn't answer your question, you'll get there faster if you post a sample or toy example of code we can run that shows what you tried. For example, what kind of graph are you trying to make? Scatterplot? Trend? Barchart? And what kind of data? Time series? Number-vs-category? etc.
I want to produce a graph from two sets of data files (txt and csv), and I have encountered a couple of issues using either R or Python and would be super grateful if somebody could help :) How do I import two sets of data and how do I choose a specific column in each of them (would it be exactly the same as if dealing with one file)? One of the files has many more data points than the other (3,600,000 vs 80,000). How can I select every nth row in the csv file? I would be grateful for any help in either R or Python.
0
1
52
0
51,268,892
1
0
0
0
1
true
1
2018-04-08T03:04:00.000
0
1
0
Cutoff in Closeness/Betweenness Centrality in python igraph
49,713,991
1.2
python,igraph
The cutoff really depends on the application and on the network parameters (# nodes, # edges). It's hard to talk about a closeness threshold, since it depends greatly on those other parameters. One thing you can know for sure is that every closeness centrality lies somewhere between 2/[n(n-1)] (the minimum, attained on a path) and 1/(n-1) (the maximum, attained on a clique or star). Perhaps a better question would be about the Freeman centralization of closeness (a normalized version of closeness that is easier to compare between various graphs). Suggestion: you can do a grid search over different cutoff values and then choose the one that makes the most sense for your application.
I am currently working with a large graph, with 1.5 million nodes and 11 million edges. For the sake of speed, I checked the benchmarks of the most popular graph libraries: iGraph, Graph-tool, NetworkX and Networkit. It seems iGraph, Graph-tool and Networkit have similar performance, and I eventually used iGraph. With the directed graph built with iGraph, the pagerank of all vertices can be calculated in 5 secs. However, when it came to Betweenness and Closeness, the calculation took forever. The documentation says that by specifying "CutOff", iGraph will ignore all paths with length < CutOff value. Is there a rule of thumb for choosing the best CutOff value?
0
1
676
0
49,734,613
0
0
0
0
1
true
14
2018-04-09T02:59:00.000
6
2
0
How are PyTorch's tensors implemented?
49,724,954
1.2
python,python-3.x,rust,pytorch,tensor
Contiguous array The commonly used way to store such data is in a single array that is laid out as a single, contiguous block within memory. More concretely, a 3x3x3 tensor would be stored simply as a single array of 27 values, one after the other. The only place where the dimensions are used is to calculate the mapping between the (many) coordinates and the offset within that array. For example, to fetch the item [3, 1, 1] you would need to know if it is a 3x3x3 tensor, a 9x3x1 tensor, or a 27x1x1 tensor: in all cases the "storage" would be 27 items long, but the interpretation of the "coordinates" would differ. If you use zero-based indexing, the calculation is trivial, but you need to know the length of each dimension. This does mean that resizing and similar operations may require copying the whole array, but that's OK: you trade off the performance of those (rare) operations to gain performance for the much more common operations, e.g. sequential reads.
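For concreteness, here is a minimal sketch (plain Python, all names hypothetical) of the coordinate-to-offset mapping the answer describes:

```python
# row-major ("C order") offset: idx = ((c0*d1) + c1)*d2 + c2, generalized
def offset(coords, shape):
    idx = 0
    for c, dim in zip(coords, shape):
        idx = idx * dim + c
    return idx

storage = list(range(27))  # a 3x3x3 tensor stored as 27 values, one after the other
print(storage[offset((1, 2, 0), (3, 3, 3))])  # element [1, 2, 0] maps to flat index 15
```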
I am building my own Tensor class in Rust, and I am trying to make it like PyTorch's implementation. What is the most efficient way to store tensors programmatically, but, specifically, in a strongly typed language like Rust? Are there any resources that provide good insights into how this is done? I am currently building a contiguous array, so that, given dimensions of 3 x 3 x 3, my array would just have 3^3 elements in it, which would represent the tensor. However, this does make some of the mathematical operations and manipulations of the array harder. The dimension of the tensor should be dynamic, so that I could have a tensor with n dimensions.
0
1
1,685
0
49,731,077
0
0
0
0
1
false
0
2018-04-09T10:35:00.000
-1
1
0
convert list of tuple () to numpy matrix
49,730,922
-0.197375
python,numpy,tuples
Try this: a = [(1,2),(2,49),(3,45)] print(list(map(list, a)))
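The snippet above yields a plain list of lists; since the question asks for a numpy matrix, a minimal sketch of the direct conversion would be:

```python
import numpy as np

a = [(1, 2), (2, 49), (3, 45)]
m = np.array(a)  # 3x2 ndarray: [[1, 2], [2, 49], [3, 45]]
print(m)
```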
I want to convert a list of tuples like a = [(1,2),(2,49),(3,45)] to a numpy matrix: [[1,2],[2,49],[3,45]]. Can anyone help me? Thanks in advance.
0
1
1,397
0
49,732,736
0
0
0
0
1
true
0
2018-04-09T12:00:00.000
1
1
0
Image becomes blue masked after applying tf.image.crop_to_bounding_box and tf.image.encode_jpeg
49,732,469
1.2
python,tensorflow,computer-vision
Your problem here is not related to Tensorflow at all but to OpenCV. For some reason, OpenCV loads images in a BGR format and not RGB as Tensorflow expects them to be. Invert your first and third channels in your image before passing it to Tensorflow and everything should be fine.
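A minimal sketch of the channel swap the answer suggests (the filename is hypothetical):

```python
import cv2

img_bgr = cv2.imread('input.jpg')                    # OpenCV loads images as BGR
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)   # swap the first and third channels
# equivalent plain-numpy version:
img_rgb_alt = img_bgr[..., ::-1]
```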
I am trying to crop and save images from the tensorflow object detection API output with the following code, which I created using previous Stack Overflow questions. But after saving, the image seems to have a high density of blue color. The original image has normal colors. Since both cropping and encoding refer to the image channels for the format, I am not sure about the source of the issue.
0
1
197
0
49,733,409
0
0
0
0
1
false
2
2018-04-09T12:41:00.000
0
3
0
Solving csv files with quoted semicolon in Pandas data frame
49,733,229
0
python-3.x,pandas,csv,data-cleaning,quoting
The sep parameter of pd.read_csv allows you to specify which character is used as a separator in your CSV file. Its default value is ,. Does changing it to ; solve your problem?
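A minimal sketch of that, additionally passing quotechar so semicolons inside quotes are treated as data rather than separators (filenames hypothetical):

```python
import pandas as pd

df = pd.read_csv('input.csv', sep=';', quotechar='"')
# ... pre-processing ...
df.to_csv('output.csv', sep=';', quotechar='"', index=False)
```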
So I am facing the following problem: I have a ;-separated csv which has ; enclosed in quotes, which is corrupting the data. So like abide;acdet;"adds;dsss";acde The ; in "adds;dsss" is moving " dsss" to the next line and corrupting the results of the ETL module which I am writing. My ETL takes such a csv from the internet, then transforms it (by first loading it into a Pandas data frame, doing pre-processing and then saving it), then loads it into SQL Server. But corrupted files are breaking the SQL Server schema. Is there any solution I can use in conjunction with a Pandas data frame that allows me to fix this issue, either during the read (pd.read_csv) or the write (pd.to_csv), or both?
0
1
2,337
0
49,733,693
0
0
0
0
1
true
0
2018-04-09T13:02:00.000
0
1
0
finding the maximum value of a column in pandas
49,733,632
1.2
python,pandas,dataframe
Just df.index[df[your_column_name].argmax()] :)
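A self-contained sketch; note that pandas also offers idxmax(), which returns the index label of the maximum directly:

```python
import pandas as pd

df = pd.DataFrame({'value': [3, 9, 1]}, index=['a', 'b', 'c'])
print(df['value'].idxmax())  # 'b': the index label of the row holding the maximum
```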
Sorry for the very basic question, but I can't find a direct answer. I want to check a column to find the maximum value. For now I assume there is only one maximum value, i.e. it's unique. I then want to find the index name for that row. Is there a way to do this?
0
1
66
0
49,743,935
0
0
0
0
1
false
1
2018-04-10T01:27:00.000
0
2
0
Plot 2D image in 3D axes
49,743,901
0
python,matplotlib
If your image is a coloured image, you must first ensure that it is an indexed image. This means that you can only have a 2D matrix (and not 3 matrices for the RGB components). The rgb2ind command can help. Then you can directly show your image in a 3D way: use the mesh or surf command. You can also adjust the perspective with angle and azimuth.
Using matplotlib is it possible to take a 2D image of something and place it in a 3D figure? I'd like to take a 2D image and place it at z position of 0. I want to then move the other pixels in the image along the z-axis separately based on a calculation I am making.
0
1
1,911
0
49,800,463
0
0
0
0
1
true
0
2018-04-10T09:49:00.000
1
1
0
What does the key " b'batch_label' " in cifar-10 unpickle by python use for?
49,750,513
1.2
python,machine-learning
The cifar10 dataset you are downloading is split into several batches, and each batch has its own id (batch_label). data is the actual batch of images, while filenames holds the encoded names of the images. labels, of course, is the set of labels associated with the data.
When I unpickle cifar-10 with python, I get a dictionary with 4 keys: 'batch_label', 'data', 'filenames', 'labels'. But I don't know what the key 'batch_label' represents. It's a 'bytes' type value of length 21. I know I needn't understand it to train a network, but I'm still curious about it. Thanks for any reply. ^_^
0
1
233
0
49,756,505
0
0
0
0
1
false
0
2018-04-10T14:34:00.000
0
1
0
python not using latest numpy version
49,756,369
0
python,linux,path
You should use a virtual environment; your system will most likely default to its own version otherwise. Try installing: pip install virtualenv, then run: virtualenv -p python2.7 environment_name. Source the virtualenv with: source environment_name/bin/activate, then pip install numpy==1.14.2. Then you will have a mini environment with the exact version you want, and it won't update. This way you can have multiple versions all contained in the same system.
I have a problem. I need numpy 1.14.2, and my machine (Linux Mint 17.3) has only 1.8.2 installed. I then installed 1.14.2 through pip. But when I load it in ipython, it still says that it is 1.8.2. Using yolk I saw that 1.14.2 is actually installed, but marked as non-active: numpy - 1.14.2 - non-active development (/usr/local/lib/python2.7/dist-packages), while for 1.8.2 it says "active". Also, 1.8.2 is located in "/usr/lib/python2.7/dist-packages/numpy". Why is there a difference? I don't see a reason why there are two libraries. I read that python loads libraries in the order they appear on the search path, and indeed, when I look at sys.path, I see that /usr/lib... is listed before /usr/local.... How can I change that? I don't have anything in .bashrc, /etc/profile or /etc/rc.local which would set this specific order. Thanks.
0
1
343
0
49,762,442
0
0
0
0
1
false
1
2018-04-10T20:29:00.000
0
2
0
Manipulate Python Data frame to plot line charts
49,762,381
0
python,pandas,matplotlib,visualization
If you have a pandas DataFrame, call it df, whose columns are X, Y, Z, etc. and whose rows are the years in order, you can simply call df.plot() to plot each column as a line, with the y-axis being the values and the row labels giving the x-axis.
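In the question's layout the years are column headers rather than rows, so you would transpose first; a minimal sketch using the question's numbers:

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({'Parameters': ['X', 'Y', 'Z'],
                   'Year 2016': [10, 12, 89],
                   'Year 2017': [12, 12, 23],
                   'Year 2018': [13, 45, 97]})
# transpose so the years become rows (the x-axis) and each parameter becomes a line
df.set_index('Parameters').T.plot()
plt.show()
```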
I have the following data frame: Parameters Year 2016 Year 2017 Year 2018.... 0) X 10 12 13 1) Y 12 12 45 2) Z 89 23 97 3 . . . I want to make a line chart with the column headers starting from Year 2016 to be on the x-axis and each line on the chart to represent each of the parameters - X, Y, Z I am using the matplotlib library to make the plot but it is throwing errors.
0
1
403
0
49,784,641
0
0
0
0
1
false
0
2018-04-11T21:08:00.000
0
2
0
Python - Subtracting the Elements of Two Arrays
49,784,499
0
python,arrays,math,subtraction
I can't replicate this, but it looks like the numbers are 8-bit and wrapping around somehow.
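A minimal sketch reproducing the wraparound from the question and one way to avoid it, by casting to a signed dtype before subtracting:

```python
import numpy as np

A = np.array([0, 1, 1, 0, 0], dtype=np.uint8)
B = np.array([1, 1, 1, 0, 1], dtype=np.uint8)
print(A - B)                                     # [255 0 0 0 255]: uint8 wraps modulo 256
print(A.astype(np.int16) - B.astype(np.int16))   # [-1 0 0 0 -1]
```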
I am new to Python programming and stumbled across this feature of subtracting in python that I can't figure out. I have two 0/1 arrays, both of size 400. I want to subtract each element of array one from its corresponding element in array 2. For example say you have two arrays A = [0, 1, 1, 0, 0] and B = [1, 1, 1, 0, 1]. Then I would expect A - B = [0 - 1, 1 - 1, 1 - 1, 0 - 0, 0 - 1] = [-1, 0, 0, 0, -1] However in python I get [255, 0, 0, 0, 255]. Where does this 255 come from and how do I get -1 instead? Here's some additional information: The real variables I'm working with are Y and LR_predictions. Y = array([[0, 0, 0, ..., 1, 1, 1]], dtype=uint8) LR_predictions = array([0, 1, 1, ..., 0, 1, 0], dtype=uint8) When I use either Y - LR_predictions or numpy.subtract(Y, LR_predictions) I get: array([[ 0, 255, 255, ..., 1, 0, 1]], dtype=uint8) Thanks
0
1
5,861
0
49,856,665
0
0
0
0
2
false
5
2018-04-12T07:02:00.000
0
3
0
How to reconnect to the ongoing process on GoogleColab
49,790,025
0
python,deep-learning,keras,jupyter-notebook,google-colaboratory
It seems there's no normal way to do this, but you can save your model to Google Drive with the current training epoch number in the name, so when you see something like "my_model_epoch_1000" on your Google Drive, you will know that the process is over.
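A minimal sketch of that idea using a Keras callback (it assumes a compiled Keras model named model, training arrays x_train/y_train, and a Drive-mounted path; all of these are assumptions, not from the question):

```python
from keras.callbacks import ModelCheckpoint

# writes e.g. drive/my_model_epoch_0042.h5 after each epoch, so the filename
# itself tells you how far training got before a disconnect
checkpoint = ModelCheckpoint('drive/my_model_epoch_{epoch:04d}.h5')
model.fit(x_train, y_train, epochs=100, callbacks=[checkpoint])
```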
I recently started to use Google Colab to train my CNN model. It always needs about 10+ hours to train once, but I cannot stay in the same place during those 10+ hours, so I always power off my notebook and let the process keep going. My code saves models automatically, and I figured out that when I disconnect from Colab, the process is still saving models after disconnection. Here are the questions: When I try to reconnect to the Colab notebook, it always gets stuck at the "INITIALIZING" stage and can't connect; I'm sure that the process is running. How do I know if the process is OVER? Is there any way to reconnect to the ongoing process? It would be nice to observe the training losses during training. Sorry for my poor English, thanks a lot.
0
1
3,496
0
49,854,222
0
0
0
0
2
false
5
2018-04-12T07:02:00.000
0
3
0
How to reconnect to the ongoing process on GoogleColab
49,790,025
0
python,deep-learning,keras,jupyter-notebook,google-colaboratory
First question: restart the runtime from the Runtime menu. Second question: I think you can use TensorBoard to monitor your work.
I recently started to use Google Colab to train my CNN model. It always needs about 10+ hours to train once, but I cannot stay in the same place during those 10+ hours, so I always power off my notebook and let the process keep going. My code saves models automatically, and I figured out that when I disconnect from Colab, the process is still saving models after disconnection. Here are the questions: When I try to reconnect to the Colab notebook, it always gets stuck at the "INITIALIZING" stage and can't connect; I'm sure that the process is running. How do I know if the process is OVER? Is there any way to reconnect to the ongoing process? It would be nice to observe the training losses during training. Sorry for my poor English, thanks a lot.
0
1
3,496
0
49,797,871
0
0
0
0
1
true
1
2018-04-12T13:18:00.000
3
1
0
How to eliminate a dummy dimension of ndarray?
49,797,656
1.2
python,numpy
You can't eliminate that 0 dimension. A dimension of length 0 is not a "dummy" dimension. It really means length 0. Since the total number of elements in the array (which you can check with a.size) is the product of the shape attribute, an array with shape (0, 1325, 3) contains 0 elements, while an array with shape (1325, 3) contains 3975 elements. If there was a way to eliminate the 0 dimension, where would that data come from? If your array is supposed to contain data, then you probably need to look at how that array was created in the first place.
How can I eliminate a dummy dimension of a python numpy ndarray? For example, suppose that A.shape = (0, 1325, 3); how can I eliminate the '0' dimension so that A.shape = (1325, 3)? Neither 'np.squeeze(A)' nor 'A.reshape(A.shape[1:])' works.
0
1
118
0
51,196,523
0
0
0
0
1
false
0
2018-04-12T20:17:00.000
2
1
0
tensorflow Object detection, Bounding Boxes are not visible on the own dataset
49,805,206
0.379949
python,tensorflow,ubuntu-16.04,object-detection
Try passing the bounding boxes to a print statement. Do you get any valid output? If so, it is probably nothing to do with your model, and your bounding boxes are created just fine. What is your mAP? If it is very early on in the training process, then your detection scores are probably too low and do not meet the minimum threshold. Take a look at the visualize_boxes_and_labels_on_image_array() function from object_detection/utils/visualization_utils.py and note the default value of min_score_thresh. You can either change this default value or pass in min_score_thresh=0 as an argument when you call the function.
I am trying to run the TF object detection locally on my own dataset. Every step works perfectly except the visualization of the BB on a test image. First I ran the Pascal VOC dataset on Faster R-CNN Inception ResNet v2, modified the scripts as per the VOC dataset, and then followed the instructions from g3doc; everything worked perfectly. For the visualization I am using the ipython jupyter notebook provided with the object detection API, and the visualization was great. Then I tried to do the same for my own dataset. I repeated all the same steps, but no BB is showing on the image. Can someone help with what might be going wrong? P.S. I am using ubuntu 16.04 on a 64GB RAM system.
0
1
1,054
0
51,225,714
0
0
0
0
2
false
9
2018-04-12T20:45:00.000
2
11
0
Could not find a version that satisfies the requirement numpy == 1.9.3
49,805,576
0.036348
python,python-3.x,pandas,numpy,quandl
Just install Python 3.5 or higher.
Trying to install quandl and need pandas, so I tried pip install pandas and get: Could not find a version that satisfies the requirement numpy==1.9.3 (from versions: 1.10.4, 1.11.0, 1.11.1rc1, 1.11.1, 1.11.2rc1, 1.11.2, 1.11.3, 1.12.0b1, 1.12.0rc1, 1.12.0rc2, 1.12.0, 1.12.1rc1, 1.12.1, 1.13.0rc1, 1.13.0rc2, 1.13.0, 1.13.1, 1.13.3, 1.14.0rc1, 1.14.0, 1.14.1, 1.14.2) No matching distribution found for numpy==1.9.3. I'm using python 3.4, win32
0
1
57,246
0
51,056,520
0
0
0
0
2
false
9
2018-04-12T20:45:00.000
1
11
0
Could not find a version that satisfies the requirement numpy == 1.9.3
49,805,576
0.01818
python,python-3.x,pandas,numpy,quandl
Just use this command: pip install pandas==0.19.0. That installs an older version which does not fail with this error.
Trying to install quandl and need pandas, so I tried pip install pandas and get: Could not find a version that satisfies the requirement numpy==1.9.3 (from versions: 1.10.4, 1.11.0, 1.11.1rc1, 1.11.1, 1.11.2rc1, 1.11.2, 1.11.3, 1.12.0b1, 1.12.0rc1, 1.12.0rc2, 1.12.0, 1.12.1rc1, 1.12.1, 1.13.0rc1, 1.13.0rc2, 1.13.0, 1.13.1, 1.13.3, 1.14.0rc1, 1.14.0, 1.14.1, 1.14.2) No matching distribution found for numpy==1.9.3. I'm using python 3.4, win32
0
1
57,246
0
49,812,019
0
0
0
0
1
true
1
2018-04-13T03:56:00.000
0
2
1
Google Cloud - What products for time series data cleaning?
49,809,084
1.2
python,apache-spark,google-cloud-platform,google-cloud-dataflow,google-cloud-dataproc
I would use PySpark on Dataproc, since Spark is not just for realtime/streaming but also for batch processing. You can choose the size of your cluster (and use some preemptible instances to save costs) and run this cluster only for the time you actually need to process the data; afterwards, kill the cluster. Spark also works very nicely with Python (not as nicely as Scala), but for all intents and purposes the main difference is performance, not reduced API functionality. Even with batch processing you can use a WindowSpec for effective time series interpolation. To be fair: I don't have a lot of experience with Dataflow or Dataprep, but that's because our use case is somewhat similar to yours and Dataproc works well for it.
I have around 20TB of time series data stored in BigQuery. The current pipeline I have is: raw data in BigQuery => joins in BigQuery to create more BigQuery datasets => store them in buckets. Then I download a subset of the files in the bucket and work on interpolation/resampling of the data using Python/SFrame, because some of the time series have missing times and are not evenly sampled. However, it takes a long time on a local PC, and I'm guessing it would take days to go through that 20TB of data. Since the data are already in buckets, I'm wondering what the best Google tools for interpolation and resampling would be. After resampling and interpolation I might use Facebook's Prophet or Auto ARIMA to create some forecasts, but that would be done locally. There are a few services from Google that seem like good options. Cloud Dataflow: I have no experience with Apache Beam, but it looks like the Python API of Apache Beam is missing functionality compared to the Java version? I know how to write Java, but I'd like to use one programming language for this task. Cloud Dataproc: I know how to write PySpark, and while I don't really need any real-time or stream processing, Spark has time series interpolation, so this might be the only option? Cloud Dataprep: looks like a GUI for cleaning data, but it's in beta; not sure if it can do time series resampling/interpolation. Does anyone have any idea which might best fit my use case? Thanks
0
1
522
0
57,368,807
0
1
0
0
1
false
0
2018-04-13T07:24:00.000
2
2
0
How to install tensorflow-gpu for both python2 and python3
49,811,510
0.197375
python-3.x,tensorflow
If you have already installed tensorflow 1.2, CUDA 8.0 and CuDNN 5.1 for python2.7, then you can: yum install python3-pip (now you have python3 and pip3; however, the python version may not be 3.5), then python3 -m pip install --upgrade tensorflow-gpu==1.2 (make sure the installed version is exactly the same as that of python2). I made it work after these two steps.
I have already installed tensorflow 1.2, CUDA 8.0 and CuDNN 5.1 for python2.7. Now I want to use it with python3.5, but importing tensorflow fails. How do I install tensorflow for python3 as well? And do I have to repeat the CUDA and CuDNN steps?
0
1
1,667
0
50,134,947
0
0
0
0
1
false
2
2018-04-13T14:26:00.000
2
1
0
cannot import tensorflow - "Import Error: cannot import name 'self_check' "
49,819,253
0.379949
python,tensorflow,anaconda
pip install --upgrade tensorflow, or you can uninstall tensorflow and re-install it.
I'm trying to use tensorflow-gpu. Using Anaconda, I installed the libraries and activated both (tensorflow and tensorflow-gpu). I have also installed Keras in this anaconda environment. Next I launch the Spyder IDE within my Anaconda environment and run my py script; that is when I get the following error: cannot import tensorflow - Import Error: cannot import name 'self_check'. I'm totally lost. Any recommendations? Using Python 3.5. Thanks
0
1
2,441
0
49,908,382
0
0
0
0
1
true
1
2018-04-13T19:58:00.000
1
1
0
Regularization Application in the Proximal Adagrad Optimizer in Tensorflow
49,824,281
1.2
python,optimization,tensorflow,neural-network,regularized
Regularization applies neither to forward nor to backward propagation, but to the weight updates. You can use different optimizers for different layers by explicitly passing the variables to minimize to each optimizer.
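A hedged TF1-style sketch of that second point (the loss tensor and the scope name 'layer1' are assumptions, not the asker's actual code):

```python
import tensorflow as tf

sparse_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='layer1')
other_vars = [v for v in tf.trainable_variables() if v not in sparse_vars]

# the l1 proximal updates only touch the variables passed to this optimizer
opt_l1 = tf.train.ProximalAdagradOptimizer(0.01, l1_regularization_strength=1e-3)
opt_plain = tf.train.AdagradOptimizer(0.01)

train_op = tf.group(opt_l1.minimize(loss, var_list=sparse_vars),
                    opt_plain.minimize(loss, var_list=other_vars))
```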
I have been trying to implement l1-regularization in Tensorflow using the l1_regularization_strength parameter in the ProximalAdagradOptimizer function from Tensorflow. (I am using this optimizer specifically to get a sparse solution.) I have two questions regarding the regularization. Does the l1-regularization used in the optimizer apply to forward and backward propagation for a neural network or only the back propagation? Is there a way to break down the optimizer so the regularization only applies to specific layers in the network?
0
1
425
0
49,838,147
0
0
0
0
1
false
1
2018-04-13T21:32:00.000
1
1
0
Hypothesis search tree
49,825,415
0.197375
python-hypothesis
TLDR: don't worry, this is a common use-case and even a naive strategy works very well. The actual search process used by Hypothesis is complicated (as in, "lead author's PhD topic"), but it's definitely not a depth-first search! Briefly, it's a uniform distribution layered on a pseudo-random number generator, with a coverage-guided fuzzer biasing that towards less-explored code paths, and strategy-specific heuristics on top of that. In general, I trust this process to pick good examples far more than I trust my own judgement, or that of anyone without years of experience in QA or testing research!
I have an object with many fields, and each field has a different range of values. I want to use Hypothesis to generate different instances of this object. Is there a limit to the number of combinations of field values Hypothesis can handle? And what does the search tree Hypothesis creates look like? I don't need all the combinations, but I want to make sure that I get a fair number of combinations that test many different values for each field. I want to make sure Hypothesis is not doing a DFS until it hits the max number of examples to generate.
0
1
75
0
49,838,288
0
0
0
0
1
false
0
2018-04-14T01:27:00.000
3
1
0
Have a Strategy that does not uniformly choose between different strategies
49,827,010
0.53705
python-hypothesis
Hypothesis does not allow users to control the probability of various choices within a strategy. You should not use undocumented interfaces either - hypothesis.internal is for internal use only and could break at any time! I strongly recommend using C = st.one_of(A, B) and trusting Hypothesis with the details.
I'd like to create a strategy C that, 90% of the time chooses strategy A, and 10% of the time chooses strategy B. The random python library does not work even if I seed it since each time the strategy produces values, it generates the same value from random. I looked at the implementation for OneOfStrategy and they use i = cu.integer_range(data, 0, n - 1) to randomly generate a number cu is from the internals import hypothesis.internal.conjecture.utils as cu Would it be fine for my strategy to use cu.integer_range or is there another implementation?
0
1
172
0
62,990,659
0
0
0
0
2
false
1
2018-04-14T19:26:00.000
1
2
0
Using memory mapped buffers for scipy sparse
49,835,342
0.099668
python,numpy,scipy,sparse-matrix
This can partially work, until you try to do much with the array. There's a very good chance the subarrays will be fully read into memory if you subset, or you'll get an error. An important consideration here is that the underlying code is written assuming the arrays are typical in-memory numpy arrays, and the cost of random access is very different for mmapped arrays and in-memory arrays. In fact, much of the code here is (at the time of writing) in Cython, which may not be able to work with more exotic array types. Also, most of this code can change at any time, as long as the behaviour is the same for in-memory arrays. This has personally bitten me when I learned some code I worked with was doing this, but with h5py.Datasets as the underlying arrays. It worked surprisingly well, until a bug-fix release of scipy completely broke it.
I have to handle sparse matrices that can occasionally be very big, nearing or exceeding RAM capacity. I also need to support mat*vec and mat*mat operations. Since internally a csr_matrix is 3 arrays (data, indices and indptr), is it possible to create a csr matrix from numpy memmaps?
0
1
517
0
50,985,982
0
0
0
0
2
true
1
2018-04-14T19:26:00.000
-2
2
0
Using memory mapped buffers for scipy sparse
49,835,342
1.2
python,numpy,scipy,sparse-matrix
This works without any problems.
I have to handle sparse matrices that can occasionally be very big, nearing or exceeding RAM capacity. I also need to support mat*vec and mat*mat operations. Since internally a csr_matrix is 3 arrays (data, indices and indptr), is it possible to create a csr matrix from numpy memmaps?
0
1
517
0
49,841,568
0
0
0
0
1
false
58
2018-04-15T11:19:00.000
6
2
0
What does calling fit() multiple times on the same model do?
49,841,324
1
python,machine-learning,scikit-learn
You can use the terms fit() and train() interchangeably in machine learning. Based on the classification model you have instantiated, maybe clf = GaussianNB() or clf = SVC(), your model uses the specified machine learning technique. As soon as you call clf.fit(features_train, label_train), your model starts training using the features and labels that you have passed; you can then use clf.predict(features_test) to predict. If you call clf.fit(features_train2, label_train2) again, it will start training again using the newly passed data and will discard the previous results. Your model will reset the following internal state: weights, fitted coefficients, bias, and other training-related values. You can use the partial_fit() method instead if you want your previously computed state to persist and to additionally train on the next batch of data.
After I instantiate a scikit model (e.g. LinearRegression), if I call its fit() method multiple times (with different X and y data), what happens? Does it fit the model on the data like if I just re-instantiated the model (i.e. from scratch), or does it keep into accounts data already fitted from the previous call to fit()? Trying with LinearRegression (also looking at its source code) it seems to me that every time I call fit(), it fits from scratch, ignoring the result of any previous call to the same method. I wonder if this true in general, and I can rely on this behavior for all models/pipelines of scikit learn.
0
1
28,564
0
54,985,155
0
0
0
0
1
true
3
2018-04-15T19:47:00.000
0
2
0
AttributeError: module 'tensorflow' has no attribute 'Session'
49,846,106
1.2
tensorflow,python-3.6
With a fresh installation of python3.6 my problem was solved. I removed my previous python3.6, installed the newest version, then installed tensorflow through pip, and it works.
When I call any function in python3.6, I get the error below; however, it works fine in python3.4. Any idea? import tensorflow as tf tf.Session() Traceback (most recent call last): File "", line 1, in AttributeError: module 'tensorflow' has no attribute 'Session' Here is my System information OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Debian 8.7 TensorFlow installed from (source or binary): by pip3 TensorFlow version (use command below): 1.7.0 Python version: 3.6.5 CUDA/cuDNN version: cuda 9.0 and cudnn 7.0 GPU model and memory: K80, 12 GB Exact command to reproduce: import tensorflow as tf tf.Session()
0
1
18,023
0
49,849,810
0
0
0
0
3
false
0
2018-04-16T04:34:00.000
0
4
0
Regarding image classification using CNN
49,849,715
0
python,tensorflow,classification,conv-neural-network
This means that you are clearly overfitting: your model is not able to generalize well to the images that you enter manually. There are a few things that you could do: try regularization so that your model does not overfit, get more data, or use data augmentation. I hope that by introducing regularization, your model will work better on manually entered images.
I built an image classification model using a convolutional neural network for 5 classes. It gives a training accuracy of 100% and a testing accuracy of 82%. But when I give an image manually for prediction, the model is not able to classify it correctly: out of 10 images, the model classifies only 3-4 correctly. What is the mistake? What should I do?
0
1
116
0
51,365,356
0
0
0
0
3
false
0
2018-04-16T04:34:00.000
0
4
0
Regarding image classification using CNN
49,849,715
0
python,tensorflow,classification,conv-neural-network
Your model is not able to generalize properly. The training has gone very quickly through all the images in the training set, and the model has in effect only learned to predict those same images. Try adding more images, of different sizes. Try flipping the original images horizontally to create transformed images; that might also help in accurately training your model.
I built an image classification model using a convolutional neural network for 5 classes. It gives a training accuracy of 100% and a testing accuracy of 82%. But when I give an image manually for prediction, the model is not able to classify it correctly: out of 10 images, the model classifies only 3-4 correctly. What is the mistake? What should I do?
0
1
116
0
70,926,699
0
0
0
0
3
false
0
2018-04-16T04:34:00.000
-1
4
0
Regarding image classification using CNN
49,849,715
-0.049958
python,tensorflow,classification,conv-neural-network
Besides that, your training set probably has much easier images than your test set (the images that you feed in manually). That can happen even if you are feeding the network from the same dataset. To get better results on unseen images, you have to reach a good generalisation level. Try what is suggested above...
I built an image classification model using a convolutional neural network for 5 classes. It gives a training accuracy of 100% and a testing accuracy of 82%. But when I give an image manually for prediction, the model is not able to classify it correctly: out of 10 images, the model classifies only 3-4 correctly. What is the mistake? What should I do?
0
1
116
0
63,833,958
0
0
0
0
1
false
17
2018-04-16T10:13:00.000
3
3
0
Select the inverse index in pd.Dataframe
49,854,796
0.197375
python,pandas,dataframe,indexing
Assuming my_index are the row indices you want to ignore, you could drop these where they exist in the dataframe df: df = df.drop(my_index, errors='ignore')
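If you specifically want a loc-based selection, a boolean mask is an alternative sketch (using the question's names):

```python
# keep only the rows whose index label is NOT in my_index
df.loc[~df.index.isin(my_index), my_feature]
```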
How do I select the inverse of an index in a pd.DataFrame using loc or iloc? I tried df.loc[!my_index, my_feature] but it fails, and df.loc[[ind for ind in df.index.tolist() if ind not in my_index], my_feature] looks too dull. Any better idea?
0
1
12,631
0
49,864,136
0
1
0
0
1
false
16
2018-04-16T18:04:00.000
5
2
0
numpy.product vs numpy.prod vs ndarray.prod
49,863,633
0.462117
python,numpy,numpy-ndarray
This is what I could gather from the source code of NumPy 1.14.0 (for the answer relevant to the current master branch, NumPy 1.15.0, see the answer of miradulo). For an ndarray, prod() and product() are equivalent: both call um.multiply.reduce(). If the object type is not ndarray but it still has a prod method, then prod() will return prod(axis=axis, dtype=dtype, out=out, **kwargs) whereas product() will try to use um.multiply.reduce. If the object is not an ndarray and it does not have a prod method, then prod() will behave like product(). The method ndarray.prod() is equivalent to prod(). I am not sure about the latter part of your question regarding preference and readability.
I'm reading through the Numpy docs, and it appears that the functions np.prod(...), np.product(...) and the ndarray method a.prod(...) are all equivalent. Is there a preferred version to use, both in terms of style/readability and performance? Are there different situations where different versions are preferable? If not, why are there three separate but very similar ways to perform the same operation?
0
1
13,184
0
49,869,744
0
0
0
0
1
false
0
2018-04-17T03:51:00.000
0
1
0
Training changing input size RNN on Tensorflow
49,869,622
0
python,numpy,tensorflow,machine-learning,rnn
As a general guideline, GPU boosts performance only if you have calculation intensive code and little data transfer. In other words, if you train your model one instance at a time (or on small batch sizes) the overhead for data transfer to/from GPU can even make your code run slower! But if you feed in a good chunk of samples, then GPU will definitely boost your code.
I want to train an RNN with different input sizes of sentence X, without padding. The logic used for this is that I am using global variables and, for every step, I take an example, write the forward propagation (i.e. build the graph), run the optimizer, and then repeat the step with another example. The program is extremely slow compared to a numpy implementation of the same thing, where I have implemented forward and backward propagation using the same logic as above. The numpy implementation takes a few seconds while Tensorflow is extremely slow. Would running the same thing on a GPU be useful, or am I making some logical mistake?
0
1
42
0
49,905,936
0
0
0
0
1
true
0
2018-04-17T13:23:00.000
0
1
0
python: loadmat - ValueError: negative dimensions are not allowed
49,879,324
1.2
python,scipy,mat
I found out that this was a binary file (it was written with Matlab's fwrite). The solution was to save the file as text and then load it in python without using loadmat.
Trying to read a matlab file into python. I am using the function loadmat from scipy.io to read it, and I'm getting back the error ValueError: negative dimensions are not allowed. How can I fix this issue?
0
1
131
0
63,543,545
0
1
0
0
1
false
10
2018-04-17T14:07:00.000
1
3
0
What is difference between plot and iplot in Pandas?
49,880,314
0.066568
python,pandas,matplotlib,plotly
The correct answer is provided above; I would just add that I tried to run this code in the PyCharm IDE but could not. A Jupyter notebook is required to render iplot graphs.
What is the difference between plot() and iplot() in displaying a figure in Jupyter Notebook?
0
1
28,800
0
49,885,159
0
0
0
0
1
false
5
2018-04-17T15:47:00.000
6
2
0
Keras Tensorflow Binary Cross entropy loss greater than 1
49,882,424
1
python,tensorflow,keras
Yes, that's correct: the cross-entropy is not bounded to any specific range, it's just positive (> 0).
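A tiny numeric check: with a true label of 1 and a confidently wrong prediction, the loss easily exceeds 1:

```python
import numpy as np

y, p = 1.0, 0.1   # true label 1, predicted probability 0.1
bce = -(y * np.log(p) + (1 - y) * np.log(1 - p))  # binary cross-entropy for one example
print(bce)        # ~2.303, comfortably above 1
```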
Library: Keras, backend:Tensorflow I am training a single class/binary classification problem, wherein my final layer has a single node, with activation of sigmoid type. I am compiling my model with a binary cross entropy loss. When I run the code to train my model, I notice that the loss is a value greater than 1. Is that right, or am I going wrong somewhere? I have checked the labels. They're all 0s and 1s. Is it possible to have the binary cross entropy loss greater than 1?
0
1
10,194
0
64,196,727
0
1
0
0
2
false
8
2018-04-18T00:56:00.000
0
5
0
Issues importing mlxtend python
49,889,524
0
python,python-import,importerror,mlxtend
Try this: conda install -c conda-forge mlxtend. Don't mix pip and conda environments; last time I did this I broke my Python installation and needed to reinstall everything again. If you must deal with different environments using pip and conda because of many projects, use pyenv.
I'm new to python, so apologies if this is a silly question. I'm trying to use mlxtend, and have installed it using pip. Pip confirms that it is installed (when I type "pip install mlxtend" it notes that the requirement is already satisfied). However, when I try and import mlxtend in python using "import mlxtend as ml", I get the error: "ModuleNotFoundError: No module named 'mlxtend'". I used the same process for installing and importing pandas and numpy, and they both worked. Any advice? I should note that I have resorted to dropping in the specific code I need from mlxtend (apriori and association rules), which is working, but hardly a good long term strategy! I'm using python version 3.6.5. Thanks!
0
1
19,607
0
58,758,122
0
1
0
0
2
false
8
2018-04-18T00:56:00.000
3
5
0
Issues importing mlxtend python
49,889,524
0.119427
python,python-import,importerror,mlxtend
I had the same issue while using Anaconda: I tried to install it with Anaconda, but the notebook didn't see it installed. You can also try to install it from the command line by typing pip install mlxtend --user or pip3 install mlxtend --user. After installing it from CMD, the notebook didn't give the error in my case. Just reply if this helps. Good luck all.
I'm new to python, so apologies if this is a silly question. I'm trying to use mlxtend, and have installed it using pip. Pip confirms that it is installed (when I type "pip install mlxtend" it notes that the requirement is already satisfied). However, when I try and import mlxtend in python using "import mlxtend as ml", I get the error: "ModuleNotFoundError: No module named 'mlxtend'". I used the same process for installing and importing pandas and numpy, and they both worked. Any advice? I should note that I have resorted to dropping in the specific code I need from mlxtend (apriori and association rules), which is working, but hardly a good long term strategy! I'm using python version 3.6.5. Thanks!
0
1
19,607
1
49,896,647
0
0
0
0
1
true
0
2018-04-18T02:37:00.000
0
1
0
Different return from Java and Python cv2.findContours
49,890,287
1.2
c#,python,opencv,opencv-contour
I've finally found the answer to this phenomenon. The python program is using OpenCV 3.4, while the Java bindings of my C# program are using the older OpenCV 3.1. The FindContours method exists in both versions, but apparently returns different values.
I'm translating a Python program which uses OpenCV to C# with Java bindings. I tested both Python and C# programs using the same image, and I realized the findContours method returns different contours between the 2 programs. Python: _, contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) C#: Imgproc.FindContours(edges, contours, hierarchy, Imgproc.RetrTree, Imgproc.ChainApproxSimple); For Python, I checked using len(contours), for C# contours.Count, and they return the values 94 and 106 respectively. I think this may be the cause to many discrepancies in my translated program, and I'm not sure why. What am I doing wrong here? Add-on: The Canny method below is called before calling findContours. Anything before is just reading the image, and converting the image into a gray one, thus the grayImg variable. C#: Imgproc.Canny(grayImg, edges, 100, 200, 3, false); Python: edges = cv2.Canny(gray, 100, 200, apertureSize = 3) I previously thought it was because of differing OpenCV versions, but I realized both are using OpenCV 3. Unless the FindContours method is different in 3.1 than 3.4, I'm back to square one again as I don't know the cause of the problem.
0
1
194
0
49,894,197
0
0
0
0
1
false
0
2018-04-18T07:32:00.000
0
1
0
tensorflow: CNN for non square image
49,893,741
0
python,tensorflow
I don't exactly agree with reshaping a rectangular picture, as you destroy the relationship between neighbouring pixels. Instead, you have several options to apply CNNs to a non-square image: 1.) Use padding. During preprocessing, you could fill in pixels to get a square image, so you can apply a square filter. 2.) Use different square windows of that image for training. For example, create a square window and run it over the image to get a number of sub-pictures. 3.) You could use different strides for the dimensions of your picture. 4.) You could stretch the picture in the needed direction, although I'm not exactly sure how this affects performance later on; I would only try this as a last-resort solution.
tensorflow version 1.5.0rc1, python version 3.5. When I reshape a rectangular image to [height, width] using tf.reshape(x, [-1, x, y, 1]), e.g. tf.reshape(x, [-1, 14, 56, 1]), running conv2d returns: InvalidArgumentError (see above for traceback): Input to reshape is a tensor with 358400 values, but the requested shape requires a multiple of 3136 [[Node: Reshape_1 = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](MaxPool_1, Reshape_1/shape)]] Note that 3136 is the square of 56: the code treats the reshape as a 56x56 matrix instead of 14x56. Is there a way to get rid of this and set my CNN up for a non-square image? Thanks
0
1
1,803
0
49,898,817
0
0
0
0
1
false
8
2018-04-18T11:36:00.000
1
5
0
Pandas reading csv files with partial wildcard
49,898,742
0.039979
python,pandas
Loop over each file and build a list of DataFrames, then assemble them together using pd.concat.
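A minimal sketch of that loop (the glob pattern comes from the question):

```python
import glob
import pandas as pd

frames = [pd.read_csv(path) for path in glob.glob('somefile*.csv')]
df = pd.concat(frames, ignore_index=True)
```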
I'm trying to write a script that imports a file, then does something with the file and outputs the result into another file. df = pd.read_csv('somefile2018.csv') The above code works perfectly fine. However, I'd like to avoid hardcoding the file name in the code. The script will be run in a folder (directory) that contains the script.py and several csv files. I've tried the following: somefile_path = glob.glob('somefile*.csv') df = pd.read_csv(somefile_path) But I get the following error: ValueError: Invalid file path or buffer object type: <class 'list'>
0
1
14,334
0
49,908,880
0
0
0
0
1
true
0
2018-04-18T20:48:00.000
4
1
0
Sum of values accessed through numpy memmap is wrong
49,908,788
1.2
python,numpy,memory
numpy.memmap is for treating raw data in a file as a numpy array. Your filename is 'file.npy', so that is not "raw" data. It is a NPY file, which has a header containing meta-information about the array stored in it. To memory-map a NPY file, use the mmap_mode argument of numpy.load().
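So the fix is a one-liner:

```python
import numpy as np

x = np.load('file.npy', mmap_mode='r')  # memory-maps the data, correctly skipping the NPY header
print(np.sum(x[0, 0, :, 0]))
```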
I have a large (9.3 GB) .npy file containing a uint8 values in an (67000, 9, 128, 128) ndarray. I created it using np.save() and when loading it using x = np.memmap('file.npy', "uint8", shape=(67000, 9, 128, 128), mode="r"), np.sum(x[0,0,:,0]) returns 13783. The "problem" is that when I try loading it with np.load("file.npy") and run the same function, I get the sum to 13768. Since np.load() loads the whole file in memory, I'd assume that the sum computed on its ndarray is correct, while the one returned by the ndarray loaded with memmap is wrong, but why are they different ? If it was a reading error the sum should be really off, so why is it off by only 15 ??! I have no clue why that is the case. This won't affect my computation per say but it could be significant for other tasks.
0
1
379
0
49,911,007
0
1
0
0
1
false
3
2018-04-18T21:14:00.000
1
1
0
Count number of nodes per level in a binary tree
49,909,109
0.197375
python,python-3.x,tree,binary-tree,tree-traversal
The search ordering doesn't really matter as long as you only count each node once. A depth-first search solution with recursion would be: Create a map counters to store a counter for each level. E.g. counters[i] is the number of nodes found so far at level i. Let's say level 0 is the root. Define a recursive function count_subtree(node, level): Increment counters[level] once. Then for each child of the given node, call count_subtree(child, level + 1) (the child is at a 1-deeper level). Call count_subtree(root_node, 0) to count starting at the root. This will result in count_subtree being run exactly once on each node because each node only has one parent, so counters[level] will be incremented once per node. A leaf node is the base case (no children to call the recursive function on). Build your final list from the values of counters, ordered by their keys ascending. This would work with any kind of tree, not just binary. Running time is O(number of nodes in tree). Side note: The depth-first search solution would be easier to divide and run on parallel processors or machines than a similar breadth-first search solution.
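A minimal sketch of this recursion for a binary tree (it assumes node objects with left and right attributes; that representation is an assumption, not from the question):

```python
from collections import defaultdict

def nodes_per_level(root):
    counters = defaultdict(int)          # counters[i] = number of nodes found at level i

    def count_subtree(node, level):
        if node is None:
            return
        counters[level] += 1             # count this node exactly once
        count_subtree(node.left, level + 1)
        count_subtree(node.right, level + 1)

    count_subtree(root, 0)
    return [counters[i] for i in range(len(counters))]
```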
I've been searching for a bit now and haven't been able to find anything similar to my question. Maybe i'm just not searching correctly. Anyways this is a question from my exam review. Given a binary tree, I need to output a list such that each item in the list is the number of nodes on a level in a binary tree at the items list index. What I mean, lst = [1,2,1] and the 0th index is the 0th level in the tree and the 1 is how many nodes are in that level. lst[1] will represent the number of nodes (2) in that binary tree at level 1. The tree isn't guaranteed to be balanced. We've only been taught preorder,inorder and postorder traversals, and I don't see how they would be useful in this question. I'm not asking for specific code, just an idea on how I could figure this out or the logic behind it. Any help is appreciated.
0
1
1,773
0
49,911,299
0
0
0
0
1
false
8
2018-04-19T01:13:00.000
6
2
0
How to restrict output of a neural net to a specific range?
49,911,206
1
python,neural-network,keras,regression
Normalize your outputs so they are in the range 0, 1. Make sure your normalization function lets you transform them back later. The sigmoid activation function always outputs between 0, 1. Just make sure your last layer has sigmoid activation to restrict your output into that range. Now you can take your outputs and transform them back into the range you wanted. You could also look into writing your own activation function to transform your data.
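A minimal sketch of the transform pair for a target range of [1, 10] (the range is taken from the question), to pair with a sigmoid output layer:

```python
import numpy as np

lo, hi = 1.0, 10.0

def to_unit(y):        # scale targets into [0, 1] to match the sigmoid output
    return (y - lo) / (hi - lo)

def from_unit(p):      # map network outputs back into [1, 10]
    return p * (hi - lo) + lo

y = np.array([1.0, 5.5, 10.0])
assert np.allclose(from_unit(to_unit(y)), y)   # the transform round-trips
```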
I'm using Keras for a regression task and want to restrict my output to a range (say between 1 and 10) Is there a way to ensure this?
0
1
8,004
0
49,915,763
0
1
0
0
1
false
0
2018-04-19T07:12:00.000
3
2
0
I get the following error whenever I try to run conda install tensorflow
49,914,882
0.291313
python,tensorflow,anaconda,jupyter-notebook,conda
You have to run conda info tensorflow and conda info numba to see the dependencies of each package, and then install compatible versions with conda install package=version to fix the problem.
This is the error : Solving environment: failed UnsatisfiableError: The following specifications were found to be in conflict: - numba -> numpy[version='>=1.14,<1.15.0a0'] - tensorflow Use "conda info " to see the dependencies for each package.
0
1
1,294
0
52,469,364
0
0
0
0
1
false
10
2018-04-19T13:34:00.000
3
2
0
Scale a numpy array with from -0.1 - 0.2 to 0-255
49,922,460
0.291313
python,image,numpy
You can also use the uint8 datatype when building the image from a numpy array: import numpy as np from PIL import Image img = Image.fromarray(np.uint8(tmp)) where tmp is my np array of size 255*255*3.
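Note that np.uint8 alone truncates rather than rescales; a hedged sketch that first min-max scales the question's [-0.1, 0.2] range into [0, 255] (the random array is a stand-in for the real data):

```python
import numpy as np
from PIL import Image

tmp = np.random.uniform(-0.1, 0.2, size=(28, 28, 3))     # stand-in for the real array
scaled = (tmp - tmp.min()) / (tmp.max() - tmp.min()) * 255.0
img = Image.fromarray(scaled.astype(np.uint8))
```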
I have a numpy array in python that represents an image; its size is 28x28x3, its max value is 0.2 and its min is -0.1. I want to scale that image to the range 0-255. How can I do so?
0
1
18,784
0
49,935,903
0
0
0
0
1
true
1
2018-04-19T19:44:00.000
2
1
0
Clear approach for assigning semantic tags to each sentence (or short documents) in python
49,929,066
1.2
python-2.7,nlp,nltk,gensim,semantic-analysis
Lack of labeled data means you cannot apply any semantic classification method using word vectors, which would be the optimal solution to your problem. An alternative however could be to construct the document frequencies of your token n-grams and assume importance based on some smoothed variant of idf (i.e. words that tend to appear often in descriptions probably carry some semantic weight). You can then inspect your sorted-by-idf list of words and handpick(/erase) words that you deem important(/unimportant). The results won't be perfect, but it's a clean and simple solution given your lack of training data.
I am looking for a good approach using python libraries to tackle the following problem: I have a dataset with a column that has product description. The values in this column can be very messy and would have a lot of other words that are not related to the product. I want to know which rows are about the same product, so I would need to tag each description sentence with its main topics. For example, if I have the following: "500 units shoe green sport tennis import oversea plastic", I would like the tags to be something like: "shoe", "sport". So I am looking to build an approach for semantic tagging of sentences, not part of speech tagging. Assume I don't have labeled (tagged) data for training. Any help would be appreciated.
0
1
184
0
50,009,358
0
0
0
0
1
false
0
2018-04-20T06:37:00.000
0
1
0
Is there a way to nullify a specific feature in test set while evaluating a tensorflow model?
49,935,491
0
python,tensorflow,machine-learning,deep-learning,regression
For deep models there's no general input-independent way to do this kind of feature ablation you want (take a pretrained model and just change that feature's representation on the test set). Instead I recommend you do training time ablation: train different variations of your model with different feature combinations and compare their validation set performances. This will actually tell you how much each feature helps.
Idea behind nullifying/ignoring a feature from test set is to understand how important is it considered by the model to predict the target variable (by comparing the evaluation metric's value). For numerical variables, I thought of setting them to 0, assuming the multiplication (with weights) would be 0 and thus it would get eliminated from the set. Is this approach right, else what should be done? I am using tensorflow's DNNRegressor for modelling.
0
1
46
0
49,936,800
0
0
0
0
1
false
2
2018-04-20T07:36:00.000
0
1
0
How can a specific model be cleared from the memory?
49,936,467
0
python,keras
You can use del to explicitly delete the variables you no longer need; this lets Python's garbage collector reclaim the memory. So delete all the variables associated with that model with del, and Python can free the memory.
I created several Keras models (Model_1, Model_2...Model_N) in one code. I would like to clear only one specified model (e.g. Model_1). I guess K.clear_session(), which will clear all models from the memory, is not useful in this case. Is there any solution? Thank you in advance.
1
1
36
0
49,943,047
0
0
0
0
1
true
0
2018-04-20T12:04:00.000
1
1
0
Tuning hyperparameters Inception model with checkpoints
49,941,265
1.2
python,tensorflow,deep-learning
Both methods are okay, but you have to watch your training loss after adjusting them: if it converges in both cases then it's fine, otherwise adjust accordingly. As far as I know, people adopt these two methods: 1. Keep a higher learning rate initially along with a decay factor, thus reducing the learning rate slowly as training starts converging. 2. Keep an eye on the loss function and do early stopping if you think you can adjust to a better learning rate.
I have a question concerning tuning hyperparameters for the Inception ResNet V2 model (or any other DL model), which I can't really wrap my head around. Right now, I have certain set certain hyperparameters, such as learning_rate, decay_factor and decay_after_nr_epochs. My model saves checkpoints, so it can continue at these points later on. If I run the model again, with more epochs, it logically continues at the last checkpoint to continue training. However, if I would set new hyperparameters, such as learning_rate = 0.0001 instead of learning_rate = 0.0002, does it make sense to continue on the checkpoints, or is it better to use new hyperparameters on the initial model? The latter sounds more logical to me, but I'm not sure whether this is necessary. Thanks in advance.
0
1
470
0
52,770,137
0
0
0
0
1
false
0
2018-04-20T14:13:00.000
0
2
0
LightGBM Python API. Best_iteration and best_score for custom evaluation function (feval)
49,943,689
0
python-3.x,lightgbm
You can use the metric "multi_error", or you can combine metrics as metric: "multi_error", "multi_logloss". multi_error will directly focus on the accuracy.
I'm using lightgbm.train with valid_sets, early_stopping_rounds and feval function for multiclass problem with "objective": "multiclass". I want to find best_iteration and best_score for my custom evaluation function. But it finds them for multi_logloss metrics, which is corresponding to specified objective. So the question is can I find in LightGBM best_iteration and best_score for my feval function and how?
0
1
472
0
49,947,979
0
0
0
0
1
false
1
2018-04-20T18:38:00.000
1
1
0
Python pickle for Machine Learning
49,947,850
0.197375
python
Pickling is just the python form of serialization. Serialization only preserves raw data, such as strings, integers, floats, lists, and similar things. If you pickle a model, you can unpickle it and use it later, but if you don't have the libraries (keras, tensorflow, or whatever you need), you cannot use it on that machine.
I have built a predictive model in Windows to train the dataset, using python 3.6.1, and I have used pickle to save the trained model as a pickle file. Now I have written another python script to read and load the trained pickle file to predict on the test data; I got successful results in Windows. Now I want to move the trained pickle file and the python script (that loads the pickle file and predicts on test data) to a Linux environment which has a similar python installation to the Windows one, but whose python installation doesn't include the machine learning libraries. In such a case, will the Windows-trained pickle file behave like an exe file containing all the machine learning libraries? Or will the python script that loads and reads the trained pickle file to predict the data fail in Linux because the machine learning libraries are not installed?
0
1
984
0
51,285,200
0
0
0
0
1
true
0
2018-04-20T22:54:00.000
1
1
0
Using Cartopy by respecting 1 to 1 km aspect ratio
49,950,672
1.2
python-3.x,cartopy
The aspect ratio of a Mercator projection isn't actually 1°:1°, but is instead 1:1 in cartesian "Mercator units". The Mercator projection doesn't preserve distance along the meridians, and therefore the higher (or lower) you go in latitude in the Mercator projection, the longer (in physical distance) 1 Mercator unit is in the y direction. As 1° has approximately the same length anywhere on the Earth (assuming a perfect sphere), you are actually looking for a projection that does preserve the 1° x 1° aspect ratio. At high-resolution, many map projections are good representations of 1km:1km, but at global scale there are only a few good choices. Essentially, you are looking for a projection that is equidistant along both parallels and meridians, and the most obvious projection with that property is Plate Carree / Equirectangular (all cylindrical projections are equidistant along parallels, but Plate Carree is equidistant along meridians too).
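A minimal sketch of switching the map to Plate Carree (standard cartopy/matplotlib calls; adapt to your own plotting code):

import matplotlib.pyplot as plt
import cartopy.crs as ccrs

# Plate Carree keeps 1 degree of longitude and 1 degree of latitude the
# same length on the page, which approximates 1 km : 1 km at global scale.
ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()
ax.set_global()
plt.show()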
I'm using cartopy to plot images on geographic axes. I used the Mercator projection to plot my data. However I realized that the equal aspect ratio is based on degrees and not in km. How can I do to force the map to respect a 1 km x 1 km aspect ratio instead of a 1° x 1° aspect ratio. Thanks
0
1
160
0
50,278,726
0
0
0
0
1
true
1
2018-04-20T23:01:00.000
0
1
0
Viewing a large (12.5GB) HDF5 file written using h5py on HDFView 2.14
49,950,725
1.2
python,out-of-memory,hdf5,h5py,hdf
You can edit your hdfview.bat and give the Java VM more memory via the -Xmx switch. The relevant line in the .bat file is:

start "HDFView" "%JAVABIN%\javaw.exe" %JAVAOPTS% -Xmx1024M -Djava.library.path="%INSTALLDIR%\lib;%INSTALLDIR%\lib\ext" -Dhdfview.root="%INSTALLDIR%" -cp "%INSTALLDIR%\lib\fits.jar;%INSTALLDIR%\lib\netcdf.jar;%INSTALLDIR%\lib\jarhdf-3.3.2.jar;%INSTALLDIR%\lib\jarhdf5-3.3.2.jar;%INSTALLDIR%\lib\slf4j-api-1.7.5.jar;%INSTALLDIR%\lib\extra\slf4j-nop-1.7.5.jar;%INSTALLDIR%\lib\HDFView.jar" hdf.view.HDFView %*

Change -Xmx1024M to something larger, e.g. -Xmx4024M; this gives the Java machine more memory.
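As an alternative if HDFView stays unusable at this size, h5py itself can inspect the file lazily without loading 778 million rows (file and dataset names below are placeholders):

import h5py

with h5py.File("big_file.h5", "r") as f:      # hypothetical filename
    for name, dset in f.items():
        print(name, dset.shape, dset.dtype)   # metadata only, no data read
    first_rows = f["some_dataset"][:10]       # slicing reads just 10 rows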
I have a very large HDF5 file created using Python/ h5py. I cannot open the file on HDFView 2.14, when I try to open the file nothing happens. Any suggestions on how I can open/ view the file? It contains just 5 datasets, but each dataset has 778 million rows.. hence the problem. Thank you!
0
1
626
0
49,956,444
0
0
0
0
1
false
0
2018-04-21T10:32:00.000
1
1
0
RNN Implementation
49,954,852
0.197375
python-3.x,recurrent-neural-network,pytorch,rnn
One-hot assumes each of your vectors differs in exactly one place. So if you have 97 unique characters, I think you should use a one-hot vector of size 97 + 1 = 98; the extra slot maps every unknown character to a single vector. But you can also use a 256-length vector. Your input will then be B x N x V (B = batch size, N = number of characters, V = one-hot vector size). If you are using libraries, they usually ask for the index of each character in the vocabulary and handle the index-to-one-hot conversion themselves. Hope that helps.
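A minimal sketch of building such one-hot vectors from a text file (the filename and window length are illustrative):

import numpy as np

text = open("novel.txt", encoding="utf-8").read()   # hypothetical file
vocab = sorted(set(text))                           # e.g. 97 unique characters
char_to_idx = {c: i for i, c in enumerate(vocab)}

def one_hot(ch):
    v = np.zeros(len(vocab), dtype=np.float32)
    v[char_to_idx[ch]] = 1.0
    return v

# One 100-character sequence -> shape (100, 97) = (timesteps, vocab size);
# batching adds the leading dimension.
window = np.stack([one_hot(c) for c in text[:100]])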
I am going to implement RNN using Pytorch . But , before that , I am having some difficulties in understanding the character level one-hot encoding which is asked in the question . Please find below the question Choose the text you want your neural network to learn, but keep in mind that your data set must be quite large in order to learn the structure! RNNs have been trained on highly diverse texts (novels, song lyrics, Linux Kernel, etc.) with success, so you can get creative. As one easy option, Gutenberg Books is a source of free books where you may download full novels in a .txt format. We will use a character-level representation for this model. To do this, you may use extended ASCII with 256 characters. As you read your chosen training set, you will read in the characters one at a time into a one-hot-encoding, that is, each character will map to a vector of ones and zeros, where the one indicates which of the characters is present: char → [0, 0, · · · , 1, · · · , 0, 0] Your RNN will read in these length-256 binary vectors as input. So , For example , I have read a novel in python. Total unique characters is 97. and total characters is somewhere around 300,000 . So , will my input be 97 x 256 one hot encoded matrix ? or will it be 300,000 x 256 one hot encoded matrix ?
0
1
147
0
49,968,114
0
0
0
0
1
true
0
2018-04-22T02:32:00.000
0
1
0
k-means clustering multi column data in python
49,961,977
1.2
python,cluster-analysis,k-means,data-science
You don't need to reformat anything. Each row is a 60-dimensional vector of continuous values on a comparable scale (coordinates), which is exactly what k-means needs. You can just run k-means on this. But assuming the measurements were taken in sequence, you may observe a strong correlation between rows, so I wouldn't expect the data to cluster extremely well unless you set up the users to do and hold certain poses.
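A minimal sketch, assuming whitespace-separated values and an illustrative cluster count:

import numpy as np
from sklearn.cluster import KMeans

data = np.loadtxt("skeleton.txt")   # hypothetical filename; expect shape (2000, 60)
kmeans = KMeans(n_clusters=5, random_state=0).fit(data)   # k=5 is a guess to tune
labels = kmeans.labels_             # cluster id per row/frame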
I have a data set consisting of 2000 lines in a text file. Each line represents the x, y, z (3D coordinate location) of 20 skeleton joint points of the human body (e.g. head, shoulder center, shoulder left, shoulder right, ..., elbow left, elbow right). I want to do k-means clustering of this data. Values are separated by spaces, and each joint is represented by 3 values (its x, y, z coordinates), like head and shoulder center represented by .0255... .01556600 1.3000... .0243333 .010000 .1.3102000 .... So basically I have 60 columns in each row, which represent 20 joints, each joint consisting of three points. My question is how do I format or use this data for k-means clustering?
0
1
394
0
49,968,205
0
0
0
0
2
false
0
2018-04-22T16:07:00.000
3
2
0
How to train a model when there is too much training data?
49,968,000
0.291313
python,tensorflow,machine-learning,neural-network,keras
Option 1 is far superior, supposing the data has the same quality throughout. The reason is that it helps avoid overfitting. One of the biggest issues in ML is that you have a limited amount of data to train on, and networks tend to overfit, that is, they learn the specifics of that data set instead of generalizing to the intended problem. Never repeating the data is actually the ideal situation for training. The only caveat is if the labels are set very accurately in certain parts and more sloppily in others - in that case sticking with the better-quality data might be better.
I have an RNN and way too much training data. Iterating through the data in its entirety will take years. Right now I have two options for training: 1. Do one pass over as much data as possible 2. Find a select portion of the data and train for multiple epochs on it Which one is better and why?
0
1
418
0
49,968,311
0
0
0
0
2
false
0
2018-04-22T16:07:00.000
2
2
0
How to train a model when there is too much training data?
49,968,000
0.197375
python,tensorflow,machine-learning,neural-network,keras
The question is asked in general terms, so the following is also in general terms. I hope this contributes something to helping you with your specific problem. One approach would be a variant of cross validation. You randomly select some portion of the data, and evaluate the result on a second portion of the data. Repeat the entire process until you satisfy your convergence criteria or exhaust your count. The convergence criteria might be something like getting the same or similar network at some rate. You can do this in either of two modes, allowing or not allowing reuse of the data. A second point to keep in mind is that your execution time depends on the length of your "feature vectors" or whatever serves that function in your application. Selecting the important components of your feature vectors, both reduces the processing time and also can help the training be more successful. Scikit learn has a function SelectKBest() that may be helpful for this.
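A minimal sketch of the SelectKBest step mentioned above (X_train, y_train and k are placeholders):

from sklearn.feature_selection import SelectKBest, f_classif

selector = SelectKBest(score_func=f_classif, k=20)   # k is a tuning choice
X_reduced = selector.fit_transform(X_train, y_train)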
I have an RNN and way too much training data. Iterating through the data in its entirety will take years. Right now I have two options for training: 1. Do one pass over as much data as possible 2. Find a select portion of the data and train for multiple epochs on it Which one is better and why?
0
1
418
0
51,364,465
0
0
0
0
1
false
0
2018-04-23T14:25:00.000
0
1
0
Python : Image Clustering
49,983,588
0
python,arrays,image,cluster-analysis
If you want to save the data for subsequent uses other than this one, you should use CSV. Otherwise you could load the images directly into arrays; in that case take a look at Spark.
can I convert all the images to an array saved in one file csv then use those arrays for clustering, or do I have to convert one by one image into array form then merge them for used in clustering?
0
1
122
1
49,984,333
0
0
1
0
2
false
0
2018-04-23T15:01:00.000
3
4
0
C++ Vector in Python for Perfomance
49,984,220
0.148885
python,c++,c,arrays
Both the array and the low level operations on it would have to be in C++; switching on a per element basis will have little benefit. There are many python modules that have internal C/C++ implementations. Simply wrapping a C or C++ style array would be pointless, as the built in python data types can basically be that.
I'm creating an image detection module, and I do a lot of math calculations around arrays. I know that C/C++’s array iterates faster than Python’s I can't move my project to C/C++, so I wanted to create an array module in C/C++ and call it in Python. What I want to know: 1) Is this viable? Or calling a module from another interpreter will slow down my program more than it will speed it up? 2) Is there some Python package that does what I want? I feel like I haven’t written enough info, but I can't think of anything else important. [EDIT] So I just went with numpy and it has everything I need :p, thanks everyone
0
1
111
1
49,984,679
0
0
1
0
2
false
0
2018-04-23T15:01:00.000
0
4
0
C++ Vector in Python for Perfomance
49,984,220
0
python,c++,c,arrays
Try boost.python. If you can port all the computational heavy stuff to C++, it'll be quite fast but if you need to switch continuously between C++ and python, you won't get much improvement.
I'm creating an image detection module, and I do a lot of math calculations around arrays. I know that C/C++’s array iterates faster than Python’s I can't move my project to C/C++, so I wanted to create an array module in C/C++ and call it in Python. What I want to know: 1) Is this viable? Or calling a module from another interpreter will slow down my program more than it will speed it up? 2) Is there some Python package that does what I want? I feel like I haven’t written enough info, but I can't think of anything else important. [EDIT] So I just went with numpy and it has everything I need :p, thanks everyone
0
1
111
0
49,994,600
0
0
0
0
1
false
0
2018-04-23T15:21:00.000
0
1
0
Feasibility of converting all python pandas/numpy code to base python
49,984,571
0
python,pandas,numpy
I'm only going to address the 2nd point. Reimplementing all of numpy/pandas is certainly a very large and useless task. But you're not reimplementing all of it, you only need some parts, and if it's only a few functions, then it's certainly possible. I'd start from a working script, replace arrays with Python lists, and implement the needed functions one by one. For SO specifically, I suspect you're better off asking specific questions, e.g. how to implement an analog of a function X in pure Python, etc.
General python question- I have built a script using numpy and pandas libraries. I have now been told that I cannot use any libraries- only base python to code. This is because apparently open source libraries are not approved. Does this restriction make sense? Isn't base python as open source as pandas/numpy libraries are? Is it possible to convert pandas/numpy code to base python? Does this sound like a simple exercise or does it require learning a lot of new functions? Majority of the code is reading tables and then using if/then type statements and looking up values from other tables to generate and populate new tables.
0
1
53
0
50,057,446
0
0
0
0
1
true
9
2018-04-23T18:49:00.000
8
1
0
How do I merge two trained neural network weight matrices into one?
49,988,009
1.2
python,matrix,machine-learning,neural-network,mnist
In general if you combine the weights / biases entirely after training, this is unlikely to produce good results. However, there are ways to make it work. Intuition about combining weights Consider the following simple example: You have a MLP with one hidden layer. Any two instances of the MLP can produce identical outputs for identical inputs if the nodes in the hidden layer are permuted, the weights input->hidden are permuted the same way, and the weights hidden->output are permuted using the inverse permutation. In other words, even if there was no randomness in what the final network you end up with does, which hidden node corresponds to a particular feature would be random (and determined from the noise of the initializations). If you train two MLPs like that on different data (or random subsets of the same data), they would likely end up with different permutations of the hidden nodes even if the initializations are all the same, because of the noise of the gradients during training. Now, if that a certain property of the input activates most strongly the i-th node of network A and the j-th node of network B (and in general i != j), averaging the weights between the i-th node of A and i-th node of B (which corresponds to a different feature) is likely to decrease performance, or even produce a network that produces nonsense outputs. There are two possible fixes here - you can use either one, or both together. The idea is to either figure out which nodes match between the two networks, or to force the nodes to match. Solution A: Train both networks on different data for a few iterations. Average the weights of both, and replace both networks with the average weights. Repeat. This makes the i-th node of each network learn the same feature as the matching node of the other network, since they can't ever diverge too far. They are frequently re-initialized from the average weights, so once the permutation is determined it is likely to stay stable. A reasonable value for how often to average is somewhere between once per epoch and once every few minibatches. Learning is still faster than training one network on all the data sequentially, although not quite 2x faster with 2 networks. Communication overhead is a lot lower than averaging weights (or gradients) after every minibatch. This can be run on different machines in a cluster: transferring weights is not prohibitive since it is relatively infrequent. Also, any number of networks trained at the same time (and splits of the data) can be more than two: up to 10-20 works okay in practice. (Hint: for better results, after every epoch, do a new random split of the data between the networks you're training) This is similar in effect to "gradient aggregation" that was mentioned here, but aggregates a lot less often. You can think of it as "lazy aggregation". Solution B: Try to figure out which hidden layer nodes match before averaging. Calculate some similarity metric on the weights (could be L2 or anything along those lines), and average the weights of pairs of most-similar nodes from the two networks. You can also do a weighted average of more than just a pair of nodes; for example you can average all nodes, or the k-most similar nodes, where the weights used are a function of the similarity. 
For deep networks, you have to keep track of the pairings all the way up from the input, and permute the weights according to the highest-similarity pairings of the lower level before calculating similarity on the next level (or if doing weighted averaging, propagate the weights). This probably works for networks with a few layers, but I think for very deep networks this is unlikely to work perfectly. It will still work okay for the first few layers, but tracking the permutations will likely fail to find good matching nodes by the time you get to the top of the network. Another way to deal with deep networks (other than tracking the permutations up from the bottom) is to run both networks on a test dataset and record the activations of all nodes for each input, then average the weights of nodes which have similar activation patterns (i.e. which tend to be activated strongly by the same inputs). Again this could be based on just averaging the most similar pairs from A and B, or a suitable weighted average of more than two nodes. You can use this technique together with "Solution A" above, to average weights somewhat less often. You can also use the weighted averaging by node similarity to speed the convergence of "Solution A". In that case it is okay if the method in "Solution B" doesn't work perfectly, since the networks are replaced with the combined network once in a while - but the combined network could be better if it is produced by some matching method rather than simple averaging. Whether the extra calculations are worth it vs the reduced communication overhead in a cluster and faster convergence depends on your network architecture etc.
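A minimal Keras-flavored sketch of the "Solution A" loop, assuming two architecturally identical models and per-machine data shards (all names are placeholders):

def average_weights(model_a, model_b):
    # Element-wise mean of two identically shaped weight lists.
    return [(wa + wb) / 2.0
            for wa, wb in zip(model_a.get_weights(), model_b.get_weights())]

for _ in range(num_rounds):                     # hypothetical round count
    model_a.fit(x_a, y_a, epochs=1, verbose=0)  # shard A
    model_b.fit(x_b, y_b, epochs=1, verbose=0)  # shard B
    avg = average_weights(model_a, model_b)
    model_a.set_weights(avg)                    # re-sync both networks
    model_b.set_weights(avg)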
I have two identical neural networks running on two separate computers (to reduce the time taken to train the network), each having a subset of a complete data set (MNIST). My question is; can I combine the two weight matrices of both networks into one matrix, while remaining a proper accuracy? I have seen several articles about 'batching' or 'stochastic gradient descent', but I don't think this is applicable to my situation. If this is possible, could you also provide me with some pseudo code? Any input is valuable! Thank you,
0
1
3,859
0
52,720,246
0
0
1
0
1
false
0
2018-04-24T09:50:00.000
0
1
0
which imresize is fastest in Python
49,998,519
0
python,image,performance,interpolation
cv2.resize is slightly faster than scipy.misc.imresize
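A minimal sketch showing that cv2.resize also addresses the integer-output drawback, since it preserves a float input dtype:

import cv2
import numpy as np

img = cv2.imread("input.png").astype(np.float32)   # hypothetical file
resized = cv2.resize(img, (800, 600), interpolation=cv2.INTER_LINEAR)
print(resized.dtype)   # float32 - not forced back to integers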
Currently, we need to interpolate (resize) an image. I have used scipy.misc.imresize. This function has two drawbacks: it can only output integer matrices, but I need a float result; and scipy.misc.imresize is a little slow.
0
1
259
0
50,051,554
0
0
0
0
1
true
0
2018-04-24T13:49:00.000
0
1
0
How to read back the "random-seed" from a saved model of Dynet
50,003,397
1.2
python,lstm,dynet
You can't read back the seed parameter. Dynet model does not save the seed parameter. The obvious reason is, it is not required at testing time. Seed is only used to set fixed initial weights, random shuffling etc. for different experimental runs. At testing time no parameter initialisation or shuffling is required. So, no need to save seed parameter. To the best of my knowledge, none of the other libraries like tensorflow, pytorch etc. save the seed parameter as well.
I have a model already trained by dynet library. But i forget the --dynet-seed parameter when training this model. Does anyone know how to read back this parameter from the saved model? Thank you in advance for any feedback.
0
1
78
0
50,011,442
0
1
0
0
1
false
0
2018-04-24T15:30:00.000
0
1
0
Sampling data such that distribution is preserved
50,005,578
0
python-3.x,pandas
Increase your sample size (n>>100). The data you are sampling from is itself a random sample. Creating a subset through random selection is itself a random process. If one of the data classes has a low frequency then the problem is that your sample size (100) is too low. If you change the replace flag to 'True' and do repeated samples, you are doing something called bootstrapping. Assuming the complete data set represents the true population distribution this resampling will give you examples of what kind of measurements you might get for lower values of n (n=100). The alternative is a stratification strategy as suggested by some above. However, you are not creating random subsets when you do this, and the assumption of distribution is now built into your smaller data sets. Note that you can only achieve this after having looked at the entire data set to determine its distribution. Probably not what you want. If you are creating a (supervised) training data set from the data you can repeat under-represented data to manipulate the bias.
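For the stratification route, a minimal pandas sketch using the question's own names (the fraction keeps class proportions while targeting roughly 100 rows):

frac = 100 / len(credit_card)
vsample_data = (credit_card
                .groupby("Class", group_keys=False)
                .apply(lambda g: g.sample(frac=frac, random_state=1)))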
vsample_data = credit_card.sample(n=100, replace='False')
print(vsample_data)

Here, I was trying to sample 100 data points from a data set but was not able to get a correct sample that preserves the original distribution of the credit card fraud data set, i.e. Class 0 (non-fraud) and Class 1 (fraud).
0
1
537
0
50,310,718
1
0
1
0
1
false
0
2018-04-25T00:38:00.000
0
1
0
can't import more than 50 contacts from csv file to telegram using Python3
50,012,489
0
python,csv,telegram,telethon
You cannot import a large number of contacts sequentially; Telegram will flag you as a spammer. As a result, you must sleep between your requests.
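A minimal sketch of batching with a pause (import_batch is a hypothetical stand-in for your existing import call; the numbers mirror the limits reported in the question):

import time

BATCH = 50     # size at which the flood limit was hit
PAUSE = 110    # seconds; a bit more than the 101 s wait reported

for i in range(0, len(contacts), BATCH):
    import_batch(contacts[i:i + BATCH])   # hypothetical: your import call
    if i + BATCH < len(contacts):
        time.sleep(PAUSE)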
Trying to Import 200 contacts from CSV file to telegram using Python3 Code. It's working with first 50 contacts and then stop and showing below: telethon.errors.rpc_error_list.FloodWaitError: A wait of 101 seconds is required Any idea how I can import all list without waiting?? Thanks!!
0
1
287
0
50,215,479
0
0
0
0
1
false
2
2018-04-25T07:59:00.000
1
2
0
spacy: add lemmatizer lookup for Dutch (nl) language
50,016,956
0.099668
python,spacy,lemmatization
In theory, your approach is correct – if you copy exactly how it's implemented in German and other languages that implement the lookup, it should work. I suspect your problem here is actually a different one: According to the error message, it can't actually find the spacy.lang.nl.lemmatizer module, so spaCy now fails to import the Dutch language class. Are you sure the lemmatizer.py file exists in the correct place, and is imported correctly? (If you're not doing it already, I'd also recommend running your development installation in a separate environment and build spaCy from source, to make sure there are no weird conflicts).
I'am using Spacy 2.0.11 with Dutch language model nl_core_news_sm (nl). How can I add the lemmatization lookup similar to the implementation for German (de)? I tried the following steps: add lookup to init.py in the language folder (nl) add lemmatizer.py in the language folder (nl) This resulted in the following error after 'nlp = nl_core_news_sm.load()' or 'from spacy.lang.nl import Dutch': ModuleNotFoundError: No module named 'spacy.lang.nl.lemmatizer' ImportError: [E048] Can't import language nl from spacy.lang
0
1
1,755
0
50,017,530
0
0
0
0
2
false
0
2018-04-25T08:16:00.000
0
2
0
How to calculate a 95 credible region for a 2D joint distribution?
50,017,251
0
python,numpy,scipy,confidence-interval,credible-interval
If you are interested in finding a pair x_1, x_2 of real numbers such that P(X_1<=x_1, X_2<=x_2) = 0.95 and your distribution is continuous, then there will be infinitely many such pairs. You might be better off just fixing one of them and then finding the other.
Suppose we have a joint distribution p(x_1,x_2), and we know x_1,x_2,p. Both are discrete, (x_1,x_2) is scatter, its contour could be drawn, marginal as well. I would like to show the area of 95% quantile (a scale of 95% data will be contained) of the joint distribution, how can I do that?
0
1
305
0
50,017,700
0
0
0
0
2
false
0
2018-04-25T08:16:00.000
1
2
0
How to calculate a 95 credible region for a 2D joint distribution?
50,017,251
0.099668
python,numpy,scipy,confidence-interval,credible-interval
As the other answer points out, there are infinitely many solutions to this problem. A practical one is to find the approximate center of the point cloud and extend a circle from there until it contains approximately 95% of the data. Then find the convex hull of the selected points and compute its area. Of course, this will only work if the data is concentrated in a single area; it won't work if there are several clusters.
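A minimal sketch of that recipe (x_1 and x_2 are the sample arrays from the question; in 2-D, ConvexHull.volume is the enclosed area):

import numpy as np
from scipy.spatial import ConvexHull

pts = np.column_stack([x_1, x_2])
center = pts.mean(axis=0)
dist = np.linalg.norm(pts - center, axis=1)
inner = pts[dist <= np.percentile(dist, 95)]   # closest 95% of the points

hull = ConvexHull(inner)
print(hull.volume)   # area of the 95% region in 2-D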
Suppose we have a joint distribution p(x_1,x_2), and we know x_1,x_2,p. Both are discrete, (x_1,x_2) is scatter, its contour could be drawn, marginal as well. I would like to show the area of 95% quantile (a scale of 95% data will be contained) of the joint distribution, how can I do that?
0
1
305
0
57,815,167
0
1
0
0
1
false
5
2018-04-25T12:18:00.000
0
2
0
How to parallelise python input pipeline in Distributed Tensorflow
50,022,168
0
python,tensorflow,tensorflow-datasets
Recently Google has released TensorFlow Extended (TFX). It essentially consists of: a set of operators which each use Apache Beam to do data distribution (they call them components); a standardization of both data and parameter formats (what they call protobuf); automated dependency management of operators (workflow/orchestration); and tracking of runs, which allows the system to skip operations that have already been performed under the same conditions. I would suggest taking a look at TFX, or, for a more modest leap, at Apache Beam.
I have a non trivial input pipeline, which consists of reading ground truth and raw data and performing preprocessing on them, written in Python. It takes a long time to run the input pipeline for a single sample so I have multiple processes (from python multiprocessing package) running in parallel and queues to perform the operation quickly and prefetch data. The output is then fed to my network using feed_dict. The overhead of this process in my training loop is 2 orders of magnitude less than the actual tf.Session.run() time. I'm trying to move to the tf.data API, by wrapping with tf.py_func my read+preprocess functions but it runs slowly, probably due to GIL, even when increasing the number of multiple calls. I want to scale up my training to multiple machines and am not sure how data fetching behaves in such a case, also there's the performance issue for a single machine as well :) So, basically my question is: How to run python functions in tf.data api input pipeline in parallel on multiple CPU cores?
0
1
932
0
50,028,462
0
0
0
0
1
true
0
2018-04-25T17:27:00.000
1
1
0
lstm autoencoder time series data input shape
50,028,127
1.2
python,keras
Sample: an individual "sequence", not connected or related to any other sequence. Timesteps: the length of your sequences; each sequence has a start and an end, with steps in between. Features (the last dimension): different parallel variables measured in the same sequence. Only you can organize your data, based on what you know about it. The number of rows and columns says nothing about it. You must know what they mean and arrange them in a shape (samples, timesteps, features) according to the definitions above. Example: you measured vital signs in patients for 10 hours. You had 5 patients, each measured for ten hours, with the features body temperature, heartbeat frequency and breath frequency recorded every five minutes. Then you have: 5 individual sequences (they're not related to each other); 120 steps per sequence (ten hours in steps of five minutes); 3 features.
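Applying that to the numbers in the question, a minimal reshaping sketch (the single-feature assumption is illustrative):

import numpy as np

data = np.asarray(samples)        # your array, shape (400, 5000)
x = data.reshape(400, 5000, 1)    # (samples, timesteps, features)
# model.fit(x, x, batch_size=16, epochs=...)  # autoencoder: target = input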
I looked into different cases but could not understand what I have to choose. My code is working, but my prediction looks strange, showing only 2 parallel lines. I have an LSTM autoencoder for regression of time series - an autoencoder because I need to reduce the dimensions. My data looks like: 400 samples, each containing 5000 rows, which I concatenated into an array (in reality there are 5 minutes between the samples). How do I have to choose the time steps for the model? Is it like (400, 10, 5000)? Can anyone please give me advice, with an example, also regarding the batch size?
0
1
484
0
50,339,467
0
0
0
0
1
true
0
2018-04-26T08:08:00.000
2
1
0
What tensorflow distribution to represent a list of categorical data
50,037,919
1.2
python,tensorflow,machine-learning
Does tf.contrib.distributions.Categorical satisfy your needs? Samples should be from (0 to n - 1), where n represents the category. Example:

# logits has shape [N, M], where M is the number of classes
dist = tf.contrib.distributions.Categorical(logits=logits)

# Sample 20 times. Should give shape [20, N].
samples = dist.sample(20)

# depth is the number of categories.
one_hots = tf.one_hot(samples, depth=M)
I want to construct a variational autoencoder where one sample is an N*M matrix where for each row, there are M categories. Essentially one sample is a list of categorical data where only one category can be selected - a list of one-hot vectors. Currently, I have a working autoencoder for this type of data - I use a softmax on the last dimension to create this constraint and it works (reconstruction cross entropy is low). Now, I want to use tf.distributions to create a variational autoencoder. I was wondering what kind of distribution would be appropriate.
0
1
794
0
65,922,663
0
0
0
0
1
false
4
2018-04-26T08:31:00.000
0
2
0
Do gensim Doc2Vec distinguish between same Sentence with positive and negative context.?
50,038,347
0
python,nlp,gensim,doc2vec
Use TextBlob to get the sentiment and polarity of each sentence, after tokenizing the sentences with an NLP tokenizer.
While learning Doc2Vec library, I got stuck on the following question. Do gensim Doc2Vec distinguish between the same Sentence with positive and negative context? For Example: Sentence A: "I love Machine Learning" Sentence B: "I do not love Machine Learning" If I train sentence A and B with doc2vec and find cosine similarity between their vectors: Will the model be able to distinguish the sentence and give a cosine similarity very less than 1 or negative? Or Will the model represent both the sentences very close in vector space and give cosine similarity close to 1, as mostly all the words are same except the negative word (do not). Also, If I train only on sentence A and try to infer Sentence B, will both vectors be close to each other in vector space.? I would request the NLP community and Doc2Vec experts for helping me out in understanding this. Thanks in Advance !!
0
1
806
0
50,044,405
0
0
0
0
1
false
0
2018-04-26T11:37:00.000
1
1
0
spotfire - Disappear lines when value in filters are selected
50,042,009
0.197375
python,line,spotfire,tibco
there isn't a way to react to filter changes as far as I know, but you can "recreate" the filter as a property control and execute the script when that property value changes. typical example is for a listbox/dropdown property control representing a listbox filter.
I am actually new to python scripting and I have a requirement where we have a line chart (each line shows tasks and y axis the time it took to complete). Using lines and curves I have added average, upper control and lower control limits to the line chart. I also have a filter which shows the tasks. whenever a task is selected i want these lines(average and control lines) to appear and whenever I select more than one task or None these lines should disappear. I saw an example which shows to create a property control for filter and then trigger the functions. Can we trigger the same functions when there is a change in the filter values instead of property control?
0
1
339
0
50,046,198
0
0
0
0
2
false
1
2018-04-26T14:36:00.000
1
2
0
ModuleNotFoundError: No module named 'matplotlib._path'
50,045,758
0.099668
python,matplotlib,anaconda
A temporary solution is to add the line sys.path.append("/path/to/located/package"). A permanent solution is to add the path to .bashrc.
While I installed properly the matplotlib and seaborn. I was able to import matplotlib but when I was trying to import the seaborn I got the following error message. ModuleNotFoundError: No module named 'matplotlib._path' . Same if I was trying to import matplotlib.pyplot. After spending a lot of time googling and trying this and that, installing and unistaling, finally, I first checked out the import sys sys.path to see what are the folders that it searches for the installed packages. my result was something like this. ['', 'C:\\Users\\gsotiropoulos\\AppData\\Local\\conda\\conda\\envs\\py36\\python36.zip', 'C:\\Users\\gsotiropoulos\\AppData\\Local\\conda\\conda\\envs\\py36\\DLLs', 'C:\\Users\\gsotiropoulos\\AppData\\Local\\conda\\conda\\envs\\py36\\lib', 'C:\\Users\\gsotiropoulos\\AppData\\Local\\conda\\conda\\envs\\py36', 'C:\\Users\\gsotiropoulos\\AppData\\Roaming\\Python\\Python36\\site-packages', 'C:\\Users\\gsotiropoulos\\AppData\\Local\\conda\\conda\\envs\\py36\\lib\\site-packages', 'C:\\Users\\gsotiropoulos\\AppData\\Local\\conda\\conda\\envs\\py36\\lib\\site-packages\\win32', 'C:\\Users\\gsotiropoulos\\AppData\\Local\\conda\\conda\\envs\\py36\\lib\\site-packages\\win32\\lib', 'C:\\Users\\gsotiropoulos\\AppData\\Local\\conda\\conda\\envs\\py36\\lib\\site-packages\\Pythonwin', 'C:\\Users\\gsotiropoulos\\AppData\\Local\\conda\\conda\\envs\\py36\\lib\\site-packages\\IPython\\extensions', 'C:\\Users\\gsotiropoulos\\.ipython'] then as suggested I import matplotlib as mpl and mpl.__path__ seeing that I import matplotlib from the folder 'C:\\Users\\gsotiropoulos\\AppData\\Roaming\\Python\\Python36\\site-packages' Which is not the one from anaconda and it is older. I am not sure if it would be better to just remove this folder completely. However, as I understand, python first searched there and found a matplotlib package which was outdated. I simply changed the name of the `matplotlib' to something like 'matplotlib_test' and then the library is installed from one of the anaconda folders and the problem is solved. As I understand I installed in the past python but the 'roaming' folder did not get unistalled. Is that right? I wonder if I should delete the "roaming" folder to avoid similar other problems.
0
1
925
0
52,482,106
0
0
0
0
2
true
1
2018-04-26T14:36:00.000
0
2
0
ModuleNotFoundError: No module named 'matplotlib._path'
50,045,758
1.2
python,matplotlib,anaconda
I finally started using anaconda and installing everything in one environment. In this way, I save and import everything that I want in this environment without any confusion.
While I installed properly the matplotlib and seaborn. I was able to import matplotlib but when I was trying to import the seaborn I got the following error message. ModuleNotFoundError: No module named 'matplotlib._path' . Same if I was trying to import matplotlib.pyplot. After spending a lot of time googling and trying this and that, installing and unistaling, finally, I first checked out the import sys sys.path to see what are the folders that it searches for the installed packages. my result was something like this. ['', 'C:\\Users\\gsotiropoulos\\AppData\\Local\\conda\\conda\\envs\\py36\\python36.zip', 'C:\\Users\\gsotiropoulos\\AppData\\Local\\conda\\conda\\envs\\py36\\DLLs', 'C:\\Users\\gsotiropoulos\\AppData\\Local\\conda\\conda\\envs\\py36\\lib', 'C:\\Users\\gsotiropoulos\\AppData\\Local\\conda\\conda\\envs\\py36', 'C:\\Users\\gsotiropoulos\\AppData\\Roaming\\Python\\Python36\\site-packages', 'C:\\Users\\gsotiropoulos\\AppData\\Local\\conda\\conda\\envs\\py36\\lib\\site-packages', 'C:\\Users\\gsotiropoulos\\AppData\\Local\\conda\\conda\\envs\\py36\\lib\\site-packages\\win32', 'C:\\Users\\gsotiropoulos\\AppData\\Local\\conda\\conda\\envs\\py36\\lib\\site-packages\\win32\\lib', 'C:\\Users\\gsotiropoulos\\AppData\\Local\\conda\\conda\\envs\\py36\\lib\\site-packages\\Pythonwin', 'C:\\Users\\gsotiropoulos\\AppData\\Local\\conda\\conda\\envs\\py36\\lib\\site-packages\\IPython\\extensions', 'C:\\Users\\gsotiropoulos\\.ipython'] then as suggested I import matplotlib as mpl and mpl.__path__ seeing that I import matplotlib from the folder 'C:\\Users\\gsotiropoulos\\AppData\\Roaming\\Python\\Python36\\site-packages' Which is not the one from anaconda and it is older. I am not sure if it would be better to just remove this folder completely. However, as I understand, python first searched there and found a matplotlib package which was outdated. I simply changed the name of the `matplotlib' to something like 'matplotlib_test' and then the library is installed from one of the anaconda folders and the problem is solved. As I understand I installed in the past python but the 'roaming' folder did not get unistalled. Is that right? I wonder if I should delete the "roaming" folder to avoid similar other problems.
0
1
925
0
52,635,273
0
0
0
0
1
false
0
2018-04-26T15:29:00.000
0
1
0
Reading symmetrical Harwell-Boeing matrix in Python
50,046,865
0
python,scipy,sparse-matrix,eigenvalue,finite-element-analysis
I found a solution: use the BeBOP Sparse Matrix Converter to convert the file to Matrix Market format, then import that into Python.
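Once the file is in Matrix Market format, a minimal import sketch (the filename is a placeholder):

from scipy.io import mmread

matrix = mmread("converted.mtx")   # returns a scipy.sparse COO matrix
csr = matrix.tocsr()               # convert for fast row slicing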
SciPy can only read unsymmetric matrices with hb_read. Does anyone know if it is possible to read symmetric matrices in Python (3)?
0
1
202
0
50,056,349
0
0
0
0
1
false
0
2018-04-26T15:30:00.000
1
2
0
Deciding to the clustering algorithm for the dataset containing both categorical and numerical variables
50,046,876
0.099668
python,machine-learning,cluster-analysis,dimensionality-reduction
The best thing to try is hierarchical agglomerative clustering with a distance metric such as Gower's. Mixed data with different scales usually does not work in any statistically meaningful way. You have too many weights to choose, so no result will be statistically well founded, but largely a result of your weighting. It is therefore impossible to argue that some result is the "true" clustering. Don't expect the results to be very good.
I am a newbie in machine learning and trying to make a segmentation with clustering algorithms. However, Since my dataset has both categorical variables (such as gender, marital status, preferred social media platform etc) as well as numerical variables ( average expenditure, age, income etc.), I could not decide which algorithms worth to focus on. Which one should I try: fuzzy c means, k-medoids, or latent class to compare with k-means++? which ones would yield better results for these type of mixed datasets? Bonus question: Should I try to do clustering without dimensionality reduction? or should I use PCA or K-PCA in any case to decrease dimensions? Also, how can I understand and interpret results without visualization if the dataset has more than 3 dimensions ?
0
1
330
0
50,048,886
0
0
0
0
1
false
0
2018-04-26T16:27:00.000
0
1
0
Benefit of applying tf.image.per_image_standardization() over batch_norm layer in Tensorflow?
50,047,909
0
python,tensorflow,deep-learning
They both do similar things, and one could indeed use batch norm to normalize input images. However, I would not use batch norm for this purpose: It is clearer to use tf.image.per_image_standardization for image normalization than batch norm. Batch normalization is a broader concept than per-channel normalization; libraries like TensorFlow let you normalize along any axis you want. Batch normalization is usually paired with streaming statistics of the means and variances used for normalization, which are meant to be used for testing and deployment. You don't need those statistics when you normalize your input images per sample.
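A minimal usage sketch (image is a placeholder for a single [height, width, channels] float tensor):

import tensorflow as tf

standardized = tf.image.per_image_standardization(image)
# Result has roughly zero mean and unit variance, computed per image.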
What is the benefit in applying tf.image.per_image_standardization() before the first layer of the deep neural network over adding the Batch_Norm layer as the first layer? In order to normalize the [0.0, 255.0] float value image pixels before feeding into the network, which method would be suitable? tf.image.per_image_standardization() Batch_Norm - layer
0
1
661
0
50,177,881
0
0
0
0
1
false
1
2018-04-27T09:10:00.000
0
1
0
Collaborative Filtering using categorical features
50,059,020
0
python,recommendation-engine,collaborative-filtering,multiclass-classification
Association rule mining will be helpful here. It calculates the relative likelihood that items will appear together in a user's history. It's different from, and differently useful from, collaborative filtering recommendation techniques.
I am trying to build a recommender system using collaborative filtering. The issues I am facing are : The User-Item dataset has mostly categorical variables, so cant find the best way to calculate similarity matrix. Euclidean / Cosine distance will not work here, trying with Jaccard distance. The dataset does not have User rating for items, instead, we have classifiers - "did not buy", "buy", "added to cart but did not buy". We have used XGB to get the likelihood to buy a particular item by a particular user, but this kind of dataset is not helping for the recommendation. Can you please suggest any recommendation algorithm (preferably in python) which handles classification and categorical data? Thanks in advance.
0
1
895
0
50,067,974
0
0
0
0
1
false
0
2018-04-27T12:43:00.000
0
1
0
Image Lossy Compression to Different Formats
50,062,696
0
python,compression,jpeg
There is no "quality" setting in JPEG. Some encoders use quality as a simplified interface method for selecting quantization tables. You mentioned PNG. You can control the compression settings. The difference is that in JPEG compression setting trade off variations from the original against the output stream size. In PNG the compression settings trade off the time to do the compression against the output stream size. You are unlikely to find compression values analogous to encoder-specific quality settings among various compression formats.
I have looked for this everywhere and already installed a few softwares that did not help me. The thing is, I am studying and researching image and video quality analysis and part of the process is applying my work to different types of lossy compression formats. Regarding the image part of the work, to do so for JPEG was the simplest thing. I was using something like this command line: mogrify -format jpg -quality 15 img01.bmp However, when I tried to move the study to other lossy compression formats, I could not find anything that allowed me to compress an image to different values of quality with a format that was not JPG. Just a few examples would be PGF, CPC, Fractual Compression etc. It's important to say that Lossless Compression, such as PNG, won't help me. I want to ask if anyone knows a software, a package or a library for python that allows me to compress an image to a lossy compression format (other that JPEG) and change the quality of it, even to the point of where the image is really terrible (5% quality on JPEG, for example). Thank you very much for the help and attention.
0
1
305
0
50,063,200
0
0
0
0
1
false
1
2018-04-27T12:58:00.000
0
2
0
Select columns periodically on pandas DataFrame
50,062,936
0
python,pandas,dataframe
df.iloc[:, 11::17] - the 12th column has 0-based index 11, and the step of 17 then picks up the 29th, 46th, 63rd, ... columns (equivalently df.iloc[:, [11 + 17*i for i in range(65)]]).
I'm working on a DataFrame with 1116 columns. How can I select columns at a period of 17 - more precisely, the 12th, 29th, 46th, 63rd, ... columns?
0
1
81
0
50,069,637
0
0
0
0
1
false
0
2018-04-27T15:05:00.000
0
3
0
Delimit array with different strings
50,065,295
0
python-3.x,numpy,csv
I ended up using the Pandas solution provided by Scott. For some reason I am not 100% clear on, I cannot simply convert the array from string to float with float(array). I created an array of equal size and iterated over the size of the array, converting each individual element to a float and saving it to the other array. Thanks all
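For anyone parsing such lines directly, a minimal regex sketch (the sample line comes from the question):

import re

line = "X0.8523542Y0.0000000Z0.5312869"
x, y, z = (float(v) for v in
           re.match(r"X([-\d.]+)Y([-\d.]+)Z([-\d.]+)", line).groups())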
I have a text file that contains 3 columns of useful data that I would like to be able to extract in python using numpy. The file type is a *.nc and is NOT a netCDF4 filetype. It is a standard file output type for CNC machines. In my case it is sort of a CMM (coordinate measurement machine). The format goes something like this: X0.8523542Y0.0000000Z0.5312869 The X,Y, and Z are the coordinate axes on the machine. My question is, can I delimit an array with multiple delimiters? In this case: "X","Y", and "Z".
0
1
84
0
50,068,729
0
0
0
0
1
false
0
2018-04-27T18:54:00.000
0
1
0
How to append a contents of text file to a column in dataframe (pandas python)
50,068,665
0
python,pandas
You can concatenate the files to one file with a delimiter of your choice (make sure that the delimiter is not included in any of the text files), and then use pd.read_csv() to read the concatenated file.
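Alternatively, a minimal sketch that reads each file's content straight into a column (the folder and column names are placeholders):

import os
import pandas as pd

folder = "Directory1"
texts = {f: open(os.path.join(folder, f), encoding="utf-8").read()
         for f in sorted(os.listdir(folder)) if f.endswith(".txt")}
files_df = pd.DataFrame({"filename": list(texts),
                         "content": list(texts.values())})  # merge as needed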
There are bunch of text files in a directory. I want to add each text file to the last column in a row. Can this be achieved using pandas? Eg: Directory1> File1.txt File2.txt File3.txt.... (Size of these files are close to 70KB) Dataframe: Column1 Column2 column3 Value1 Value2 (Trying to add File1.txt content here) Is it possible using pandas? Thank you
0
1
149
0
50,070,463
0
1
0
0
1
false
1
2018-04-27T21:14:00.000
0
3
0
Import tensorflow contrib learn python learn
50,070,398
0
python,python-3.x,tensorflow,scikit-learn,deep-learning
It looks like you have an error in your import statement. Try from tensorflow.contrib.learn import DNNClassifier
I am new to tensorflow. When I am using import tensorflow.contrib.learn.python.learn for using the DNNClassifier it is giving me an error: module object has no attribute python Python version 3.4 Tensorflow 1.7.0
0
1
1,041
0
50,088,667
0
0
0
0
1
false
3
2018-04-29T13:47:00.000
1
2
0
How to preprocess audio data for input into a Neural Network
50,087,271
0.099668
python,audio,machine-learning,deep-learning,speech-recognition
I would split each wav at the areas of silence and trim the silence from the beginning and end. Then I'd run each segment through an FFT over different sections - smaller ones at the beginning of the sound - and normalise the frequencies against the fundamental. Finally I'd feed the results into the NN as a 3D array of volumes, frequencies and times.
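A minimal sketch of the silence-splitting step using librosa (the top_db threshold is illustrative and needs tuning per recording):

import librosa
import soundfile as sf

y, sr = librosa.load("sequence.wav", sr=None)    # hypothetical file
intervals = librosa.effects.split(y, top_db=30)  # (start, end) sample indices

for i, (start, end) in enumerate(intervals):
    sf.write("digit_%d.wav" % i, y[start:end], sr)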
I'm currently developing a keyword-spotting system that recognizes digits from 0 to 9 using deep neural networks. I have a dataset of people saying the numbers(namely the TIDIGITS dataset, collected at Texas Instruments, Inc), however the data is not prepared to be fed into a neural network, because not all the audio data have the same audio length, plus some of the files contain several digits being spoken in sequence, like "one two three". Can anyone tell me how would I transform these wav files into 1 second wav files containing only the sound of one digit? Is there any way to automatically do this? Preparing the audio files individually would be time expensive. Thank you in advance!
0
1
3,097
0
50,091,280
0
1
0
0
1
false
1
2018-04-29T20:58:00.000
-1
1
0
How would i generate a random number in python without duplicating numbers
50,091,226
-0.197375
python,python-3.x,random
Option 1: generate a random 4-digit number, check whether any digit repeats, and if so generate again; otherwise you have a number with no duplicates. Option 2: build it one digit at a time from a list, removing each chosen digit so it cannot repeat: 1. Create a list holding the digits 0 to 9. 2. Create two variables: result (starting at 0) and multiplier (starting at 1). 3. Remove a random element from the list, multiply it by multiplier, and add it to result. 4. Multiply multiplier by 10. 5. Repeat from step 3 for the next digit (up to the desired number of digits). You now have a random number with no repeated digits.
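The second option collapses to two lines with Python's standard library (note the result can start with 0; redraw, or pick the first digit from 1-9, if a strict 4-digit number is required):

import random

digits = random.sample(range(10), 4)            # e.g. [3, 4, 5, 9], no repeats
number = int("".join(map(str, digits)))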
I was wondering how to generate a random 4 digit number that has no duplicates in python 3.6 I could generate 0000-9999 but that would give me a number with a duplicate like 3445, Anyone have any ideas thanks in advance
0
1
537
0
53,323,802
0
0
0
0
1
false
0
2018-04-30T15:12:00.000
0
1
0
Keras Neural Network. Preprocessing
50,103,377
0
python,scikit-learn,neural-network,keras
Usually, if you are doing regression you should use a 'linear' activation in the last layer. A sigmoid function will 'favor' values close to 0 and 1, so it would be harder for your model to output intermediate values. If the distribution of your targets is Gaussian or uniform, I would go with a linear output layer. De-processing shouldn't be necessary unless you have very large targets.
I have this doubt when I fit a neural network on a regression problem. I preprocessed the predictors (features) of my train and test data using the Imputer and scale methods from sklearn.preprocessing, but I did not preprocess the class (target) of my train or test data. In the architecture of my neural network all the layers have relu as the activation function except the last layer, which has the sigmoid function. I chose the sigmoid function for the last layer because the values of the predictions are between 0 and 1. tl;dr: In summary, my question is: should I de-process the output of my neural net? If I don't use the sigmoid function, the values of my output are < 0 and > 1. In this case, how should I do it? Thanks
0
1
99
0
50,125,190
0
0
0
0
1
true
0
2018-05-01T12:12:00.000
0
1
0
Using DBSCAN to find data that are far from high density clusters?
50,116,265
1.2
python,scikit-learn,dbscan
For anomaly detection, use an anomaly detection algorithm instead. There are siblings derived from DBSCAN for exactly this purpose - for example LOF, LoOP, kNN-based detectors, and others.
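A minimal sklearn sketch of the LOF variant (X is a placeholder for your data; n_neighbors needs tuning):

from sklearn.neighbors import LocalOutlierFactor

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)             # -1 marks outliers, 1 inliers
scores = lof.negative_outlier_factor_   # lower = more anomalous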
Conscious that dbscan clusters don't necessarily have cluster centres, but for an anomaly detection task, I want to spot data that are outliers/away from the normal clusters. Is there a way to do this using sklearn's dbscan?
0
1
62
0
50,117,662
0
0
0
0
1
true
2
2018-05-01T13:33:00.000
2
1
0
Multi-label classification methods for large dataset
50,117,450
1.2
python,scikit-learn,multilabel-classification
20 minutes for this size of a job doesn't seem that long, neither does 4 hours for training. I would really try vowpal wabbit. It excels at this sort of multilabel problem and will probably give unmatched performance if that's what you're after. It requires significant tuning and will still require quality training data, but it's well worth it. This is essentially just a binary classification problem. An ensemble will of course take longer so consider whether or not it's necessary given your accuracy requirements.
I realize there's another question with a similar title, but my dataset is very different. I have nearly 40 million rows and about 3 thousand labels. Running a simply sklearn train_test_split takes nearly 20 minutes. I initially was using multi-class classification models as that's all I had experience with, and realized that since I needed to come up with all the possible labels a particular record could be tied to, I should be using a multi-label classification method. I'm looking for recommendations on how to do this efficiently. I tried binary relevance, which took nearly 4 hours to train. Classifier chains errored out with a memory error after 22 hours. I'm afraid to try a label powerset as I've read they don't work well with a ton of data. Lastly, I've got adapted algorithm, MlkNN and then ensemble approaches (which I'm also worried about performance wise). Does anyone else have experience with this type of problem and volume of data? In addition to suggested models, I'm also hoping for advice on best training methods, like train_test_split ratios or different/better methods.
0
1
450
0
50,123,218
0
0
0
0
1
false
0
2018-05-01T20:11:00.000
0
1
0
Bokeh- refer to child within a row within a layout
50,123,144
0
python,reference,bokeh
Oftentimes, posting a question quickly boosts my research ability, and it is no different here. To refer to p2 in: layout1=layout(row(p1,p2), p3) it is: layout1.children[0].children[1] Now I can speak to all my grandchildren!
How would one refer to p2 in : layout1=layout(row(p1,p2), p3) where p1, p2, and p3 are plots? I've attempted layout1.children[0][1] but to no avail
0
1
7
0
50,219,125
0
0
0
0
1
false
0
2018-05-02T14:41:00.000
0
2
0
How to window or reset streaming operations in tensorflow?
50,137,369
0
python,tensorflow
The metrics in tf.contrib.eager.metrics (which work both with and without eager execution) have a init_variable() op you can call if you want to reset their internal variables.
Tensorflow provides all sorts of nice streaming operations to aggregate statistics along batches, such as tf.metrics.mean. However I find that accumulating all values since the beginning often does not make a lot of sense. For example, one could rather want statistics per epoch, or any other time window that makes sense in a given context. Is there any way to restrict the history of such streaming statistics, for example by resetting streaming operations so that they start the accumulation over? Work-arounds: accumulate by hand across batches; use a "soft" sliding window using EMA.
0
1
275
0
50,140,995
0
0
0
0
1
false
0
2018-05-02T15:18:00.000
0
1
0
How to find if there are wrong values in a pandas dataframe?
50,138,110
0
python,pandas,dataframe
"120 cm" is a string, not an integer, so that's a confusing example. Some ways to find "unexpected" values include: Use "describe" to examine the range of numerical values, to see if there are any far outside of your expected range. Use "unique" to see the set of all values for cases where you expect a small number of permitted values, like a gender field. Look at the datatypes of columns to see whether there are strings creeping in to fields that are supposed to be numerical. Use regexps if valid values for a particular column follow a predictable pattern.
I am quite new in Python coding, and I am dealing with a big dataframe for my internship. I had an issue as sometimes there are wrong values in my dataframe. For example I find string type values ("broken leaf") instead of integer type values as ("120 cm") or (NaN). I know there is the df.replace() function, but therefore you need to know that there are wrong values. So how do I find if there are any wrong values inside my dataframe? Thank you in advance
0
1
1,435
0
50,139,776
0
0
0
0
1
true
0
2018-05-02T16:17:00.000
1
1
0
Bokeh or Holoviews: BarChart by downsampling date time to months/years/etc
50,139,193
1.2
python-3.x,pandas,bokeh,holoviews
You will need to do your own downsampling; Bokeh does not generally do any sort of data aggregation itself. The few exceptions are: hexbin, which generates data binned on a hex tile grid; and passing a pandas GroupBy to initialize a CDS, which creates a CDS with columns for the aggregates. And even in the second case, arguably it's not Bokeh itself doing the aggregation.
I have a pandas dataframe (wrapped in holoviews typically) that has three columns. Col1 is a datetime, Col2 are categorical strings (i.e. one of 'Cat', 'Dog', 'Bird'), and Col3 being an integer count. I'm trying to find a way to use the bokeh library to downsample the datetime to months, quarters, years, etc. similar to that available in pandas.DataFrame.resample or pandas.DataFrame.groupby([pd.Grouper(key='Date', freq=sample)]). Does anyone know if there is a native bokeh ability to do this, or do I need to provide all the data already sampled from pandas? Thanks!
0
1
151
0
55,172,665
0
0
0
0
1
false
1
2018-05-02T22:13:00.000
0
1
0
Keras: Getting different values shown by fit_generator (verbose=1) and the metrics in the history object
50,144,314
0
python,tensorflow,machine-learning,keras
Even the metrics shown in verbose mode (val_loss, val_acc, etc.) are rounded, and sometimes the rounding conflicts with the behaviour of the EarlyStopping or ModelCheckpoint callbacks. I think this isn't really a problem, just a summary point of view. If you really need these numbers with all decimal places, the history is the right place to get them.
I'm new using Keras and I'm building a model to classify medical images. The dataset is very large and I am using fit_generator() function to optimize RAM space. When the model trains by batches, it shows statistics of each batch such as loss, precision, etc; and finally when it ends with all the batches at the end of the epoch, it gives me what I suppose is the average of these previous values, now the problem is: when I write a callback to save the training history, I get different values, close to those shown in the console but definitely different. Could this be a floating point error or something like that? This did not happen to me when I used the function fit () that showed me the same information as the one I got in the history object. I would appreciate any help on the subject, thank you for your time.
0
1
172
0
50,218,148
0
0
0
0
1
true
0
2018-05-03T03:16:00.000
2
2
0
AWS DLAMI - GPUs not showing up for Keras
50,146,392
1.2
python,amazon-web-services,tensorflow,amazon-ec2,keras
This was resolved by uninstalling, then reinstalling tensorflow-gpu through the conda environment provided by the AMI.
I have created a DLAMI (Deep Learning AMI (Amazon Linux) Version 8.0 - ami-9109beee on g2.8xlarge) and installed jupyter notebook to create a simple Keras LSTM. When I try to turn my Keras model into a GPU model using the multi_gpu_model function, I see the following error logged: Ignoring visible gpu device (device: 0, name: GRID K520, pci bus id: 0000:00:03.0, compute capability: 3.0) with Cuda compute capability 3.0. The minimum required Cuda capability is 3.5. I've tried reinstalling tensorflow-gpu to no avail. Is there any way to align the compatibilities on this AMI?
0
1
683
0
50,984,130
0
1
0
0
3
false
1
2018-05-03T05:14:00.000
1
3
0
Install opencv python package in Anaconda
50,147,385
0.066568
python-2.7,opencv,image-processing,ide,anaconda
The good thing with Anaconda is that it should make your life easier without getting your hands too dirty. If you use only basic libs, install packages only with conda commands, to be sure not to corrupt your Python environment. If you're on Linux, use conda install -c conda-forge opencv; on Windows, do the same from the "Anaconda Prompt" terminal. If you still have trouble with your numpy version, try conda update numpy.
Can someone provide the steps and the necessary links of the dependencies to install external python package "opencv-python" for image processing? I tried installing it in pycharm, but it was not able to import cv2(opencv) and was throwing version mismatch with numpy! Please help!
0
1
9,045
0
54,570,301
0
1
0
0
3
false
1
2018-05-03T05:14:00.000
0
3
0
Install opencv python package in Anaconda
50,147,385
0
python-2.7,opencv,image-processing,ide,anaconda
Create a virtual env, then run the following installs:

pip install numpy scipy matplotlib scikit-learn jupyter
pip install opencv-contrib-python
pip install dlib

You may verify the install from a Python session:

import cv2
print(cv2.__version__)
Can someone provide the steps and the necessary links of the dependencies to install external python package "opencv-python" for image processing? I tried installing it in pycharm, but it was not able to import cv2(opencv) and was throwing version mismatch with numpy! Please help!
0
1
9,045